24 August 2017

How to Prevent a Cyberwar


By JARED COHEN

The Cyber Fusion Center at Maryville University in St. Louis, which monitors attempted hacking attacks around the world. Credit: J.B. Forbes/St. Louis Post-Dispatch, via Associated Press

While the specifics of Russia’s interference in the 2016 American election remain unclear, no one doubts that Moscow has built a robust technological arsenal for waging cyberattacks. And as tensions between the two countries rise, there’s a good chance President Vladimir Putin will consider using those weapons against American interests, if he hasn’t already.

A cyberwar could quickly become a real war, with real weapons and casualties. And yet for all the destructive potential of cyberwarfare, there are precious few norms regarding how such conflict should be conducted — or better yet, avoided.

Cyberweapons won’t go away, and their spread can’t be controlled. Instead, as we’ve done for other destructive technologies, the world needs to establish a set of principles governing the conduct of governments in cyberconflict. These principles would dictate how to attribute cyberattacks properly, so that we know with confidence who is responsible, and they would guide how countries should respond.

Perhaps most important, world leaders should create a framework of incentives and sanctions that encourage governments to stop destructive cyberattacks in the first place.

Ideally, these principles would be enforced through a multilateral treaty, but given the disorder of the international system and the fact that countries don’t have a monopoly on the tools of cyberwar, such an approach seems unrealistic in the near future. Still, we can take meaningful steps toward smaller, more tangible goals.

We could begin by working through the existing global security framework. NATO allies, for example, could collaborate by sharing forensic intelligence from cyberattacks and building better detection and response techniques.

Separately, countries could create international working groups to discuss how to react to attacks and what to do in the days or weeks before an attack’s origin is known.

It’s unrealistic to expect any single country to unilaterally disarm its cyberarsenal while the threat remains. But governments could begin to discuss what constitutes a reasonable response when one state is attacked by another in cyberspace.

Otherwise, it’s only a matter of time before a nation under cyberattack responds by bombing the likely culprit even before the evidence is conclusive.

The United States is uniquely positioned to lead this effort and point the world toward the goal of an enforceable cyberwarfare treaty. Many of the institutions that would be instrumental in shaping these principles are based in the United States, including research universities and the technology industry. Part of this effort would involve leading by example, and the United States can and should establish itself as a defender of a free and open internet everywhere.

The process needs to be transparent; an effective framework to govern international behavior cannot be created or administered in secret. Since most cyberwar is conducted covertly, governments avoid any public acknowledgment of their own abilities and shy away from engaging in any sort of “cyberdiplomacy.” Statecraft conducted in secret fails to create public norms for deterrence.

The challenge of organizing such an effort in today’s fraught, unstable international system might seem daunting. But cyberweapons have already been used by governments to interfere with elections, steal billions of dollars, harm critical infrastructure, censor the press, manipulate public conversations about crucial issues and harass dissidents and journalists. The intensity of cyberconflict around the world is increasing, and the tools are becoming cheaper and more readily available.

The cost of inaction is severe. In her Pulitzer Prize-winning history of the outbreak of World War I, “The Guns of August,” Barbara Tuchman describes how a single catastrophic event — the assassination of the heir presumptive to the Austro-Hungarian throne — led to a chain reaction that ignited a global conflict. The assassination was the catalyst, but the ingredients for the chain reaction, in the form of complex military and diplomatic entanglements, had been in place for some time. True, it’s an imperfect analogy in a world that seems to be disentangling more by the day, but it highlights the perils of escalation when powerful countries confront new threats.

We could soon be faced with a similar moment involving cyberwar. A broad international commitment might be the only thing that can prevent the next cyberwar from becoming the next Great War. If we don’t improve our preparedness to meet the challenges of our multidimensional world, we risk proceeding so far down a path of escalation that conflict becomes inevitable.

Jared Cohen is the chief executive of the Jigsaw technology incubator, a subsidiary of Alphabet, and an adjunct senior fellow at the Council on Foreign Relations.
