2 August 2017

The artificial intelligence arms race


Cyberspace is now a territory where politics, economics, and foreign affairs are all contested – and Internet bots driven by artificial intelligence have emerged as key new actors, Andrej Zwitter writes.

Artificial intelligence (AI) is pervading all aspects of our lives. As one of the primary methods of analysing unstructured and messy data sets, it has become synonymous with big data. And with much of the globally produced data being transmitted via the Internet, a new cyber landscape has emerged, a parallel digital world that still requires the carving out of territories and rules.

These territories are currently dominated by large corporate actors, such as search engines and social media networks, which compete over access to the new raw material – data. The terrain is also roamed by vigilantes and cyber criminals. But states, too, need to define their role in this new world.

In recognition of this need, states are increasingly investing in artificial intelligence. China recently announced an IT strategy focused on artificial intelligence, virtual reality, and robotics. In doing so, the country is trying to ride the wave of renewed interest in emerging technologies around cyberspace, hoping to gain an economic advantage and position itself as a technology leader by 2030. This foresight in national strategy might give China a decisive competitive edge over countries that neglect the importance of artificial intelligence and robotics for the emerging data economy.

States are slowly coming to realise the importance of becoming tangible regulatory actors in an anarchic cyberspace. For instance, while it is already hard to establish criminal liability for international war crimes in the physical world, cyber warfare has raised the burden of proof to a whole new level – agents can operate from anywhere in the world, do not require large-scale facilities such as military compounds, and do not even have to be human at all: they can be bots, viruses, or worms.

Microsoft recently initiated a discussion about a Digital Geneva Convention, also calling for the private sector to become involved in what used to be the exclusive domain of states: international law and warfare. But so far, states have largely not succeeded in assuming a regulatory role in cyberspace.

Being a regulatory actor in cyberspace requires more than just enacting barely enforceable laws on data protection, scraping the web for intelligence purposes, and patrolling the dark web. It also requires states to develop cyber policy that takes cyberspace seriously in its own right and on its own terms.

Cyberspace has its own ontology that does not conform to the material world. This ontology involves digital globality, because cyberspace is inherently global in nature and does not lend itself to regulation on the territorial principle; digital anarchy, because laws that try to regulate the web struggle with territorial limitations and are difficult to enforce; and digital agency, because new cyber-native actors are emerging – smart bots, worms, and viruses acting as proxies for real agents are increasingly used to carry out cyber crime and warfare.

A comprehensive cyber strategy for foreign policy would, therefore, have to cover at the very least the economy, the justice sector, foreign relations, and defence. Given that cyberspace operates according to its own principles of digital globality, anarchy, and agency, merely attaching passages on digital policy to existing economic, political, and military strategies would leave governments reactive – and would not go far enough in actually tackling the problem.

Examples of this can be found in a range of sectors. In the justice sector, shutting down two of the biggest dark net marketplaces, AlphaBay and Hansa Market, both digitally and physically required the cooperation of police forces from the UK, the US, Thailand, Lithuania, Canada, France, and the Netherlands. This is just one example of the inherently transboundary nature of cyberspace.

In the economic sector, the 2010 flash crash was allegedly brought about by quant-hackers exploiting the interaction between regulations and high-frequency trading algorithms. Strategies used to manipulate high-frequency trading, such as spoofing, layering, and front running, have since been banned. This, however, could not prevent the 2016 pound flash crash, which, analysts suggested, was caused by AI-driven algorithms going rogue in response to a statement by François Hollande about a hard Brexit.
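
Spoofing, for example, involves placing large orders with no intention of executing them, creating a false impression of demand, and cancelling them before they fill. Below is a minimal sketch of the kind of screen a market-surveillance bot might run; the order records, field layout, and threshold are illustrative assumptions, not real market data.

```python
# Minimal sketch of a spoofing screen: flag traders whose cancelled order
# volume dwarfs their executed volume. All records and thresholds here
# are illustrative assumptions.
from collections import defaultdict

orders = [
    # (trader, volume, status) - hypothetical order records
    ("T1", 500, "cancelled"), ("T1", 500, "cancelled"), ("T1", 10, "filled"),
    ("T2", 100, "filled"), ("T2", 50, "cancelled"),
]
CANCEL_RATIO_LIMIT = 5.0  # cancelled-to-filled volume ratio that trips an alert

cancelled = defaultdict(int)
filled = defaultdict(int)
for trader, volume, status in orders:
    (cancelled if status == "cancelled" else filled)[trader] += volume

for trader, volume in cancelled.items():
    ratio = volume / max(filled[trader], 1)  # avoid division by zero
    if ratio > CANCEL_RATIO_LIMIT:
        print(f"Possible spoofing by {trader}: cancel/fill ratio {ratio:.0f}")
```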

In the military sector, cyber attacks have become incredibly sophisticated, as Stuxnet and the Mirai botnet's attacks against Internet of Things (IoT) devices demonstrated. Mirai almost broke the Internet by launching distributed denial of service (DDoS) attacks from more than 1.2 million infected IoT devices. Less well known is a vigilante bot, called Hajime, which was designed to counteract Mirai and similar botnets by infecting IoT devices first and blocking some of the ports Mirai uses to attack them.
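
To make that concrete, here is a minimal defensive sketch in Python – not Hajime's actual mechanism, just an illustration of the idea – that flags devices on a hypothetical home subnet still exposing the telnet ports Mirai exploited for default-credential logins:

```python
# Minimal sketch: flag IoT devices on a local subnet that leave open the
# telnet ports (23, 2323) Mirai scanned for default-credential logins.
# The subnet is an assumed example; adjust it to your own network.
import socket

SUBNET = "192.168.1."     # hypothetical home subnet
MIRAI_PORTS = (23, 2323)  # telnet ports the Mirai scanner targeted

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for i in range(1, 255):
    host = f"{SUBNET}{i}"
    exposed = [p for p in MIRAI_PORTS if port_open(host, p)]
    if exposed:
        print(f"{host} exposes telnet port(s) {exposed} - "
              f"vulnerable to Mirai-style infection")
```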

A looming fear is what bots, worms, and viruses could accomplish when enhanced by artificial intelligence. This is not limited to polymorphic viruses that rewrite their own code to evade detection.

Bots – small programmes executing tasks as virtual agents – are already responsible for more than half of all Internet traffic. The logic for their use is clear: they are cheaper and available in larger quantities than human agents, faster in specialised tasks, and navigate cyberspace natively.
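
A bot need not be sophisticated. A minimal crawler bot, for instance, fits in a few lines of Python using only the standard library; the start URL below is purely illustrative:

```python
# Minimal sketch of a crawler bot: fetch one page and list its outgoing
# links. The start URL is illustrative only.
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collect the href of every anchor tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href" and v)

start_url = "https://example.org/"  # illustrative target
parser = LinkParser()
parser.feed(urlopen(start_url).read().decode("utf-8", errors="replace"))
print(f"Found {len(parser.links)} links on {start_url}")
```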

Not all bots are benign, like chatbots, crawlers, or trading bots. In fact, more than half of them are malicious, such as spambots, impersonators, scrapers, and hackerbots. Using pattern recognition to circumvent CAPTCHAs a decade ago was only the beginning of AI-driven bots. With AI, the prospect of smart malicious bots probing ever-new attack paths might become a real problem for law enforcement and the cyber security sector alike.

At the same time, as Hajime illustrates, bots can also patrol cyberspace for good. For example, they might scan Internet traffic for attempted cyber attacks and autonomously launch countermeasures, alerts, and investigations.
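
A crude sketch of the detection half of such a bot might simply count connection attempts per source address in a server log and raise an alert past a threshold; the log path, format, and threshold here are all assumptions:

```python
# Minimal sketch of a traffic-watching bot: tally requests per source IP
# from a log file and alert on heavy hitters. Log path, line format, and
# threshold are assumed for illustration.
from collections import Counter

LOG_PATH = "access.log"  # hypothetical log, source IP as the first field
THRESHOLD = 100          # alert when a source exceeds this many requests

counts = Counter()
with open(LOG_PATH) as log:
    for line in log:
        fields = line.split()
        if fields:
            counts[fields[0]] += 1  # assume the source IP is field one

for ip, hits in counts.items():
    if hits > THRESHOLD:
        print(f"ALERT: {ip} made {hits} requests - possible attack source")
```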

Internet Relay Chat bots (IRCbots), for instance, can step in when malicious users try to hijack chat conversations with profanities or for violent ideological purposes, responding to trigger phrases in a moderating function – for example, by banning offending users automatically. Enhanced with artificial intelligence capabilities like natural language processing, such bots could become indistinguishable from human agents and execute almost any kind of task. They might even be used by law enforcement to automatically investigate online criminals, such as sexual predators, narcotics traders, and weapons traffickers.
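
The moderating variant is simple enough to sketch over the raw IRC protocol. In the sketch below, the server, channel, nick, and phrase list are all placeholders, and the bot is assumed to hold channel-operator rights (otherwise the MODE and KICK commands would be rejected):

```python
# Minimal sketch of a moderating IRC bot: join a channel, watch messages
# for banned phrases, and ban the offender. Server, channel, nick, and
# phrases are placeholders; channel-operator rights are assumed.
import socket

SERVER, PORT = "irc.example.net", 6667     # placeholder network
CHANNEL = "#moderated"                     # placeholder channel
NICK = "modbot"
BANNED_PHRASES = ("badword1", "badword2")  # placeholder phrase list

sock = socket.create_connection((SERVER, PORT))

def send(line: str) -> None:
    sock.sendall((line + "\r\n").encode())

send(f"NICK {NICK}")
send(f"USER {NICK} 0 * :moderation bot")
send(f"JOIN {CHANNEL}")

buffer = ""
while True:
    buffer += sock.recv(4096).decode(errors="replace")
    *lines, buffer = buffer.split("\r\n")  # keep any partial line buffered
    for line in lines:
        if line.startswith("PING"):        # keep the connection alive
            send("PONG" + line[4:])
        elif "PRIVMSG" in line and " :" in line:
            sender = line[1:].split("!", 1)[0]  # nick precedes the '!'
            text = line.split(" :", 1)[1].lower()
            if any(p in text for p in BANNED_PHRASES):
                send(f"MODE {CHANNEL} +b {sender}!*@*")  # ban the offender
                send(f"KICK {CHANNEL} {sender} :banned phrase")
```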

States, the private sector, and white, grey, and black hat hackers are already building bot armies. Their potential purposes are limited only by our imagination. We've already seen such armies undertake DDoS attacks, campaign on Twitter (disguised as humans) for President Trump, and be used for criminal financial gain.

States need to develop strategies that go beyond artificial intelligence and virtual reality alone. A forward-looking cyber strategy will also have to include the new actors of cyberspace – bots – in all policy domains, including defence, justice, economics, and foreign affairs.

It will not, however, be up to states alone to determine whether we will soon see intelligent bots as guardians of global peace and justice, or a global bot arms race and an AI battle over who gets to rule cyberspace.

Now more than ever, the tech industry, civil society, online interest groups, and hacktivists have a say in our joint digital future, and a moral responsibility to aim for a peaceful and just digital society.
