
23 October 2017

10 trends for the future of warfare

In science fiction movies we were used to seeing killer robots, machine-augmented heroes, laser-weapon battles in space, cyber war and the like. Now these ideas have started to liven up sober academic journals and government white papers. However, war is about much more than combat or how we fight. Is the sensationalism of high-tech weaponry blinding us to technology’s impact on the broader social, political and cultural context that determines why, where and when war happens, what makes it more or less likely, and who wins?

Consider artificial intelligence (AI). As robots relieve humans of their jobs, some societies will prove better prepared than others to use education and infrastructure to transition workers into new, socially sustainable and economically productive ways of making a living. Less prepared nations could see increasingly stark inequality, with economically excluded young people undermining social stability, losing faith in technocratic governance, and spurring the rise of leaders who aim popular anger at an external enemy.

New modes and artefacts of industrial production will also change demand patterns, empowering countries that control supply and transit and disempowering others. Progress in energy production and storage efficiency is likely to have profound consequences for the petro-economies and the security challenges of their regions. In the midst of a maritime dispute with Japan in 2010, China restricted exports of the “rare earths” that are critical for computing, sensors, permanent magnets and energy storage. With ever more commercial and military value embedded in the technology sector, such key materials will be deemed “critical” or “strategic” in terms of national security, and be subject to political as well as market forces.

In the 20th century, membership of the “haves and have-nots” nuclear weapons club became a major determinant of the post-war global order, and, as seen in the cases of Iran and North Korea today, it remains relevant. The stealth technology and precision-guided missiles used to impose a “new world order” in the early 1990s showed how a gap in military capability separated the United States from others, sustaining its leadership of a “unipolar” order. However, according to the current US Deputy Secretary of Defense Robert Work, “There’s no question that US military technological superiority is beginning to erode.”

The 10 trends for the future of warfare can be summarised as follows.

Waging war may seem “easier”. If increased reliance on machines for remote killing makes combat more abstracted from our everyday experience, could that make it more tolerable for our societies, and therefore make war more likely? Those who operate lethal systems are ever more distant from the battlefield and insulated from physical danger, but this sense of advantage may prove illusory. Those on the receiving end of technological asymmetries have a stronger incentive to find other ways to strike back: when you cannot compete on a traditional battlefield, you look to where your adversary is vulnerable, such as through opportunistic attacks on civilians.

Speed kills. “The speed at which machines can make decisions in the far future is likely to challenge our ability to cope, demanding a new relationship between man and machine.” The speed of technological innovation also makes it hard to keep abreast of new military capabilities, easier to be misled about the actual balance of power, and easier to fall victim to strategic miscalculation. Speaking at a conference on the future of the Army in October 2016, General Hix said: “A conventional conflict in the near future will be extremely lethal and fast, and we will not own the stopwatch.”

Fear and uncertainty increase risk. The expectation that asymmetries could change quickly, as may be the case with new strategic capabilities in areas like artificial intelligence, space, the deep sea and cyber, could incentivise risk-taking and aggressive behaviour. If you are confident that you have a lead in a strategically significant but highly dynamic field of technology, but not confident that the lead will last, you might be tempted to use it before a rival catches up. Under these conditions, war by mistake, whether through overconfidence in your ability to win or through exaggerated threat perception, becomes more likely.

Deterrence and preemption. When new capabilities cause a shift in the balance between offensive and defensive advantage, or even the perception of such a shift, the incentives for aggression can increase. For example, one of the pillars of nuclear deterrence is the “second strike” capability, which puts the following thought into the mind of an actor contemplating a nuclear attack: “even if I destroy my opponent’s country totally, their submarines will still be around to take revenge”. But suppose swarms of undersea drones were able to track and neutralise the submarines that launch nuclear missiles? Such capabilities make it possible, in theory, for an actor to escape the fear of second-strike retaliation and feel safer in launching a pre-emptive strike. Cyberattacks on banks, power stations and government institutions have already demonstrated that it is no longer necessary to fly bombers around the world to reach a distant enemy’s critical infrastructure without early warning.

The new arms race is harder to control. One of the mechanisms for strategic stability is arms control agreements, which have served to limit the use of nuclear, biological and chemical weapons. When it comes to the multiple combinations of technology that are a hallmark of the Fourth Industrial Revolution, one of the obstacles to international agreement is uncertainty about how strategic benefits will be distributed. For instance, the international community is currently debating both the ethics and the practicality of a ban on the development of lethal autonomous weapons systems. One of the factors holding this debate back from a conclusion is a lack of consensus among experts about whether such systems would give an advantage to the defender or the attacker, and hence be more likely to deter conflict or to incentivise its escalation. Where you stand on the issue may depend on whether you see yourself as a master of the technology, or a victim.

A wider cast of players. As cutting-edge technology becomes cheaper, it spreads to a wider range of actors. Consider the development of nuclear bombs, the last breakthrough in weapons technology that rewrote the rules of international security. Although the potential for a fission bomb was understood in terms of theoretical physics, putting it into practice involved thousands of scientists and billions of dollars, resources on a scale only a few nations could muster. More than 70 years later, the club of nuclear weapons states remains small and exclusive, and no non-state actor has succeeded in acquiring nuclear capability.
In contrast, more than 70 nations operate earth-orbiting satellites today, and nanosatellites are launched by universities and corporations. A committed enthusiast can now feasibly do genetic engineering in their basement. Other examples of dual-use technologies include encryption, surveillance, drones, AI and genomics. With commercial availability, proliferation of these technologies becomes wider and faster, creating more peer competitors at the state level and among non-state actors, and making it harder to broker agreements to stop them falling into the wrong hands.

The grey zone. The democratisation of weaponisable technology empowers non-state actors and individuals to create havoc on a massive scale. It also threatens stability by offering states more options in the form of “hybrid” warfare and the use of proxies to create plausible deniability and strategic ambiguity. When it is technically difficult to attribute an attack, as is already true with cyber and is becoming an issue with autonomous drones, conflicts can become more prone to escalation and unintended consequences.

Pushing the moral boundaries. Institutions governing legal and moral restraints on the conduct of war, or controlling proliferation, date from an era when massively destructive technology was reserved to a small, distinct set of actors, mostly states or people acting under state sponsorship. Today militaries are no longer necessarily at the cutting edge of technology: most of the talent driving research and development in today’s transformative dual-use technologies is privately employed, in part because the private sector simply has access to more money. For example, the private sector has invested more in AI research and development in the past five years than governments have since AI research first started. Diminishing state control of talent is epitomised by Uber’s recruitment of a team of robotics researchers from Carnegie Mellon University in 2015, which decimated a research effort they had been working on for the United States Department of Defense. State-centric institutions for maintaining international security have failed to develop a systematic approach to the possible long-term security implications of advances in areas as diverse as nanotechnology, synthetic biology, big data and machine learning.

Expanding domains of conflict. Domains of potential conflict such as outer space, the deep oceans and the Arctic, all perceived as gateways to economic and strategic advantage, are being opened up by new technologies and materials that can overcome inhospitable conditions. Like cyberspace, these domains are less well governed than the familiar domains of land, sea and air: their lack of natural borders can make them difficult to reconcile with existing international legal frameworks. Technological development in them is both rapid and private-sector driven, which makes it hard for governance institutions to keep up.

What is physically possible becomes likely. Political conflict is the “realm of exception” in all sorts of ways that make the morally unthinkable not only possible but more likely. Professor Ole Wæver and the Copenhagen School of international relations developed the concept of “securitisation” to describe how a security actor invokes the principle of necessity as a way of getting around legal or moral restraints. Policy-makers can argue that because non-state actors such as terrorist and criminal groups can access new technology, they are obliged to pursue weaponisation in order to prepare an adequate defence. Public disquiet can also be bypassed by conducting research in secret.

The Fourth Industrial Revolution is empowering the individual through technology, blurring the lines between war and peace, military and civilian, domestic and foreign, public and private, and physical and digital. Non-state groups’ leveraging of global social media, whether to gain support, undermine the morale of opponents, sow confusion or provoke a response that creates an advantage, has increased the strategic importance of shaping perceptions and narratives about international security. ISIS’s use of online videos provides an extreme example of a non-state actor using social media to drive recruitment, while state security services in select countries employ online “trolls” on a large scale. Consider the implications for democratic control over armed force when technologies like big data analytics, machine learning, behavioural science and chatbots are fully enlisted in the battle over perceptions and control of the narrative.

Little by little, the responsibility for defending citizens is effectively shifting away from the state and towards the private sector. It is, for example, your bank’s security chief who bears responsibility for protecting your money from international cyber theft, whether it comes from straightforward criminal groups or those acting under the sponsorship of sovereign states. A report by Internet security company McAfee and the think-tank CSIS estimated the likely annual cost to the global economy from cybercrime at more than $400 billion – roughly equivalent to the combined defence spending of the European Union, or the Asia region.

According to the 17th-century political theorist Thomas Hobbes, the citizen agrees to give up some freedom and render loyalty in exchange for protection, escaping the “natural condition” of life, which would otherwise be “solitary, poor, nasty, brutish, and short”. In return, the state expects respect for its laws. But if citizens lose confidence in the state’s capacity to guarantee their security, be it through military protection, domestic justice and policing, or social safety nets, they may also feel less of an obligation to be loyal to the state in return.
Could the relative loss of state power fatally undermine the system of international security?

As attitudes adapt to the new distribution of security responsibility between individuals, companies and institutions of governance, there is a need for a new approach to international security. There is plenty of room for debate about how that approach should look, but the baseline can be drawn through three points:

It will need to think long-term,

adapt rapidly to the implications of technological advances,

and work in a spirit of partnership with a wide range of stakeholders.

Institutional barriers between the civilian and military spheres are being torn down. Outreach to Silicon Valley is a feature of current US defence policy, for example, as are invitations to hackers to help the Department of Defense maintain its advantage in the digital domain. The “third offset strategy” promoted by former US Secretary of Defense Ashton Carter is based on a recognition that private-sector innovation has outstripped that of military institutions in the post-Cold War era, and that a more open relationship with business as well as with academic and scientific institutions could prove vital to maintaining the dominance of US military capabilities.

States and other security actors need to start exploring with each other the concepts and modes of operation that would make such a networked approach sustainable, legitimate and fit for the ultimate purpose of maintaining stability and promoting peaceful coexistence in the emerging international security landscape. Instead of meeting each other in court, as the FBI and Apple did to settle their dispute about encryption, security providers could meet across a table, under new forms of public oversight and agile governance, as partners in a common endeavour. Some of the questions that need to be answered are: What cast of actors populates this wider security ecosystem? What are the shared priorities in terms of risks? What are some of the potential models for peer-to-peer security? How can the Fourth Industrial Revolution be used to give citizens a stronger sense of control over choices of governance, or to deny space to criminal organisations and corrupt practices? Can smart contracts using blockchain technology be applied to build confidence in financial transactions and peace agreements? Can defensive alliances be expanded to include, or even consist entirely of, non-state actors? Should international law extend the right to use proportionate force in self-defence in cyber conflict to commercial actors? What aspects of these challenges are a matter for legal instruments and regulation, and what aspects will require a new approach?

The answers that may emerge to these questions are unpredictable – but what is clear is the need to have a conversation that reaches across generations and across disciplines. This conversation has to be global. International security is threatened by a loss of trust, in particular between those who drew power from the last industrial revolution and those whose power is rising within a fluid and complex environment. The conversation needs to foster mutual understanding, dispel unjustified fears, and revive public confidence in new forms of responsive leadership that manifestly serve the common good.

[ This article is based on a World Economic Forum project on the relationship between the Fourth Industrial Revolution and International Security, drawing on conversations at a number of World Economic Forum events in 2015 and 2016. ]

In my next paper I shall discuss the defence implications of emerging technologies.


A ‘World Without Mind’: Big Tech’s Dangerous Influence


French philosopher Rene Descartes famously said “I think, therefore I am.” But in the digital age, what we think and how we live are being influenced in a big way by just a handful of tech firms: We are informed by Google and entertained by Apple; we socialize on Facebook and shop on Amazon. It’s time to reclaim our identities and reassert our intellectual independence, according to Franklin Foer, a national correspondent for The Atlantic and former editor of The New Republic, in his book, World Without Mind: The Existential Threat of Big Tech. He recently joined the Knowledge@Wharton show, which airs on SiriusXM channel 111, to explain why these firms’ hold on society is a cautionary tale for the future.

A peek inside Army cyber protection teams

By: Mark Pomerleau 

Of the four types of teams that make up the cyber mission force — the 133-team cadre of cyber warriors the four service branches provide to U.S. Cyber Command — cyber protection teams (CPTs) serve as the quick reaction defensive force responding to network intrusions.

Rumbles of the Quantum Computing Revolution in Security


FRITZ LODGE 

Imagine a sensor that could instantly detect nuclear submarines deep underwater, a supercomputer that can break the strongest encryption in the blink of an eye, or a worldwide satellite network of theoretically unbreakable communications. These are just a few of the capabilities promised by quantum physics, a century-old science, which found that particles have unique and unexpected properties at the smallest scale. Scientists have long theorized that these properties could revolutionize computing, sensing and a host of other technologies.

The End of Internet Exceptionalism?

Jeremy White

Long accustomed to lauding technology companies as paragons of American creativity and entrepreneurship, legislators sifting through evidence of Russian election influence are turning their attention to how the freewheeling world of online speech has permeated our politics. 


Understanding Disinformation


Disinformation is a relatively new word. Most observers trace it back to the Russian word dezinformatsiya, which Soviet planners in the 1950s defined as “dissemination (in the press, on the radio, etc.) of false reports intended to mislead public opinion.” Others suggest that the earliest use of the term originated in 1930s Nazi Germany. In either case, it is much younger (and less commonly used) than “propaganda”, which originated in the 1600s and generally connotes the selective use of information for political effect.

22 October 2017

DoD says it shouldn’t protect homeland from cyberthreats; McCain disagrees

By: Mark Pomerleau  

In a heated exchange before the Senate Armed Services Committee on Oct. 19, the committee’s chairman sparred with the Department of Defense’s principal cyber adviser over the Pentagon’s roles in protecting the nation in cyberspace. “Although DoD has built capacity and unique capabilities, for a number of reasons, I would caution against ending the current framework and against reassigning more responsibility for incident response to the Department of Defense,” Kenneth Rapuano, assistant secretary of defense for homeland defense and global security and principal cyber adviser, wrote in his prepared testimony.

Marines Sent Team to Middle East to Test Cyber Vulnerabilities

BY: HOPE HODGE SECK

The Corps deployed a first-of-its-kind cyber protection team to the Middle East to help crisis response troops shore up communications and patch vulnerabilities to a growing range of cyber threats. (Photo caption: A sailor with 553 Cyber Protection Team opens a network monitoring program during I Marine Expeditionary Force’s Large Scale Exercise 2016 at Marine Corps Air Station Miramar, Calif., on Aug. 22, 2016. Cpl. Garrett White/Marine Corps.)

Protect your privacy: A hacker’s guide to being cyber-safe

By: Timothy Summers

Protecting individual privacy from government intrusion is older than American democracy. In 1604, the attorney general of England, Sir Edward Coke, ruled that a man’s house is his castle. This was the official declaration that a homeowner could protect himself and his privacy from the king’s agents. That lesson carried into today’s America, thanks to our Founding Fathers’ abhorrence for imperialist Great Britain’s unwarranted search and seizure of personal documents.

DoD still working toward CYBERCOM elevation

By: Mark Pomerleau

President Donald Trump, in accordance with Congressional decree, directed Cyber Command to elevate to a full unified combatant command out from under Strategic Command in August. The Defense Department is “in the throes” of making this happen right now, Maj. Gen. Burke “Ed” Wilson, acting deputy assistant secretary of defense for cyber policy and deputy principal cyber adviser to the secretary of defense, said during a media roundtable Oct. 16.


21 October 2017

Why governments should protect us from barely-taxed tech monopolies

by Franklin Foer

In our day, we can’t quite see anything wrong with monopoly. We’re certain that our tech giants achieved their dominance fairly and squarely through the free market, by dint of technical genius.

To conjure this image of meritocratic triumph requires overlooking several pungent truths about the nature of these new monopolies. Their dominance is less than pure.

They owe their dominance to innovation, but also to tax avoidance.

DoD still working toward CYBERCOM elevation

By: Mark Pomerleau 

The Defense Department is “in the throes” of making this happen right now, Maj. Gen. Burke “Ed” Wilson, acting deputy assistant secretary of defense for cyber policy and deputy principal cyber adviser to the secretary of defense, said during a media roundtable Oct. 16.

Wilson explained they’re in the final stages of trying to get through the elevation process with a working group now up and running.

20 October 2017

Raising the Consequences of Hacking American Companies


In early October, lawmakers were attempting to glean information from Facebook and Twitter about Russia-backed bot accounts deployed to interfere in the 2016 U.S. election. At the same time, U.S. businesses and critical infrastructure face a distinctive state cyber-interference threat of their own. In May of this year, the “WannaCry” cyber-attack took the world by storm. For many ordinary people, it was their first encounter with the phenomenon known as ransomware. The hackers hijacked computers across the globe, from Britain’s National Health Service (NHS) to FedEx, and demanded that the owners pay to recover their data. Perhaps the most noteworthy aspect of the attack was WannaCry’s source, which the UK’s National Cyber Security Centre and private U.S. cybersecurity researchers have suggested is North Korea. A few weeks later, another purported ransomware attack named NotPetya emerged, this time mostly affecting Ukrainian computer networks. Though NotPetya ostensibly sought to extort its victims, some researchers quickly concluded that the malware’s true purpose was to harm the devices it infected. The Ukrainian government blames Russia for the hack, which Ukraine claims was politically motivated.

Why the world should worry about North Korea's cyber weapons

By Joshua Berlinger

North Korea's hackers have been accused of carrying out some of the most audacious cyber attacks of the past few years, from siphoning millions of dollars to stealing state secrets.

Analysts say cyber capabilities have become a key asset in North Korea's war chest, used for a wide range of purposes including hacking adversaries like South Korea and pilfering money.

Massive drill validates Israel’s cyber-secure C4I network

By: Barbara Opall-Rome 

A two-week drill of the Israel Defense Forces’ Northern Corps ― nearly a year in the making ― involved some 20 brigades, air power from all Israeli Air Force bases, the bulk of the Israeli Navy surface and submarine force, and more.

After-action analysis from last month’s massive drill at Israel’s northern border has validated, with very few exceptions, more than a decade’s worth of development, deployment and operational procedures associated with the military’s cyber-secure, C4I-operational network, the military’s chief signal officer said.

Data Bust: Created to Help Counter the Threat of IEDs, the Pentagon’s JIEDDO Turned Out to be a Huge and Very Costly Flop



Kelsey Atherton

On Oct. 1, 2017, a roadside bomb northwest of Baghdad killed Spc. Alexander W. Missildine. The 20-year-old was the latest American soldier to die in a war that had lasted, in some form or another, since he was in kindergarten. And as much as the Iraq War had changed over the past 14 years, the weapon that killed Missildine—the improvised explosive device, or IED—remains just as potent, and just as vexing, as it was when the U.S. originally invaded Iraq.

‘Safe Cities Index’ highlights paradox of tech advancement, cyber vulnerability

By: Brad D. Williams 

The Economist Intelligence Unit released its 2017 Safe Cities Index, a biennial study that ranks 60 global cities using 49 indicators of safety across four categories: digital security, health security, infrastructure security and personal security.

Tokyo, Singapore, Osaka, Toronto and Melbourne top the global index using all four categories of indicators. At number 15, San Francisco ranked as the safest U.S. city, with Los Angeles (18), Chicago (19), New York (21) and Washington, D.C., (23) rounding out the U.S.‘s top five in the index.

Cyber Command stands up planning cells at combatant commands

By: Mark Pomerleau

Cyber Command has stood up forward-deployed planning cells within the combatant command staffs to help better coordinate offensive and defensive cyber effects.

The entities, called Cyber Operations-Integrated Planning Elements, or CO-IPE, are just weeks old. They will do all the planning for Department of Defense Information Network operations, defensive cyber operations, internal defensive measures and offensive cyber operations, said Army Col. Paul Craft, the director of operations (J3) at Joint Force Headquarters-DoDIN, who spoke Tuesday during a presentation at the Cyber Pavilion of the annual Association of the U.S. Army conference.

Army doubles down on WIN-T’s ‘fight tonight’ problem

By: Amber Corrin 

Less than two weeks after Army officials announced they plan to nix the service’s $6 billion battlefield network backbone, leaders are emphasizing the need to immediately move forward with alternate solutions in order to save troops’ lives.

Amid congressional inquiry, officials said Sept. 27 that the Warfighter Information Network-Tactical, or WIN-T, program would end as Army leaders reroute funding to alternate capabilities that are more agile, secure and threat-responsive. The move involves shifting nearly half a billion dollars in funding in order to better secure communications in the theater.

4 areas where military cyber forces should focus in cyberspace

By: Mark Pomerleau

The cyber domain as an operational environment is still relatively new, and the Department of Defense is still working out tactics, techniques, procedures and authorities in cyberspace for military operations.

But despite the DoD and NATO declaring cyberspace a domain of warfare, “nobody has defined what that means,” said Alex Crowther, of the National Defense University.