DO ANDROIDS DREAM OF TECH REVIEWS: THE FUTURE OF WEAPONS TECHNOLOGY AND HUMANITARIAN LAW

On 3 July 1988, somewhere in the Persian Gulf, Iranian Revolutionary Guard gunboats exchange fire with the guided-missile cruiser USS Vincennes. Expecting a retaliatory attack, Vincennes’ crew are on high alert. Aegis, Vincennes’ state-of-the-art, on-board automated weapons system, detects an F-14A Tomcat fighter flying in nearby Iranian airspace. It seeks, and is granted, permission to fire. Such is a normal occurrence in the age of increasingly mechanised, increasingly automated warfare.

Only the plane Aegis identified as a Tomcat fighter wasn’t a military aircraft at all, but the civilian Airbus flying as Iran Air Flight 655. Vincennes’ crew did not realise Aegis’ mistake until two radar-guided missiles had destroyed the civilian carrier, killing all 290 on board. Not one of Vincennes’ eighteen officers questioned Aegis’ determination, despite readily available data that contradicted it.

The fate of Flight 655 raises serious questions about how international law is guiding the development of future weapons technology such as automated weapons systems. Technologies in the nebulous stages of development do not have crystallised norms to guide their creation and use. We need clear boundaries and guidelines to help determine which kinds of weapons are legally and morally acceptable, and which are not.

 

WHAT IS THE CURRENT STATE OF THE LAW?

Despite the increasing sophistication and complexity of future weapons technology, and the additional challenges it poses, weapons reviews conducted under Article 36 of Additional Protocol I remain the key control international humanitarian law exerts over tech development. This is concerning for a number of reasons.

Weapons reviews are by their very nature retroactive in scope, meaning that even the most carefully executed and authoritative reviews do not ‘control’ weapons development in the sense that might be hoped for. Further, Article 36 weapons reviews carry no authority beyond that which a state affords them, and even then they only assist in ensuring the minimum standards required by humanitarian law. These standards require that a weapon must not:

  1. Target indiscriminately;
  2. Cause superfluous injury or unnecessary suffering; or
  3. Cause widespread, long-term and severe damage to the natural environment.

A weapon is banned outright if its very nature is repugnant to these rules; otherwise, specific uses of the weapon may be restricted. Weapons may also be banned on an individual, ad hoc basis, but this is usually unhelpful for weapons that have not yet been developed or even invented.

The different rules are of more or less concern depending on the technology in question. The most pertinent issue for automated weapons systems is that of indiscriminate targeting: whether the system can correctly distinguish viable targets from civilians or those hors de combat. Aegis’ attack on Flight 655 is an example of an automated weapons system’s failure to adhere to this law. But what exactly is an automated weapons system, and what unique legal challenges does it pose for developers?

 

NEW FRONTIERS: AUTOMATED WEAPONS SYSTEMS

An automated weapons system is a robotic weapons system that acts with an awareness of its environment and an understanding of higher-level intent and direction. For example, an automated weapons system may, through its sensors and in accordance with a computer algorithm, identify a target and evaluate whether an attack would bring about a state of the battlefield congruous with a higher-level directive. An automated weapons system does not merely respond to input, but can choose a course of action from alternatives, without human oversight or control, in order to achieve objectives commensurate with programmed directives. Drones, ships and tanks are commonly equipped with automated weapons systems.
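To make that definition concrete, consider a minimal, purely illustrative sketch of such a decision loop. Every name in it is hypothetical, and the logic is a deliberate caricature of systems that are in reality far more complex (and largely classified):

```python
# Purely illustrative sketch of an automated weapons system's decision
# loop. All names are hypothetical; no real system works this simply.

from dataclasses import dataclass

@dataclass
class Track:
    """A contact reported by the system's sensors."""
    identity: str      # e.g. "military", "civilian", "unknown"
    confidence: float  # classifier confidence, 0.0 to 1.0

@dataclass
class Directive:
    """Higher-level commander's intent the system must serve."""
    description: str
    permits_engagement: bool

def choose_action(track: Track, directive: Directive) -> str:
    """Choose a course of action from alternatives, without human input."""
    # The system does not merely react to input: it weighs the detected
    # track against its higher-level directive before acting.
    if not directive.permits_engagement:
        return "hold"
    if track.identity == "military" and track.confidence > 0.9:
        return "engage"
    if track.identity == "unknown":
        return "continue_tracking"
    return "hold"

# The Flight 655 failure mode in miniature: a civilian aircraft
# misclassified as military with high confidence sails straight through.
misidentified = Track(identity="military", confidence=0.95)
print(choose_action(misidentified, Directive("defend the ship", True)))
# -> "engage", even though the aircraft's true identity was civilian
```

The point of the caricature is that the legal question of discrimination collapses into whatever the classifier reports: garbage in, missiles out.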

Despite the fact that technology has advanced greatly since the Vincennes incident, automated weapons systems are still not capable of determining the legality of an attack where collateral damage is expected. This necessitates continued human input in strategic decisions before automated weapons systems act, but this presents a problem: though there are no truly ‘autonomous’ weapons currently in use, the speed, durability, efficiency and computational power of automated weapons systems have rendered the human body and brain obsolete in the context of increasingly fast-paced and increasingly mechanised warfare. For example, in predicted warfare scenarios where automated weapons systems engage each other in battle, the requirement of human input represents such a serious disadvantage that it will likely be forgone altogether. Think super-smart, militarised Siris deciding to strike key enemy infrastructure as casually as scheduling your next coffee date, without even asking you.

The trend of edging out direct human control is one that will continue, perhaps surreptitiously, and it raises serious questions about liability for breaches of the law. Humans might soon occupy a purely ‘supervisory’ role in combat, controlling multiple automated weapons systems at once. ‘Supervisors’ would delegate tasks to the systems and act as a ‘veto’ power in instances where an automated weapons system incorrectly identifies targets, or is unaware of a greater directive that precludes an attack. This is problematic given the complex psychological relationship that forms between human and machine when interacting with automated weapons systems: USS Vincennes illustrates that humans may defer to the findings of the system even when they have drawn different conclusions from the same data. Alternatively, operators controlling multiple automated weapons systems may fail to notice such discrepancies in the first place. Though technical improvements may be making automated weapons systems better at appearing to adhere to humanitarian law, there is a human element inherent in our relationship with these systems that the weapons review process does not account for.
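The supervisory arrangement described above can be sketched in the same illustrative spirit. The names below are hypothetical; the point is that the ‘veto’ is only as strong as the attention behind it:

```python
# Illustrative sketch of 'human-on-the-loop' supervision: each system
# proposes an action, and a human may veto it. All names are hypothetical.

from typing import Callable, List, Tuple

def supervised_engagement(
    proposals: List[Tuple[str, str]],
    review: Callable[[str, str], bool],
) -> List[Tuple[str, str]]:
    """Run each system's proposed action past a human veto.

    proposals: (system_id, proposed_action) pairs
    review: returns True to approve, False to veto
    """
    approved = []
    for system_id, action in proposals:
        # Automation bias in one line: with many systems and seconds to
        # decide, 'review' can collapse into rubber-stamping the machine.
        if review(system_id, action):
            approved.append((system_id, action))
    return approved

# A plausible failure mode: an overloaded supervisor who never vetoes.
overloaded = lambda system_id, action: True
proposals = [("aws-1", "engage track 4131"), ("aws-2", "hold position")]
print(supervised_engagement(proposals, overloaded))
# -> everything approved, exactly as if no human were in the loop at all
```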

This is concerning given current technical limitations, as well as the next logical progression of automated weapons systems into…

 

ARTIFICIAL INTELLIGENCE

Autonomy should be distinguished from artificial intelligence (AI); however, sophisticated automated weapons systems are encroaching on the AI threshold. Decisions by artificially intelligent computers would be made according to adaptive situational understanding rather than pre-set rules. There would be no limit to a computer’s knowledge of higher-level directives, and artificially intelligent computers would be capable of innumerable autonomous calculations. This, coupled with the superior processing speed and memory of a computer, amounts to an entity far more capable of making effective strategic decisions than any human. As in, forget the coffee date: this super-powered Siri has already pencilled in your wedding and the honeymoon, and planned a transnational assault on a troublesome non-state terrorist group at the same time. Don’t like your new spouse? Too bad: this computer is smarter than you and knows it, and isn’t going to put up with your human dissent.
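Setting the matchmaking aside, the distinction between pre-set rules and adaptive situational understanding can be caricatured in code. Neither function below describes any real system; the second is a hypothetical stand-in for whatever learned model an artificially intelligent system would actually use:

```python
# Caricature of the autonomy/AI distinction. Both functions are
# hypothetical stand-ins, not descriptions of real systems.

def automated_decision(track_identity: str) -> str:
    """Autonomy: behaviour flows from rules fixed by programmers in advance."""
    rules = {"military": "engage", "civilian": "hold", "unknown": "track"}
    return rules.get(track_identity, "hold")

def artificially_intelligent_decision(situation: dict) -> str:
    """AI: the policy itself adapts to situational understanding, so the
    programmers cannot enumerate its behaviour in advance."""
    # Stand-in for a learned model: weigh situational features and pick
    # whichever course of action the adapted policy currently scores best.
    scores = {
        "engage": 0.4 * situation.get("threat", 0.0),
        "hold": 0.6 * situation.get("civilian_presence", 0.0),
        "track": 0.5 * situation.get("uncertainty", 0.0),
    }
    return max(scores, key=scores.get)

print(automated_decision("unknown"))  # always "track": the rule is fixed
print(artificially_intelligent_decision(
    {"threat": 0.9, "civilian_presence": 0.2, "uncertainty": 0.1}
))  # "engage" today; but the weights, unlike the rules, can change
```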

The notion of artificial intelligence is usually dismissed as science-fiction nonsense, or a problem reserved for the far-flung future, but institutions and guidelines governing the use and development of artificially intelligent machines must be created now because, as has been observed, it may be impossible to limit the actions of a truly intelligent machine once it is created. Article 36 weapons reviews, as the sole legal mechanism aimed at controlling the development of future weapons technologies, are clearly not capable of controlling the development of AI: they are purely retroactive in scope and do not consider the far-reaching effects of tech development beyond military use.

 

WHAT NOW?

Emerging technologies such as automated weapons systems have clearly outgrown blanket humanitarian law standards. Drastic but careful reform in this area is required. New standards, specific to the development of individual technologies, must be promulgated long before tech development begins. Weapons reviews, which are purely reactive in scope, are not up to this task. So what should be done? That must be determined by examining the unique characteristics of the technology in question. Treaties banning specific uses of technologies provide clear signals as to acceptability, but may not be politically viable in many instances. Aspirational treaties and promulgations are useful, but may not go far enough. In any case, a relationship with technology founded solely on the terms of humanitarian law is short-sighted and archaic. What is needed is a concerted effort from states to reach agreement on the relationship that we want to have with technology. It will not be an easy task, but it is an essential one, because the potential cost to humankind is far too steep to ignore.

 

Mara Papavassiliou is a Juris Doctor student who is terrified at the thought of summarising her existence in a two-sentence by-line.  
