Our Better [AI] Angels: A Deontological Defense of Artificial Intelligence Decision-Making in War

The moral decision was a terrible one because of the target’s cowardly use of human shields.  Every time he left the house on his motorbike, a young boy sat on his lap.  Important meetings, where multiple high-value targets plotted the deaths of hundreds of innocent civilians, were held in madrassa schools full of children.  Every opportunity to snuff out the threat of continued terror was accompanied by heartbreaking collateral damage.  Finally, on a clear morning, an opportunity presented itself, thanks to the remarkable evolution in the speed of drone technology.  The window for a strike was almost momentary but sufficient.  The target approached another high-value target with only a single military-age teenager in the vicinity.  Collateral damage estimates showed a high likelihood of the teenager’s death, but there was also a possibility the teenager was engaged in activity supporting the terrorist network managed by the primary target.  It was a close call, but the value of the target made the level of collateral damage acceptable.  The loss of the teenager’s life was ethically acceptable.  The decision was made in milliseconds.  The strike unfolded as the models predicted: primary target eliminated; collateral damage unfortunate yet acceptable.  Here is the catch: suppose the decision to strike was made solely by Artificial Intelligence (hereinafter “AI”).  Untethered from human operators, AI may enable previously unimaginable speed, flexibility, and efficiency on the battlefield.  Yet every human being naturally recoils at the notion of a robot deciding life and death.

AI increasingly influences the everyday life of first-world citizens.  Despite society’s general acceptance of AI, we maintain a collective uneasiness about its employment on the battlefield.  From the movie “Eye in the Sky” to the real-life Google employee revolt over working on military AI, our society struggles with the ethics of AI making battlefield decisions.[i]  In Outsourcing War to Machines, Paul Springer locates the crux of the ethical dilemma in the question of whether it is ethical for robots to use “a powerful artificial intelligence to interpret moral guidelines . . .” and operate as independent decision-makers on the battlefield.[ii]  Applying a deontological model of ethical decision-making, this article argues it is appropriate and desirable to use AI with predefined systems to make “moral” targeting decisions on the battlefield.

Deontological ethics is most commonly associated with the Christian worldview.  It teaches absolute moral rights and wrongs that must be followed regardless of practical consequences.  Thus, even if a morally wrong killing appears to bring about a morally good result, deontological ethics teaches that the killing should not be carried out.  The United States remains a primarily Judeo-Christian culture, making the deontological lens the most practical one for wrestling with the ethical issues presented.

The Law of Armed Conflict (hereinafter “LOAC”) principle of proportionality offers the most helpful lens for understanding the ethical dilemma of applying AI decision-making to the battlefield.  The rule of proportionality, set out in Additional Protocol I to the Geneva Conventions, prohibits an attack that may “be expected to cause incidental loss of life, injury to civilians, [or] damage to civilian objects . . . which would be excessive in relation to the concrete and direct military advantage anticipated.”[iii]  Simply put, proportionality asks whether the collateral damage inflicted is excessive in light of the military advantage gained.  Such an analysis defies simple equational logic.  A high-value target may allow for a greater degree of collateral damage.  In contrast, a low-value target may render no level of collateral damage acceptable.  Quantifying collateral damage is a sophisticated moral exercise.  Is a child’s life more valuable than an adult’s?  What if the child is aiding the enemy and the adult is a purely innocent bystander?  Empowering AI to make such nuanced moral judgments raises deep ethical concerns.

In considering the ethical pitfalls of empowering AI with predefined “moral” systems, it is important to note that the conversation can easily become bogged down in a discussion of what is technologically plausible.  The debate quickly devolves into the question of whether AI can ever be capable of such advanced decision-making.[iv]  No serious scholar is suggesting the military use AI to make “moral” decisions where AI does not demonstrate such a capability.  Instead, the ethical options at hand are to reject wholesale the use of AI as an autonomous decision-maker or to work within the confines of technological capabilities.  The best analogy for the latter option could be termed the “police dog” option.  Canines are not moral actors, yet they are trusted to perform certain law enforcement functions.  Using canines to identify and attack assailants does not mean dogs will also be asked to decide whether a victim is telling the truth.  Just as we employ canines within their constraints, we may do the same with AI.

There is also the somewhat related issue of the “AI control problem,” the concern that AI may eventually displace humans as the dominant power.[v]  This issue is only tangentially related to the problem of AI decision-making in war.  Empowering AI to the extent of lethal autonomous decisions could arguably bring AI one step closer to dominance.  Grossly simplifying, from a purely philosophical standpoint, the Christian worldview teaches that this is not a future to fear.  Atheists often form the academic wing preaching the dangers of AI control, and for good reason.  If the world is seen as an evolutionary struggle between species in which humanity has risen to the top of the heap, there is nothing to preclude some new being from claiming the mantle.  However, if one believes that evil in the world is unique to human nature, the only thing we have to fear from AI is the imprint of ourselves.  From a Christian deontological worldview, AI is no different from nuclear weapons, automatic weapons, or any other technological advancement in warfare: each could be used by the evil in human nature for wanton destruction.

Once these corollary issues are untangled, the question of preprogrammed AI making battlefield decisions becomes far simpler.  Under a deontological ethical model, lethal autonomous decision-making should be viewed as an opportunity to guarantee that absolute moral precepts will be followed, securing our better angels on the battlefield.  Numerous scholars point out that AI is bound by its programming, meaning it can be made to follow clear and absolute moral principles.[vi]  Working within technological constraints, AI could be employed in the “police dog” function discussed above to make straightforward moral decisions: self-defense, defense of another, and strikes without collateral damage.  Even where collateral damage is expected, technology may allow for some basic proportionality reasoning.  Recalling the “deck of cards” used in Iraq, where targets were assigned differing values based on military necessity, AI may be programmed to accept corresponding levels of collateral damage.  Essentially, the “Two of Spades” may disallow any collateral damage while the “Ace of Hearts” allows certain levels.  Human programmers may set collateral damage limits at very conservative levels to safeguard moral precepts.  Close calls of proportionality should be elevated to human decision-makers, while more clear-cut strikes might be carried out with the greater speed and flexibility afforded by AI decision-making, as the sketch below illustrates.
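To make this threshold-and-escalation logic concrete, the following minimal sketch expresses it in Python.  It is only an illustration of the argument, not a real targeting system: the tier names, numeric collateral damage limits, and the “close call” margin are invented for the example and carry no doctrinal weight.

from enum import Enum

class Decision(Enum):
    STRIKE = "strike"                # clear-cut case, within preset limits
    ESCALATE = "escalate_to_human"   # close call: a human decides
    ABORT = "abort"                  # collateral damage exceeds the preset limit

# Hypothetical "deck of cards" table: the maximum collateral damage estimate a
# human programmer has pre-approved for each target tier.  A limit of zero
# means no collateral damage is ever acceptable for that target.
MAX_COLLATERAL_BY_TIER = {
    "two_of_spades": 0.0,  # low-value target: no collateral damage allowed
    "ace_of_hearts": 2.0,  # high-value target: small, pre-approved allowance
}

CLOSE_CALL_MARGIN = 0.25  # within this fraction of the limit, defer to a human

def proportionality_decision(target_tier: str, estimated_collateral: float) -> Decision:
    """Return STRIKE, ESCALATE, or ABORT for a proposed engagement."""
    limit = MAX_COLLATERAL_BY_TIER.get(target_tier, 0.0)  # unknown tiers get zero
    if estimated_collateral > limit:
        return Decision.ABORT
    if limit > 0 and estimated_collateral >= limit * (1 - CLOSE_CALL_MARGIN):
        return Decision.ESCALATE  # near the limit: hand the close call to a human
    return Decision.STRIKE        # clearly within the conservative preset bounds

print(proportionality_decision("two_of_spades", 1.0))  # Decision.ABORT
print(proportionality_decision("ace_of_hearts", 1.9))  # Decision.ESCALATE
print(proportionality_decision("ace_of_hearts", 0.5))  # Decision.STRIKE

The point of the sketch is only that the limits are fixed in advance by human beings and that anything approaching a limit is handed back to a human, which is the conservative posture the deontological argument requires.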

For the deontologist, the greatest advantage of AI on the battlefield may be taking humanity out of the equation.  In the realm of autonomous cars, we fret over the potential for AI mistakes while ignoring the human potential for evil in the form of drunk driving.  Similarly, on the battlefield, we obsess over what AI might decide on a difficult question of collateral damage while seemingly disregarding the My Lai Massacre or the Kunduz hospital airstrike, instances in which AI might have saved countless innocents.  Likewise, in a “Lone Survivor” ethical dilemma,[vii] humankind’s sense of self-preservation may override the deontological principle not to execute the unarmed.  AI faces no such temptation and will simply do what is right.

Now consider a new hypothetical: a weapons system comes under fire from its hostile target, which is using human shields.  Within milliseconds, the system determines that it cannot engage in self-defense without unacceptable loss of noncombatant life.  Without the temptation to accomplish the mission regardless of the means, or even the instinct of self-preservation, it accepts mission failure and potential destruction.  AI has secured our “better angels” on the battlefield.


Major John Reid is a student at Air Command and Staff College, Maxwell Air Force Base, AL.

The views in this article reflect those of the author and not those of the Department of Defense or the U.S. Air Force.

Notes:

[i] Eye in the Sky, directed by Gavin Hood (Toronto, Canada: Entertainment One, 2015); Drew Harwell, "Google to drop Pentagon AI contract after employee objections to the 'business of war,'" The Washington Post, June 1, 2018, https://www.washingtonpost.com/news/the-switch/wp/2018/06/01/google-to-drop-pentagon-ai-contract-after-employees-called-it-the-business-of-war/.

[ii] Paul J. Springer, Outsourcing War to Machines: The Military Robotics Revolution (Santa Barbara, CA: Praeger Security International, 2018), 149.

[iii] Geneva Conventions, Additional Protocol I, Article 57.2(b).

[iv] Springer, 168. 

[v] Nick Bostrom, "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents," Minds and Machines 22, no. 2 (2012); Rory Cellan-Jones, "Stephen Hawking warns artificial intelligence could end mankind," BBC, December 2, 2014, https://www.bbc.com/news/technology-30290540.

[vi] J. N. Hooker and Tae Wan Kim, "Toward Non-Intuition-Based Machine and Artificial Intelligence Ethics: A Deontological Approach Based on Modal Logic," Artificial Intelligence, Ethics, and Society (February 2018), 1.

[vii] This ethical dilemma is portrayed in the Hollywood film “Lone Survivor,” in which a group of Navy SEALs must decide whether to release civilians who will report their movements to hostile forces, meaning near-certain death for the team, or to summarily execute the civilians.


