Re-published from a 2019 Medium post:

“Now I am become Death, the destroyer of worlds.”

A mushroom cloud

Since 1945, these words have haunted some and fallen on deaf ears for others. When Oppenheimer quoted the Bhagavad Gita, he had grasped the impact of one of the greatest inventions in the history of the human race, an invention of his own creation: the atomic bomb. That advancement came out of the need to outpace wartime enemies, as much of our technological evolution has. The outcome of war is often decided by technological advantage, but at a certain point we must ask whether we have gone too far in the vicious cycle of one-upmanship. In 1925, before Oppenheimer's realization, the Geneva Protocol prohibited the use of chemical and biological weapons, showing that some technologies are deemed too inhumane to use in warfare. Our own era is no exception to this pattern. We make innovation after innovation in the name of warfare, yet there is no modern parallel to the Geneva Protocol. The weaponization of drones has wholly transformed the face of interstate warfare, and government agencies across the world are developing advanced artificial intelligence systems that will soon add autonomous weapons to their arsenals.

As the wheel of technological advancement continues to roll, it is imperative that we stop to think about the ramifications of using such powerful inventions to carry out the decisions of military leadership more efficiently (e.g., large-scale killing of enemies, hacking civilian-owned systems). The link between technology and warfare raises critical ethical questions about the government's relationship with its citizens. Namely, we must ask whether increasing the potency of our weapons will lead to increased harm, whether or not we intend the consequences. Moreover, computer scientists should look beyond the technical challenges put in front of them and spread accurate, easily digestible information about the danger of readily weaponized algorithms, so that the public understands their gravity in the context of warfare.

All this being said, the civilian work that led up to Oppenheimer's breakthrough at Los Alamos was just as vital as the eventual creation of the atomic bomb. There is, and always has been, an intricate relationship between a government's efforts in warfare and its civilian population. Nowadays, Silicon Valley is home to many such civilians. These developers are not inventing bombs; many are instead making advancements in artificial intelligence. For the most part, Silicon Valley developers have entirely non-governmental goals. Nevertheless, technology developed by an autonomous driving company could be repurposed for autonomous warfare. Most developers do not realize that their inventions (or, more commonly, smaller pieces of those inventions) could be used to kill huge numbers of people. This problem seems insurmountable at first, but we could partially mitigate it by (1) evaluating each piece of technology's potential for misuse, and (2) creating regulation for human rights issues before they arise. Both are lofty goals, especially when their targets are private citizens, but they are necessary if we wish to prevent the worst ethical consequences.

Furthermore, those who repurpose the technology (likely military personnel) bear less of the psychological toll of killing when added barriers enter the moral feedback loop. To illustrate this simply, we might define a first-order weight on one's conscience as the act of personally killing another human being. Consider heat-seeking missiles or weaponized drones: people operate these drones remotely, but the violent acts the drones carry out are never seen with the operator's own eyes. This could be considered a second-order moral weight. A third-order moral weight would be the complete removal of humans from the loop. Consider the creation of an autonomous weapon; its creator would never have to witness the ramifications of their actions. The shift in our warfare from first- to second-order weights has already taken place, and we have already seen great tragedy caused by strategies like the 'double-tap' drone strike, which has killed numerous civilians and first responders. The shift from second- to third-order moral weights is far more worrying, even though it has not occurred yet. Reducing the act of killing to the incrementation of an int i would have far-reaching and lasting effects.

General Atomics MQ-1 Predator, the most commonly used drone to carry out American drone strikes

The world needs to know how much destruction can result when no one has to face the guilt of war's effects. We are beginning to realize how ruthlessly powerful algorithms can be at large scale, but what we have not yet realized, as a global society, is how easy it is to change the target of this powerful technology. After all, with enough layers of abstraction, how different is a video game aim-bot from a system that aims real guns and kills real people? In my opinion, those who realize how similar these systems are have a duty to those who don't. Computer scientists need to keep the power of AI-driven killing from being unleashed on the masses, and we should share our knowledge with the rest of the public. Apprehension may need to start outside the realm of computer science, but it is absolutely essential that this issue is embraced and amplified by those who understand its gravity best.
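To make the abstraction point concrete, here is a minimal, purely illustrative sketch. Every name in it is hypothetical and corresponds to no real system; it only shows how a targeting loop written against an abstract interface never knows whether the code behind that interface draws a crosshair in a game or commands physical hardware.

```python
from abc import ABC, abstractmethod

class Effector(ABC):
    """Anything that can act on a selected target."""
    @abstractmethod
    def engage(self, target_id: str) -> None: ...

class GameEffector(Effector):
    def engage(self, target_id: str) -> None:
        # In a game, "engaging" just updates pixels and a score counter.
        print(f"[sim] snapped crosshair to {target_id}")

class HardwareEffector(Effector):
    def engage(self, target_id: str) -> None:
        # A real system would issue an actuation command here instead.
        print(f"[hw] actuation command issued for {target_id}")

def targeting_loop(detections: list[str], effector: Effector) -> None:
    # The targeting logic is identical either way; it never sees
    # whether the consequences are virtual or physical.
    for target_id in detections:
        effector.engage(target_id)

# Swapping one constructor call changes the stakes entirely.
targeting_loop(["t-01", "t-02"], GameEffector())
# targeting_loop(["t-01", "t-02"], HardwareEffector())
```

The ethically decisive difference lives in a single line of glue code, which is exactly why the similarity between these systems should unsettle us.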