Nanoweapons: The Automation of War
According to the National Nanotechnology Initiative website, nanotechnology is helping computers become faster, smaller, and more portable (Anonymous, N/A). The lethal weaponization of such computers would be an example of using nanotechnology to develop new killing systems; weapons of this kind would therefore count as nanoweapons. As computing becomes more advanced, so does the potential for automation in warfare. Robots for killing and for strategic decision making are likely to become common features of the battlefield. As AI exerts greater influence, the significance of the human presence on the battlefield will diminish. Accordingly, “The operation of armed forces will be characterized by many autonomous decisions on all levels. Uninhabited vehicles and robots of macro- and microscopic sizes could become routine” (Altmann, 2006, p. 75).
The automation of war will mean that fewer soldiers directly experience the battlefield. Several consequences might follow. One is that fewer soldiers would suffer post-traumatic stress disorder, since fewer would have battlefield experiences. Another is that soldiers might have a diminished understanding of the horrors of war. With this diminished understanding, pro-war attitudes might become more common within military and veterans' organizations. Witnessing bloodshed firsthand creates a stronger understanding of the real human cost of initiating a war.
Politicians have a vested interest in protecting their own soldiers. An example of this tendency in action is the presidency of Barack Obama in the United States, who dramatically escalated the number of lethal drone strikes compared to his predecessor, George W. Bush. According to Purkiss and Serle (2017), 57 drone strikes were authorized under President Bush, while a total of 563 drone strikes were authorized under President Obama (para. 2).
Drone strikes remove human combatants from proximity to the target. As the power of AI increases, warfare will become more and more automated. If an AI system misinterprets a military drill conducted by a foreign nation, it could start a new war without human consent. Many other forms of error within military AI systems might also lead to tragedy and catastrophe. According to Knight (2021), there are basic errors in the data used to train and test military AI (para. 6).
If the horror of war is exported away from soldiers, it might be experienced more often by civilians. Imagine a dystopian future in which war is fought only by robots, and all the death and injury is suffered only by civilians. In this scenario, the rights of civilians would be severely diminished. Civilians would have little to no say over whether their neighborhood becomes a robotic battlefield, yet they should have the right to legally defend themselves from the damages of war. If there are human soldiers on the battlefield, those soldiers can be held accountable to the eyewitness accounts of civilian victims. With no human soldiers present, civilians as an interest group would likely have a diminished capacity for self-protection; in other words, it might become more difficult for civilians to demand justice.