
Robotic Weapons – A Moral Dilemma

PTI and Chanakya Forum | Mon, 25 Oct 2021

Autonomous weapon systems – commonly known as killer robots – may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race – one that has the potential to be humanity's last.

US-based military robot manufacturer Ghost Robotics has strapped a sniper rifle to a robotic dog in the latest step towards autonomous weaponry. Some people have reacted with moral outrage at the prospect of making a killer robot in the image of our loyal best friend. But if this development makes us pause for thought, in a way that existing robot weapons don't, then perhaps it serves a useful purpose after all. The response to Ghost Robotics' latest creation is reminiscent of an incident involving Boston Dynamics, another maker of doglike robots (which, in contrast, strongly frowns on the idea of weaponising them).


Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.
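
To make that distinction concrete, the sketch below – a minimal Python illustration in which every name is hypothetical and no real weapon interface is mirrored – contrasts the control flow of a human-in-the-loop weapon with that of a fully autonomous one:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Track:
        """A sensor contact; hostile_confidence is a hypothetical model score in [0, 1]."""
        track_id: str
        hostile_confidence: float

    def human_in_the_loop_engage(track: Track,
                                 operator_approves: Callable[[Track], bool]) -> bool:
        # A human weighs in on every engagement: the system proposes, a person decides.
        return operator_approves(track)

    def autonomous_engage(track: Track, threshold: float = 0.9) -> bool:
        # No human in the loop: a fixed algorithmic threshold is the entire decision,
        # so any target the model mis-scores above the threshold is attacked.
        return track.hostile_confidence >= threshold

Much of the debate that follows turns on which of these two functions sits in the firing chain.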

Autonomous Weapons

Autonomous weapon systems are not entirely new to the military or to warfare. Unmanned Aerial Vehicles (UAVs), or drones, have evolved rapidly over the last decade and have been put to use in a range of military applications. Images of terrorists being neutralised in Afghanistan or Iraq by combat UAVs became common, while the recent drone-swarm attacks on Saudi Arabian oilfields by terror groups made world leaders sit up and take notice of the evolving threat. UAVs and drones used to operate under human control or on a pre-fed mission profile, but the disruption caused by advances in Artificial Intelligence (AI) and Machine Learning (ML) has opened up the possibility of making them autonomous. Close on the heels of UAVs, Unmanned Ground Vehicles (UGVs) have also gained prominence on the battlefield, for both combat and logistic roles. Mounted with guns and sensors and integrated with AI modules, these platforms become autonomous and capable of making decisions according to the scenarios presented to them.

Why Do Autonomous Weapons Pose a Human Rights Dilemma?

Human rights and humanitarian organizations are racing to establish regulations and prohibitions on the development of such weapons. Without such checks, foreign policy experts warn, disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies – both because they could radically change perceptions of strategic dominance, increasing the risk of pre-emptive attacks, and because they could be combined with chemical, biological, radiological and nuclear weapons themselves.

The main problems associated with a robotic or autonomous weapon system can be summarised as:

  • Problem of Misidentification. When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat? The problem is not that machines will make such errors and humans won't. It is that the scale, scope and speed of killer robot systems – ruled by one targeting algorithm, deployed across an entire continent – could make misidentifications by individual humans, like a recent U.S. drone strike in Afghanistan, seem like mere rounding errors by comparison. The problem is not just that when AI systems err, they err in bulk (see the sketch after this list). It is that when they err, their makers often don't know why they did and, therefore, how to correct them.
  • Low End Proliferation. The militaries developing autonomous weapons are proceeding on the assumption that they will be able to contain and control the use of autonomous weapons. But if the history of weapons technology has taught the world anything, it’s this: Weapons spread. Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the Kalashnikov assault rifle: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. “Kalashnikov” autonomous weapons could get into the hands of people outside of government control, including international and domestic terrorists.
  • High End Proliferation. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of mounting chemical, biological, radiological and nuclear arms. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use. High-end autonomous weapons are likely to lead to more frequent wars because they will weaken two of the primary forces that have historically prevented and shortened them: concern for civilians abroad and concern for one's own soldiers. Autonomous weapons will also reduce both the need for and the risk to one's own soldiers, dramatically altering the cost-benefit analysis that nations perform when launching and sustaining wars.
  • Laws of Armed Conflict (LOAC). Autonomous weapons will undermine humanity’s final stopgap against war crimes and atrocities: the LOAC. These laws, codified in treaties reaching as far back as the 1864 Geneva Convention, are the international thin blue line separating war with honour from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers during combat does not give the right to murder civilians. But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier’s commanders? The corporation that made the weapon? Non-governmental organizations and experts in international law worry that autonomous weapons will lead to a serious accountability gap.
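
The point about erring in bulk can be made concrete with a toy simulation. Every number below is invented for the demonstration; what matters is the structure – independent human errors stay scattered and uncorrelated, while a single shared targeting model repeats the same mistake on every platform it is deployed on:

    import random

    random.seed(0)
    ENCOUNTERS = 10_000        # benign contacts (e.g. civilians) seen fleet-wide
    HUMAN_ERROR_RATE = 0.01    # each human observer errs independently ~1% of the time

    # Humans: many independent observers, each with a small, uncorrelated error rate.
    human_errors = sum(random.random() < HUMAN_ERROR_RATE for _ in range(ENCOUNTERS))

    # Shared model: one systematic blind spot – say 5% of contacts misread as
    # hostile – reproduced identically by every platform running the same algorithm.
    BLIND_SPOT_FRACTION = 0.05
    model_errors = int(ENCOUNTERS * BLIND_SPOT_FRACTION)

    print(f"independent human errors: {human_errors} (scattered, unrelated)")
    print(f"correlated model errors:  {model_errors} (all the same error, fleet-wide)")

The human errors are many small, different mistakes; the model's errors are one mistake repeated at the scale of the deployment, which is why a single flawed targeting algorithm is so much harder to detect and correct.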

AI – The Double-Edged Sword

Artificial Intelligence is the branch of computer science concerned with making computers behave like humans. Its weaponisation is no longer hypothetical. According to a recent United Nations report, Libyan government forces hunted down rebel forces using "lethal autonomous weapons systems" that were "programmed to attack targets without requiring data connectivity between the operator and the munition". The deadly drones were Turkish-made quadcopters about the size of a dinner plate, capable of delivering a warhead weighing a kilogram or so.

Artificial intelligence researchers have been warning for years of the advent of such lethal autonomous weapons systems, which can make life-or-death decisions without human intervention. There is a precedent for such warnings: well before the first test of a nuclear bomb, many scientists working on the Manhattan Project were concerned about the future of nuclear weapons, and in July 1945 they sent a secret petition to President Harry S. Truman – one that accurately predicted the arms race to follow. This time the threat comes from artificial intelligence, and in particular from the development of lethal autonomous weapons: weapons that can identify, track and destroy targets without human intervention. The media often like to call them "killer robots".

The regulation of AI must consider the harms and the benefits of the technology. Harms that regulation might seek to legislate against include the potential for AI to discriminate against disadvantaged communities and the uncontrolled development of autonomous weapons. Sensible AI regulation would maximise its benefits and mitigate its harms.

The Future

If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable. The endpoint of such a technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.

Strategically, autonomous weapons are a military dream. They let a military scale its operations unhindered by manpower constraints: an army can take on the riskiest of missions without endangering its own soldiers. Beyond the moral arguments, there are many technical and legal reasons to be concerned about killer robots, and one of the strongest is that they will revolutionise warfare. Autonomous weapons will be weapons of immense destruction. Previously, waging war required an army of soldiers – soldiers who had to be persuaded to follow orders, trained, paid, fed and maintained. Now just one programmer could control hundreds of weapons.
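
The scaling claim itself is easy to sketch. In the toy Python example below, where the class and the tasking string are hypothetical stand-ins for no real system, a single controlling process fans one order out to hundreds of autonomous units at essentially no marginal human cost:

    class Unit:
        """A hypothetical autonomous unit that accepts tasking from one controller."""

        def __init__(self, uid: int) -> None:
            self.uid = uid

        def execute(self, task: str) -> str:
            # In a real system this would launch an autonomous mission; here it just logs.
            return f"unit {self.uid}: executing {task!r}"

    swarm = [Unit(i) for i in range(300)]   # 300 units, no additional crew required
    orders = [unit.execute("patrol sector 7") for unit in swarm]
    print(len(orders), "units tasked by a single command")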

Asymmetric wars – that is, wars waged on the soil of nations that lack competing technology – are likely to become more common. Consider the global instability caused by Soviet and U.S. military interventions during the Cold War, from the first proxy war to the blowback experienced around the world today, and multiply that by every country currently aiming for high-end autonomous weapons.

The world stands at a crossroads on this issue. It must come to be seen as morally unacceptable for machines to decide who lives and who dies, and diplomats at the UN have to negotiate a treaty limiting the use of autonomous weapons, just as treaties limit chemical, biological and other weapons, to prevent potentially disastrous situations. But with the US competing with China and Russia to achieve "AI supremacy" – a clear technological advantage over rivals – regulation has thus far taken a back seat.

Conclusion

In an era where advances in weapon technology take place at breakneck speed, and where any such technological edge can exponentially bolster a nation's chances of success in conflict or at the bargaining table, the advent of robotic weapons represents a 'redshift' in combat capabilities. The caveat is that such disruptive technology must be kept out of the hands of irresponsible players – rogue nations and non-state elements – so that it does not point towards Armageddon, in any sense of the word.

(This article is a compilation from various conversations on Reuters Connect by Team Chanakya Forum)

Bibliography

 The Conversation via Reuters Connect

  • com,2021:newsml_CNVRST000LH2RAX:438693260
  • com,2021:newsml_CNVRST000LGO12Q:968169975
  • com,2021:newsml_CNVRST000LGM9ZS:343714239

 




