Autonomous weapon systems – commonly known as killer robots – may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.
US-based military robot manufacturer Ghost Robotics had strapped a sniper rifle to a robotic dog, in the latest step towards autonomous weaponry. Some people have reacted with moral outrage at the prospect of making a killer robot in the image of our loyal best friend. But if this development makes us pause for thought, in a way that existing robot weapons don’t, then perhaps it serves a useful purpose after all. The response to Ghost Robotics’ latest creation is reminiscent of an incident involving Boston Dynamics, another maker of doglike robots (which, in contrast, strongly frowns on the idea of weaponising them).
Image Source: newscientist.com
Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.
Autonomous Weapons
Autonomous weapon systems are not entirely new to the military or to warfare. Unmanned Aerial Vehicles (UAVs), or drones, have evolved rapidly over the last decade and have been put to use in a range of military applications. Images of terrorists being neutralised in Afghanistan or Iraq by combat UAVs became common, though the recent drone-swarm attacks on Saudi Arabian oilfields by terror groups made world leaders sit up and take notice of the evolving threat. While UAVs and drones used to operate under human command or on a pre-fed mission profile, advances in Artificial Intelligence (AI) and Machine Learning (ML) have opened up the possibility of making them autonomous. Close on the heels of UAVs, Unmanned Ground Vehicles (UGVs) have also gained prominence, for both combat and logistic roles on the battlefield. Mounted with guns and sensors integrated with AI modules, these platforms become autonomous, capable of making decisions according to the scenario presented to them.
Why Do Autonomous Weapons Pose a Human Rights Dilemma?
Human rights and humanitarian organizations are racing to establish regulations and prohibitions on such weapons development. Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, increasing the risk of pre-emptive attacks, and because they could become combined with chemical, biological, radiological and nuclear weapons themselves.
The main problems associated with robotic and autonomous weapon systems are discussed below.
AI – The Double-Edged Sword
Artificial Intelligence is the branch of computer science concerned with making computers behave like humans. According to a recent United Nations report, Libyan government forces hunted down rebel forces using “lethal autonomous weapons systems” that were “programmed to attack targets without requiring data connectivity between the operator and the munition”. The deadly drones were Turkish-made quadcopters about the size of a dinner plate, capable of delivering a warhead weighing a kilogram or so.
Artificial intelligence researchers have been warning for years of the advent of such lethal autonomous weapons systems, which can make life-or-death decisions without human intervention. Well before the first test of a nuclear bomb, many scientists working on the Manhattan Project were concerned about the future of nuclear weapons. A secret petition was sent to President Harry S. Truman in July 1945; it accurately predicted the arms race that followed. This time the threat comes from artificial intelligence, and in particular the development of lethal autonomous weapons: weapons that can identify, track and destroy targets without human intervention. The media often call them “killer robots”.
The regulation of AI must consider the harms and the benefits of the technology. Harms that regulation might seek to legislate against include the potential for AI to discriminate against disadvantaged communities and the uncontrolled development of autonomous weapons. Sensible AI regulation would maximise its benefits and mitigate its harms.
The Future
If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable. The endpoint of such a technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.
Strategically, autonomous weapons are a military dream. They let a military scale its operations unhindered by manpower constraints: an army can take on the riskiest of missions without endangering its own soldiers. Beyond the moral arguments, there are many technical and legal reasons to be concerned about killer robots. One of the strongest is that they will revolutionise warfare and become weapons of immense destruction. Previously, waging war required an army of soldiers who had to be persuaded to follow orders, then trained, paid, fed and maintained. Now a single programmer could command hundreds of autonomous weapons.
Asymmetric wars – that is, wars waged on the soil of nations that lack competing technology – are likely to become more common. Consider the global instability caused by Soviet and U.S. military interventions during the Cold War, from the first proxy wars to the blowback felt around the world today, and multiply that by every country currently pursuing high-end autonomous weapons.
The world stands at a crossroads on this issue. Allowing machines to decide who lives and who dies must come to be seen as morally unacceptable, and diplomats at the UN must negotiate a treaty limiting the use of such weapons, just as existing treaties limit chemical, biological and other weapons, to prevent potentially disastrous outcomes. Yet with the US competing with China and Russia to achieve “AI supremacy” – a clear technological advantage over rivals – regulation has thus far taken a back seat.
Conclusion
In an era where advances in weapon technology take place at breakneck speed, any technological edge can exponentially bolster a nation’s chances of success in conflict or at the bargaining table. The advent of robotic weapons represents a step change in combat capability. The caveat is that such disruptive technology must be kept out of the hands of irresponsible players – rogue nations and non-state elements – so that it does not point towards Armageddon, in any sense of the word.
(This article is a compilation from various conversations on Reuters Connect by Team Chanakya Forum)
Bibliography
The Conversation via Reuters Connect
The opinions expressed in this article are the author’s own and do not reflect the views of Chanakya Forum. All information provided in this article including timeliness, completeness, accuracy, suitability or validity of information referenced therein, is the sole responsibility of the author. www.chanakyaforum.com does not assume any responsibility for the same.