

Updated: May 3, 2021

This article has been authored by Shruti Saxena, a fourth-year student at Amity Law School, Amity University, Lucknow Campus.


The world is still recovering from the damage done by the world wars, and is ever on its toes to prevent another.

We have grown so accustomed to political instability persisting in some part of the world or another that, as an international community, we engage openly only with those issues that carry a direct or indirect likelihood of affecting our geopolitics. There are many instances of powerful nations deploying their militaries in turbulent regions on the pretext of resolving a crisis and preventing it from escalating further, but over time one realises that the case is rarely as it seems.

The reduction of human casualties has been a persistent concern for international communities and organisations since time immemorial, and many developments as well as prohibitions have been put in place to that end. But a recent development has opened Pandora's box and brought back the horrors of the world wars.

Rise of Robotic Warfare

One thing that defined the world war eras was the co-existence of discriminate and indiscriminate killing. Those suffering at the forefront have always been the vulnerable groups, irrespective of their identity or nationality.

The idea of weapons that minimise human engagement seems absurd only in concept. Robots, as we see them in films, are made to succumb to happy endings, so the concept seems easy to embrace and to deploy. Yet however autonomous they may appear on screen, they are always driven by the pen of the scriptwriter: envisioned to bring the horrors of technology to the forefront, but also to deliver a happy ending.

But the reality is that a robot's decision to kill in an actual warzone must be made within seconds, and there is no pre-planned turn of events in which the humans come out victorious. Such an outcome is possible, but its probability is insignificant.

Thus "robots", a term once used in the context of children's toys, and one that often carried a disclaimer of fiction in cinema and theatre, has in no time become a reality, and a deadly one at that. The argument of reducing human casualties is not as peaceful as it seems. The conversation becomes truly uncomfortable when one dwells on the question: reducing human casualties, but on which side?

Even if we assume, for a moment, that the use of artificial intelligence weapons is necessary from a military and security point of view, and that they will be operated in well-supervised and regulated environments, how can we ignore the fact that, although they are made to be purchased by governments, their sale on the black market would be unpredictable and unwarranted? Can the supporters of such weapons take responsibility for ensuring that no terrorist group would ever have access to these lethal weapons? The answer is no.

Terrorist groups and underground networks are often far better interconnected than governments and their intelligence agencies.

Range of Settings for the Employment of Autonomous Weapons

These new machines, better known as autonomous weapons and often referred to as "killer robots", could operate on land, in the air or at sea, and threaten to revolutionise armed conflict in an alarming way. This raises the spectre of future wars fought by enemy combatants who wear no uniform, defend no territory, protect no population, and feel no pity, remorse or fear.

These new weapons have prompted a debate among military developers, robotics scientists, human rights activists, legal scholars, and all who are directly or indirectly wrestling with the fundamental question: should machines be allowed to make life-and-death decisions outside human control? Artificial intelligence is an emerging and powerful technology; its use in war is inevitable. How it must be utilised depends on us. Should we use it to make warfare more humane and precise? Can we do so without losing our humanity in the process, and do we control our creations, or do they control us? These are important questions that require elaborate answers.

In the world today, robotic systems are widely used by militaries around the world, and many non-state actors, such as the Islamic State in Iraq and Syria (ISIS), already have access to autonomous drones.

ISIS has built crude homemade drones that it uses in Iraq and Syria as improvised explosive devices for attacks on the official troops deployed there.

With every step towards technological advancement, the meaning of "autonomous weapon" grows clearer, and with every new development phase the technology becomes more independent and autonomous. But granting weapons the autonomy to make their own decisions of seeking out the enemy and planning an attack on it is not yet a reality on the ground; it is, however, an inevitable future. The technology is going to take us there.

So what began, in automobiles, as a concept to save lives on the road is now being deliberated upon for inclusion in warfare, to take lives.

Challenges posed

The challenge is that in a contested environment adversaries will be far more equal: they may be able not only to target these systems with radar and shoot them down, but also to jam their communication links. This may seem like just another technological problem, but it could put the entire human race to shame, and even at risk of extinction. The current claim behind the employment of autonomous weapons is that the decision to attack a target remains under human supervision. Once these weapons become fully autonomous, however, and their communication links are attacked and jammed through cyber warfare, the link between human and machine is severed, and the machine is on its own to make decisions of life and death. This would abolish the chain of accountability, which is one of the main principles of the law of war.

The concept of unmanned autonomous weapons is certainly a striking development in the history of technology and warfare, and was no doubt developed in good faith, but China's 2016 seizure of a United States unmanned underwater drone in the South China Sea refutes the claim that such systems are, by themselves, the solution to protecting borders.

We are seeing increased autonomy not just in vehicles but also in advanced missiles, and this fundamentally changes the human relationship with what happens in war. With the employment of fully autonomous weapons, humans no longer need to know the full particulars of a situation; they depend entirely on the wits of an artificial-intelligence-laden warfare system.

Are these weapons legal?

The ethicality and legality of these weapons lie at the heart of the current movement for a ban on autonomous weapons. The law of war at present says little about the use of technologically advanced weapons, especially those requiring no human intervention. It does accept that in war some damage to civilian objects is likely to occur as collateral damage, but that damage cannot be disproportionate to the military necessity of the attack. The central perplexity, then, is whether these weapons could ever be advanced enough to act proficiently, without human intervention, across changing settings, from civilian areas to military ones.

The laws of war are binding in nature and define human accountability. With unmanned autonomous weapons, the bar of accountability is simply removed; when communications are jammed, the situation becomes worse still.

Another important question at hand is whether these weapons are moral. Some argue that they are necessary because they could save civilian lives; the other side argues not only that they would cause more harm, but that there is something fundamentally wrong with machines making these decisions.


The pace of progress on addressing autonomous weapons within the United Nations Convention on Certain Conventional Weapons is slower than the pace at which these weapons are being developed, both by government military organisations and by commercial organisations.

The dilemma of whether the legality of these weapons can ever be reconciled with their claimed necessity is surely keeping the world on the edge of its seat, with the former Vice Chairman of the Joint Chiefs of Staff, Gen. Paul Selva, rightly saying, "I think we should all be advocates for keeping the ethical rules of war in place, lest we unleash on humanity a set of robots that we don't know how to control."

We need to find ways of using this technology that enable armed forces to save civilian lives without losing human control over what happens in warfare. Artificial intelligence is, no doubt, a breakthrough in scientific development, but it can never equal human intelligence when it comes to morality, empathy and ethical values.

After all, when Pandora’s box was opened the only thing left in it was hope, and we are clinging tightly on to this hope of not losing humanity in the wake of military and technological advancement.
