For some time, the development of increasingly autonomous weapons has raised deep concerns, triggering an international debate. In particular, autonomous attack systems, known as killer robots, would be able to select and attack individual targets without human control.
A report published by Pax, a Dutch non-profit organisation, analyses the sharply increasing trend in the defence industry towards the automation of weapons systems. It also raises an alarm: governments must adopt laws on the subject before weapons without human control become a reality.
Indeed, the study underlines that “in the last ten years there has been a large increase in the number of countries and companies working on these technologies”. Killer robots are essentially military robots that would replace soldiers on the battlefield, on the ground and in the air. Thanks to artificial intelligence, they would have autonomous decision-making abilities, radically changing the way war is waged. One wonders whether they would be able to distinguish between soldiers and civilians, and whether they would opt for proportionate or disproportionate actions.
“The emergence of this type of weapon would have an enormous impact on the dynamics of war; it is no coincidence that there is talk of a third revolution in this field, after gunpowder and the atomic bomb,” reads the report.
Even more important, according to Pax, is that these weapons “would violate the fundamental legal and ethical principles, destabilising international peace and security”. As Quartz reports, for this study the Dutch organisation sent questionnaires to 50 military weapons manufacturers, asking whether they had policies regarding artificial intelligence. Only eight responded, saying they had systems of this kind in the pipeline, while the rest remained silent.
This, according to Pax, is a cause for concern, as is another question: the legal responsibility of these killer robots. If an offence is committed, who would be responsible, given that the machines act without human control? This question, closely connected to the real possibility of error, also shifts the debate towards the wider question of how law should apply to artificial intelligence.
What killer robots are
In short, killer robots could lead to a real revolution in warfare on all fronts. The report states that while armed drones already exist – at least 24 states possess them, and they have been used in combat in 13 countries – the number of ground-based robots, unmanned machines able to move and identify a target, is also growing. But the real novelty would be the testing of autonomous marine weapons systems. As early as 2018, Pax specifies, a US company has been researching mini robotic military ships that automatically explore, detect and neutralise mines.
A dense network of appeals
Pax is not the only organisation that has tried to push for legislation preventing the spread of this type of robot. Already in 2017, a group of scientists gathered under the banner of the “Stop Killer Robots” campaign, bringing the problem to the UN in order to obtain a ban on murderous robots. Some countries, such as Russia, the United States and Israel, have obstructed these requests. That same year, scientists also addressed an open letter to the UN on the matter.
In the same vein, there are also warnings from some of the big names in contemporary innovation: among them Alibaba founder Jack Ma, who has repeatedly pointed to the risk that artificial intelligence could cause a “third world war”, and Tesla CEO Elon Musk, according to whom the machines could become “an immortal dictator”. There is also the case of a former Google employee, Laura Nolan, who left her project applying artificial intelligence to the military field precisely because she feared contributing to mass atrocities.