In today’s world, the emergence of Artificial Intelligence (AI) has been phenomenal. It is set to become a major part of human life, and in certain spheres it already performs better than humans. Future wars based on AI will have disastrous results and must, therefore, be curbed. In computer science, AI, sometimes called ‘machine intelligence’, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans. Colloquially, the term “artificial intelligence” is often used to describe machines (or computers) that mimic “cognitive” functions that humans associate with the human mind, such as “learning” and “problem solving”. The field draws on computer science, information engineering, mathematics, psychology, linguistics, philosophy and many other disciplines.
There is nothing new about artificial intelligence. The ancient Hindu scriptures, such as the Bhagavata Purana, mention the Pushpak Vimana, Saubha Vimana, Sudarshan Chakra, Agni Bann, Brahma Astra and so on, which have been in use since the days of the Ramayana and Mahabharata. The sages have said, “What can be achieved in 100 years in Satya/Krita Yuga takes 10 years in Treta Yuga, one year in Dvapara Yuga, and one day in Kali Yuga.” One can clearly see it happening now. One of the most alarming and least understood developments is the race towards artificial-intelligence-enabled warfare.
Two superpowers, America and China, are investing huge sums in militarised artificial intelligence, from autonomous robots to software that gives Generals rapid tactical advice in the heat of battle. China frets that America has an edge thanks to the breakthroughs of Western companies, such as their successes in sophisticated strategic games. America fears that China’s autocrats have free access to copious data and can enlist local tech firms in national service. Both sides are engaged in an arms race. AI-enabled weapons may offer superhuman speed and precision, but they also have the potential to upset the balance of power. In order to gain a military advantage, armies will be tempted to allow such systems not only to recommend decisions but also to give orders.
That could have worrying consequences. Able to think faster than humans, an AI-enabled command system might cue up missile strikes on aircraft carriers and airbases at a pace that leaves no time for diplomacy, and in ways that are not fully understood by its operators. On top of that, AI systems can be hacked and tricked with manipulated data. During the 20th century the world eventually found a way to manage a paradigm shift in military technology: the emergence of the nuclear bomb. A global disaster was avoided through a combination of three approaches: deterrence, arms control and safety measures. Many are looking to this template for AI. Unfortunately, it is of only limited use, and not just because the technology is new.
The principles that these rules must embody are straightforward. AI will have to reflect human values, such as fairness, and be resilient to attempts to fool it. Crucially, to be safe, AI weapons will have to be as open to explanation as possible, so that humans can understand how they take decisions. Many Western companies developing AI for commercial purposes, including self-driving cars and facial-recognition software, are already testing their systems to ensure that they exhibit some of these characteristics. The stakes are higher in the military sphere, where deception is routine and the pace is frenzied.
Amidst a confrontation between the world’s two big powers, the temptation will be to cut corners for temporary advantage. So far there is little sign that the dangers have been taken seriously enough, although the Pentagon’s AI centre is hiring an ethicist. Leaving warfare to computers will make the world a more dangerous place. There could be no more consequential decision than launching atomic weapons and possibly triggering a nuclear holocaust. President John F. Kennedy faced just such a moment during the Cuban Missile Crisis of 1962 and, after envisioning the catastrophic outcome of a US-Soviet nuclear exchange, concluded that the atomic powers should impose tough barriers on the precipitous use of such weaponry.
Among the measures he and other global leaders adopted were guidelines requiring that senior officials, not just military personnel, have a role in any nuclear-launch decision. As the Pentagon and the military commands of the other great powers look to the future, what they see is a highly contested battlefield. Some have called it a “hyperwar” environment, where vast swarms of AI-guided robotic weapons will fight each other at speeds far exceeding the ability of human commanders to follow the course of a battle. At such a time, commanders might increasingly be forced to rely on ever more intelligent machines to make decisions on what weaponry to employ, when and where. At first, this may not extend to nuclear weapons, but as the speed of battle increases and the “firebreak” between them and conventional weaponry shrinks, it may prove impossible to prevent the creeping automation of even nuclear-launch decision-making.
Such an outcome can only grow more likely as the US military completes a top-to-bottom realignment intended to transform it from a fundamentally small-war, counter-terrorist organisation back into one focused on peer-against-peer combat with China and Russia. This shift was mandated by the December 2017 National Security Strategy. Rather than focusing mainly on weaponry and tactics aimed at combating poorly armed insurgents in never-ending small-scale conflicts, the American military is now being redesigned to fight increasingly well-equipped Chinese and Russian forces in multi-dimensional (air, sea, land, space, cyberspace) engagements involving multiple attack systems (tanks, planes, missiles, rockets) operating with minimal human oversight.
“The major effect/result of all these capabilities coming together will be an innovation warfare has never seen before: the minimization of human decision-making in the vast majority of processes traditionally required to wage war,” observed retired Marine General John Allen and AI entrepreneur Amir Husain. That “minimization of human decision-making” will have profound implications for the future of combat. Ordinarily, national leaders seek to control the pace and direction of battle to ensure the best possible outcome, even if that means halting the fighting to avoid greater losses or prevent humanitarian disaster.
In a bid to increase coastal security near the sea border with Pakistan, the Indian Navy commissioned its 4th generation Dornier aircraft Squadron 314 (Raptors) at strategically located Porbandar, Gujarat on 29 November. The Dorniers are deployed for maritime surveillance, pollution prevention, troop transport, aerial survey, search and rescue, evacuation of casualties, and cargo and logistics support. The Navy is also inducting the K-4, a nuclear-capable intermediate-range submarine-launched ballistic missile under development by the Defence Research and Development Organisation to arm the Arihant-class submarines. These capabilities are part of the broader move towards AI-enabled warfare.
(The writer is a defence analyst and commentator)