Talos, he was called: born of the race of Bronze, the men sprung from ash-trees; last among the sons of the gods; the Guardian of Crete, who would run tirelessly around the island, circling it three times each day and hurling boulders at all who sought to enter. Then Medea, granddaughter of the sun god Helios, invoked the Spirits of Death and the hounds of Hades who feed on the souls of the dead, leading Talos to tear his ankle on a sharp rock, so that his ichor, or life blood, leaked out of his single vein.
“Is it true then, Father Zeus, that people are not killed only by disease or wounds, but can be struck down by a distant enemy?” asked Apollonius Rhodius, in his third-century BCE epic, Argonautica. “The thought appals me. Yet it was thus that Talos, for all his brazen frame, was brought down”.
Earlier this week, Renmin University scholar Jin Canrong ignited a global sensation by claiming that the People’s Liberation Army had used directed-energy weapons to evict Indian troops from positions they had captured on the south bank of Pangong Lake in Ladakh. The Indian government—and Twitter—dismissed his assertion as mere propaganda.
Like the legend of Talos, the microwave-beam claims might well be pure fantasy, but it’s a fantasy Indian defence planners ought to be paying close attention to.
For the most part, discussions around laser, microwave and particle-beam weapons have centred on their use to bring down ballistic missiles or knock out satellites. Each of the hundreds of thousands of troops India commits to guarding the Line of Control and the Line of Actual Control, though, as well as the gargantuan logistics chains needed to sustain them, is an argument for their use in more mundane contexts.
These deployments have leeched badly needed resources from military modernisation. Experts like Abhijnan Rej have long warned that salaries and pensions alone far exceed spending on technology. Directed-energy weapons, coupled with new Artificial Intelligence-driven control systems, could offer India a way to reduce spending on its most manpower-intensive task: the physical occupation of the LAC and the LoC.
In some parts of the world, the future has already begun to arrive. Guarding the fraught Demilitarised Zone, the SGR-A1 robot stands shoulder-to-shoulder with South Korean troops, using three low-light cameras, as well as heat and movement sensors linked to pattern-recognition software, to detect and interdict intruders. In the event that an intruder refuses to respond to its verbal warnings and raise their hands in surrender, the SGR-A1 can open fire with a light machine gun from up to 800 metres away, though the decision has to be cleared by a human supervisor.
Forces across the world are finding ever-new applications for AI systems, from the analysis of electronic intelligence to logistics and maintenance. China is marketing fully-autonomous drones, claimed to be capable of recognising and attacking targets; Russia is also conducting extensive work in the field.
The arguments against AI control of lethal weapons systems aren’t hard to come by. There are substantial ethical issues involved in handing life-and-death decisions to machines. Last year, United Nations Secretary-General António Guterres asserted that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law”. Experts have warned that AI-driven weapons will, inevitably, proliferate—making the world even more insecure.
There are, moreover, significant problems with the technology. In an authoritative 2017 study, for example, scholar Andrew Ilachinski demonstrated how a state-of-the-art AI system mis-recognised an image of a panda, altered in ways imperceptible to the human eye, as a gibbon. “The problem”, Arati Prabhakar, former director of the United States’ cutting-edge Defense Advanced Research Projects Agency, pithily observed of AI in 2016, “is that when it’s wrong, it’s wrong in ways that no human would ever be wrong”.
Problems like these, though, have to be balanced against the fact that human judgment is also imperfect: by some estimates, human pattern recognition fails some five times out of 100, twice as often as AI. Killings of civilians, and of fellow soldiers, are common in combat, even when soldiers are highly trained. Humans, unlike machines, are prone to fatigue, laziness and carelessness—all of which can and do kill.
Fear—that most foundational of human emotions on the battlefield—makes the problem worse: in a 2011 study, the United States’ Government Accountability Office estimated that 250,000 rounds of ammunition were being fired for every insurgent killed in Afghanistan.
Less-lethal weapons like microwave beams, though, could offer a way out of these conundrums. Even if an AI system were to mis-identify innocent villagers who had strayed into prohibited zones along the Line of Actual Control as targets, for example, the consequences would be far less tragic than with live fire. Even the potential for small-scale intrusions to escalate into skirmishes, of the kind seen across the Line of Control through this autumn, would be mitigated.
The tragic consequences of human misjudgment were underlined in the Galwan slaughter; technology could make such outcomes less probable, and thus diminish the risk of war.
Fully implementable technological solutions may lie in the future, but that future isn’t as far away as we might imagine. By 2012, the United States armed forces had spent $25 million on directed-energy microwave weapons mounted on medium trucks, intending to use them against mobs in countries like Afghanistan and Iraq. Raytheon’s Silent Guardian was designed to inflict extreme pain on anyone in the path of its beam, at distances of several hundred metres—a relatively humane alternative, the argument went, to the use of live fire.
Though Silent Guardian was never used, amid concerns that its indiscriminate impact might amount to torture, smaller variants have been acquired by law-enforcement agencies in the United States. A committee set up by the Ministry of Home Affairs in 2016 listed them for consideration among alternatives to the use of birdshot against rioters in Kashmir.
Many parts of the world have already deployed such systems. Indeed, Turkey is reported to have fired one, successfully, against a drone in Libya last year, while even Iran claims to have deployed similar, home-grown systems.
Lasers, microwaves, particle-beam and acoustic weapons are attractive to militaries for obvious reasons: they are invisible, noiseless, and have flat trajectories, unaffected by wind or gravity. In some contexts, they may also be cheaper to operate than conventional weapons, doing away with the complex logistics chains needed to supply ammunition.
From an Indian point of view, there’s an even more important reason for a relentless pursuit of these technological capabilities: directed-energy weapons, controlled by AI, offer the prospect of saving lives in the highly militarised environments on the country’s borders, both of non-combatants who have strayed into harm’s way, and of the soldiers who guard those frontiers.
The new world isn’t one to be unequivocally welcomed. As AI embeds itself more deeply in the world’s militaries, its sheer speed will put defenders at an inherent disadvantage, creating perverse incentives to strike first and with maximum force. In an AI-dominated battlefield, the kinds of long, slow stand-offs that have unfolded along the Line of Actual Control this year, giving leaders in both countries time to contain the scale of the crisis, may prove to be things of the past.
Like it or not, though, the world is in the midst of the most significant revolution in military technology since the invention of nuclear weapons. The myth of Talos should teach us that the idea that territory can be protected by phalanxes of metal is an illusion: technology, like that summoned by Medea through her spells, will render traditional militaries ever more irrelevant.