ATLAS: Killer Robot? No. Virtual Crewman? Yes.
No, the Army isn’t turning tanks into robotic “killing machines,” as some excited headlines put it last week. Instead, as part of the military’s urgent push for artificial intelligence, an Army program called ATLAS is developing an AI that acts like a virtual soldier in the turret, one designed to assist the human crew.
[Click here to read the entire series: The ‘Killer Robots’ Debate]
Despite its alarming full name, Advanced Targeting & Lethality Automated System, ATLAS is actually meant to help the humans spot threats they might have missed, prioritize potential targets, and even bring the gun to bear — but it will be physically incapable of pulling the trigger itself.
“Envision it as a second set of eyes that’s just really fast,” Army engineer Don Reago told me Friday, “[like] an extra soldier in the tank.”
Reago is director of Night Vision & Electronic Sensors at Fort Belvoir (part of the newly reorganized Combat Capabilities Development Command), and he’s worked for 30 years on what the Army calls Assisted Target Recognition. The service has always avoided the more common term, Automated Target Recognition, he told me, because that phrasing deemphasizes the role of humans. Whatever you call it, ATLAS’s target recognition system will use machine learning algorithms trained on vast databases the Army is going to have to put together, protect against misinformation, and keep updated. Silicon Valley is great at analyzing Facebook photos and funny cat GIFs; not so much at telling a T-90 from a Leopard 2.
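For a concrete sense of what “trained on vast databases” involves, here is a minimal, purely illustrative Python sketch of fine-tuning an off-the-shelf image classifier on a labeled vehicle-image dataset. It is not the ATLAS pipeline; the dataset directory and class names are hypothetical stand-ins for the kind of curated, vetted imagery the Army would have to assemble and keep current.

```python
# Illustrative only: fine-tuning a generic classifier on labeled vehicle imagery.
# This is NOT the ATLAS pipeline; the directory layout and class names are
# hypothetical stand-ins for the curated database the Army would have to build.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical dataset: one folder per class, e.g. "t90/", "leopard_2/", "civilian_truck/"
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("vehicle_dataset/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a generic ImageNet backbone and retrain the final layer for the
# vehicle classes -- the part that photo-sharing and cat-GIF datasets don't cover.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The hard part, of course, is not the training loop but assembling, vetting, and continually updating the labeled data it consumes.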
(UPDATE: The first demonstration will be on a 50 mm autocannon, smaller than an M1’s 120 mm main gun but, interestingly, the exact same XM913 gun that the Army wants for its future Optionally Manned Fighting Vehicle.)
Once ATLAS is fielded, the AI on any given armored vehicle will compare data from multiple sensors, perhaps even from multiple vehicles, to defeat camouflage and spoofing. Then it will identify potential targets and point them out to the crew. How much detail ATLAS provides the user will depend on how good the algorithms get after being trained. But the AI will leave it to the human to determine hostile intent, which the law of war requires before opening fire in self-defense.
“ATLAS … might be saying ‘this is a human who appears to be carrying a weapon,'” Reago said. “The algorithm isn’t really making the judgment about whether something is hostile or not hostile. It’s simply alerting the soldier, [and] they have to use their training and their understanding to make that final determination.”
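To make that division of labor concrete, here is an illustrative Python sketch, not the Army’s actual software, of the “propose but never decide” pattern Reago describes: fuse detections from several sensor feeds, rank them, and hand the list to the crew. Every name and threshold in it is a hypothetical stand-in.

```python
# Illustrative sketch of the "propose, never decide" division of labor:
# fuse detections from several sensor feeds, rank them, and hand the list
# to the crew. All names and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # e.g. "thermal", "radar", "optical"
    label: str         # what the classifier thinks it sees
    confidence: float  # 0.0 - 1.0
    bearing: float     # degrees relative to the hull

def propose_objects_of_interest(detections, min_confidence=0.5):
    """Group detections that agree across sensors and rank them.

    Returns a ranked list of candidate "objects of interest" for the crew.
    Deliberately has no code path that actuates the weapon: judging hostile
    intent and firing remain human decisions.
    """
    fused = {}
    for det in detections:
        key = (det.label, round(det.bearing))       # crude cross-sensor grouping
        fused.setdefault(key, []).append(det)

    candidates = []
    for (label, bearing), group in fused.items():
        score = max(d.confidence for d in group)
        corroborated = len({d.sensor for d in group}) > 1   # seen by more than one sensor?
        if score >= min_confidence:
            candidates.append({
                "label": label,
                "bearing": bearing,
                "score": score,
                "corroborated": corroborated,
            })
    # Highest-confidence, multi-sensor candidates first
    return sorted(candidates, key=lambda c: (c["corroborated"], c["score"]), reverse=True)
```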
Above all, ATLAS will not be a second finger on the trigger. “The soldier would have to depress the palm switch to initiate firing,” said Bob Stephan, who’s worked on tanks for years and is now ATLAS project officer at Picatinny Arsenal in New Jersey. “If that is never pulled down, the firing pin will never get to the weapon…. That’s how we will make sure ATLAS never is allowed to fire autonomously.”
Pitfalls & Safeguards
That said, while the Army has no intention of giving ATLAS the ability to fire on its own, it would be physically possible to make it so. Even for purely mechanical firing systems with no software component, such as the tank gun’s palm switch and trigger, you could rig an AI-controlled servo to press them instead of a human hand.
That’s what really worries Stuart Russell, a Berkeley AI scientist and activist, who’s been ATLAS’s most prominent public critic so far. “Even if the human is ‘in the loop’ [currently], the ‘approval’ step could easily be eliminated,” Russell told me, “meaning that this is lethal autonomy in all but name.”
Even if the Army does keep its promise to keep a human in the loop, Russell went on, there’s potential for what’s called “automation bias.” That’s a form of artificial stupidity that occurs when a poorly designed interface or poorly trained operators reduce the human role to pushing whatever button the AI recommends — like Pavlov’s dog, only with tank cannons.
As the Army explained it, ATLAS will present a list of “objects of interest” from which the human operator can choose. ATLAS then brings the gun to bear on the chosen “object” — even the engineers had to stop themselves from saying “target” — so the human can look through the gun’s sights, the most detailed close-up sensor available, to make the final call.
But it’s easy to imagine a soldier skipping the eyes-on check and just firing blindly, especially in a high-stress situation with threats on all sides. After all, Tesla drivers are supposed to keep their eyes on the road and hands on the wheel when the car’s infamous Autopilot is engaged, but at least some of them didn’t, leading to fatal accidents. And that’s with a 5,000-pound automobile, not a 70-ton tank firing 50-pound shells.
So you don’t want your automated system doing 99 percent of the work and only asking the human for input at the end. “If the human is fully engaged in the decision process, they should be actively identifying targets rather than waiting for the machine to do it,” Russell told me. “The human has a much better understanding of the overall situation, the potential for misidentification, etc. Imagine how well the AI system is going to do in a truck stop parking lot, for example,” with different vehicles and people moving unpredictably and in close proximity.
Even subtle details of the user interface or training can induce automation bias, said Paul Scharre, an Army Ranger turned think tank analyst who’s generally bullish on new technology. “It’s not good enough for the engineers who build it to understand what it’s going to do,” Scharre told me. “The person pushing the button to launch it has to understand.”
For example, Scharre said, if the ATLAS screen shows an “object of interest” outlined in bright red cross-hairs, or operators are trained to trust the automation over mere human judgment, people are probably going to shoot when they shouldn’t. But if the system labels potential targets in, say, yellow — like a traffic light — and operators are trained that they’ll be held accountable by court-martial for every dead civilian or friendly soldier, that will predispose troops to caution.
The Army’s well aware of these issues, the ATLAS engineers told me. “An important part of the strategy is to bring the soldier into the loop at multiple points of the process,” Reago told me, “so they are fully a part of the chain of the events,” not just pressing a button at the end.
- The humans can always turn ATLAS off. Using it or not will be a tactical choice.
- Even with ATLAS on, the tank crew will still primarily be trained to use their own eyes, ears, and judgment to scan their surroundings. They’ll be able to look at the same sensor feeds ATLAS is analyzing or — something ATLAS can’t do — stick their heads out of the hatches to look and listen.
- When ATLAS proposes potential targets, the crew can add “objects of interest” to ATLAS’s list, delete things they don’t think are threats, or even turn ATLAS off.
- Finally, once ATLAS has brought the weapon to bear, it’s up to the gunner to look through the gunsights, identify the target, and decide whether or not to open fire.
That’s similar to current hunter-killer controls, in which the tank commander can slew the turret around to aim at a target he’s spotted but the gunner hasn’t yet. Just as hunter-killer systems let the (human) commander help the (human) gunner without taking over the gunner’s job, ATLAS is meant to help the human crew without the automation taking over. The idea is to blend the best of human and machine, a synergy then-deputy defense secretary Robert Work likened to the mythical centaur.
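Put together, the safeguards above amount to a workflow in which every path to the trigger runs through a human. The following Python sketch, again purely illustrative and not the actual fire-control code, shows what that structure looks like: ATLAS can propose and slew, the crew can edit the list or switch ATLAS off, and nothing fires without the gunner’s eyes-on confirmation and palm switch.

```python
# Illustrative state machine for the human-in-the-loop workflow described
# above -- not the actual fire-control software. The point it encodes is
# structural: there is no path to fire() that does not pass through explicit
# human inputs (selection, eyes-on confirmation, palm switch).
class FireControlStation:
    def __init__(self):
        self.atlas_enabled = True          # crew can switch ATLAS off at any time
        self.objects_of_interest = []      # ATLAS proposals plus crew additions

    def atlas_propose(self, candidates):
        if self.atlas_enabled:
            self.objects_of_interest.extend(candidates)

    def crew_add(self, obj):               # crew can add their own sightings...
        self.objects_of_interest.append(obj)

    def crew_remove(self, obj):            # ...or delete things they judge harmless
        self.objects_of_interest.remove(obj)

    def slew_to(self, obj):
        """Bring the gun to bear so the gunner can inspect through the sights."""
        print(f"Slewing turret to {obj}")

    def fire(self, gunner_confirms_through_sights, palm_switch_depressed):
        # Both human actions are required; ATLAS supplies neither.
        if gunner_confirms_through_sights and palm_switch_depressed:
            print("Round fired by human decision.")
        else:
            print("Hold fire.")
```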
“We are not building killer robots”
“Our goal is, has always been, to help the soldier,” Reago told me. “We’re not taking away their current capability for acting on their own. It’s more of an assistant.”
“We are not building killer robots,” he told me. “I am not interested in that whatsoever.”
Is anyone else in the Army interested in taking the human out of the loop, though? In 30 years working on assisted target recognition, Reago replied, “I have had the good fortune … of talking to many, almost all, the senior Army leadership, and none of them has ever said anything like that.”
“They have a strong faith in the ability [and] the skill of our soldiers,” Reago told me. “It’s one of our crown jewels, [and] I don’t see how any machine’s going to replace that.”
“Our policy is having a human in the loop because a human can understand context,” said Army Undersecretary Ryan McCarthy, himself a veteran of Ranger operations in Afghanistan, when I asked him about AI at a public event this afternoon. As for machines, he said, “they can analyze the data within a nanosecond or however fast they do it, but you have to have a human being to understand the context, to put that guidance in there about whether or not to shoot, and to take the shot.”
“I would not foresee that to be any different in the near future,” McCarthy said.
But if the Army’s intentions are so modest, why did they inspire such anxious headlines? If ATLAS is so benign, why does it stand for “Advanced Targeting & Lethality Automated System”?
The answer lies in both the potential applications of the technology, as Russell points out, and the US Army’s longstanding difficulty explaining itself to the tech sector, the scientific community, or the American people. That’s the topic for Part II of this story, out Wednesday.
Updated 8:35 am Tuesday to add detail about caliber of weapon in demonstration and correct details of Mr. Reago’s title and Mr. Stephan’s background.