Faster Than Thought: DARPA, Artificial Intelligence, & The Third Offset Strategy
ARLINGTON: The Defense Advanced Research Projects Agency (DARPA) is developing artificial intelligence that can help humans understand the floods of data they unleashed 50 years ago with the Internet and make better decisions, even in the heat of battle. Such “human-machine collaboration” — informally known as the centaur model — is the high-tech holy grail of the Defense Department’s plan to counter Russian and Chinese advances, known as the Third Offset Strategy.
“We’ve had some great conversations with the deputy,” said DARPA director Arati Prabhakar, referring to the chief architect of Offset, Deputy Defense Secretary Bob Work. “In many of our programs you’ll see some of the technology components” of the strategy. But it’s more than specific technologies, however exotic: It’s about a new approach to technology.
“Fundamentally, what’s behind the push of the Third Offset Strategy is this idea that the department needs to reinvigorate our ability to develop these advanced technologies,” Prabhakar said. “If we do that at the same old pace in the same old way, there’s a strong recognition that we’re just not going to get there.”
“We build monolithic systems today with every subsystem hardwired to each other… making it hard to even figure out where the problems are,” she said. Such monoliths take too long to develop, too long to troubleshoot, and too long to update: They can’t keep up with rapidly advancing adversaries. So DARPA has an initiative to “rethink complex military systems” in a fundamental way.
A traditional weapons program, like a fighter or a ship, spends years or decades packaging all sorts of customized software and hardware together in a tightly integrated system. Every piece depends on every other, often in unforeseen ways, which makes debugging software (for example) into a nightmare. Figuring out a fault can be like unraveling “spaghetti,” said Prabhakar.
Instead of such custom-tailored, tightly integrated systems, you want a modular and open architecture where you can easily replace a component — hardware or software — without disrupting the rest of the system (see the sketch below).
Instead of a relatively small number of pricey manned platforms, you want a “heterogeneous” mix of manned and unmanned vehicles of all kinds, from 130-foot robotic ships to disposable handheld drones. Instead of architectures designed for a specific kind and size of force, you want systems that can scale up and down as the force changes. And instead of brittle networks dependent on a few means of transmission and a few central nodes, you want a highly distributed network that stays up despite physical attack, jamming, and hacking.
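To make the modular, open-architecture idea concrete, here is a minimal Python sketch (all names hypothetical, purely for illustration, not anything DARPA has published): the rest of the system talks to a component only through a stable interface, so either implementation can be swapped in without touching the code around it.

```python
from abc import ABC, abstractmethod

class RadarProcessor(ABC):
    """Stable interface: the rest of the system depends only on this contract."""
    @abstractmethod
    def classify(self, signal: bytes) -> str:
        ...

class LegacyProcessor(RadarProcessor):
    def classify(self, signal: bytes) -> str:
        return "unknown"  # stand-in for old, hardwired behavior

class UpgradedProcessor(RadarProcessor):
    def classify(self, signal: bytes) -> str:
        # Toy heuristic; a real upgrade would change only this module.
        return "fighter" if len(signal) > 64 else "unknown"

def mission_computer(processor: RadarProcessor, signal: bytes) -> str:
    # The mission computer never reaches into processor internals, so
    # swapping LegacyProcessor for UpgradedProcessor disturbs nothing here.
    return processor.classify(signal)

print(mission_computer(UpgradedProcessor(), b"\x00" * 80))  # -> "fighter"
```

The point is the seam: upgrading the processor becomes a one-line change at the call site, not a rewiring of the whole platform.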
Jamming and hacking are hard to combat, however. The more you network, the more easily a cyberattack can spread throughout your force. The more you network wirelessly, the more easily electronic warfare can detect your transmissions and exploit them or shut them down. DARPA is applying cutting-edge research to both these problems.
A project called HACMS — High Assurance Cyber Military Systems — applies mathematical techniques known as “formal methods” to finding and closing cyber vulnerabilities. In one recent experiment, Prabhakar said, the HACMS team took the mission computer for a Special Operations helicopter, an AH-6 Little Bird, and rebuilt the software, creating a new “kernel” on top of which the AH-6’s existing programs could run.
When a Red Team of expert hackers tried to break in, they couldn’t. Even when the Red Team was given some of the HACMS source code, they couldn’t find a hole. In fact, Prabhakar said proudly, the test at one point gave the Red Team control of one of the AH-6’s onboard programs, one that runs a camera — but then the attackers couldn’t get out of the camera software when they tried to penetrate the rest of the mission computer and get at the flight controls.
“These systems are not ‘unhackable’ completely,” Prabhakar cautioned, “[but] the obvious pathways for attackers have all been shut down in a way that’s mathematically provable.”
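What makes a guarantee like that “mathematically provable”? The real HACMS work proves properties of an entire kernel; the toy Python sketch below (hypothetical partition names, nowhere near the real scale) illustrates the flavor by exhaustively checking one isolation property: under the declared communication policy, no chain of messages can ever get from the camera partition to the flight controls.

```python
from collections import deque

# Hypothetical partition-communication policy: which software partitions
# are allowed to send messages to which. HACMS-style work proves isolation
# over a real kernel; here we just exhaustively check a toy model of it.
ALLOWED_CHANNELS = {
    "camera":          ["video_recorder"],
    "video_recorder":  [],
    "navigation":      ["flight_controls"],
    "flight_controls": [],
}

def reachable(src: str) -> set:
    """Every partition a message originating in `src` could ever reach."""
    seen, queue = {src}, deque([src])
    while queue:
        for nxt in ALLOWED_CHANNELS[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# The isolation claim from the Red Team test, checked exhaustively:
# a compromised camera partition can never reach the flight controls.
assert "flight_controls" not in reachable("camera")
```

Because the check enumerates every reachable partition, it is exhaustive rather than a sampling of test cases, which is the essence of the formal-methods claim.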
DARPA’s also applying new methods to the old problem of electronic warfare. Currently, when an aircraft encounters a new kind of signal — an enemy radar, for example, or a mysterious radio message — it records the data and brings it back to base. Then experts may take months or years to understand the enemy system and how to counter it. That was adequate when radars and radios were hardwired and hard to modify — but modern transmitters are digital, so changing the waveform is a simple matter of software. To keep up with these ever-mutating signals, “cognitive electronic warfare” aims to use artificial intelligence to detect, catalog, and counter transmissions in real time.
“We want to get to where we respond and react faster than human timescales,” Prabhakar said. “The way we do that is by, first of all, scouring the spectrum in real time and, secondly, applying some of the most amazing frontiers of artificial intelligence and machine learning, techniques like reinforcement learning. [Then we] use those to build systems, onboard systems, that can learn what the adversary is doing in the electromagnetic spectrum, start making predictions about what they’re going to do next, and then adapt the onboard jammer to be where the adversary’s going before they get there.”
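Reinforcement learning, in miniature, means trying actions, observing rewards, and updating a strategy on the fly. The Python sketch below is a deliberately toy illustration, not DARPA’s algorithm: an epsilon-greedy learner discovers, from hit-or-miss feedback alone, which frequency channel a made-up adversary favors, and keeps exploring in case the adversary shifts.

```python
import random

N_CHANNELS = 8          # hypothetical frequency channels
EPSILON = 0.1           # exploration rate
q = [0.0] * N_CHANNELS  # estimated payoff of jamming each channel
n = [0] * N_CHANNELS    # times each channel has been tried

def adversary_channel() -> int:
    # Stand-in for the real adversary: favors channel 3 most of the time.
    return 3 if random.random() < 0.8 else random.randrange(N_CHANNELS)

for step in range(10_000):
    # Epsilon-greedy: mostly jam the channel we currently believe the
    # adversary uses, occasionally explore in case it has adapted.
    if random.random() < EPSILON:
        choice = random.randrange(N_CHANNELS)
    else:
        choice = max(range(N_CHANNELS), key=lambda c: q[c])
    reward = 1.0 if choice == adversary_channel() else 0.0
    n[choice] += 1
    q[choice] += (reward - q[choice]) / n[choice]  # incremental mean update

print("learned jamming channel:", max(range(N_CHANNELS), key=lambda c: q[c]))
```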
Automated defenses actually exist today in one arena of physical combat: air and missile defense, where the Navy’s Aegis ships can start firing on automatic when too many threats are coming in too fast for human brains to handle.
So what do the humans do? No one’s proposing that machines should make decisions about the use of lethal force — at least, no one in the US — but if the battle is too fast and too complex for the human brain to handle, how do commanders command?
“You don’t want to overload the human with all that information. You want to give them exactly what [they need] to make the decision,” said Prabhakar’s deputy, Steven Walker. You want the computer to keep track of all the complex actions by manned and unmanned systems, friendly and adversary; do an analysis; and present “two or three courses of action” for the human to choose among.
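One way to picture such a decision aid, with invented names and numbers throughout, is a scorer that ranks candidate courses of action and surfaces only the top few for the commander to choose among.

```python
from dataclasses import dataclass

@dataclass
class CourseOfAction:
    name: str
    expected_effect: float  # 0..1, hypothetical model output
    risk: float             # 0..1, hypothetical model output

def score(coa: CourseOfAction, risk_tolerance: float = 0.5) -> float:
    # Simple weighted trade-off; a real aid would fuse far more factors.
    return coa.expected_effect - (1 - risk_tolerance) * coa.risk

candidates = [
    CourseOfAction("hold and observe", 0.30, 0.05),
    CourseOfAction("jam and reroute",  0.60, 0.30),
    CourseOfAction("strike node A",    0.85, 0.70),
    CourseOfAction("strike node B",    0.55, 0.45),
    CourseOfAction("withdraw",         0.10, 0.02),
]

# Present only the top three options; the human commander still chooses.
for coa in sorted(candidates, key=score, reverse=True)[:3]:
    print(f"{coa.name}: score={score(coa):.2f}")
```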
Wait a minute, I said. If you’ve been around Washington long enough, you know that the “decisionmakers” are often puppets of their own staff, who determine which options make it to the boss’s desk in the first place and then put a heavy thumb on the scales of which is best. In this paradigm you’re proposing, the computer plays the role of the savvy staff, the human the hapless principal. How do you make sure the commander isn’t just a rubber stamp for the computer?
“You’ve put your finger on one of the biggest issues,” Prabhakar said frankly. “As we enhance the abilities of these machine systems, [it] is about our trust and confidence in what they tell us, about what they think is happening, or what courses of actions they’re proposing.”
“There’s this powerful new wave that’s happening today in AI,” she continued, and the Pentagon needs to exploit it, “but I think it’s really important to just put on the table the fact that a lot of what’s happening in deep learning doesn’t yet have [a] rigorous theoretical foundation….We all see these systems come up with solutions that violate common sense because they lack the context.”
A small-scale example from everyday life is Apple’s Siri voice-activated software. “At first Siri is amazing,” Prabhakar said. “After three questions” — not so much. (My children delight in getting nonsensical answers out of Siri.)
DARPA already has some programs tackling this problem, Prabhakar said, but “you’ll see more, I think, in that area as we start developing this next foundation for AI.”
A new foundation for artificial intelligence? That’s no small goal. But then, this is DARPA. “The bar is really, really high,” Prabhakar said with a chuckle. “One of our program managers likes to say… ‘This is a place where, if you don’t invent the Internet, you get a B.'”