Teaching Drones How To See: Fire Scout & Kestrel
The military is drowning in video. Figuring out what’s worth watching can literally be a matter of life and death.
The standard technique today is to sit young servicemembers down at screens to stare at live feeds or archived video — from drones, from satellites, from static cameras — until their eyes glaze over. But that’s labor-intensive and unreliable: Human beings aren’t good at paying close attention to the same thing for hours on end. So the military will pay good money for software that can automatically detect patterns and tell the humans when and where to focus their attention.
This is the big picture behind the US Navy’s recent contract with Australia-based Sentient to install “automated detection software” on the MQ-8 Fire Scout drone. An unmanned helicopter available in two sizes, the Fire Scout has already deployed as a reconnaissance asset on Navy frigates and will go on the controversial Littoral Combat Ship. But as with other drones, having no human being aboard hardly means there’s no human controller required. Different unmanned vehicles have different degrees of autonomy: The Predator requires human hands constantly on its controls, while the Global Hawk can fly itself from point to point. But all of them need a human to go through the megabytes of data they beam back.
Sentient’s software, called Kestrel, automatically picks out objects from the background and highlights them for the human operator, in real time. Video on the company’s website shows the program highlighting tiny specks in the distance or through cloud cover — things I certainly wouldn’t have noticed amid the whitecaps — that turn out on closer inspection to be boats. The company claims Kestrel can pick up small wooden boats and even an individual person gone overboard.
“Kestrel relieves the operator from searching a video stream for [objects] of interest and instead cues the operator to objects for evaluation, reducing workload and improving detection,” said Rob Murphy, who works on Fire Scout for Naval Air Systems Command. “This capability improves any type of search effort by allowing a larger area to be searched in a given amount of time.”
“Automatic detection is very difficult,” Murphy noted. Computers are great at crunching numbers, not so great at navigating the physical world. Millions of years of evolution have honed the human eye and brain to compartmentalize the flood of incoming photons into discrete objects: I can distinguish the laptop I’m writing on from the table, the table from the floor, the floor from the walls. To a computer, they’re all just pixels.
Even the most modern software doesn’t really recognize objects. “It’s simply picking up pixels that are moving,” said Doug Rombough, a retired Army colonel who’s now head of business development at Virginia-based Logos Technologies. If a cluster of pixels above a certain size starts moving in a consistent direction, the software identifies that cluster as a vehicle and tracks it, highlighting it for the operator. “The challenge,” he said, “[is] if that car comes to a stop sign and sits there for a little bit, your [highlight] box may go away”: The software can’t pick up on stationary objects.
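To make that concrete, here is a minimal sketch of this kind of motion-based detection, using off-the-shelf OpenCV background subtraction. It illustrates the general technique, not Sentient’s or Logos’s actual algorithm; the video filename and the size threshold are invented:

```python
import cv2

MIN_AREA = 400  # minimum pixel area to treat a moving cluster as vehicle-sized (invented)

cap = cv2.VideoCapture("drone_feed.mp4")  # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Pixels that differ from the learned background model count as "moving."
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < MIN_AREA:
            continue  # too small to be a vehicle-sized cluster
        x, y, w, h = cv2.boundingRect(c)
        # Highlight the moving cluster for the operator. Note the weakness
        # Rombough cites: once the object stops, it is absorbed into the
        # background model and the box disappears.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```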
Logos makes wide-area surveillance systems for use on aerostats (aka blimps): a larger system with night-vision capabilities used in Afghanistan, also called Kestrel — apparently there aren’t enough sharp-eyed bird species to name surveillance systems after anymore — and a smaller daytime-only model called Simera that the State Department has approved for foreign buyers.
Both systems can track every vehicle-sized object across roughly a 37-square-mile area, and human-sized objects at closer range. The real value is not tracking everything for the sake of tracking everything, Rombough told me: It’s picking out what might matter so humans can zoom in with other sensors.
“It is a very powerful sensor,” he said, “but it doesn’t have a great resolution: It’s not going to pick up license plate numbers or people’s faces.” Instead, it tells the human operator where to point a higher-resolution but narrower-view sensor, one that couldn’t search a large area on its own but can get a close look at something once it knows where to look.
“Ideally we would like to take the man out of the loop,” Rombough went on. “If a guy’s driving a joystick [to aim a sensor], that can be very slow. We would rather have the computer and the software drive it.” Then the wide-area sensor would be talking directly to the high-resolution sensor, allowing each to do what it does best without a human having to bring them together.
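A toy sketch of that handoff: converting a detection’s pixel position in the wide-area image into pan and tilt angles that slew the narrow-view sensor, with no joystick in between. The camera parameters and the flat-image geometry are simplifying assumptions; a real system would use full camera models and georegistration:

```python
import math

# Hypothetical wide-area camera parameters.
WIDE_FOV_DEG = (60.0, 40.0)  # horizontal, vertical field of view in degrees
WIDE_RES = (8000, 6000)      # image width, height in pixels

def cue_narrow_sensor(px: float, py: float) -> tuple[float, float]:
    """Convert a detection's pixel coordinates in the wide-area image
    into pan/tilt offsets (degrees) for the high-resolution sensor."""
    # Offset from image center, as a fraction of the half-width/height.
    dx = (px - WIDE_RES[0] / 2) / (WIDE_RES[0] / 2)
    dy = (py - WIDE_RES[1] / 2) / (WIDE_RES[1] / 2)
    # Map the fraction to an angle via the tangent of the half field of view.
    pan = math.degrees(math.atan(dx * math.tan(math.radians(WIDE_FOV_DEG[0] / 2))))
    tilt = math.degrees(math.atan(dy * math.tan(math.radians(WIDE_FOV_DEG[1] / 2))))
    return pan, tilt

# A detection at pixel (6400, 1500) cues the narrow sensor automatically.
pan, tilt = cue_narrow_sensor(6400, 1500)
print(f"slew narrow sensor to pan={pan:.2f} deg, tilt={tilt:.2f} deg")
```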
That’s a concept called “automatic cuing,” and it’s tremendously attractive to the military and intelligence communities. As my colleague Colin Clark wrote last year, “Imagine a satellite has been tasked to watch a village with several high value targets in residence. The satellite, probably working with other assets such as Global Hawks and Predators, would perform what is today known as change detection. For example, the three people under surveillance etch the same rough pattern in the village for several weeks, going to the mosque, visiting a tea house and sleeping in several different houses. One day, two of the men go outside, get on scooters and drive in opposite directions. The ground station receiving the data would automatically note the shift in behavior and alert analysts or even special operations troops on standby.”
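As a rough illustration of the change detection Clark describes (flagging when tracked behavior breaks an established pattern), here is a hedged sketch. The movement logs, the profile distance, and the alert threshold are all invented for the example:

```python
from collections import Counter

# Hypothetical daily movement logs for one tracked individual: the
# sequence of places visited each day over the surveillance period.
baseline_days = [
    ["house_a", "mosque", "tea_house", "house_b"],
    ["house_b", "mosque", "tea_house", "house_a"],
    ["house_a", "mosque", "tea_house", "house_c"],
]

def visit_profile(day):
    """Reduce a day's track to a frequency profile of places visited."""
    counts = Counter(day)
    total = sum(counts.values())
    return {place: n / total for place, n in counts.items()}

def deviation(day, baseline):
    """Crude distance between today's profile and the baseline average."""
    profiles = [visit_profile(d) for d in baseline]
    places = {p for prof in profiles for p in prof} | set(day)
    avg = {p: sum(prof.get(p, 0) for prof in profiles) / len(profiles)
           for p in places}
    today = visit_profile(day)
    return sum(abs(today.get(p, 0) - avg.get(p, 0)) for p in places)

ALERT_THRESHOLD = 0.8  # invented; tuning this is the hard part in practice

# One day the pattern breaks: no mosque, no tea house, a drive out of town.
new_day = ["house_a", "scooter_departure", "edge_of_town"]
if deviation(new_day, baseline_days) > ALERT_THRESHOLD:
    print("pattern shift detected -- alert analysts")
```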
That level of sophistication is a long way from what’s going onto the Navy’s Fire Scout now — but in the future it would allow the military to make the most of all the data it’s accumulating.