Ethics At Heart Of Thales’ AI Strategy
PARIS AIR SHOW: The use of Artificial Intelligence (AI) needs to be “transparent, understandable and ethical,” says David Sadek, director of AI research at Thales, the French aerospace and defense electronics giant. Speaking to reporters here today, Sadek stressed that “ethical principles are very important” to the company’s overall AI research strategy, adding that “while it is not up to Thales to decide what is or is not ethical, it is up to us to develop technologies that enable our customers to implement ethical principles.”
In a rare effort by a defense firm to explain its activities to an audience outside of its often secretive government customers, Thales is using the Paris Air Show to tout its corporate approach to AI research and development, called “Thales TrUE AI.” The acronym stands for: “Transparent AI, where users can see the data used to arrive at a conclusion; Understandable AI, that can explain and justify the results; and finally Ethical AI, that follows objective standards, protocols, laws, and human rights,” according to a company fact sheet.
In a unique marketing and outreach campaign, Thales is showing a series of videos each day of the Paris Air Show that explain how it is building these principles into its long-term strategic research, and how its AI-enabled systems can help improve decision-making — featuring case studies from the perspectives of a senior Air Force general, a fighter pilot and a civilian traveler planning an airline journey.
The use of AI and machine learning to develop autonomous weapon systems, especially lethal systems, is a controversial topic — perhaps more so in Europe than in the United States. Non-governmental organizations (NGOs), academia and international organizations have for nearly a decade been working to develop a set of rules for the development and use of autonomous weapons, centered around the concept of ensuring ‘meaningful human control’ — that is, not simply someone clicking ‘yes’ on a checklist.
As long ago as 2013, the United Nations Institute for Disarmament Research (UNIDIR, which I used to head) began a project to help national governments grapple with the ethical implications of autonomous weapons. In 2016, the parties (including the US) to the 1980 Convention on Certain Conventional Weapons, which bars the use of indiscriminate weapons such as blinding lasers, created an open-ended Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (GGE LAWS). The GGE works to establish international consensus on the applicability of existing legal regimes, including the Geneva Conventions, and on the possible need for new legal constraints. It last met in March 2019 in Geneva. Further, most of the 28 countries that have endorsed the NGO-led Campaign to Stop Killer Robots are in Europe. While China also has endorsed the campaign, the US and Russia have not.
Thales, which saw revenues of $17.8 billion in 2017, is actively seeking to integrate AI-enabled operations and data analytics into “all of its vertical product streams: space, aerospace, cyber, avionics and defense,” Sadek told reporters. Currently, the French Defense Ministry’s procurement agency, DGA, is testing an AI-driven target recognition and acquisition system for the next-generation Rafale F4, he said. The French government in January announced an investment of €1.9 billion ($2.1 billion) in development of Dassault Aviation’s Rafale F4 model, with the Air Force planning to acquire 30 of the new planes between 2027 and 2030.
Sadek noted that one of the key aims of Thales’ AI research strategy is to marry what he called “statistical or data-driven AI” — machine learning that derives its behavior from data — with “symbolic AI or model-based AI,” which aims to reproduce basic cognitive capabilities. “We are addressing … mainly what we call the hybrid AI technologies, which are a combination between statistical AI and symbolic AI. This is probably the virtuous trajectory for the future of AI technologies,” he said.
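To make the distinction concrete, here is a minimal sketch of the hybrid pattern Sadek describes: a statistical model proposes a classification, and a symbolic rule layer vetoes results that violate known constraints. It is purely illustrative; the function names and rules are hypothetical, not Thales code.

```python
# Hypothetical sketch of "hybrid AI": a statistical model proposes,
# a symbolic rule layer disposes. Names and rules are illustrative only.

def statistical_classifier(track):
    """Stand-in for a trained ML model: scores candidate labels from data."""
    # A real system would run a learned model here; we return fixed scores.
    return {"hostile": 0.6, "civilian": 0.3, "unknown": 0.1}

SYMBOLIC_RULES = [
    # Each rule returns a label to forbid, given known facts about the track.
    lambda track: "hostile" if track["in_civil_air_corridor"] else None,
]

def hybrid_classify(track):
    scores = statistical_classifier(track)
    forbidden = {rule(track) for rule in SYMBOLIC_RULES} - {None}
    # Symbolic layer: discard statistically likely labels that violate rules.
    allowed = {label: s for label, s in scores.items() if label not in forbidden}
    return max(allowed, key=allowed.get) if allowed else "unknown"

print(hybrid_classify({"in_civil_air_corridor": True}))  # -> "civilian"
```

The appeal of the hybrid approach is that the rule layer makes the system’s behavior easier to explain and audit, which is exactly the “understandable” property Thales is touting.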
He noted that Thales is matching its research on technical challenges facing the AI field as a whole to “use cases” within its own business lines. One such challenge, he said, is “frugal learning” — that is, enabling machine learning when the data sets available as a baseline are very small. “In many cases, big data are rather small; in many domains we don’t have much data.”
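One common way to operate in that small-data regime, sketched below purely as an illustration (the example data and labels are hypothetical, not drawn from Thales), is a nearest-centroid classifier, which can make useful predictions from only a handful of labeled examples per class.

```python
# Hypothetical illustration of frugal (few-shot) learning: classify new
# points from only a few labeled examples per class via nearest centroid.
import numpy as np

def fit_centroids(examples):
    """examples: {label: array of shape (n_samples, n_features)}, n_samples tiny."""
    return {label: pts.mean(axis=0) for label, pts in examples.items()}

def predict(centroids, x):
    # Assign x to the class whose centroid is closest in Euclidean distance.
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

# Just three labeled examples per class: "big data" that is actually small.
examples = {
    "fast_mover": np.array([[900.0, 10.0], [950.0, 12.0], [880.0, 9.0]]),
    "slow_mover": np.array([[200.0, 3.0], [180.0, 2.5], [220.0, 3.2]]),
}
centroids = fit_centroids(examples)
print(predict(centroids, np.array([870.0, 11.0])))  # -> "fast_mover"
```

Even with only three labeled examples per class, the classifier assigns the new point sensibly; techniques like this, along with transfer learning and data augmentation, are among the standard answers to the frugal-learning problem.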
To improve its AI capabilities, the company has also been buying up and/or partnering with small, specialized start-ups. For example, Thales announced on June 7 its acquisition of Cincinnati, Ohio-based Psibernetix to help create certifiable AI. Psibernetix, along with the US Air Force Research Laboratory, created the ALPHA aerial combat application for unmanned combat aerial vehicles, which is being used in simulators to train US Air Force pilots. Breaking D readers, of course, know about this since we broke the story.