AI Ethics: Silicon Valley Should Take A Seat At The DoD Table
In the wake of objections from its employees, Google withdrew from Project Maven, a Pentagon contract under which it was designing software to improve the analysis of drone imagery.
Worried that it might lose access to critical technology, the military has responded to Silicon Valley in part by creating a new Joint Artificial Intelligence Center (JAIC), which includes a focus on “ethics, humanitarian considerations, and both short-term and long-term A.I. safety.”
The ethics of weapons and their use isn’t limited to AI, nor is it new. Albert Einstein regretted his role in encouraging President Roosevelt to create the Manhattan Project, which designed and built the first nuclear weapons. Defense funding of university-based research was a flashpoint in anti-war protests in the late 1960s. In my book Mind Wars I recount a 2003 debate in the pages of the prestigious science journal Nature, spurred by an editorial called “The Silence of the Neuroengineers.” The journal urged neuroscientists to be more aware of the implications of work that might be funded by agencies like DARPA, sparking an angry response from a senior member of DARPA’s Defense Sciences Office, who noted the benefits of such work to civilians, including in medical care.
The relationship between the science and security establishments is complex and longstanding. The National Academies have their origins in President Lincoln’s need to screen new warfighting technologies. Psychiatrists and psychologists assessed intelligence, personality and small group dynamics in the world wars, and responses to the atomic field tests in the 1950s. Social scientists were sought out to help understand communist subversive movements in the developing world during the 1960s. Modern U.S. military superiority would be literally unthinkable without the massive financing of the academic world since Sputnik.
As a philosopher and historian who has had a modest role in national security discussions for 25 years – I sometimes call myself an ethnographer who approaches the security world the way an anthropologist studies a culture – I’ve given a lot of thought to the ethical responsibilities of scientists and technologists. I believe there are ways for individuals and organizations to think through these seemingly conflicting responsibilities. In a June 2018 op-ed in The Atlantic, former Secretary of State Henry Kissinger called on technologists and humanists to join together in leading the way toward a philosophical framework for the ethically challenging new era of AI. It’s a fine goal, one made far more complex by the introduction of defense planners into the conversation. In my experience, a first step is trust. Academics may be surprised that ethics is written into the DNA of our military culture. For their part, military planners generally put aside any stereotypes about pointy-headed radical peaceniks run amok on campus. (That doesn’t mean there aren’t exceptions in both cases.)
From the technologists’ side, the problem seems daunting at first. How do we reconcile personal ethics and a desire to expand knowledge and contribute to human flourishing with the relentless demands of national security? Both involve the nature of responsibility and the realities of an increasingly competitive and often violent world, one in which the international arrangements that have prevented global catastrophe for 70 years now seem under more stress than ever. Most of all, scientists and engineers don’t want to be in the position satirized in 1960s comedian Tom Lehrer’s ditty: “Once the rockets are up, who cares where they come down? That’s not my department, says Wernher von Braun.”
I suggest a framework based on the principles of proximity and engagement. Proximity refers to the known role that one’s own work would have in causing death or injury. The more proximate the role, the more reasonable it is to ask questions about the use of the technology, such as how it would be governed by the laws of armed conflict and command structures. Admittedly, basic science can also end up in applications with unintended effects. Those can’t always be gamed out in advance, but scenarios can at least sometimes be imagined and they, too, can raise appropriate questions. Another aspect of proximity is errors of omission: What harm will be done if I don’t undertake this work?
The latter question leads to the principle of engagement. My own work as an ethicist and historian is unlikely to have concrete effects on the battlespace, but I could inadvertently play a sanitizing “witness” role. Beyond their role in the work itself, scientists and engineers need to consider the consequences of their deliberate absence from the conversation. If they don’t insist on building acceptable and verifiable safeguards for their work into a system, someone else will, and not necessarily in a form they would endorse. To have a voice at the table, you need to have a seat at the table.
Jonathan Moreno is the David and Lyn Silfen University Professor at the University of Pennsylvania. At Penn he is also professor of medical ethics and health policy, of history and sociology of science, and of philosophy. The American Journal of Bioethics has called him “the most interesting bioethicist of our time.” Moreno is the U.S. member of the UNESCO International Bioethics Committee.