The New Mil-Tech Industrial Complex

In the past few years, some of the biggest A.I. companies in the U.S. started partnering with the military to advance research, find efficiencies and develop new technological systems that can be deployed under a variety of circumstances.

The reason: The public sector cannot advance its technology without help from outside companies. Plus, there is a lot of money to be made. Amazon and Microsoft made headlines competing for a $10 billion, 10-year government tech contract called the Joint Enterprise Defense Infrastructure, or JEDI, and several other companies, including IBM, Oracle and Google, also vied to transform the military’s cloud computing systems.

Meanwhile, the Central Intelligence Agency awarded Amazon a $600 million cloud services contract, and Microsoft won a $480 million contract to build HoloLens headsets for the Army. Both contracts prompted employee protests.

In 2017, the Department of Defense established an Algorithmic Warfare Cross-Functional Team to work on Project Maven, a computer vision and deep-learning system that autonomously recognizes objects in still images and video. The team didn’t have the necessary A.I. capabilities in-house, so the DOD contracted with Google for help training A.I. systems to analyze drone footage. But the Google employees assigned to the project didn’t know they were working on a military effort, and the revelation resulted in high-profile backlash.
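
To make the underlying idea concrete, here is a minimal sketch of what frame-by-frame object recognition on video looks like in code, using an off-the-shelf pretrained detector. The model choice, confidence threshold and file name are illustrative assumptions; this is not a description of Project Maven’s actual system.

    # Illustrative sketch: run a generic pretrained object detector over video frames.
    # The model, threshold and input file are assumptions, not Project Maven details.
    import cv2
    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.transforms.functional import to_tensor

    model = fasterrcnn_resnet50_fpn(pretrained=True)  # generic COCO-trained detector
    model.eval()

    capture = cv2.VideoCapture("footage.mp4")  # hypothetical input video
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # Convert the BGR frame to an RGB tensor and run the detector on it.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            detections = model([to_tensor(rgb)])[0]
        # Keep only confident detections: bounding boxes plus class labels.
        for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
            if score > 0.8:
                print(label.item(), box.tolist())
    capture.release()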

As many as 4,000 Google employees signed a petition objecting to Project Maven. They took out a full-page ad in The New York Times, and ultimately dozens of employees resigned.

Eventually, Google said it wouldn’t renew its contract with the DOD, and it later published a set of ethical principles governing its development and use of A.I., including a provision that prohibits its systems from being used for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”