Sue Gordon: Silicon Valley Should Work With the Government

Credit to Author: Emily Dreyfuss | Date: Fri, 09 Nov 2018 17:05:33 +0000

Sue Gordon, the principal deputy director of national intelligence, wakes up every day at 3 am, jumps on a Peloton, and reads up on all the ways the world is trying to destroy the United States. By noon, she has usually visited the Oval Office and met with the heads of the 17 intelligence agencies to get threat reports. The self-described “chief operating officer of the intelligence community” has a lot to worry about, but the 37-year veteran is generally optimistic about America’s future. Now, she says, she just needs Silicon Valley to realize that tech and government don’t have to be opposed.

On a recent trip to Silicon Valley, Gordon sat down with WIRED to talk about how much the government needs Silicon Valley to join the fight to keep the US safe. She was in town to speak at a conference at Stanford, but also to convince tech industry leaders that, despite increasing employee concerns, the government and tech have a lot of shared goals.

“I had a meeting with Google where my opening bid was: ‘We're in the same business.’ And they're like, ‘What?’ And I said: ‘Using information for good,’” Gordon says.

That’s a hard sell in Silicon Valley, especially in the post-Snowden years. After Snowden’s leaks, tech companies and tech workers didn’t want to be seen as complicit with a government that spied on its own people—a characterization Gordon disputes, saying that any collection of citizens’ information was incidental and purged from their systems. This led to a much-publicized disconnect between the two power centers, one that has only grown more entrenched and public in 2018, as Silicon Valley has undergone something of an ethical awakening.

Gordon shares that broader awareness that technology can be abused, but she came to Silicon Valley to explain why government and tech should solve those problems hand in hand.

Gordon knows from public-private partnerships. The CIA’s venture capital arm In-Q-Tel—which for nearly 20 years has invested in everything from malware-detection software to biochemical sensors to micro-batteries—was Gordon’s idea. Groundbreaking at its conception, In-Q-Tel directly funds startups that could be of interest to national security, without limits on how that money can be used and without owning the intellectual property. Among other successful investments, In-Q-Tel backed a company called Keyhole, which Google would go on to acquire and turn into Google Earth.

"You don't become lawless just because you have technology."

Principal Deputy DNI Sue Gordon

Now, Gordon says, the time is ripe for a new partnership between the intelligence agencies and Silicon Valley. Artificial intelligence, she says, presents a huge opportunity for the government and the private sector, but the risks of its being abused, biased, or deployed by foreign adversaries are so real that the government and tech companies should collaborate to secure it.

While some in tech openly agree with that notion—Jeff Bezos told the audience at WIRED25 last month that “if big tech companies are going to turn their back on the US Department of Defense, this country is going to be in a lot of trouble”—much of the rank and file is uneasy about or flat-out hostile to the idea of working with the government on matters of war.

Google, in particular, has had a rocky relationship with the government of late. In June, pressure from within its ranks led the company not to renew a Pentagon contract to help develop AI that would identify drone targets. Gordon expressed dismay over the decision, emphasizing that pattern-recognition work is vital to intelligence gathering and that it’s in the country's best interests to develop the best systems to get it done.

“I'm afraid some of the folks at Google probably think that when they're working on Project Maven, which is about computer vision, that some automatic device is going to make the decision about sending a weapon system,” she says. But Gordon contends that a human still ultimately makes that decision, and that, moreover, anything to do with war is governed by the rules of engagement, whether it’s a human identifying a target or a machine alerting that human to a potential target. “You don't become lawless just because you have technology,” Gordon says. “We're a nation of laws.”

The risks of AI and its potential for abuse are top of mind for technologists, policymakers, and ethicists. Just this week Microsoft president Brad Smith renewed his call for facial recognition technology to be regulated “before the year 2024 looks like the book 1984.” Tech workers have objected to their companies—including Microsoft—working with the government, saying they don’t want their technology used by the government until and unless there are laws specifically tailored to make sure it is not abused. Case in point: a recent internal meeting at Amazon, at which employees raised fears over the company’s facial recognition being used by Immigration and Customs Enforcement.

Gordon agrees on the risks, but thinks cutting off collaboration is exactly the wrong way to address them. “There are so many bad things that can happen when you rely on algorithms to make decisions for you,” she says, noting that the government is highly incentivized to figure out how to make AI auditable and secure, an issue equally pressing for the private sector. “If we're using AI/ML to go through and look for a lot of images about, say, suspected terrorists, if an adversary were to change that algorithm so that we drew the wrong conclusion, you could see that that would be bad,” she says.

“AI security? That's something that we both need. Advances us both,” Gordon says. “The government has something cool to add, because we have a really particular view of the threats we face. And it will benefit us in terms of national security, but it equally benefits every aspect of American life”—whether that’s self-driving cars or algorithms that help guide medical care. Gordon thinks AI needs to be developed responsibly from the ground up, and argues that doing so requires the private sector and the government to work together in what she calls “shared creation.”

Beyond just public-private cooperation, Gordon envisions a new paradigm for sharing talented workers between the government and the private sector. She disputes the idea that the best engineers don’t want to work for the government, saying that people who want to work on matters they know have purpose are still drawn to federal jobs, as she was. But she thinks tech workers would ideally start their careers in government, where, she says, “we have the hardest problems and we give [people] more responsibility younger”—and then leave. She wants government-trained techies to go into the private sector, bringing with them what they learned and innovating; then, when they are ready to slow down and leave the rat race, as she calls it, they can return to government.

"There are so many bad things that can happen when you rely on algorithms to make decisions for you."

Sue Gordon

Gordon hopes that more of a revolving door would lead to a little less mistrust and misunderstanding. “I think there's a lot of misconception about those of us who work in national security and intelligence,” she says. “We swear to uphold and defend the Constitution of the United States. That means we believe in and swear to uphold privacy and civil liberties.”

Silicon Valley has a long history of working with the government, and of using government-created tech, a tradition that continues today. Collaborations and talent-sharing programs like the Defense Digital Service, which techies can join for tours of service, already allow for some of the cross-pollination Gordon advocates. As AI advances and becomes more important to the military and intelligence community, and as Silicon Valley continues its reckoning with the real-world uses and impacts of its products, it's an open question whether those partnerships can continue to grow.

“One of the key things about Google is I think it's adorable that they have morals now, when they're using technology that the department built for them. That's cute,” she says. “But we've always done this together.”
