{"id":14158,"date":"2018-12-21T10:45:12","date_gmt":"2018-12-21T18:45:12","guid":{"rendered":"https:\/\/www.palada.net\/index.php\/2018\/12\/21\/news-7924\/"},"modified":"2018-12-21T10:45:12","modified_gmt":"2018-12-21T18:45:12","slug":"news-7924","status":"publish","type":"post","link":"https:\/\/www.palada.net\/index.php\/2018\/12\/21\/news-7924\/","title":{"rendered":"In Project Maven&#8217;s Wake, the Pentagon Seeks AI Tech Talent"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/media.wired.com\/photos\/5c1c383529ff200b2600066b\/master\/pass\/ai-pentagon-elena-lacey-wired.gif\"\/><\/p>\n<p><strong>Credit to Author: Zachary Fryer-Biggs| Date: Fri, 21 Dec 2018 12:26:14 +0000<\/strong><\/p>\n<p><span class=\"lede\">The American military <\/span>is desperately trying to get a leg up in the field of artificial intelligence, which top officials are convinced will deliver victory in future warfare. But internal Pentagon documents and interviews with senior officials make clear that the Defense Department is reeling from being spurned by a tech giant and struggling to develop a plan that might work in a new sort of battle\u2014for hearts and minds in Silicon Valley.<\/p>\n<p>The battle began with an unexpected loss. In June, Google announced it was <a href=\"https:\/\/www.wired.com\/story\/google-wont-renew-controversial-pentagon-ai-project\/\">pulling out<\/a> of a Pentagon program\u2014the much-discussed <a href=\"https:\/\/www.wired.com\/tag\/project-maven\/\">Project Maven<\/a>\u2014that used the tech giant\u2019s artificial intelligence software. 
Thousands of the company\u2019s employees had signed a <a href=\"https:\/\/www.wired.com\/story\/why-tech-worker-dissent-is-going-viral\/\">petition<\/a> two months earlier calling for an end to its work on the project, an effort to create algorithms that could help intelligence analysts pick out military targets from video footage.<\/p>\n<p>Inside the Pentagon, Google\u2019s withdrawal brought a combination of frustration and distress\u2014even anger\u2014that has percolated ever since, according to five sources familiar with internal discussions on Maven, the military\u2019s first big effort to utilize AI in warfare.<\/p>\n<p name=\"inset-left\" class=\"inset-left-component__el\">This article was produced in partnership with the <a href=\"https:\/\/publicintegrity.org\/\" target=\"_blank\">Center for Public Integrity<\/a>, a nonprofit, nonpartisan news organization.<\/p>\n<p>\u201cWe have stumbled unprepared into a contest over the strategic narrative,\u201d said an internal Pentagon memo circulated to roughly 50 defense officials on June 28. The memo depicted a department caught flat-footed and newly at risk of alienating experts critical to the military\u2019s artificial intelligence development plans.<\/p>\n<p>\u201cWe will not compete effectively against our adversaries if we do not win the \u2018hearts and minds\u2019 of the key supporters,\u201d it warned.<\/p>\n<p>Maven was actually far from complete and cost only about $70 million in 2017, a molecule of water in the Pentagon\u2019s oceanic $600 billion budget that year. But Google\u2019s announcement exemplified a larger public relations and scientific challenge the department is still wrestling with. 
It has responded so far by trying to create a new public image for its AI work and by seeking a review of the department\u2019s AI policy by an advisory board of top executives from tech companies.<\/p>\n<p>The reason for the Pentagon\u2019s anxiety is clear: It wants a smooth path to use artificial intelligence in weaponry of the future, a desire already backed by the promise of <a href=\"https:\/\/publicintegrity.org\/national-security\/the-pentagon-plans-to-spend-2-billion-to-help-inject-more-artificial-intelligence-into-its-weaponry\/\" target=\"_blank\">several billion dollars<\/a> to try to ensure such systems are trusted and accepted by military commanders, plus billions more in expenditures on the technologies themselves.<\/p>\n<p><span class=\"lede\">The exact role <\/span>that AI will wind up playing in warfare remains unclear. Many weapons with AI will not involve decision-making by machine algorithms, but the potential for them to do so will exist. As a Pentagon strategy document said in August: \u201cTechnologies underpinning unmanned systems would make it possible to develop and deploy autonomous systems that could independently select and attack targets with lethal force.\u201d<\/p>\n<p>Developing artificial intelligence, officials say, is unlike creating other military technologies. While the military can easily turn to big defense contractors for cutting-edge work on fighter jets and bombs, the heart of innovation in AI and machine learning resides among the non-defense tech giants of Silicon Valley. 
Without their help, officials worry, they could lose an escalating global arms race in which AI will play an increasingly important role, something top officials say they are unwilling to accept.<\/p>\n<p>\u201cIf you decide not to work on Maven, you\u2019re not actually having a discussion on if artificial intelligence or machine learning are going to be used for military operations,\u201d Chris Lynch, a former tech entrepreneur who now runs the Pentagon\u2019s Defense Digital Service, said in an interview. AI is coming to warfare, he says, so the question is, which American technologists are going to engineer it?<\/p>\n<p>Lynch, who recruits technical experts to spend several years working on Pentagon problems before returning to the private sector, said that AI technology is too important, and that the agency will proceed even if it has to rely on lesser experts. But without the help of the industry\u2019s best minds, Lynch added, \u201cwe\u2019re going to pay somebody who is far less capable to go build a far less capable product that may put young men and women in dangerous positions, and there may be mistakes because of it.\u201d<\/p>\n<p>Google isn\u2019t likely to shift gears soon. 
Less than a week after announcing that the company would not seek to renew the Maven contract in June, Google released a set of <a href=\"https:\/\/www.wired.com\/story\/google-sets-limits-on-its-use-of-ai-but-allows-defense-work\/\">AI principles<\/a>, which specified that the company would not use AI for \u201cweapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.\u201d<\/p>\n<p>Some defense officials have complained since then that Google was being <a href=\"https:\/\/www.politico.com\/story\/2018\/11\/02\/google-washington-head-stepping-aside-911804\" target=\"_blank\">unpatriotic<\/a>, noting that the company was still pursuing work with the Chinese government, the top US competitor in artificial intelligence technology.<\/p>\n<p>\u201cI have a hard time with companies that are working very hard to engage in the market inside of China, and engaging in projects where intellectual property is shared with the Chinese, which is synonymous with sharing it with the Chinese military, and then don&#x27;t want to work for the US military,\u201d General Joe Dunford, chairman of the Joint Chiefs of Staff, commented while speaking at a conference in November.<\/p>\n<p>In December <a href=\"https:\/\/www.wired.com\/story\/congress-sundar-pichai-google-ceo-hearing\/\">testimony<\/a> before Congress, Google CEO Sundar Pichai acknowledged that Google had experimented with a program involving China, <a href=\"https:\/\/www.wired.com\/story\/congress-google-project-dragonfly-questions\/\">Project Dragonfly<\/a>, aimed at developing a model of what government-censored search results would look like in China. 
However, Pichai testified that Google currently \u201chas no plans to launch in China.\u201d<\/p>\n<p>Project Maven\u2019s aim was to simplify work for intelligence analysts by tagging object types in video footage from drones and other platforms, helping analysts gather information and narrow their focus on potential targets, according to sources familiar with the partly classified program. But the algorithms did not select the targets or order strikes, a longtime fear of those worried about the intersection of advanced computing and new forms of lethal violence.<\/p>\n<p>Many at Google nonetheless saw the program in alarming terms.<\/p>\n<p>\u201cThey immediately heard drones and then they thought machine learning and automatic target recognition, and I think it escalated for them pretty quickly about enabling targeted killing, enabling targeted warfare,\u201d said a former Google employee familiar with the internal discussions.<\/p>\n<p>Google is just one of the tech giants that the Pentagon has sought to enlist in its effort to inject AI into modern warfare technology. Among the others: Microsoft and Amazon. After Google\u2019s announcement in June, more than a dozen large defense firms approached defense officials, offering to take over the work, according to current and former Pentagon officials.<\/p>\n<p>But Silicon Valley activists also say the industry cannot easily ignore the ethical qualms of tech workers. 
\u201cThere\u2019s a division between those who answer to shareholders, who want to get access to Defense Department contracts worth multimillions of dollars, and the rank and file who have to build the things and who feel morally complicit for things they don\u2019t agree with,\u201d the former Google employee said.<\/p>\n<p><span class=\"lede\">In an effort <\/span>to bridge this gulf and dampen hard-edged opposition from AI engineers, the Defense Department has so far undertaken two initiatives.<\/p>\n<p>The first, formally begun in late June, was to create a Joint Artificial Intelligence Center meant to oversee and manage all of the military\u2019s AI efforts, with an initial focus on PR-friendly humanitarian missions. It\u2019s set to be run by Lieutenant General Jack Shanahan, whose last major assignment was running Project Maven. In a politically shrewd decision, its first major initiative is to figure out a way to use AI to help organize the military\u2019s search and rescue response to natural disasters.<\/p>\n<p>\u201cOur goal is to save lives,\u201d Brendan McCord, one of the chief architects of the Pentagon\u2019s AI strategy, said while speaking at a technical conference in October. \u201cOur military\u2019s fundamental role, its mission, is to keep the peace. It is to deter war and protect our country. It is to improve global stability, and it\u2019s to ultimately protect the set of values that came out of the Enlightenment.\u201d<\/p>\n<p>The second initiative is to order a new review of AI ethics by an advisory panel of tech experts, the Defense Innovation Board, which includes former Google CEO Eric Schmidt and LinkedIn cofounder Reid Hoffman.<\/p>\n<p>That review, designed to develop principles for the use of AI by the military, is being managed by Joshua Marcuse, a former adviser to the secretary of defense on innovation issues who is now executive director of the board. 
The review is set to take about nine months; the advisory panel will hold public meetings with AI experts while an internal Pentagon group considers the same questions. The panel will then forward recommendations to Secretary of Defense James Mattis about the ways that AI should or should not be injected into weapons programs.<\/p>\n<p>\u201cThis has got to be about actually looking in the mirror and being willing to impose some constraints on what we will do, on what we won\u2019t do, knowing what the boundaries are,\u201d Marcuse said in an interview.<\/p>\n<p>To make sure the debate is robust, Marcuse said that the board is seeking out critics of the military\u2019s role in AI.<\/p>\n<p>\u201cThey have a set of concerns, I think really valid and legitimate concerns, about how the Department of Defense is going to apply these technologies, because we have legal authority to invade people\u2019s privacy in certain circumstances, we have legal authority to commit violence, we have legal authority to wage war,\u201d he said.<\/p>\n<p>Resolving those concerns is critical, officials say, because of the difference in how Washington and Beijing manage AI talent. 
China can conscript experts to work on military problems, whereas the United States has to find a way to interest and attract outside experts.<\/p>\n<p>\u201cThey have to choose to work with us, so we need to offer them a meaningful, verifiable commitment that there are real opportunities to work with us where they can feel confident that they\u2019re the good guys,\u201d Marcuse said.<\/p>\n<p>Despite his willingness to discuss potential future constraints on AI usage, Marcuse said he didn\u2019t think the board would try to change the Pentagon\u2019s existing policy on autonomous weapons that depend on AI, which was put in place by the Obama administration in 2012.<\/p>\n<p>That policy, which underwent a minor technical revision by the Trump administration in May 2017, doesn\u2019t prevent the military from using artificial intelligence in any of its weapons systems. It mandates that commanders have \u201cappropriate levels of human judgment\u201d over any AI-infused weapons systems, although the phrase isn\u2019t further defined and remains a source of confusion within the Pentagon, according to multiple officials there.<\/p>\n<p>It does, however, require that before a computer could be programmed to initiate deadly action, the weapons system that contains it must undergo special review by three senior Pentagon officials\u2014in advance of its purchase. To date that special review hasn\u2019t been undertaken.<\/p>\n<p>In late 2016, during the waning days of the Obama administration, the Pentagon took a new look at the 2012 policy and decided in a classified report that no major change was needed, according to a former defense official familiar with the details. 
\u201cThere was nothing that was held up, there was no one who thought, \u2018Oh we have to update the directives,\u2019\u201d the former official said.<\/p>\n<p>The Trump administration nonetheless has internally discussed making it clearer to weapons engineers within the military\u2014who it fears have been reluctant to inject AI into their designs\u2014that the policy doesn\u2019t ban the use of autonomy in weapons systems. The contretemps in Silicon Valley over Project Maven at least temporarily halted that discussion, prompting the department\u2019s leaders to try first to win the support of the Defense Innovation Board.<\/p>\n<p>But one way or another, the Pentagon intends to integrate more AI into its weaponry. \u201cWe\u2019re not going to sit on the sidelines as a new technology revolutionizes the battlefield,\u201d Marcuse said. \u201cIt\u2019s not fair to the American people, it\u2019s not fair to our service members who we send into harm\u2019s way, and it\u2019s not fair to our allies who depend on us.\u201d<\/p>\n<p><em>The Center for Public Integrity is a nonprofit, nonpartisan, investigative newsroom in Washington, DC. 
More of its national security reporting can be found <a href=\"https:\/\/publicintegrity.org\/topics\/national-security\/\" target=\"_blank\">here<\/a>.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/media.wired.com\/photos\/5c1c383529ff200b2600066b\/master\/pass\/ai-pentagon-elena-lacey-wired.gif\"\/><\/p>\n<p><strong>Credit to Author: Zachary Fryer-Biggs| Date: Fri, 21 Dec 2018 12:26:14 +0000<\/strong><\/p>\n<p>The Defense Department wants to use AI in warfare. In the aftermath of Project Maven, it still needs Big Tech\u2019s help.<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[10378,10607],"tags":[17573,714],"class_list":["post-14158","post","type-post","status-publish","format-standard","hentry","category-security","category-wired","tag-backchannel","tag-security"],"_links":{"self":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/14158","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=14158"}],"version-history":[
{"count":0,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/14158\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=14158"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=14158"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=14158"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}