{"id":17255,"date":"2019-12-19T10:45:18","date_gmt":"2019-12-19T18:45:18","guid":{"rendered":"http:\/\/www.palada.net\/index.php\/2019\/12\/19\/news-10991\/"},"modified":"2019-12-19T10:45:18","modified_gmt":"2019-12-19T18:45:18","slug":"news-10991","status":"publish","type":"post","link":"https:\/\/www.palada.net\/index.php\/2019\/12\/19\/news-10991\/","title":{"rendered":"The Pentagon&#8217;s AI Chief Prepares for Battle"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/media.wired.com\/photos\/5df994d491848b0008e657c6\/master\/pass\/Security_Shanahan_RTC9J6.jpg\"\/><\/p>\n<p><strong>Credit to Author: Elias Groll | Date: Wed, 18 Dec 2019 20:34:49 +0000<\/strong><\/p>\n<p class=\"content-header__row content-header__dek\">Lt. Gen. Jack Shanahan doesn&#39;t want killer robots&#8212;but he does want artificial intelligence to occupy a central role in warfighting.<\/p>\n<p>Nearly every day, in war zones around the world, American military forces request fire support. By radioing coordinates to a howitzer miles away, infantrymen can deliver the awful ruin of a 155-mm artillery shell on opposing forces. If defense officials in Washington have their way, <a href=\"https:\/\/www.wired.com\/story\/pentagon-doubles-down-ai-wants-help-big-tech\/\">artificial intelligence is about to make<\/a> that process a whole lot faster.<\/p>\n<p>The effort to speed up fire support is one of a handful of initiatives that Lt. Gen. Jack Shanahan describes as the \u201clower consequence missions\u201d that the Pentagon is using to demonstrate how it can integrate artificial intelligence into its weapons systems. 
As the head of the Joint Artificial Intelligence Center, a 140-person clearinghouse within the Department of Defense focused on speeding up AI adoption, Shanahan and his team are building applications in well-established AI domains\u2014tools for predictive maintenance and health record analysis\u2014but also venturing into the more exotic, pursuing AI capabilities that would make the technology a centerpiece of American warfighting.<\/p>\n<p>Shanahan envisions an American military that uses AI to move much faster. Where once human intelligence analysts might have stared at a screen to identify and track a target, a computer would do that task. Today, a human officer might present options for what weapons to employ against an enemy; within 20 years or so, a computer could present \u201crecommendations as fast as possible to a human to make decisions about employing weapons,\u201d Shanahan told <em>WIRED<\/em> in an interview this month. Multiple command and control systems that track battlefield conditions are to be unified into one.<\/p>\n<p>It\u2019s not a vision for killer robots deciding who lives and dies. It\u2019s more like Waze, but for war. Or as Shanahan put it: \u201cAs much machine-to-machine interaction as is possible to allow humans to be presented with various courses of actions for decision.\u201d<\/p>\n<p>The hurdles for implementing that plan are legion. The massive data sets needed to build those computer vision and decisionmaking algorithms are rarely of the necessary quality. And algorithms are only as good as the data sets upon which they are built.<\/p>\n<p>Perhaps more profoundly, the military integration of intelligent computer systems raises questions about whether some realms of human life, such as the violent taking of it, should be computer-enabled. 
\u201cThat loss of human control moves us into questions of authorization and accountability we haven&#x27;t worked out yet,\u201d says Peter Singer, a defense analyst and coauthor of the forthcoming techno-thriller <em><a class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/www.goodreads.com\/book\/show\/47572772-burn-in&quot;}\" href=\"https:\/\/www.goodreads.com\/book\/show\/47572772-burn-in\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">Burn-In<\/a><\/em>.<\/p>\n<p>These ethical questions have exposed a divide within Silicon Valley about working with the Pentagon on artificial intelligence initiatives. Before he headed up the JAIC, Shanahan <a href=\"https:\/\/www.wired.com\/story\/googles-contentious-pentagon-project-is-likely-to-expand\/\">ran Project Maven<\/a>, the computer vision project that aimed to take reams of aerial surveillance footage and automate the detection of enemy forces. Facing an employee uproar, Google pulled out of that project in 2018, but that hasn\u2019t stopped the initiative from moving forward. 
Just last week, Business Insider <a class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/www.businessinsider.com\/palantir-took-over-from-google-on-project-maven-2019-12&quot;}\" href=\"https:\/\/www.businessinsider.com\/palantir-took-over-from-google-on-project-maven-2019-12\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">reported<\/a> that Palantir, Peter Thiel\u2019s data analytics company, has taken over the contract.<\/p>\n<p>The sheer size of Pentagon spending on AI\u2014difficult to determine exactly but <a class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/about.bgov.com\/news\/finding-artificial-intelligence-money-fiscal-2020-budget\/&quot;}\" href=\"https:\/\/about.bgov.com\/news\/finding-artificial-intelligence-money-fiscal-2020-budget\/\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">estimated<\/a> at $4 billion for fiscal year 2020\u2014makes it unlikely any of the tech giants will stay away for long. Despite having pulled out of Maven, Google executives maintain that their company would very much like to work with the Pentagon. \u201cWe are eager to do more,\u201d Google senior vice president Kent Walker <a class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/www.defenseone.com\/technology\/2019\/11\/google-we-want-more-work-defense-department\/161133\/&quot;}\" href=\"https:\/\/www.defenseone.com\/technology\/2019\/11\/google-we-want-more-work-defense-department\/161133\/\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">told<\/a> a National Security Commission on Artificial Intelligence conference last month. Meanwhile, Amazon CEO Jeff Bezos is using the issue to distinguish his company as one that won\u2019t shy from the controversy of taking on military work. 
\u201cIf Big Tech is going to turn their backs on the Department of Defense, this country is in trouble,\u201d he <a class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/www.cnbc.com\/2019\/12\/07\/bezos-says-country-in-trouble-if-big-tech-turns-its-back-on-the-pentagon.html&quot;}\" href=\"https:\/\/www.cnbc.com\/2019\/12\/07\/bezos-says-country-in-trouble-if-big-tech-turns-its-back-on-the-pentagon.html\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">said<\/a> during remarks at the Reagan National Defense Forum earlier this month.<\/p>\n<p>Bezos\u2019s public embrace of the Pentagon comes as Amazon is challenging the <a href=\"https:\/\/www.wired.com\/story\/microsoft-surprise-winner-dollar10b-pentagon-contract\/\">award of a $10 billion cloud computing contract<\/a> called JEDI, or the Joint Enterprise Defense Infrastructure, to Microsoft. That system will be key to Shanahan\u2019s AI ambitions, giving him the computing power and the shared infrastructure to crunch massive data sets and unify disparate systems.<\/p>\n<p>It was the lack of such a cloud system that convinced Shanahan of its importance. When he ran Maven, he couldn\u2019t digitally access the surveillance footage he needed, instead having to dispatch his subordinates to fetch it. \u201cWe had cases where we had trucks going around and picking up tapes of full-motion video,\u201d Shanahan says. \u201cThat would have been a hell of a lot easier had there been an enterprise cloud solution.\u201d<\/p>\n<p>To push updates to the system, Shanahan\u2019s team similarly had to travel to physically install newer versions at military installations. Today, Maven is getting software updates every month or so\u2014fast for government work, but still not fast enough, he adds.<\/p>\n<p>But JEDI isn\u2019t going to solve all of Shanahan\u2019s problems, chief among them the poor quality of data. 
Take just one JAIC project, a predictive maintenance tool for the military\u2019s ubiquitous UH-60 Black Hawk helicopter that tries to figure out when key components are about to break. When they started collecting data from across the various branches, Shanahan\u2019s team discovered that the Army\u2019s Black Hawk was instrumented slightly differently than a version used by Special Operations Command, generating different data for machines that are essentially identical.<\/p>\n<p>\u201cIn every single instance the data is never quite in the quality that you\u2019re looking for,\u201d he says. \u201cIf it exists, I have not seen a pristine set of data yet.\u201d<\/p>\n<p>Data quality is one of the chief pitfalls in applying artificial intelligence to military systems; a computer will never know what it doesn\u2019t know. \u201cThere are risks that algorithms trained on historical data might face battlefield conditions that are different than the one it trained on,\u201d says Michael Horowitz, a professor at the University of Pennsylvania.<\/p>\n<p>Shanahan argues a rigorous testing and evaluation program will mitigate that risk, and it might very well be manageable when trying to predict the moment an engine blade will crack. But it becomes a different question entirely in a shooting war waged at a scale and speed the AI has never seen.<\/p>\n<p>The at times unpredictable nature of computer reasoning presents a thorny problem when paired with the mind of a human being. A computer may reach a baffling conclusion, one that the human who has been teamed with it has to decide whether to trust. When <a href=\"https:\/\/www.wired.com\/2016\/03\/two-moves-alphago-lee-sedol-redefined-future\/\">Google\u2019s AlphaGo defeated Lee Sedol<\/a>, the world\u2019s best Go player, in 2016, there was a moment in the match when Lee simply stood up from his chair and left the room. 
His computer adversary had made such an ingenious and unexpected move (from a human perspective) that Lee was flummoxed. \u201cI\u2019ve never seen a human play this move,\u201d one observer <a class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/www.theatlantic.com\/technology\/archive\/2016\/03\/the-invisible-opponent\/475611\/&quot;}\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2016\/03\/the-invisible-opponent\/475611\/\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">said<\/a> of the move. \u201cSo beautiful.\u201d<\/p>\n<p>Imagine a weapons system giving a human commander a similarly incomprehensible course of action in the heat of a high-stakes conflict. It\u2019s a problem the US military is actively working on, but one for which it doesn\u2019t have a ready solution. The Defense Advanced Research Projects Agency is working on <a class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/www.darpa.mil\/program\/explainable-artificial-intelligence&quot;}\" href=\"https:\/\/www.darpa.mil\/program\/explainable-artificial-intelligence\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">a program<\/a> to come up with \u201cexplainable AI,\u201d which aims to turn the black box of a machine-learning system into one that can provide the reasoning for the decisions it makes.<\/p>\n<p>To build that trust, Shanahan notes commanders need to be educated in the technology early on. Projects using computer vision and satellite imagery to understand flooding and wildfire risks allow his team to learn by doing and build up expertise. \u201cYou have to understand the art of the possible or else it&#x27;s all science fiction,\u201d he says.<\/p>\n<p>But key bureaucratic hurdles also stand in Shanahan\u2019s way. 
A congressionally mandated report on the Pentagon\u2019s AI initiatives released this week finds that the DoD lacks \u201cbaselines and metrics\u201d to assess progress, that the JAIC\u2019s role within the DoD ecosystem remains unclear, and that the JAIC lacks the authority to deliver on its goals. It also offers a dismal assessment of the Pentagon\u2019s testing and verification regime as \u201cnowhere close to ensuring the performance and safety of AI applications, particularly where safety-critical systems are concerned.\u201d<\/p>\n<p>In a statement, the Pentagon welcomed the report, which speaks to the immense challenges facing the US military in embracing a technology that it sees as integral to a possible conflict with Russia or China. \u201cThe speed, the op tempo of that conflict will be so fast,\u201d Shanahan says. \u201cTwenty years from now we&#x27;ll be looking at algorithms versus algorithms.\u201d<\/p>\n<p>The US response to Beijing relies in part on automation. The Army <a class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/breakingdefense.com\/2019\/06\/army-to-test-robotic-gun-bruce-jette\/&quot;}\" href=\"https:\/\/breakingdefense.com\/2019\/06\/army-to-test-robotic-gun-bruce-jette\/\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">is testing<\/a> an automated gun turret. The Air Force <a class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/nationalinterest.org\/blog\/buzz\/air-forces-mysterious-xq-58-valkyrie-drone-almost-ready-93401&quot;}\" href=\"https:\/\/nationalinterest.org\/blog\/buzz\/air-forces-mysterious-xq-58-valkyrie-drone-almost-ready-93401\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">is developing<\/a> a drone wingman. 
The Navy\u2019s \u201cGhost Fleet\u201d <a class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/news.usni.org\/2019\/03\/13\/navy-wants-ten-ship-3b-unmanned-experimental-ghost-fleet&quot;}\" href=\"https:\/\/news.usni.org\/2019\/03\/13\/navy-wants-ten-ship-3b-unmanned-experimental-ghost-fleet\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">concept<\/a> is looking into unmanned surface vessels. To get faster, the Pentagon is once again turning to computers.<\/p>\n<p>\u201cThe ultimate question we have to ask ourselves is what level of accuracy is acceptable for software,\u201d says Martijn Rasser, a former CIA analyst and a fellow at the Center for a New American Security. \u201cLet\u2019s say a human being is correct 99.99 percent of the time. Is it fine for the software to be the same, or does it need to be an order of magnitude better?\u201d<\/p>\n<p>These are questions the Pentagon is exploring. An October report from the Defense Innovation Board <a class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/media.defense.gov\/2019\/Oct\/31\/2002204458\/-1\/-1\/0\/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF&quot;}\" href=\"https:\/\/media.defense.gov\/2019\/Oct\/31\/2002204458\/-1\/-1\/0\/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">laid out<\/a> a <a href=\"https:\/\/www.wired.com\/story\/tech-group-suggests-limits-pentagons-use-ai\/\">series of principles for how the military<\/a> might ethically adopt AI. Shanahan wants to hire an ethicist to join the JAIC, and he is at pains to emphasize that he is tuned into the ethical debates around military AI. 
He says he remains fundamentally opposed to what would be popularly thought of as \u201ckiller robots\u201d and what he calls \u201can unsupervised independent self-targeting system making life-or-death decisions.\u201d<\/p>\n<p>He remains an optimist. \u201cHumans make mistakes in combat every single day. Bad things happen. It&#x27;s chaotic. Emotions run high. Friends are dying. We make mistakes,\u201d Shanahan says. \u201cI am in the camp that says we can do a lot to help reduce the potential for those mistakes with AI-enabled capabilities\u2014never eliminate.\u201d<\/p>\n<p><a href=\"https:\/\/www.wired.com\/story\/pentagon-ai-chief-prepares-for-battle\" target=\"bwo\" >https:\/\/www.wired.com\/category\/security\/feed\/<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/media.wired.com\/photos\/5df994d491848b0008e657c6\/master\/pass\/Security_Shanahan_RTC9J6.jpg\"\/><\/p>\n<p><strong>Credit to Author: Elias Groll | Date: Wed, 18 Dec 2019 20:34:49 +0000<\/strong><\/p>\n<p>Lt. Gen. 
Jack Shanahan doesn&#8217;t want killer robots\u2014but he does want artificial intelligence to occupy a central role in warfighting.<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[10378,10607],"tags":[22740,714,21465],"class_list":["post-17255","post","type-post","status-publish","format-standard","hentry","category-security","category-wired","tag-business-artificial-intelligence","tag-security","tag-security-national-security"],"_links":{"self":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/17255","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=17255"}],"version-history":[{"count":0,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/17255\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=17255"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=17255"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=17255"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}