{"id":17449,"date":"2020-01-15T10:10:19","date_gmt":"2020-01-15T18:10:19","guid":{"rendered":"http:\/\/www.palada.net\/index.php\/2020\/01\/15\/news-11185\/"},"modified":"2020-01-15T10:10:19","modified_gmt":"2020-01-15T18:10:19","slug":"news-11185","status":"publish","type":"post","link":"http:\/\/www.palada.net\/index.php\/2020\/01\/15\/news-11185\/","title":{"rendered":"Rules on deepfakes take hold in the US"},"content":{"rendered":"<p><strong>Credit to Author: David Ruiz| Date: Wed, 15 Jan 2020 16:59:33 +0000<\/strong><\/p>\n<p>For years, an annual, must-pass federal spending bill has served as a vehicle for minor or contentious provisions that might otherwise falter in standalone legislation, such as the prohibition of new service member uniforms, or the indefinite detainment of individuals without trial. <\/p>\n<p>In 2019, that federal spending bill, called the National Defense Authorization Act (NDAA), once again included provisions separate from the predictable allocation of Department of Defense funds. This time, the NDAA included language on <a rel=\"noreferrer noopener\" aria-label=\"deepfakes (opens in a new tab)\" href=\"https:\/\/blog.malwarebytes.com\/social-engineering\/2019\/11\/deepfakes-and-linkedin-malign-interference-campaigns\/\" target=\"_blank\">deepfakes<\/a>, the machine-learning technology that, with some human effort, has created fraudulent videos of UK political opponents Boris Johnson and Jeremy Corbyn <a rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\" href=\"https:\/\/www.vice.com\/en_us\/article\/8xwjkp\/deepfake-of-boris-johnson-wants-to-warn-you-about-deepfakes\" target=\"_blank\">endorsing one another for Prime Minister<\/a>. <\/p>\n<p>Matthew F. 
Ferraro, a senior associate at the law firm WilmerHale who advises clients on national security, cyber security, and crisis management, called the deepfakes provisions a \u201cfirst.\u201d<\/p>\n<p>\u201cThis is the first federal legislation on deepfakes in the history of the world,\u201d Ferraro said about the NDAA, which was signed by the President into law on December 20, 2019. <\/p>\n<p>But rather than creating new policies or crimes regarding deepfakes\u2014like making it illegal to develop or distribute them\u2014the NDAA asks for a better understanding of the burgeoning technology. It asks for reports and notifications to Congress. <\/p>\n<p>Per the NDAA\u2019s new rules, the US Director of National Intelligence must, within 180 days, submit a report to Congress that provides information on the potential national security threat that deepfakes pose, along with the capabilities of foreign governments to use deepfakes in US-targeted disinformation campaigns, and what countermeasures the US currently has or plans to develop. <\/p>\n<p>Further, the Director of National Intelligence must notify Congress each time a foreign government either has, is currently, or plans to launch a disinformation campaign using deepfakes of \u201cmachine-generated text,\u201d like that produced by online bots that impersonate humans. <\/p>\n<p>Lee Tien, senior staff attorney for Electronic Frontier Foundation, said that, with any luck, the DNI report could help craft future, informed policy. Whether Congress will actually write any legislation based on the DNI report\u2019s information, however, is a separate matter. <\/p>\n<p>\u201cYou can lead a horse to water,\u201d Tien said, \u201cbut you can\u2019t necessarily make them drink.\u201d <\/p>\n<p>With the NDAA&#8217;s passage, Malwarebytes is starting a two-part blog on deepfake legislation in the United States. Next week we will explore several Congressional and stateside bills in further depth. 
<\/p>\n<h3><strong>The National Defense Authorization Act<\/strong><\/h3>\n<p><a href=\"https:\/\/www.congress.gov\/116\/bills\/s1790\/BILLS-116s1790enr.pdf\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\">The National Defense Authorization Act of 2020<\/a> is a sprawling, 1,000-plus page bill that includes just two sections on deepfakes. The sections set up reports, notifications, and a deepfakes \u201cprize\u201d for research in the field. <\/p>\n<p>According to the first section, the country\u2019s Director of National Intelligence must submit an unclassified report to Congress within 180 days that covers the \u201cpotential national security impacts of machine manipulated media (commonly known as \u201cdeepfakes\u201d); and the actual or potential use of machine-manipulated media by foreign governments to spread disinformation or engage in other malign activities.\u201d <\/p>\n<p>The report must include the following seven items:<\/p>\n<ul>\n<li>An assessment of the technology capabilities of foreign governments concerning deepfakes and machine-generated text<\/li>\n<li>An assessment of how foreign governments could use or are using deepfakes and machine-generated text to \u201charm the national security interests of the United States\u201d<\/li>\n<li>An updated identification of countermeasure technologies that are available, or could be made available, to the US<\/li>\n<li>An updated identification of the offices inside the US government\u2019s intelligence community that have, or should have, responsibility for deepfakes<\/li>\n<li>A description of any research and development efforts carried out by the intelligence community<\/li>\n<li>Recommendations about whether the intelligence community needs tools, including legal authorities and budget, to combat deepfakes and machine-generated text<\/li>\n<li>Any additional info that the DNI finds appropriate<\/li>\n<\/ul>\n<p>The report must be submitted in an unclassified 
format. However, an annex to the report that specifically addresses the technological capabilities of the People\u2019s Republic of China and the Russian Federation may be classified. <\/p>\n<p>The NDAA also requires that the DNI notify the Congressional intelligence committees each time there is \u201ccredible information\u201d that an identifiable, foreign entity has used, will use, or is currently using deepfakes or machine-generated text to influence a US election or domestic political processes. <\/p>\n<p>Finally, the NDAA also requires that the DNI set up what it calls a \u201cdeepfakes prize competition,\u201d in which a program will be established \u201cto award prizes competitively to stimulate the research, development, or commercialization of technologies to automatically detect machine-manipulated media.\u201d The prize amount cannot exceed $5 million per year. <\/p>\n<p>As the first, approved federal language on deepfakes, the NDAA is rather non-controversial, Tien said.<\/p>\n<p>\u201cPolitically, there\u2019s nothing particularly significant about the fact that this is the first thing that we\u2019ve seen the government enact in any sort of way about [deepfakes and machine-generated text],\u201d Tien said, emphasizing that the NDAA has been used as a vehicle for other report-making provisions for years. \u201cIt\u2019s also not surprising that it\u2019s just reports.\u201d<\/p>\n<p>But while the NDAA focuses only on research, other pieces of legislation\u2014including some that have become laws in a couple of states\u2014directly confront the assumed threat of deepfakes to both privacy and trust. <\/p>\n<h3><strong>Pushing back against pornographic and political deception <\/strong><\/h3>\n<p>Though today feared as a democracy destabilizer, deepfakes began not with political subterfuge or international espionage, but with porn. 
<\/p>\n<p>In 2017, a Reddit user named \u201cdeepfakes\u201d began posting short clips of nonconsensual pornography that mapped the digital likenesses of famous actresses and celebrities onto the bodies of pornographic performers. This proved wildly popular. <\/p>\n<p>In little time, a dedicated \u201csubreddit\u201d\u2014a smaller, devoted forum\u2014was created, and increasingly more deepfake pornography was developed and posted online. Two offshoot subreddits were created, too\u2014one for deepfake \u201crequests,\u201d and another for fulfilling those requests. (Ugh.) <\/p>\n<p>While the majority of deepfake videos feature famous actresses and musicians, it is easy to imagine an abusive individual making and sharing a deepfake of an ex-partner to harm and embarrass them. &nbsp;<\/p>\n<p>In 2018, <a rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\" href=\"https:\/\/www.theverge.com\/2018\/2\/7\/16982046\/reddit-deepfakes-ai-celebrity-face-swap-porn-community-ban\" target=\"_blank\">Reddit banned the deepfake subreddits<\/a>, but <a href=\"https:\/\/www.bbc.com\/news\/technology-49961089\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\">the creation of deepfake material surged<\/a>, and in the same year, a new potential threat emerged. <\/p>\n<p>Working with producers at Buzzfeed, comedian and writer Jordan Peele helped showcase the potential danger of deepfake technology when he lent his voice to a <a href=\"https:\/\/www.youtube.com\/watch?v=cQ54GDm1eL0\" data-rel=\"lightbox-video-0\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\">manipulated video of President Barack Obama<\/a>.<\/p>\n<p>\u201cWe&#8217;re entering an era in which our enemies can make anyone say anything at any point in time, even if they would never say those things,\u201d Peele said, posing as President Obama. 
<\/p>\n<p>In 2019, that warning gained some legitimacy, when a video of Speaker of the House of Representatives Nancy Pelosi was slowed down to fool viewers into thinking that the California policymaker was either drunk or impaired. Though the video was not a deepfake because it did not rely on machine-learning technology, its impact was clear: It was viewed by more than 2 million people on Facebook and shared on Twitter by the US President\u2019s personal lawyer, Rudy Giuliani.<\/p>\n<p>These threats spurred lawmakers in several states to introduce legislation to prohibit anyone from developing or sharing deepfakes with the intent to harm or deceive. <\/p>\n<p>On July 1, Virginia passed a law that makes the distribution of nonconsensual pornographic videos a Class 1 misdemeanor. On September 1, Texas passed a law to prohibit the making and sharing of deepfake videos with the intent to harm a political candidate running for office. In October, <a href=\"https:\/\/www.dwt.com\/insights\/2019\/10\/california-deepfakes-law\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\">California Governor Gavin Newsom signed Assembly Bills 602 and 730<\/a>, which, respectively, make it illegal to create and share nonconsensual deepfake pornography and to try to influence a political candidate\u2019s run for office with a deepfake released within 60 days of an election. <\/p>\n<p>Along the way, Congressional lawmakers in Washington, DC, have matched the efforts of their stateside counterparts, with one deepfake bill clearing the House of Representatives and another deepfake bill clearing the Senate. <\/p>\n<p>The newfound interest from lawmakers is a good thing, Ferraro said. 
<\/p>\n<p>\u201cPeople talk a lot about how legislatures are slow, and how Congress is captured by interests, or it\u2019s suffering ossification, but I look at what\u2019s going on with manipulated media, and I\u2019m filled with some sense of hope and satisfaction,\u201d Ferraro said. \u201cBoth houses have reacted quickly, and I think that should be a moment of pride.\u201d &nbsp;<\/p>\n<p>But the new legislative proposals are not universally approved. Upon the initial passage of California\u2019s AB 730, the American Civil Liberties Union urged Gov. Newsom to veto the bill. <\/p>\n<p>\u201cDespite the author\u2019s good intentions, this bill will not solve the problem of deceptive political videos; it will only result in voter confusion, malicious litigation, and repression of free speech,\u201d said Kevin Baker, ACLU legislative director.<\/p>\n<p>Another organization that opposes dramatic, quick regulation on deepfakes is EFF, which wrote earlier in the summer that \u201c<a href=\"https:\/\/www.eff.org\/deeplinks\/2019\/06\/congress-should-not-rush-regulate-deepfakes\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\">Congress should not rush to regulate deepfakes<\/a>.\u201d<\/p>\n<p>Why then, does EFF\u2019s Tien welcome the NDAA? <\/p>\n<p>Because, he said, the NDAA does not introduce substantial policy changes, but rather proposes a first step in creating informed policy in the future. <\/p>\n<p>\u201cFrom an EFF standpoint, we do want to encourage folks to actually synthesize the existing knowledge and to get to some sort of common ground on which people can then make policy choices,\u201d Tien said. 
\u201cWe hope the [DNI report] will be mostly available to the public, because, if the DNI actually does what they say they\u2019re going to do, we will learn more about what folks outside the US are doing [on deepfakes], and both inside the US, like efforts funded by the Department of Defense or by the intelligence community.\u201d <\/p>\n<p>Tien continued: \u201cTo me, that\u2019s all good.\u201d <\/p>\n<h3><strong>Wait and see<\/strong><\/h3>\n<p>The Director of National Intelligence has until June to submit their report on deepfakes and machine-generated text. But until then, more states, such as New York and Massachusetts, may forward deepfake bills that were already introduced last year. <\/p>\n<p>Further, as deepfakes continue to be shared online, more companies may have to grapple with how to treat them. Just last week, <a href=\"https:\/\/www.washingtonpost.com\/technology\/2020\/01\/06\/facebook-ban-deepfakes-sources-say-new-policy-may-not-cover-controversial-pelosi-video\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\">Facebook announced a new political deepfake policy<\/a> that many argue does little to stop the wide array of disinformation posted on the platform. 
<\/p>\n<p>Join us next week, when we take a deeper look at current federal and statewide deepfake legislation and at the tangential problem of fraudulent, low-tech videos now referred to as &#8220;cheapfakes.&#8221; <\/p>\n<p>The post <a rel=\"nofollow\" href=\"https:\/\/blog.malwarebytes.com\/artificial-intelligence\/2020\/01\/deepfake-rules-take-hold-in-the-us\/\">Rules on deepfakes take hold in the US<\/a> appeared first on <a rel=\"nofollow\" href=\"https:\/\/blog.malwarebytes.com\">Malwarebytes Labs<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p><strong>Credit to Author: David Ruiz| Date: Wed, 15 Jan 2020 16:59:33 +0000<\/strong><\/p>\n<table cellpadding='10'>\n<tr>\n<td valign='top' align='center'><a href='https:\/\/blog.malwarebytes.com\/artificial-intelligence\/2020\/01\/deepfake-rules-take-hold-in-the-us\/' title='Rules on deepfakes take hold in the US'><img src='https:\/\/blog.malwarebytes.com\/wp-content\/uploads\/2020\/01\/deepfake.jpg' border='0'  width='300px'  \/><\/a><\/td>\n<\/tr>\n<tr>\n<td valign='top' align='left'>Rather than creating new policies or crimes for deepfakes\u2014like making it illegal to use them to deceive\u2014the NDAA seeks a better understanding of the burgeoning technology. 
<\/p>\n<p>Categories: <\/p>\n<ul class=\"post-categories\">\n<li><a href=\"https:\/\/blog.malwarebytes.com\/category\/artificial-intelligence\/\" rel=\"category tag\">Artificial Intelligence<\/a><\/li>\n<\/ul>\n<p>Tags: <a href=\"https:\/\/blog.malwarebytes.com\/tag\/barack-obama\/\" rel=\"tag\">Barack Obama<\/a><a href=\"https:\/\/blog.malwarebytes.com\/tag\/boris-johnson\/\" rel=\"tag\">Boris Johnson<\/a><a href=\"https:\/\/blog.malwarebytes.com\/tag\/cheapfake\/\" rel=\"tag\">cheapfake<\/a><a href=\"https:\/\/blog.malwarebytes.com\/tag\/deepfake\/\" rel=\"tag\">deepfake<\/a><a href=\"https:\/\/blog.malwarebytes.com\/tag\/deepfakes\/\" rel=\"tag\">deepfakes<\/a><a href=\"https:\/\/blog.malwarebytes.com\/tag\/department-of-defense\/\" rel=\"tag\">Department of Defense<\/a><a href=\"https:\/\/blog.malwarebytes.com\/tag\/director-of-national-intelligence\/\" rel=\"tag\">Director of National Intelligence<\/a><a href=\"https:\/\/blog.malwarebytes.com\/tag\/dni\/\" rel=\"tag\">DNI<\/a><a href=\"https:\/\/blog.malwarebytes.com\/tag\/gavin-newsom\/\" rel=\"tag\">Gavin Newsom<\/a><a href=\"https:\/\/blog.malwarebytes.com\/tag\/governor-gavin-newsom\/\" rel=\"tag\">Governor Gavin Newsom<\/a><a href=\"https:\/\/blog.malwarebytes.com\/tag\/jeremy-corbyn\/\" rel=\"tag\">Jeremy Corbyn<\/a><a href=\"https:\/\/blog.malwarebytes.com\/tag\/nancy-pelosi\/\" rel=\"tag\">Nancy Pelosi<\/a><a href=\"https:\/\/blog.malwarebytes.com\/tag\/national-defense-authorization-act\/\" rel=\"tag\">National Defense Authorization Act<\/a><a href=\"https:\/\/blog.malwarebytes.com\/tag\/ndaa\/\" rel=\"tag\">NDAA<\/a><a href=\"https:\/\/blog.malwarebytes.com\/tag\/peoples-republic-of-china\/\" rel=\"tag\">People&#8217;s Republic of China<\/a><a href=\"https:\/\/blog.malwarebytes.com\/tag\/president-obama\/\" rel=\"tag\">President Obama<\/a><a href=\"https:\/\/blog.malwarebytes.com\/tag\/reddit\/\" rel=\"tag\">reddit<\/a><a href=\"https:\/\/blog.malwarebytes.com\/tag\/russian-federation\/\" 
rel=\"tag\">Russian Federation<\/a><\/p>\n<table width='100%'>\n<tr>\n<td align=right>\n<p><b>(<a href='https:\/\/blog.malwarebytes.com\/artificial-intelligence\/2020\/01\/deepfake-rules-take-hold-in-the-us\/' title='Rules on deepfakes take hold in the US'>Read more&#8230;<\/a>)<\/b><\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<\/table>\n<p>The post <a rel=\"nofollow\" href=\"https:\/\/blog.malwarebytes.com\/artificial-intelligence\/2020\/01\/deepfake-rules-take-hold-in-the-us\/\">Rules on deepfakes take hold in the US<\/a> appeared first on <a rel=\"nofollow\" href=\"https:\/\/blog.malwarebytes.com\">Malwarebytes Labs<\/a>.<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[10488,10378],"tags":[11113,147,7953,23945,17608,17473,7458,6699,10684,23946,23947,23948,23949,23950,23951,23952,151,1571,23953],"class_list":["post-17449","post","type-post","status-publish","format-standard","hentry","category-malwarebytes","category-security","tag-artificial-intelligence","tag-barack-obama","tag-boris-johnson","tag-cheapfake","tag-deepfake","tag-deepfakes","tag-department-of-defense","tag-director-of-national-intelligence","tag-dni","tag-gavin-newsom","tag-governor-gavin-newsom","tag-jeremy-corbyn","tag-nancy-pelosi","tag-national-defense-authorization-act","tag-ndaa","tag-peoples-republic-of-china","tag-president-obama","tag-reddit","tag-russian-federation"],"_links":{"self":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/17449","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v
2\/users\/4"}],"replies":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=17449"}],"version-history":[{"count":0,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/17449\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=17449"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=17449"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=17449"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}