{"id":23941,"date":"2024-02-14T04:30:05","date_gmt":"2024-02-14T12:30:05","guid":{"rendered":"https:\/\/www.palada.net\/index.php\/2024\/02\/14\/news-17671\/"},"modified":"2024-02-14T04:30:05","modified_gmt":"2024-02-14T12:30:05","slug":"news-17671","status":"publish","type":"post","link":"http:\/\/www.palada.net\/index.php\/2024\/02\/14\/news-17671\/","title":{"rendered":"Microsoft and the Taylor Swift genAI deepfake problem"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2019\/10\/cso_a_virtual_face_constructed_of_binary_code_artificial_intelligence_digital_identity_deepfakes_by_thinkstock_2400x1600-100812617-small.jpg\"\/><\/p>\n<p>The last few weeks have been a PR bonanza for Taylor Swift in both good ways and bad. On the good side, her boyfriend Travis Kelce was on the winning team at the Super Bowl, and her reactions during the game got plenty of air time. On the much, much worse side, generative AI-created fake nude images of her have recently flooded the internet.<\/p>\n<p>As you would expect, condemnation of the creation and distribution of those images followed swiftly, including from generative AI (genAI) companies and, notably, Microsoft CEO Satya Nadella. In addition to denouncing what happened, <a href=\"https:\/\/www.theverge.com\/2024\/1\/26\/24052196\/satya-nadella-microsoft-ai-taylor-swift-fakes-response\" rel=\"noopener nofollow\" target=\"_blank\">Nadella shared his thoughts on a solution<\/a>: \u201cI go back to what I think\u2019s our responsibility, which is all of the guardrails that we need to place around the technology so that there\u2019s more safe content that\u2019s being produced.\u201d<\/p>\n<p>Microsoft <a href=\"https:\/\/blogs.microsoft.com\/on-the-issues\/2024\/02\/13\/generative-ai-content-abuse-online-safety\/\" rel=\"noopener nofollow\" target=\"_blank\">weighed in on the issue of deepfakes again yesterday<\/a> (though without mentioning Swift). 
In a blog post, Microsoft Vice Chair and President Brad Smith decried the proliferation of deepfakes and said the company is taking steps to limit their spread.\u00a0<\/p>\n<p>&#8220;Tools unfortunately also become weapons, and this pattern is repeating itself,&#8221; he wrote. &#8220;We\u2019re currently witnessing a rapid expansion in the abuse of these new AI tools by bad actors, including through deepfakes based on AI-generated video, audio, and images. This trend poses new threats for elections, financial fraud, harassment through nonconsensual pornography, and the next generation of cyber bullying.&#8221;<\/p>\n<p>Smith pledged &#8220;a robust and comprehensive approach&#8221; from Microsoft, adding: &#8220;We\u2019re committed to ongoing innovation that will help users quickly determine if an image or video is AI generated or manipulated.&#8221;\u00a0<\/p>\n<p>As far as it goes, the Microsoft view is certainly true, and is the typical all-purpose, knee-jerk response one would expect from the world\u2019s biggest and most influential genAI company. But what Nadella and Smith left out is that there\u2019s evidence the company&#8217;s AI tools created the Swift images; even more damning, a Microsoft AI developer says he warned the company ahead of time that proper guardrails didn\u2019t exist, and Microsoft did nothing about it.<\/p>\n<p>Evidence that Microsoft tools were used to create the <a href=\"https:\/\/www.computerworld.com\/article\/3695508\/ai-deep-fakes-mistakes-and-biases-may-be-unavoidable-but-controllable.html\">deepfakes<\/a> comes from a <a href=\"https:\/\/www.404media.co\/ai-generated-taylor-swift-porn-twitter\/\" rel=\"nofollow noopener\" target=\"_blank\">404 Media article<\/a>, which claims they originated in a Telegram community dedicated to creating \u201cnon-consensual porn,\u201d a community that recommends Microsoft Designer for generating such images. 
The article notes that \u201cDesigner theoretically refuses to produce images of famous people, but AI generators are easy to bamboozle, and 404 found you could break its rules with small tweaks to prompts.\u201d<\/p>\n<p>More damning still, a Microsoft AI engineer allegedly warned Microsoft in December that the safety guardrails of OpenAI\u2019s image generator DALL-E, the brains behind Microsoft Designer, could be bypassed to create explicit and violent images. He claims Microsoft ignored his warnings and tried to get him to not say anything publicly about what he found.<\/p>\n<p>The engineer, Shane Jones, <a href=\"https:\/\/cdn.geekwire.com\/wp-content\/uploads\/2024\/01\/Microsoft-Knowledge-of-DALL-E-3-Risks.pdf\" rel=\"noopener nofollow\" target=\"_blank\">wrote in a letter<\/a> to US Sens. Patty Murray (D-WA) and Maria Cantwell (D-WA); Rep. Adam Smith (D-WA), and Washington state Attorney General Bob Ferguson that he \u201cdiscovered a security vulnerability that allowed me to bypass some of the guardrails that are designed to prevent the [DALL-E] model from creating and distributing harmful images\u2026. I reached the conclusion that DALL\u00b7E 3 posed a public safety risk and should be removed from public use until OpenAI could address the risks associated with this model.<\/p>\n<p>\u201cThe vulnerabilities in DALL\u00b7E 3, and products like Microsoft Designer that use DALL\u00b7E 3, makes it easier for people to abuse AI in generating harmful images. 
Microsoft was aware of these vulnerabilities and the potential for abuse.\u201d<\/p>\n<p>Jones claimed Microsoft refused to act, posted a public letter about the issue on LinkedIn, and then was told by his manager to delete the letter because Microsoft\u2019s legal department demanded it.<\/p>\n<p>In his letter, Jones mentions the explicit images of Swift and says, \u201cThis is an example of the type of abuse I was concerned about and the reason why I urged OpenAI to remove DALL\u00b7E 3 from public use and reported my concerns to Microsoft.\u201d<\/p>\n<p><a href=\"https:\/\/www.geekwire.com\/2024\/microsoft-ai-engineer-says-company-thwarted-attempt-expose-dall-e-3-safety-problem\/\" rel=\"noopener nofollow\" target=\"_blank\">According to GeekWire<\/a>, Microsoft in a statement said the company \u201cinvestigated the employee\u2019s report and confirmed that the techniques he shared did not bypass our safety filters in any of our AI-powered image generation solutions.\u201d<\/p>\n<p>All of this is, to a certain extent, circumstantial evidence. There\u2019s no confirmation the images were created with Microsoft Designer, and we don\u2019t know whether to trust Microsoft or Jones. But we do know that Microsoft has a history of downplaying or ignoring the dangers of genAI.<\/p>\n<p>As <a href=\"https:\/\/www.computerworld.com\/article\/3697014\/ethics-what-ethics-for-microsoft-its-full-speed-ahead-on-ai.html\">I wrote last May<\/a>, Microsoft slashed the staffing of a 30-member team that was responsible for making sure genAI was being developed ethically at the company \u2014 and then eliminated the team entirely. 
The slashing took place several months before the release of Microsoft\u2019s genAI chatbot; the team\u2019s elimination was several months after.<\/p>\n<p>Before the release of the chatbot, John Montgomery, Microsoft corporate vice president of AI, told the team why it was being decimated: \u201cThe pressure from [CTO] Kevin [Scott] and [CEO] Satya [Nadella] is very, very high to take these most recent OpenAI models and the ones that come after them and move them into customers\u2019 hands at a very high speed.\u201d<\/p>\n<p>He added that the ethics team stood in the way of that.<\/p>\n<p>When a team member responded that there are significant dangers in AI that need to be addressed \u2014 and asked him to reconsider\u2014\u00a0Montgomery answered, \u201cCan I reconsider? I don&#8217;t think I will. \u2019Cause unfortunately the pressures remain the same. You don\u2019t have the view that I have, and probably you can be thankful for that. There\u2019s a lot of stuff being ground up into the sausage.\u201d<\/p>\n<p>Once the team was gone, Microsoft was off and running with genAI. And that accomplished exactly what the company wanted. The company\u2019s stock has skyrocketed, and thanks to AI, it\u2019s become the most valuable company in the world \u2014 the second company (behind Apple) to be valued at more than $3 trillion.<\/p>\n<p>That\u2019s three trillion reasons you shouldn\u2019t expect Microsoft to change its tune about the potential dangers of AI, whether or not Microsoft Designer was used to create the Taylor Swift deepfakes. 
And it bodes ill for the year ahead, which is likely to bring a tsunami of deepfakes, especially with a <a href=\"https:\/\/www.computerworld.com\/article\/3712189\/how-openai-plans-to-handle-genai-election-fears.html\">contested presidential election in the US<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2019\/10\/cso_a_virtual_face_constructed_of_binary_code_artificial_intelligence_digital_identity_deepfakes_by_thinkstock_2400x1600-100812617-small.jpg\"\/><\/p>\n<article>\n<section class=\"page\">\n<p>The last few weeks have been a PR bonanza for Taylor Swift in both good ways and bad. On the good side, her boyfriend Travis Kelce was on the winning team at the Super Bowl, and her reactions during the game got plenty of air time. On the much, much worse side, generative AI-created fake nude images of her have recently flooded the internet.<\/p>\n<p>As you would expect, condemnation of the creation and distribution of those images followed swiftly, including from generative AI (genAI) companies and, notably, Microsoft CEO Satya Nadella. 
In addition to denouncing what happened, <a href=\"https:\/\/www.theverge.com\/2024\/1\/26\/24052196\/satya-nadella-microsoft-ai-taylor-swift-fakes-response\" rel=\"noopener nofollow\" target=\"_blank\">Nadella shared his thoughts on a solution<\/a>: \u201cI go back to what I think\u2019s our responsibility, which is all of the guardrails that we need to place around the technology so that there\u2019s more safe content that\u2019s being produced.\u201d<\/p>\n<p class=\"jumpTag\"><a href=\"\/article\/3712694\/microsoft-and-the-taylor-swift-ai-deepfake-problem.html#jump\">To read this article in full, please click here<\/a><\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,29835,10516,5897,714],"class_list":["post-23941","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-generative-ai","tag-microsoft","tag-privacy","tag-security"],"_links":{"self":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/23941","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=23941"}],"version-history":[{"count":0,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/23941\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=23941"}],"wp:term":[{"taxonomy":"category","em
beddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=23941"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=23941"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}