{"id":25337,"date":"2024-10-16T06:10:07","date_gmt":"2024-10-16T14:10:07","guid":{"rendered":"http:\/\/www.palada.net\/index.php\/2024\/10\/16\/news-19067\/"},"modified":"2024-10-16T06:10:07","modified_gmt":"2024-10-16T14:10:07","slug":"news-19067","status":"publish","type":"post","link":"https:\/\/www.palada.net\/index.php\/2024\/10\/16\/news-19067\/","title":{"rendered":"&#8220;Nudify&#8221; deepfake bots remove clothes from victims in minutes, and millions are using them"},"content":{"rendered":"\n<p>Millions of people are turning normal pictures into nude images, and it can be done in minutes.<\/p>\n<p><a href=\"https:\/\/www.wired.com\/story\/ai-deepfake-nudify-bots-telegram\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Journalists at Wired<\/a> found at least 50 &#8220;nudify&#8221; bots on Telegram that claim to create explicit photos or videos of people with only a couple of clicks. Combined, these bots have millions of monthly users. Although there is no reliable way to determine how many of those users are unique, the figure is appalling, and it is highly likely there are many more bots than the ones Wired found.<\/p>\n<p>The history of nonconsensual intimate image (NCII) abuse\u2014as the use of explicit deepfakes without consent is often called\u2014started near the end of 2017. Motherboard (now Vice) <a href=\"https:\/\/www.vice.com\/en\/article\/gal-gadot-fake-ai-porn\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">found<\/a> an online video in which the face of Gal Gadot had been superimposed on an existing pornographic video to make it appear that the actress was engaged in the acts depicted. The username of the person who claimed responsibility for the video gave the phenomenon its name: &#8220;deepfake.&#8221;<\/p>\n<p>Since then, deepfakes have gone through many developments. It all started with face swaps, where users put the face of one person onto the body of another person. 
Now, with the advancement of AI, more sophisticated methods like <a href=\"https:\/\/www.techtarget.com\/searchenterpriseai\/definition\/generative-adversarial-network-GAN\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Generative Adversarial Networks (GANs)<\/a> are available to the public.<\/p>\n<p>However, most of the uncovered bots don&#8217;t use this advanced type of technology. Some of the bots on Telegram are \u201climited\u201d to removing clothes from existing pictures, an extremely disturbing act for the victim.<\/p>\n<p>These bots have become a lucrative source of income. The use of such a Telegram bot usually requires a certain number of \u201ctokens\u201d to create images. Of course, cybercriminals have also spotted opportunities in this emerging market and are operating bots that are non-functional or that render low-quality images.<\/p>\n<p>Besides being disturbing, the use of AI to generate explicit content is costly; there are no guarantees of privacy (as we saw the other day when <a href=\"https:\/\/www.malwarebytes.com\/blog\/news\/2024\/10\/ai-girlfriend-site-breached-user-fantasies-stolen\">AI Girlfriend was breached<\/a>), and you can even end up getting <a href=\"https:\/\/www.404media.co\/a-network-of-ai-nudify-sites-are-a-front-for-notorious-russian-hackers-2\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">infected with malware<\/a>.<\/p>\n<p>The creation and distribution of explicit nonconsensual deepfakes raises serious ethical issues around consent, privacy, and the objectification of women, to say nothing of the creation of child sexual abuse material. 
Italian researchers identified explicit nonconsensual deepfakes as a new form of sexual violence, with <a href=\"https:\/\/arno.uvt.nl\/show.cgi?fid=154764\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">potential long-term psychological and emotional impacts on victims<\/a>.<\/p>\n<p>To combat this type of sexual abuse, there have been several initiatives:<\/p>\n<ul>\n<li>The US has proposed legislation in the form of the <a href=\"https:\/\/www.thomsonreuters.com\/en-us\/posts\/government\/deepfakes-federal-state-regulation\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Deepfake Accountability Act<\/a>. Combined with the recent <a href=\"https:\/\/www.malwarebytes.com\/blog\/news\/2024\/09\/telegram-will-hand-over-user-details-to-law-enforcement\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">policy change by Telegram<\/a> to hand over user details to law enforcement in cases where users are suspected of committing a crime, this could slow down the use of the bots, at least on Telegram.<\/li>\n<li>Some platforms have tightened their policies (e.g. Google banned involuntary synthetic pornographic imagery from search results).<\/li>\n<\/ul>\n<p>However, so far these steps have shown no significant impact on the growth of the market for NCIIs.<\/p>\n<h2 class=\"wp-block-heading\" id=\"h-keep-your-children-safe\">Keep your children safe<\/h2>\n<p>We\u2019re sometimes asked why it\u2019s a problem to post pictures on social media that can be <a href=\"https:\/\/www.malwarebytes.com\/blog\/news\/2024\/09\/facebook-scrapes-photos-of-kids-from-australian-user-profiles-to-train-its-ai\">harvested to train AI<\/a> models.<\/p>\n<p>We have seen many cases where social media and other platforms have used the content of their users to train their AI. 
Some people tend to shrug this off because they don\u2019t see the dangers, so let us explain the possible problems.<\/p>\n<ul>\n<li><a href=\"https:\/\/www.malwarebytes.com\/cybersecurity\/basics\/deepfakes\">Deepfakes<\/a>: AI-generated content, such as deepfakes, can be used to spread misinformation, damage your reputation or privacy, or&nbsp;<a href=\"https:\/\/www.malwarebytes.com\/blog\/news\/2022\/06\/criminals-are-applying-for-remote-work-using-deepfake-and-stolen-identities-says-fbi\">defraud people<\/a>&nbsp;you know.<\/li>\n<li><a href=\"https:\/\/www.malwarebytes.com\/glossary\/metadata\">Metadata<\/a>: Users often forget that the images they upload to social media also contain metadata, such as where the photo was taken. This information could potentially be sold to third parties or used in ways the photographer didn\u2019t intend.<\/li>\n<li>Intellectual property: Never upload anything you didn\u2019t create or own. Artists and photographers may feel their work is being exploited without proper compensation or attribution.<\/li>\n<li>Bias: AI models trained on biased datasets can perpetuate and amplify societal biases.<\/li>\n<li>Facial recognition: Although facial recognition is not the&nbsp;<a href=\"https:\/\/www.malwarebytes.com\/blog\/news\/2019\/08\/facial-recognition-technology-force-for-good-or-privacy-threat\">hot topic<\/a>&nbsp;it once was, it still exists. And actions or statements made with your images (real or not) may be linked to you.<\/li>\n<li>Memory: Once a picture is online, it is almost impossible to get it completely removed. 
It may continue to exist in caches, backups, and snapshots.<\/li>\n<\/ul>\n<p>If you want to continue using social media platforms, that is obviously your choice, but consider the above before uploading pictures of yourself, your loved ones, or even complete strangers.<\/p>\n<hr class=\"wp-block-separator has-text-color has-cyan-bluish-gray-color has-alpha-channel-opacity has-cyan-bluish-gray-background-color has-background is-style-wide\" \/>\n<p><strong>We don\u2019t just report on threats\u2014we remove them<\/strong><\/p>\n<p>Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by&nbsp;<a href=\"https:\/\/www.malwarebytes.com\/for-home\">downloading Malwarebytes today<\/a>.<\/p>\n<p><a href=\"https:\/\/www.malwarebytes.com\/blog\/news\/2024\/10\/nudify-deepfake-bots-remove-clothes-from-victims-in-minutes-and-millions-are-using-them\" target=\"bwo\" >https:\/\/blog.malwarebytes.com\/feed\/<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Millions of people are turning normal pictures into nude images using bots on Telegram, and it can be done in minutes. 
<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[10488,10378],"tags":[32,5897],"class_list":["post-25337","post","type-post","status-publish","format-standard","hentry","category-malwarebytes","category-security","tag-news","tag-privacy"],"_links":{"self":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/25337","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=25337"}],"version-history":[{"count":0,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/25337\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=25337"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=25337"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=25337"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}