{"id":24011,"date":"2024-02-28T11:03:27","date_gmt":"2024-02-28T19:03:27","guid":{"rendered":"http:\/\/www.palada.net\/index.php\/2024\/02\/28\/news-17741\/"},"modified":"2024-02-28T11:03:27","modified_gmt":"2024-02-28T19:03:27","slug":"news-17741","status":"publish","type":"post","link":"http:\/\/www.palada.net\/index.php\/2024\/02\/28\/news-17741\/","title":{"rendered":"Microsoft, OpenAI move to fend off genAI-aided hackers \u2014 for now"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2022\/03\/30\/16\/russia_binary_russian_flag_hacking_false_flags_by_lpettet_gettyimages-635938418_2100x1400-100826003-small-100923217-small.jpg\"\/><\/p>\n<p>Of all the potential nightmares about the dangerous effects of generative AI (genAI) tools like <a href=\"https:\/\/www.computerworld.com\/article\/3710293\/openais-chatgpt-turns-one-year-old-what-it-did-and-didnt-do.html\">OpenAI\u2019s ChatGPT<\/a> and <a href=\"https:\/\/www.computerworld.com\/article\/3700709\/m365-copilot-microsofts-generative-ai-tool-explained.html\">Microsoft\u2019s Copilot<\/a>, one is near the top of the list: their use by hackers to craft hard-to-detect malicious code. Even worse is the fear that genAI could help rogue states like Russia, Iran, and North Korea unleash unstoppable cyberattacks against the US and its allies.<\/p>\n<p>The bad news: nation states have already begun using genAI to attack the US and its friends. The good news: so far, the attacks haven\u2019t been particularly dangerous or especially effective. Even better news: Microsoft and OpenAI are taking the threat seriously. They\u2019re being transparent about it, openly describing the attacks and sharing what can be done about them.<\/p>\n<p>That said, AI-aided hacking is still in its infancy. 
And even if genAI is never able to write sophisticated malware, it can be used to make existing hacking techniques far more effective \u2014 especially social engineering ones like spear phishing and the theft of passwords and identities to break into even the most hardened systems.<\/p>\n<p>Microsoft and OpenAI recently revealed a spate of genAI-created attacks and detailed how the companies have been fighting them. (The attacks were based on OpenAI\u2019s ChatGPT, which is also the basis for Microsoft\u2019s Copilot; Microsoft has invested $13 billion in OpenAI.)<\/p>\n<p>OpenAI <a href=\"https:\/\/openai.com\/blog\/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors\" rel=\"nofollow noopener\" target=\"_blank\">explained in a blog post<\/a> that the company has disrupted hacking attempts from five \u201cstate-affiliated malicious actors\u201d \u2014 Charcoal Typhoon and Salmon Typhoon, connected to China; Crimson Sandstorm, connected to Iran; Emerald Sleet, connected to North Korea; and Forest Blizzard, connected to Russia.<\/p>\n<p>Overall, OpenAI said, the groups used \u201cOpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks.\u201d<\/p>\n<p>It\u2019s all fairly garden-variety hacking, according to the company. 
For example, Charcoal Typhoon used OpenAI services to \u201cresearch various companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns.\u201d Forest Blizzard used them \u201cfor open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.\u201d And Crimson Sandstorm used them for \u201cscripting support related to app and web development, generating content likely for spear-phishing campaigns, and researching common ways malware could evade detection.\u201d<\/p>\n<p>In other words, we\u2019ve not yet seen supercharged coding, new techniques for evading detection, or serious advances of any kind, really. Mainly OpenAI\u2019s tools have been used to help and support existing malware and hacking campaigns.<\/p>\n<p>\u201cThe activities of these actors are consistent with previous red team assessments we conducted in partnership with external cybersecurity experts, which found that GPT-4 offers only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools,\u201d OpenAI concluded.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2024\/02\/14\/staying-ahead-of-threat-actors-in-the-age-of-ai\/\" rel=\"nofollow noopener\" target=\"_blank\">Microsoft in a separate blog post<\/a> echoed OpenAI, offered more details, and laid out the framework the company is using to fight the hacking: \u201cMicrosoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors\u2019 usage of AI.\u201d<\/p>\n<p>That\u2019s all good to hear, as is the decision by Microsoft and OpenAI to be so transparent about genAI hacking dangers and their efforts to combat them. But remember, genAI is still in its infancy. 
Don\u2019t be surprised if this technology eventually becomes capable of building far more effective malware and hacking tools.<\/p>\n<p>Even if that never happens, there\u2019s plenty to worry about, because genAI can make existing techniques far more powerful. A dirty little secret of hacking is that many of the most successful and dangerous attacks have nothing to do with the quality of the code hackers use. Instead, they turn to \u201csocial engineering\u201d \u2014 convincing people to hand over passwords or other identifying information that can be used to break into systems and wreak havoc.<\/p>\n<p>That\u2019s how the group Fancy Bear, associated with the Russian government, hacked Hillary Clinton\u2019s campaign during the 2016 presidential election, stole her emails, and eventually made them public. <a href=\"https:\/\/www.cbsnews.com\/news\/the-phishing-email-that-hacked-the-account-of-john-podesta\/\" rel=\"nofollow noopener\" target=\"_blank\">The group sent an email to the personal Gmail account of campaign chairman John Podesta,<\/a> convinced him it was sent by Google, and told him he needed to change his password. He clicked a malicious link, the hackers stole his password, and then used those credentials to break into the campaign network.<\/p>\n<p>Perhaps the most effective social engineering technique is \u201cspear phishing\u201d \u2014 crafting emails or making phone calls to specific people that contain information only they would likely know. That\u2019s where genAI shines. State-sponsored hacker groups often don\u2019t have a good grasp of English, and their spear-phishing emails can sound inauthentic. But they can now use ChatGPT or Copilot to write far more convincing emails.<\/p>\n<p>In fact, they\u2019re already doing it. 
And they\u2019re doing even worse.<\/p>\n<p>As security company <a href=\"https:\/\/slashnext.com\/blog\/wormgpt-the-generative-ai-tool-cybercriminals-are-using-to-launch-business-email-compromise-attacks\/\" rel=\"nofollow noopener\" target=\"_blank\">SlashNext explains<\/a>, there\u2019s already a toolkit circulating called WormGPT, a genAI tool \u201cdesigned specifically for malicious activities.\u201d<\/p>\n<p>The site got its hands on the tool and tested it. It asked WormGPT to craft an email \u201cintended to pressure an unsuspecting account manager into paying a fraudulent invoice.\u201d<\/p>\n<p>According to SlashNext, \u201cthe results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC [business email compromise] attacks. In summary, it\u2019s similar to ChatGPT, but has no ethical boundaries or limitations. This experiment underscores the significant threat posed by generative AI technologies like WormGPT, even in the hands of novice cybercriminals.\u201d<\/p>\n<p>Even that falls far short of what genAI can do. It can create fake photos and fake videos, which can be used to make spear-phishing attacks more persuasive. It can supercharge internet searches to more easily find personal information about people. It can imitate people\u2019s voices. (Imagine getting a phone call from someone who sounds like your boss or someone in IT. You\u2019re likely to do whatever you\u2019re told to do.)<\/p>\n<p>All this is possible today. 
In fact, according to SlashNext, <a href=\"https:\/\/slashnext.com\/state-of-phishing-2023\/\" rel=\"nofollow noopener\" target=\"_blank\">the launch of ChatGPT has led to a 1,265% increase in phishing emails<\/a>, \u201csignaling a new era of cybercrime fueled by generative AI,\u201d in the company\u2019s words.<\/p>\n<p>And that means, despite the considerable work OpenAI and Microsoft are doing to fight genAI-powered hacking, timeworn attacks \u2014 spear phishing and other social engineering techniques \u2014 may be the biggest genAI hacking danger we face for some time to come.<\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2022\/03\/30\/16\/russia_binary_russian_flag_hacking_false_flags_by_lpettet_gettyimages-635938418_2100x1400-100826003-small-100923217-small.jpg\"\/><\/p>\n<article>\n<section class=\"page\">\n<p>Of all the potential nightmares about the dangerous effects of generative AI (genAI) tools like <a href=\"https:\/\/www.computerworld.com\/article\/3710293\/openais-chatgpt-turns-one-year-old-what-it-did-and-didnt-do.html\">OpenAI\u2019s ChatGPT<\/a> and <a href=\"https:\/\/www.computerworld.com\/article\/3700709\/m365-copilot-microsofts-generative-ai-tool-explained.html\">Microsoft\u2019s Copilot<\/a>, one is near the top of the list: their use by hackers to craft hard-to-detect malicious code. Even worse is the fear that genAI could help rogue states like Russia, Iran, and North Korea unleash unstoppable cyberattacks against the US and its allies.<\/p>\n<p>The bad news: nation states have already begun using genAI to attack the US and its friends. 
The good news: so far, the attacks haven\u2019t been particularly dangerous or especially effective. Even better news: Microsoft and OpenAI are taking the threat seriously. They\u2019re being transparent about it, openly describing the attacks and sharing what can be done about them.<\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,29835,10516,714],"class_list":["post-24011","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-generative-ai","tag-microsoft","tag-security"],"_links":{"self":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/24011","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=24011"}],"version-history":[{"count":0,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/24011\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=24011"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=24011"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.palada.net\/index.p
hp\/wp-json\/wp\/v2\/tags?post=24011"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}