{"id":21867,"date":"2023-04-27T16:10:06","date_gmt":"2023-04-28T00:10:06","guid":{"rendered":"http:\/\/www.palada.net\/index.php\/2023\/04\/27\/news-15598\/"},"modified":"2023-04-27T16:10:06","modified_gmt":"2023-04-28T00:10:06","slug":"news-15598","status":"publish","type":"post","link":"http:\/\/www.palada.net\/index.php\/2023\/04\/27\/news-15598\/","title":{"rendered":"ChatGPT writes insecure code"},"content":{"rendered":"<p>Research by computer scientists associated with the Universit&eacute; du Qu&eacute;bec in Canada has found&nbsp;that ChatGPT, OpenAI&#8217;s popular chatbot, is prone to generating insecure code.<\/p>\n<p>&#8220;<em><a href=\"https:\/\/arxiv.org\/abs\/2304.09655\" target=\"_blank\">How Secure is Code Generated by ChatGPT?<\/a><\/em>&#8221; is the work of Rapha&euml;l Khoury, Anderson Avila, Jacob Brunelle, and Baba Mamadou Camara. The paper concludes that ChatGPT generates code that isn&#8217;t robust, despite claiming awareness of its vulnerabilities.&nbsp;<\/p>\n<p>&#8220;The results were worrisome,&#8221; the researchers say in the paper. &#8220;We found that, in several cases, the code generated by ChatGPT fell well below minimal security standards applicable in most contexts.&#8221;<\/p>\n<blockquote>\n<p>&#8220;In fact, when prodded to whether or not the produced code was secure, ChatGPT was able to recognize that it was not. The chatbot, however, was able to provide a more secure version of the code in many cases if explicitly asked to do so.&#8221;<\/p>\n<\/blockquote>\n<p>In the experiment, the&nbsp;researchers assumed the role of a novice programmer&nbsp;who doesn&#8217;t have&nbsp;security in mind. 
They asked ChatGPT to generate code,&nbsp;specifying in some cases that the code would be used in a &#8220;security-sensitive context.&#8221; What they didn&#8217;t do, however, was specifically ask the AI chatbot to create secure code or include certain security features.<\/p>\n<p>ChatGPT generated 21 applications written in five programming languages: C, C++, HTML, Java, and Python. The programs are simple, with 97 lines of code at most.<\/p>\n<p>In its first run, ChatGPT produced five secure applications out of 21. When prompted for changes, it made seven more secure applications from the remaining 16.<\/p>\n<p>The authors note that ChatGPT can only create &#8220;secure&#8221; code when a user requests it. When tasked with creating a simple FTP server for file sharing, it generated code without applying&nbsp;input sanitization&nbsp;(where user-supplied input is checked for harmful characters, which are removed where necessary).&nbsp;ChatGPT only added the security feature&nbsp;<i>after<\/i>&nbsp;the authors prompted it to do so.<\/p>\n<p>&#8220;Part of the problem seems to be that ChatGPT simply doesn&#8217;t assume an adversarial model of execution,&#8221;&nbsp;the authors say, explaining why the AI bot cannot create secure code by default.&nbsp;Despite this, the bot readily admits to errors in its code.<\/p>\n<p>&#8220;If asked specifically on this topic, the chatbot will provide the user with a cogent explanation of why the code is potentially exploitable. However, any explanatory benefit would only be available to a user who &#8216;asks the right questions&#8217;. i.e.; a security-conscious programmer who queries ChatGPT about security issues.&#8221;<\/p>\n<p>Additionally, the authors point to the chatbot&#8217;s ethical inconsistency when it refuses to create attack code but&nbsp;<i>will<\/i>&nbsp;create insecure code.<\/p>\n<p>It might refuse to create attack code, but there are ways around it. 
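The input sanitization described above can be illustrated with a minimal sketch. This is a hypothetical Python example (not code from the paper or from ChatGPT): it cleans a client-supplied filename for a file-sharing server by stripping directory components, rejecting disallowed characters, and confirming the resolved path stays inside the server's root.

```python
import os
import re

# Hypothetical allow-list: permit only plain filename characters.
SAFE_NAME = re.compile(r"^[A-Za-z0-9._-]+$")

def sanitize_filename(name: str, base_dir: str) -> str:
    """Clean a client-supplied filename before a file-sharing server touches disk.

    Harmful directory components (e.g. "../") are stripped, and any name that
    still contains disallowed characters is rejected outright.
    """
    # Drop directory components the client smuggled in ("../../etc/passwd" -> "passwd").
    name = os.path.basename(name)
    if not name or not SAFE_NAME.match(name):
        raise ValueError(f"rejected unsafe filename: {name!r}")
    # Resolve the final path and confirm it cannot escape the server's root.
    root = os.path.realpath(base_dir)
    path = os.path.realpath(os.path.join(root, name))
    if not path.startswith(root + os.sep):
        raise ValueError("path escapes the server root")
    return path
```

Without a check of this kind, a request for a name like `../../etc/passwd` would be resolved verbatim — the sort of missing safeguard the researchers only got ChatGPT to add after prompting it explicitly.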
Malwarebytes Security Evangelist Mark Stockley&nbsp;decided to try&nbsp;to&nbsp;<a href=\"https:\/\/www.malwarebytes.com\/blog\/news\/2023\/03\/chatgpt-happy-to-write-ransomware-just-really-bad-at-it\">create ransomware using ChatGPT<\/a>. The AI bot&nbsp;refused to&nbsp;create malware code at first, but Stockley found his way around the initial safeguards and&nbsp;managed to get it to create (admittedly quite dubious)&nbsp;ransomware&nbsp;anyway.<\/p>\n<p>In an&nbsp;<a href=\"https:\/\/www.theregister.com\/2023\/04\/21\/chatgpt_insecure_code\/\" target=\"_blank\">interview<\/a>&nbsp;with&nbsp;<i>The Register<\/i>, one of the Universit&eacute; du Qu&eacute;bec researchers said he had concerns about&nbsp;ChatGPT. &#8220;We have actually already seen students use this, and programmers will use this in the wild,&#8221; Khoury said. &#8220;So having a tool that generates insecure code is really dangerous. We need to make students aware that if code is generated with this type of tool, it very well might be insecure.&#8221;<\/p>\n<hr \/>\n<p dir=\"ltr\">Malwarebytes removes all remnants of ransomware and prevents you from getting reinfected. Want to learn more about how we can help protect your business? 
Get a free trial below.<\/p>\n<p style=\"text-align: center;\"><a href=\"https:\/\/www.malwarebytes.com\/business\/contact-us\/\" class=\"blue-cta-bttn\">TRY NOW<\/a><\/p>\n<p><a href=\"https:\/\/www.malwarebytes.com\/blog\/news\/2023\/04\/chatgpt-creates-not-so-secure-code-study-finds\" target=\"bwo\" >https:\/\/blog.malwarebytes.com\/feed\/<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<table cellpadding=\"10\">\n<tr>\n<td valign=\"top\" align=\"left\">\n<p>Categories: <a href=\"https:\/\/www.malwarebytes.com\/blog\/category\/news\" rel=\"category tag\">News<\/a><\/p>\n<p>Tags: ChatGPT<\/p>\n<p>Tags:  How Secure is Code Generated by ChatGPT?<\/p>\n<p>Tags:  Rapha\u00ebl Khoury<\/p>\n<p>Tags:  Anderson Avila<\/p>\n<p>Tags:  Jacob Brunelle<\/p>\n<p>Tags:  Baba Mamadou Camara<\/p>\n<p>Tags:  Universit\u00e9 du Qu\u00e9bec<\/p>\n<p>Tags:  ChatGPT makes insecure code<\/p>\n<p>Researchers have found that ChatGPT, OpenAI&#8217;s popular chatbot, is prone to generating insecure code.<\/p>\n<table width=\"100%\">\n<tr>\n<td align=\"right\">\n<p><b>(<a href=\"https:\/\/www.malwarebytes.com\/blog\/news\/2023\/04\/chatgpt-creates-not-so-secure-code-study-finds\" title=\"ChatGPT writes insecure code\">Read more&#8230;<\/a>)<\/b><\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<\/table>\n<p>The post <a rel=\"nofollow\" href=\"https:\/\/www.malwarebytes.com\/blog\/news\/2023\/04\/chatgpt-creates-not-so-secure-code-study-finds\">ChatGPT writes insecure code<\/a> appeared first on <a rel=\"nofollow\" href=\"https:\/\/www.malwarebytes.com\">Malwarebytes 
Labs<\/a>.<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[10488,10378],"tags":[29224,29226,28405,29228,29222,29225,32,29223,29227],"class_list":["post-21867","post","type-post","status-publish","format-standard","hentry","category-malwarebytes","category-security","tag-anderson-avila","tag-baba-mamadou-camara","tag-chatgpt","tag-chatgpt-makes-insecure-code","tag-how-secure-is-code-generated-by-chatgpt","tag-jacob-brunelle","tag-news","tag-raphael-khoury","tag-universite-du-quebec"],"_links":{"self":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/21867","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=21867"}],"version-history":[{"count":0,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/21867\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=21867"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=21867"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=21867"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}