{"id":21816,"date":"2023-04-24T09:04:16","date_gmt":"2023-04-24T17:04:16","guid":{"rendered":"https:\/\/www.palada.net\/index.php\/2023\/04\/24\/news-15547\/"},"modified":"2023-04-24T09:04:16","modified_gmt":"2023-04-24T17:04:16","slug":"news-15547","status":"publish","type":"post","link":"http:\/\/www.palada.net\/index.php\/2023\/04\/24\/news-15547\/","title":{"rendered":"Do the productivity gains from generative AI outweigh the security risks?"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2023\/03\/23\/08\/macbook-chatgpt-100938841-small.jpg\"\/><\/p>\n<p><strong>Credit to Author: eschuman@thecontentfirm.com| Date: Fri, 21 Apr 2023 08:08:00 -0700<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">There&#8217;s no doubt generative AI models such as ChatGPT, BingChat, or GoogleBard can deliver massive efficiency benefits \u2014 but they bring with them major cybersecurity and privacy concerns along with accuracy worries.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">It&#8217;s already known that these programs \u2014 especially ChatGPT itself \u2014 make up facts and repeatedly lie. Far more troubling, no one seems to understand why and how these lies, coyly dubbed &#8220;hallucinations,&#8221; are happening.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In <\/span><a href=\"https:\/\/www.cbsnews.com\/news\/google-artificial-intelligence-future-60-minutes-transcript-2023-04-16\/\" rel=\"nofollow noopener\" target=\"_blank\"><span style=\"font-weight: 400;\">a recent\u00a0<\/span><i><span style=\"font-weight: 400;\">60 Minutes<\/span><\/i><span style=\"font-weight: 400;\"> interview<\/span><\/a><span style=\"font-weight: 400;\">, Google CEO Sundar Pichai explained:\u00a0<\/span><span style=\"font-weight: 400;\">\u201cThere is an aspect of this which we call \u2014 all of us in the field \u2014 call it as a \u2018black box.&#8217; You don\u2019t fully understand. 
And you can\u2019t quite tell why it said this.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The fact that OpenAI, which created ChatGPT and the foundation for various other generative models, refuses to detail how it trained these models adds to the confusion.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Even so, enterprises are experimenting with these models for almost everything, even though the systems lie repeatedly, no one knows why this happens, and there doesn&#8217;t seem to be a fix anywhere in sight. That&#8217;s an enormous problem.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Consider something as mundane as summarizing lengthy documents. If you can\u2019t trust that the summary is accurate, what\u2019s the point? Where is the value? <\/span><\/p>\n<p><span style=\"font-weight: 400;\">How about when these systems do coding? How comfortable are you riding in an electric vehicle with a brain designed by ChatGPT? What if it hallucinates that the road is clear when it isn\u2019t? What about the guidance system on an airplane, or a smart pacemaker, or the manufacturing procedures for pharmaceuticals or even breakfast cereals?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In a frighteningly on-point pop-culture reference from 1983, <\/span><a href=\"https:\/\/www.youtube.com\/watch?v=tGNBdjVO04Y\" rel=\"nofollow noopener\" target=\"_blank\"><span style=\"font-weight: 400;\">the film <\/span><i><span style=\"font-weight: 400;\">WarGames<\/span><\/i> <\/a><span style=\"font-weight: 400;\">depicted a generative AI system used by the Pentagon to counter-strike more effectively in a nuclear war. It was housed at NORAD. At one point, the system decides to run its own test and fabricates a large number of imminent incoming nuclear missile strikes from Russia.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The developer of the system argues the attacks are fictitious, that the system made them up. 
In an eerily predictive moment, the developer says that the system was \u201challucinating\u201d \u2014 decades before the term was coined in the AI community. (<\/span><a href=\"https:\/\/openreview.net\/forum?id=SkxJ-309FQ\" rel=\"nofollow\"><span style=\"font-weight: 400;\">The first reference to hallucinations appears to be from Google in 2018<\/span><\/a><span style=\"font-weight: 400;\">.)<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In the movie, NORAD officials decide to ride out the &#8220;attack,&#8221; prompting the system to try to take over command so it can retaliate on its own. That was fantasy sci-fi 40 years ago; today, not so much.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In short, using generative AI to code is dangerous, but its efficiencies are so great that it will be extremely tempting for corporate executives to use it anyway. Bratin Saha, vice president for AI and ML Services at AWS<\/span><span style=\"font-weight: 400;\">, argues the decision doesn\u2019t have to be one or the other. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">How so? Saha\u00a0maintains that the efficiency benefits of coding with generative AI are so sky-high that there will be plenty of dollars in the budget for post-development repairs. 
That could mean enough dollars to pay for extensive security and functionality testing in a sandbox \u2014 both with automated software and expensive human talent \u2014 and still deliver a very attractive spreadsheet ROI.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Software development can be executed 57% more efficiently with generative AI \u2014 at least <\/span><a href=\"https:\/\/aws.amazon.com\/blogs\/machine-learning\/announcing-new-tools-for-building-with-generative-ai-on-aws\/\" rel=\"nofollow\"><span style=\"font-weight: 400;\">the AWS flavor<\/span><\/a>\u00a0\u2014\u00a0<span style=\"font-weight: 400;\">but that efficiency gets even better if it replaces less experienced coders, Saha said in a <\/span><i><span style=\"font-weight: 400;\">Computerworld<\/span><\/i><span style=\"font-weight: 400;\"> interview.\u00a0<\/span><span style=\"font-weight: 400;\">\u201cWe have trained it on lots of high-quality code, but the efficiency depends on the task you are doing and the proficiency level,\u201d Saha said, adding that a coder \u201cwho has just started programming won\u2019t know the libraries and the coding.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Another security concern about pouring sensitive data into generative AI is that it can pour out somewhere else. Some enterprises have discovered that data fed into the system for summaries, for example, can be revealed to a different company later in the form of an answer. In essence, the questions and data fed into the system become part of its learning process.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Saha said generative AI systems will get safeguards to minimize data leakage. The AWS version, he said, will allow users to \u201cconstrain the output to what it has been given,\u201d which should minimize hallucinations. \u201cThere are ways of using the model to just generate answers from specific content given it. 
And you can contain where the model gets its information from.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">As for the issue of hallucinations, Saha said his team has come up with ways to minimize that, noting also that t<\/span><span style=\"font-weight: 400;\">he code-generation engine from AWS, called <\/span><a href=\"https:\/\/aws.amazon.com\/codewhisperer\/\" rel=\"nofollow noopener\" target=\"_blank\">CodeWhisperer<\/a><span style=\"font-size: 15px;\">,<\/span><span style=\"font-weight: 400;\">\u00a0uses machine learning to check for security bugs.<\/span><\/p>\n<p><span style=\"font-weight: 400;\"> But Saha\u2019s key argument is that the efficiency is so high that enterprises can pour lots of additional resources into the post-coding analysis and <em>still<\/em> deliver an ROI strong enough to make even a CFO smile.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Is that bargain worth the risk? It reminds me of a classic scene in <em>The Godfather<\/em>. Don Corleone is explaining to the heads of other organized crime families why he opposes selling drugs. Another family head says that he originally thought that way, but he had to bow to the huge profits.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u201cI also don\u2019t believe in drugs. For years, I paid my people extra so they wouldn\u2019t do that kind of business. But somebody comes to them and says \u2018I have powders. You put up $3,000-$4,000 investment, we can make $50,000 distributing.\u2019 So they can\u2019t resist,\u201d <\/span><a href=\"https:\/\/www.youtube.com\/watch?v=6jpwqWPKAUc\" rel=\"nofollow\"><span style=\"font-weight: 400;\">the chief said<\/span><\/a><span style=\"font-weight: 400;\">. \u201cI want to control it as a business to keep it respectable. I don\u2019t want it near schools. 
I don\u2019t want it sold to children.\u201d\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In other words, CISOs and even CIOs might find the security tradeoff dangerous and unacceptable, but line-of-business chiefs will find the savings so powerful they won\u2019t be able to resist. So CISOs might as well at least put safeguards in place.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Dirk Hodgson, the director of cybersecurity for NTT Ltd., said he would urge caution on using generative AI for coding.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u201cThere is a real risk for software development and you are going to have to explain how it generated the wrong answers rather than the right answers,\u201d Hodgson said. Much depends on the nature of the business \u2014 and the nature of the task being coded.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u201cI would argue that if you look at every discipline where AI has been highly successful, in all cases it had a low cost of failure,\u201d Hodgson said, meaning that if something went wrong, the damage would be limited.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">One example of a low-risk effort would be an entertainment company using generative AI to devise ideas for shows or perhaps dialogue. In that scenario, no harm would come from the system making stuff up because that&#8217;s the actual task at hand. Then again, there&#8217;s danger in plagiarizing an idea or dialogue from a copyrighted source.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Another major programming risk includes unintended security holes. Although security lapses can happen within one application, they can also easily happen when two clean apps interact and create a security hole; that&#8217;s a scenario that would never have been tested because no one anticipated the apps interacting. 
Add in some API coding and the potential for problems is orders of magnitude higher.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u201cIt could accidentally introduce new vulnerabilities at the time of coding, such as a new way to exploit some underlying databases. With AI, you don\u2019t know what holes you may be introducing into that code,\u201d Hodgson said. \u201cThat said, AI coding is coming and it does have benefits. We absolutely have to try to take advantage of those benefits. Still, do we really know the liability it will create? I don\u2019t think we know that yet. Our policy at this stage is that we don\u2019t use it.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Hodgson noted Saha&#8217;s comments about AI efficiencies being highest when replacing junior coders. But he resisted the suggestion that he take programming tasks away from junior programmers and give them to AI. \u201cIf I don\u2019t develop those juniors, I won\u2019t ever make them seniors. They have to learn the skills to make them good seniors.\u201d<\/span><\/p>\n<p><a href=\"https:\/\/www.computerworld.com\/article\/3694349\/do-the-productivity-gains-from-generative-ai-outweigh-the-security-risks.html#tk.rss_security\" target=\"bwo\" >http:\/\/www.computerworld.com\/category\/security\/index.rss<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[714,14247],"class_list":["post-21816","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-security","tag-software-development"],"_links":{"self":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/21816","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=21816"}],"version-history":[{"count":0,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/21816\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=21816"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=21816"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=21816"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}