{"id":21885,"date":"2023-05-01T10:30:03","date_gmt":"2023-05-01T18:30:03","guid":{"rendered":"http:\/\/www.palada.net\/index.php\/2023\/05\/01\/news-15616\/"},"modified":"2023-05-01T10:30:03","modified_gmt":"2023-05-01T18:30:03","slug":"news-15616","status":"publish","type":"post","link":"http:\/\/www.palada.net\/index.php\/2023\/05\/01\/news-15616\/","title":{"rendered":"Generative AI is about to destroy your company. Will you stop it?"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2023\/03\/shutterstock_2255630107-100938203-small.jpg\"\/><\/p>\n<p><strong>Credit to Author: eschuman@thecontentfirm.com| Date: Mon, 01 May 2023 10:21:00 -0700<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">As the debate rages about <\/span><a href=\"https:\/\/www.computerworld.com\/article\/3694349\/do-the-productivity-gains-from-generative-ai-outweigh-the-security-risks.html\"><span style=\"font-weight: 400;\">how much IT admins and CISOs should use generative AI<\/span><\/a>\u00a0\u2014\u00a0<span style=\"font-weight: 400;\">especially for coding \u2014 SailPoint CISO\u00a0Rex Booth sees far more danger than benefit, especially given the industry\u2019s less-than-stellar history of making the right security decisions.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Google has <a href=\"https:\/\/blog.google\/technology\/ai\/code-with-bard\/\" rel=\"nofollow noopener\" target=\"_blank\">already decided to publicly leverage generative AI<\/a> in its searches, a move that is freaking out a wide range of AI specialists, including <a href=\"https:\/\/www.nytimes.com\/2023\/05\/01\/technology\/ai-google-chatbot-engineer-quits-hinton.html?smid=nytcore-ios-share&amp;referringSource=articleShare\" rel=\"nofollow noopener\" target=\"_blank\">a senior manager of AI at Google itself<\/a>.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Although some have made the case that the extreme efficiencies generative AI promises 
could fund additional security (and functionality checks on the backend), Booth says industry history suggests otherwise.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u201cTo propose that we can depend on all companies to use the savings to go back and fix the flaws on the back-end is insane,\u201d Booth said in an interview. \u201cThe market hasn\u2019t provided any incentive for that to happen in decades \u2014 why should we think the industry will suddenly start favoring quality over profit? The entire cyber industry exists because we\u2019ve done a really bad job of building in security. We\u2019re finally making traction with the developer community to consider security as a core functional component. We can\u2019t let the allure of efficiency distract us from improving the foundation of the ecosystem.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;Sure, use AI, but don\u2019t abdicate responsibility for the quality of every single line of code you commit,&#8221; he said.\u00a0<\/span><span style=\"font-weight: 400;\">\u201cThe proposition of, \u2018Hey, the output may be flawed, but you\u2019re getting it at a bargain price\u2019 is ludicrous. We don\u2019t need a higher volume of crappy, insecure software. We need higher quality software.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u201cIf the developer community is going to use AI as an efficiency, good for them. 
I sure would have when I was writing code.\u00a0 But it needs to be done smartly.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">One\u00a0<\/span><a href=\"https:\/\/www.computerworld.com\/article\/3694349\/do-the-productivity-gains-from-generative-ai-outweigh-the-security-risks.html\">option that&#8217;s been bandied about<\/a><span style=\"font-weight: 400;\">\u00a0would see junior programmers, who can be more efficiently replaced by AI than experienced coders, retrained as cybersecurity specialists who could not only fix AI-generated coding problems but handle other security tasks. In theory, that might help address the shortage of cybersecurity talent.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">But Booth sees generative AI having the opposite impact. He worries that \u201cAI could actually lead to a boom in security hiring to clean up the backend, further exacerbating the labor shortages we already have.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Oh, generative AI, whether your name is ChatGPT, Bing Chat, Google Bard or something else, is there no end to the ways your use can make IT nightmares worse?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Booth&#8217;s argument about the cybersecurity talent shortage makes sense. There is, more or less, a finite number of trained cybersecurity people available for hire. If enterprises try to combat that shortage by paying them more money \u2014 an unlikely but possible scenario \u2014 it will improve the security situation at one company at the expense of another. \u201cWe are constantly just trading people back and forth,\u201d Booth said.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The most likely short-term result from the growing use of large language models is that it will impact coders a lot more than security people. \u201cI am sure that ChatGPT will lead to a sharp decrease in the number of entry-level developer positions,\u201d Booth said. 
\u201dIt will instead enable a broader spectrum of people to get into the development process.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This is a reference to the potential for line of business (LOB) executives and managers to use generative AI to directly code, eliminating the need for a coder to act as an intermediary. The key question: Is that a good thing or bad?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The \u201cgood thing\u201d argument is that it will save companies money and allow LOBs to get apps coded more quickly. That&#8217;s certainly true. The \u201cbad thing\u201d argument is that not only do LOB people know less about security than even the most junior programmer, but their main concern is speed. Will those LOB people even bother to do security checks and repairs? (We all know the answer to that question, but I\u2019m obligated to ask.)\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Booth&#8217;s view: if C-suite execs permit development via generative AI without limitations, problems will boil over that go well beyond cybersecurity.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">LOBs will \u201cfind themselves empowered through the wonders of AI to completely circumvent the normal development process,&#8221; he said. &#8220;Corporate policy should not permit that. Developers are trained in the domain. They know the right way to do things in the development process. They know proper deployment including integration with the rest of the enterprise. 
This goes <\/span><i><span style=\"font-weight: 400;\">way <\/span><\/i><span style=\"font-weight: 400;\">beyond, \u2018Hey, I can slap some code together.\u2019 Just because we can do it faster, that doesn&#8217;t mean that all bets are off and it\u2019s suddenly the wild west.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Actually, for many enterprise CISOs and business managers, that is exactly what it means.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This forces us back to the sensitive issue of generative AI going out of its way to lie, which is the worst realization of AI hallucinations. Some have said this is nothing new and that human coders have been making mistakes like this for generations. I strongly disagree.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We&#8217;re not talking about mistakes here and there or the AI system not knowing a fact. Consider what coders do. Yes, even the best coders make mistakes from time to time and others are sloppy and make a lot more errors. But what&#8217;s typical for a human coder is that they will enter 10,000 when the number was supposed to be 100,000. Or they won\u2019t close an instruction. These are bad things, but there&#8217;s no evil intent. It&#8217;s just a mistake.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">To make those mishaps equivalent to what generative AI is doing today, a coder would have to completely invent new instructions and change existing instructions to something ridiculous. That\u2019s not an error or carelessness, that&#8217;s intentional lying. Even worse, it\u2019s for no discernible reason other than to lie. That would absolutely be a firing offense unless the coder has an amazingly good explanation.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">What if the coder&#8217;s boss acknowledged this lying and said, \u201cYep, the coder clearly lied. 
I have no idea why they did it and they admit their error, but they won&#8217;t say that they won\u2019t do it again. Indeed, my assessment is that they will absolutely do it repeatedly. And until we can figure out why they are doing it, we can\u2019t stop them. And, again, we have no clue why they are doing it and no reason to believe we\u2019ll figure it out anytime soon.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Is there any doubt you would fire that coder (and maybe the manager, too)? And yet, that is precisely what generative AI is doing. Stunningly, top enterprise executives seem to be okay with that, as long as AI tools continue to code quickly and efficiently.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">It is not simply a matter of trusting your code, but trusting your coder. What if I were to tell you that one of the quotes in this column is something I completely made up? (None were, but follow along with me.) Could you tell which quote isn&#8217;t real? Spot-checking wouldn&#8217;t help; the first 10 quotes might be perfect, but the next one might not be. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Think about that for a moment, then tell me how much you can really trust code generated by ChatGPT.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The only way to know that the quotes in this post are legitimate is to trust the quoter, the columnist \u2014 me. If you can\u2019t, how can you trust the words? Generative AI has repeatedly shown that it will fabricate things for no reason. 
Consider that when you are making your strategic decisions.<\/span><\/p>\n<p><a href=\"https:\/\/www.computerworld.com\/article\/3694854\/generative-ai-is-about-to-destroy-your-company-will-you-stop-it.html#tk.rss_security\" target=\"bwo\" >http:\/\/www.computerworld.com\/category\/security\/index.rss<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2023\/03\/shutterstock_2255630107-100938203-small.jpg\"\/><\/p>\n<p><strong>Credit to Author: eschuman@thecontentfirm.com| Date: Mon, 01 May 2023 10:21:00 -0700<\/strong><\/p>\n<article>\n<section class=\"page\">\n<p><span style=\"font-weight: 400;\">As the debate rages about <\/span><a href=\"https:\/\/www.computerworld.com\/article\/3694349\/do-the-productivity-gains-from-generative-ai-outweigh-the-security-risks.html\"><span style=\"font-weight: 400;\">how much IT admins and CISOs should use generative AI<\/span><\/a>\u00a0\u2014\u00a0<span style=\"font-weight: 400;\">especially for coding \u2014 SailPoint CISO\u00a0Rex Booth sees far more danger than benefit, especially given the industry\u2019s less-than-stellar history of making the right security decisions.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Google has <a href=\"https:\/\/blog.google\/technology\/ai\/code-with-bard\/\" rel=\"nofollow noopener\" target=\"_blank\">already decided to publicly leverage generative AI<\/a> in its searches, a move that is freaking out a wide range of AI specialists, including <a href=\"https:\/\/www.nytimes.com\/2023\/05\/01\/technology\/ai-google-chatbot-engineer-quits-hinton.html?smid=nytcore-ios-share&amp;referringSource=articleShare\" rel=\"nofollow noopener\" target=\"_blank\">a senior manager of AI at Google itself<\/a>.\u00a0<\/span><\/p>\n<p class=\"jumpTag\"><a href=\"\/article\/3694854\/generative-ai-is-about-to-destroy-your-company-will-you-stop-it.html#jump\">To read this article in full, please click 
here<\/a><\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,714,14247],"class_list":["post-21885","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-security","tag-software-development"],"_links":{"self":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/21885","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=21885"}],"version-history":[{"count":0,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/21885\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=21885"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=21885"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=21885"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}