{"id":23662,"date":"2024-01-13T12:30:53","date_gmt":"2024-01-13T20:30:53","guid":{"rendered":"https:\/\/www.palada.net\/index.php\/2024\/01\/13\/news-17392\/"},"modified":"2024-01-13T12:30:53","modified_gmt":"2024-01-13T20:30:53","slug":"news-17392","status":"publish","type":"post","link":"http:\/\/www.palada.net\/index.php\/2024\/01\/13\/news-17392\/","title":{"rendered":"Choosing a genAI partner: Trust, but verify"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2023\/07\/19\/18\/chatgpt-iphone-100943578-small.jpg\"\/><\/p>\n<p><strong>Credit to Author: eschuman@thecontentfirm.com| Date: Tue, 19 Dec 2023 10:03:00 -0800<\/strong><\/p>\n<p>Enterprise executives, still enthralled by the possibilities of generative artificial intelligence (genAI), more often than not are insisting that their IT departments figure out how to make the technology work.\u00a0<\/p>\n<p>Let\u2019s set aside the usual concerns about genAI, such as the <a href=\"https:\/\/www.computerworld.com\/article\/3711467\/questions-raised-as-amazon-q-reportedly-starts-to-hallucinate-and-leak-confidential-data.html\">hallucinations and other errors<\/a> that make it essential to check every single line it generates (and obliterate any hoped-for efficiency boosts). Or that data leakage is inevitable and will be next to impossible to detect until it is too late. 
(<a href=\"https:\/\/owasp.org\" rel=\"noopener nofollow\" target=\"_blank\">OWASP<\/a> has put together an <a href=\"https:\/\/owasp.org\/www-project-top-10-for-large-language-model-applications\/llm-top-10-governance-doc\/LLM_AI_Security_and_Governance_Checklist.pdf\" rel=\"nofollow noopener\" target=\"_blank\">impressive list of the biggest IT threats from genAI<\/a> and <a href=\"https:\/\/www.computerworld.com\/article\/3697649\/what-are-large-language-models-and-how-are-they-used-in-generative-ai.html\">LLMs in general<\/a>.)\u00a0<\/p>\n<p>Logic and common sense have not always been the strengths of senior management when on a mission.\u00a0That means the IT question will rarely be, \u201cShould we do genAI? Does it make sense for us?\u201d It will be: \u201cWe have been ordered to do it. What is the most cost-effective and secure way to proceed?\u201d<\/p>\n<p>With those questions in mind, I was intrigued by an <a href=\"https:\/\/apnews.com\/article\/amazon-aws-generative-ai-anthropic-chatgpt-6676af80819ef81cdc3913fd4541b641\" rel=\"nofollow noopener\" target=\"_blank\">Associated Press interview with AWS CEO Adam Selipsky<\/a> \u2014 specifically this comment: \u201cMost of our enterprise customers are not going to build models. Most of them want to use models that other people have built. The idea that one company is going to be supplying all the models in the world, I think, is just not realistic. We\u2019ve discovered that customers need to experiment and we are providing that service.\u201d<\/p>\n<p>It\u2019s a valid argument and a fair summation of the thinking of many top executives. But should it be? The choice is not merely buy versus build. Should the enterprise create and manage its own model? Rely on a big player (such as AWS, Microsoft, or Google)? 
Or use one of the dozens of smaller specialty players in the genAI arena?<\/p>\n<p>It can be \u2014 and probably should be \u2014 a combination of all three, depending on the enterprise and its particular needs and objectives.<\/p>\n<p>Although there are thousands of logistical details to consider, the fundamental enterprise IT question involving genAI developments and deployments is simple: Trust.<\/p>\n<p>The decision to use genAI has a lot in common with the enterprise cloud decision. In either case, a company is turning over much of its intellectual crown jewels (its most sensitive data) to a third party. And in both instances, the third party is trying to offer as little visibility and control as possible.\u00a0<\/p>\n<p>In the cloud, enterprise tenants are rarely if ever told of configuration or other settings changes that directly affect their data. (Don\u2019t even dream about a cloud vendor <i>asking <\/i>the enterprise tenant for permission to make those changes.)\u00a0<\/p>\n<p>With genAI, the similarities are obvious: How is my data being safeguarded? How are genAI answers safeguarded? Is our data training a model that will be used by our competitors? For that matter, how do I know exactly what the model <i>is <\/i>being trained with?\u00a0<\/p>\n<p>As a practical matter, this will be handled (or avoided) via contracts, which brings us back to the choice of working with a big-name third party or a smaller, lesser-known company. The smaller they are, the more likely they will be open to accepting your contract terms.\u00a0<\/p>\n<p>Remember that dynamic when figuring out your genAI strategy: you&#8217;re going to want a lot of concessions, which are easier to get when you&#8217;re the bigger fish.<\/p>\n<p>It&#8217;s when setting up a contract that trust really comes into play. It will be difficult to write in enough visibility and control to satisfy your general counsel, your CISO, and your compliance chief. 
But of even greater concern is verification: What will a third-party genAI provider allow you to do to audit their operations to ensure they&#8217;re doing what they promised?\u00a0<\/p>\n<p>More frighteningly, even if they agree to everything you ask, how <i>can <\/i>some of these items be verified? If the third party promises you that your data will not be used to train their algorithm, how the heck can you make sure that it won\u2019t?<\/p>\n<p>This is why enterprises should not so quickly dismiss doing a lot of genAI work themselves, possibly by acquiring a smaller player. (Let\u2019s not get into whether you trust your own employees. Let\u2019s pretend that you do.)\u00a0<\/p>\n<p>Steve Winterfeld, the advisory CISO at Akamai, draws a key distinction between generic AI \u2014 including machine learning \u2014 and LLMs and genAI, which are fundamentally different.<\/p>\n<p>\u201cI was never worried about my employees dabbling with (generic) AI, but now we are talking about public AI,\u201d Winterfeld said. \u201cIt can take part of its learning database and can spit it out somewhere else. Can I even audit what is going on? Let\u2019s say someone on a sales team wants to write an email about a new product that is going to be announced soon and asks (genAI) for help. The risk is exposing something we haven\u2019t announced yet. The Google DNA is that the customer is the business model. How can I prevent our information from being shared? Show me.\u201d<\/p>\n<p>Negotiating with smaller genAI companies is fine, Winterfeld said, but he worries about that company\u2019s future, as in going out of business or being acquired by an Akamai rival. 
\u201cAre they even going to be around in two years?\u201d<\/p>\n<p>Another key worry is cybersecurity: How well will the third-party firm protect your data, and if your CISO chooses to use genAI to handle your own security, how well will it work?<\/p>\n<p>\u201cSOCs are going to be completely blindsided by the lack of visibility into adversarial attacks on AI systems,\u201d said Josey George, a general manager for strategy at global consulting firm Wipro. \u201cSOCs today collect data from multiple types of IT infrastructure acting as event\/log sources [such as] firewalls, servers, routers, endpoints, gateways and pour that data into security analytics platforms. Newer applications that will embed classic and genAI within them will not be able to differentiate advanced adversarial attacks on AI systems from regular inputs and thus will generate business-as-usual event logs.<\/p>\n<p>&#8220;That could mean that what gets collected from these systems as event logs will have nothing of value to indicate an imminent or ongoing attack,&#8221;\u00a0George said.<\/p>\n<p>\u201cRight now is a dangerous time to be partnering with AI companies,&#8221; said Michael Krause, co-founder and CTO of AI vendor Ensense and a longtime AI industry veteran. &#8220;A lot of AI companies have been founded while riding this wave and it\u2019s hard to tell fact from fiction.\u00a0<\/p>\n<p>&#8220;This situation will change as the industry matures and smoke-and-mirrors companies are thinned out,\u201d Krause said. \u201cMany companies and products make it virtually impossible to prove compliance.\u201d<\/p>\n<p>Krause offered a few suggestions for enterprise CISOs trying to partner for genAI projects.<\/p>\n<p>\u201cRequire that no internal data be used to train or fine-tune shared models \u2014 and no data [should] be saved or stored. Require a separate environment be deployed for your exclusive use, prohibiting any data sharing, and being access controlled by you. 
Require any and all data and environments be shut down and deleted upon request or conclusion. Agree to a data security audit prior to and following the engagement conclusion.\u201d<\/p>\n<p>Speaking of things to be careful of, OpenAI \u2014 the only company where the CEO can fire the board, albeit with a little help from Microsoft and especially Microsoft\u2019s money \u2014 raised a lot of eyebrows when it updated its terms and conditions on Dec. 13.\u00a0In its new <a href=\"https:\/\/openai.com\/policies\/terms-of-use\" rel=\"nofollow noopener\" target=\"_blank\">terms of use<\/a>, OpenAI said that if someone uses a company email address, that account may be automatically \u201cadded to the organization&#8217;s business account with us.\u201d If that happens, \u201cthe organization\u2019s administrator will be able to control your account, including being able to access content.\u201d\u00a0<\/p>\n<p>You&#8217;ll either need to find a free personal account to use or avoid asking ChatGPT \u201cCan you write a resume for me?\u201d or \u201cHow do I break into my boss\u2019s email account?\u201d<\/p>\n<p>The new version allows people to opt out of OpenAI training its algorithms on their data. But OpenAI doesn\u2019t make it easy, forcing users to jump through a lot of hoops to do so. It starts by telling users to go <a href=\"https:\/\/help.openai.com\/en\/articles\/5722486-how-your-data-is-used-to-improve-model-performance\" rel=\"nofollow noopener\" target=\"_blank\">to this page<\/a>. That page, however, doesn\u2019t allow an opt-out. Instead, that page suggests users go <a href=\"https:\/\/docs.google.com\/forms\/d\/e\/1FAIpQLScrnC-_A7JFs4LbIuzevQ_78hVERlNqqCPCt3d8XqnKOfdRdQ\/closedform\" rel=\"nofollow noopener\" target=\"_blank\">to another page<\/a>. That page doesn\u2019t work either, but it does point to yet another URL \u2014 and it has a button in the right corner to apply. 
Next, it requires you to verify an email address, and <em>then<\/em>\u00a0it says it will consider the request.<\/p>\n<p>You might almost think they want to discourage opt-outs. (Update: Shortly after the update was posted, OpenAI removed one of the bad links.)<\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2023\/07\/19\/18\/chatgpt-iphone-100943578-small.jpg\"\/><\/p>\n<p><strong>Credit to Author: eschuman@thecontentfirm.com | Date: Tue, 19 Dec 2023 10:03:00 -0800<\/strong><\/p>\n<article>\n<section class=\"page\">\n<p>Enterprise executives, still enthralled by the possibilities of generative artificial intelligence (genAI), more often than not are insisting that their IT departments figure out how to make the technology work.\u00a0<\/p>\n<p>Let\u2019s set aside the usual concerns about genAI, such as the <a href=\"https:\/\/www.computerworld.com\/article\/3711467\/questions-raised-as-amazon-q-reportedly-starts-to-hallucinate-and-leak-confidential-data.html\">hallucinations and other errors<\/a> that make it essential to check every single line it generates (and obliterate any hoped-for efficiency boosts). Or that data leakage is inevitable and will be next to impossible to detect until it is too late. 
(<a href=\"https:\/\/owasp.org\" rel=\"noopener nofollow\" target=\"_blank\">OWASP<\/a> has put together an <a href=\"https:\/\/owasp.org\/www-project-top-10-for-large-language-model-applications\/llm-top-10-governance-doc\/LLM_AI_Security_and_Governance_Checklist.pdf\" rel=\"nofollow noopener\" target=\"_blank\">impressive list of the biggest IT threats from genAI<\/a> and <a href=\"https:\/\/www.computerworld.com\/article\/3697649\/what-are-large-language-models-and-how-are-they-used-in-generative-ai.html\">LLMs in general<\/a>.)\u00a0<\/p>\n<p class=\"jumpTag\"><a href=\"\/article\/3711661\/choosing-a-genai-partner-trust-but-verify-ok-maybe-just-verify.html#jump\">To read this article in full, please click here<\/a><\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,29835,714,12747],"class_list":["post-23662","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-generative-ai","tag-security","tag-technology-industry"],"_links":{"self":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/23662","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=23662"}],"version-history":[{"count":0,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/23662\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.palada.net
\/index.php\/wp-json\/wp\/v2\/media?parent=23662"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=23662"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=23662"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}