{"id":22743,"date":"2023-08-21T02:30:06","date_gmt":"2023-08-21T10:30:06","guid":{"rendered":"https:\/\/www.palada.net\/index.php\/2023\/08\/21\/news-16473\/"},"modified":"2023-08-21T02:30:06","modified_gmt":"2023-08-21T10:30:06","slug":"news-16473","status":"publish","type":"post","link":"https:\/\/www.palada.net\/index.php\/2023\/08\/21\/news-16473\/","title":{"rendered":"Why and how to create corporate genAI policies"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2023\/08\/shutterstockmonopoly919-100944841-small.jpg\"\/><\/p>\n<p>As a large number of companies continue to test and deploy generative artificial intelligence (genAI) tools, many are at risk of AI errors, malicious attacks, and running afoul of regulators \u2014 not to mention the potential exposure of sensitive data.<\/p>\n<p>For example, in April, after Samsung\u2019s semiconductor division allowed engineers to use ChatGPT, workers using the platform leaked trade secrets on least three instances, according to\u00a0<a href=\"https:\/\/mashable.com\/article\/samsung-chatgpt-leak-details\" rel=\"nofollow noopener\" target=\"_blank\">published accounts<\/a>. One employee pasted confidential source code into the chat to check for errors, while another worker shared code with ChatGPT and \u201crequested code optimization.\u201d<\/p>\n<p>ChatGPT is hosted by its developer, OpenAI, which <a href=\"https:\/\/help.openai.com\/en\/articles\/6783457-what-is-chatgpt\" rel=\"nofollow noopener\" target=\"_blank\">asks users\u00a0not to share any sensitive information<\/a>\u00a0because it cannot be deleted.<\/p>\n<p>\u201cIt\u2019s almost like using Google at that point,\u201d said Matthew Jackson, global CTO at systems integration provider Insight Enterprises. \u201cYour data is being saved by OpenAI. They\u2019re allowed to use whatever you put into that chat window. 
You can still use ChatGPT to help write generic content, but you don\u2019t want to paste confidential information into that window.\u201d<\/p>\n<p>The bottom line is that <a href=\"https:\/\/www.computerworld.com\/article\/3697649\/what-are-large-language-models-and-how-are-they-used-in-generative-ai.html\">large language models<\/a>\u00a0(LLMs) and other genAI applications \u201care not fully baked,\u201d according to Avivah Litan, a vice president and distinguished Gartner analyst. \u201cThey still have accuracy issues, liability and privacy concerns, security vulnerabilities, and can veer off in unpredictable or undesirable directions,\u201d she said, \u201cbut they are entirely usable and provide an enormous boost to productivity and innovation.\u201d<\/p>\n<p>A recent <a href=\"https:\/\/www.insight.com\/en_US\/content-and-resources\/gated\/beyond-hypotheticals--understanding-the-real-possibilities-of-generative-ai-ac1293.html\" rel=\"noopener nofollow\" target=\"_blank\">Harris Poll<\/a> found that business leaders\u2019 top two reasons for rolling out genAI tools over the next year are to increase revenue and drive innovation. Almost half (49%) said keeping pace with competitors on tech innovation is a top challenge this year. (The Harris Poll surveyed 1,000 respondents employed as directors or higher between April and May 2023.)<\/p>\n<p>Those polled named employee productivity (72%) as the greatest benefit of AI, with customer engagement (via chatbots) and research and development taking second and third, respectively.<\/p>\n<p>Within the next three years, most business leaders expect to adopt genAI to make employees more productive and enhance customer service, according to separate surveys by consultancy <a href=\"https:\/\/www.ey.com\/en_us\/ceo\/ceo-survey-us-report\" rel=\"nofollow noopener\" target=\"_blank\">Ernst &amp; Young<\/a> (EY) and research firm The Harris Poll. 
And a majority of CEOs are integrating AI into products\/services or planning to do so within 12 months.<\/p>\n<p>\u201cNo corporate leader can ignore AI in 2023,\u201d EY said in its survey report. \u201cEighty-two percent of leaders today believe organizations must invest in digital transformation initiatives, like generative AI, or be left behind.\u201d<\/p>\n<p>About half of respondents to The Harris Poll, which was commissioned by systems integration services vendor\u00a0<a href=\"http:\/\/www.insight.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Insight Enterprises<\/a>, indicated they\u2019re embracing AI to ensure product quality and to address safety and security risks.<\/p>\n<p>Forty-two percent of US CEOs surveyed by EY said they have already fully integrated AI-driven product or service changes into their capital allocation processes and are actively investing in AI-driven innovation, while 38% said they plan to make major capital investments in the technology over the next 12 months.<\/p>\n<p>Just over half (53%) of those surveyed expect to use genAI to assist with research and development, and 50% plan to use it for software development\/testing, according to The Harris Poll.<\/p>\n<p>While C-suite leaders recognize the importance of genAI, they also remain wary. 
Sixty-three percent of CEOs in the EY poll said it is a force for good and can drive business efficiency, but 64% believe not enough is being done to manage any unintended consequences of genAI use on business and society.<\/p>\n<p>In light of the \u201cunintended consequences of AI,\u201d eight in 10 organizations have either put in place AI policies and strategies or are considering doing so, according to both polls.<\/p>\n<p>Generative AI was the second most-frequently named risk in <a href=\"https:\/\/www.gartner.com\/en\/newsroom\/press-releases\/2023-08-08-gartner-survey-shows-generative-ai-has-become-an-emerging-risk-for-enterprises\" rel=\"nofollow\">Gartner&#8217;s second quarter survey<\/a>, appearing in the top 10 for the first time, according to Ran Xu, director of research in Gartner&#8217;s Risk &amp; Audit Practice.<\/p>\n<p>\u201cThis reflects both the rapid growth of public awareness and usage of generative AI tools, as well as the breadth of potential use cases, and therefore potential risks, that these tools engender,&#8221; Xu said in a statement.<\/p>\n<p>Hallucinations, in which genAI apps present facts and data that look accurate and factual but are not, are a key risk. AI outputs are known to inadvertently infringe on the intellectual property rights of others. The use of genAI tools can raise privacy issues, as they may share user information with third parties, such as vendors or service providers, without prior notice. Hackers are using a method known as &#8220;<a href=\"https:\/\/research.nccgroup.com\/2022\/12\/05\/exploring-prompt-injection-attacks\/\" rel=\"nofollow noopener\" target=\"_blank\">prompt injection attacks<\/a>&#8221; to manipulate how a large language model responds to queries.<\/p>\n<p>\u201cThat\u2019s one potential risk in that people may ask it a question and assume the data is correct and go off and make some important business decision with inaccurate data,\u201d Jackson said. 
\u201cThat was the number one concern \u2014 using bad data. Number two in our survey was security.\u201d<\/p>\n<p>The problems organizations face when deploying genAI fall into three main categories, Litan explained.<\/p>\n<p>Mitigating those kinds of threats, Litan said, requires a layered security and risk management approach. There are several different ways organizations can reduce the prospect of unwanted or illegitimate inputs or outputs.\u00a0<\/p>\n<p>First, organizations should define policies for acceptable use and establish systems and processes to record requests to use genAI applications, including the intended use and the data being requested. GenAI application use should also require approvals by various overseers.<\/p>\n<p>Organizations can also use input content filters for information submitted to hosted LLM environments. This helps screen inputs against enterprise policies for acceptable use.<\/p>\n<p>Privacy and data protection risks can be mitigated by opting out of prompt data storage by the hosting vendor, and by making sure a vendor doesn\u2019t use corporate data to train its models. Additionally, companies should comb through a hosting vendor\u2019s licensing agreement, which defines the rules and its responsibility for data protection in its LLM environment.<\/p>\n<p>Lastly, organizations need to be aware of prompt injection attacks: malicious inputs designed to trick an LLM into changing its desired behavior. 
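<\/p>
<p>The input content filtering described above can be sketched as a short pre-submission screen. The sketch below is a minimal illustration under assumed policy terms: the deny-list and the <code>screen_prompt<\/code> helper are hypothetical, not part of any vendor product or of Gartner\u2019s guidance.<\/p>

```python
# Minimal sketch of an input content filter for prompts bound for a
# hosted LLM. The deny-list is a hypothetical example policy; a real
# deployment would add DLP classifiers, pattern matching, and logging.
BLOCKED_TERMS = {
    'confidential': 'classification marking',
    'internal only': 'classification marking',
    'api_key': 'possible credential',
    'begin rsa private key': 'private key material',
}

def screen_prompt(prompt: str) -> list[str]:
    '''Return the policy violations found in a prompt; an empty
    list means the prompt may be forwarded to the hosted LLM.'''
    lowered = prompt.lower()
    return sorted({reason for term, reason in BLOCKED_TERMS.items()
                   if term in lowered})
```

<p>A blocked request would then feed the approval workflow described earlier, rather than being dropped silently.<\/p>
<p>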
That can result in stolen data or <a href=\"https:\/\/www.wired.com\/story\/chatgpt-prompt-injection-attack-security\/\" rel=\"nofollow noopener\" target=\"_blank\">customers being scammed<\/a> by generative AI systems.<\/p>\n<p>Organizations need strong security around the local enterprise LLM environment, including access management, data protection, and network and endpoint security, according to Gartner.<\/p>\n<p>Litan recommends that genAI users deploy <a href=\"https:\/\/www.cybersecurity-insiders.com\/sse-decoded-answers-to-your-questions-on-security-service-edge-the-latest-innovation-in-cyberattack-prevention\/\" rel=\"nofollow noopener\" target=\"_blank\">Security Service Edge<\/a> software that combines networking and security into a cloud-native software stack that\u00a0protects\u00a0an organization\u2019s edges, its sites and applications.<\/p>\n<p>Additionally, organizations should hold their LLM or genAI service providers accountable for how they prevent indirect prompt injection attacks on their LLMs, over which a user organization has no control or visibility.<\/p>\n<p>One mistake companies make is to decide that it\u2019s not worth the risk to use AI, so \u201cthe first policy most companies come up with is \u2018don\u2019t use it,\u2019\u201d Insight\u2019s Jackson said.<\/p>\n<p>\u201cThat was our first policy as well,\u201d he said. \u201cBut we very quickly stood up a private tenant using Microsoft\u2019s OpenAI on Azure\u2019s technology. So, we created an environment that was secure, where we were able to connect to some of our private enterprise data. So, that way we could allow people to use it.\u201d<\/p>\n<p>One Insight employee described the generative AI technology as being like Excel. 
\u201cYou don\u2019t ask people how they\u2019re going to use Excel before you give it to them; you just give it to them and they come up with all these creative ways to use it,\u201d Jackson said.<\/p>\n<p>Insight ended up talking to a lot of clients about genAI use cases, drawing on the firm\u2019s own experiences with the technology.<\/p>\n<p>\u201cOne of the things that dawned on us with some of our pilots is AI\u2019s really just a general productivity tool. It can handle so many use cases,&#8221; Jackson said. &#8220;&#8230;What we decided [was] rather than going through a long, drawn-out process to overly customize it, we were just going to give it out to some departments with some general frameworks and boundaries around what they could and couldn\u2019t do \u2014 and then see what they came up with.\u201d<\/p>\n<p>One of the first tasks Insight Enterprises used ChatGPT for was in its distribution center, where clients purchase technology and the company then images those devices and ships them out; the process is filled with mundane tasks, such as updating product statuses and supply systems.<\/p>\n<p>\u201cSo, one of the folks in one of our warehouses realized you can ask generative AI to write a script to automate some of these system updates,\u201d Jackson said. This was a practical use case that emerged from Insight\u2019s crowd-sourcing of its own private, enterprise instance of ChatGPT, called Insight GPT, across the organization.<\/p>\n<p>The generative AI program wrote a short Python script for Insight\u2019s warehouse operation that automated a significant number of tasks and enabled system updates that could run against its SAP inventory system; it essentially automated a task that took people five minutes every time they had to make an update.<\/p>\n<p>\u201cSo, there was a huge productivity improvement within our warehouse. 
When we rolled it out to the rest of the employees in that center, hundreds of hours a week were saved,\u201d Jackson said.<\/p>\n<p>Now, Insight is focusing on prioritizing critical use cases that may require more customization. That could include using prompt engineering to train the LLM differently or tying in more diverse or complicated back-end data sources.<\/p>\n<p>Jackson described LLMs as a pretrained \u201cblack box,\u201d with training data that is typically a couple of years old and excludes corporate data. Users can, however, use APIs to give a model access to corporate data, much like an advanced search engine. \u201cSo, that way you get access to more relevant and current content,\u201d he said.<\/p>\n<p>Insight is currently working with ChatGPT on a project to automate how contracts are written. Using a standard GPT-4 model, the company connected it to its existing library of contracts, of which it has tens of thousands.<\/p>\n<p>Organizations can use LLM extensions such as <a href=\"https:\/\/docs.langchain.com\/docs\/\" rel=\"nofollow noopener\" target=\"_blank\">LangChain<\/a> or Microsoft\u2019s <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/search\/search-what-is-azure-search\" rel=\"nofollow noopener\" target=\"_blank\">Azure Cognitive Search<\/a> to discover corporate data relevant to a task given to the generative AI tool.<\/p>\n<p>In Insight\u2019s case, genAI will be used to discover which contracts the company won, prioritize those, and then cross-reference them against CRM data to automate writing future contracts for clients.<\/p>\n<p>Some data sources, such as standard SQL databases or libraries of files, are easy to connect to; others, such as AWS cloud or custom storage environments, are more difficult to access securely.<\/p>\n<p>\u201cA lot of people think you need to retrain the model to get their own data into it, and that\u2019s absolutely not the case; that can actually be risky, depending on where that model lives and how 
it\u2019s executed,\u201d Jackson said. \u201cYou can easily stand up one of these OpenAI models within Azure and then connect in your data within that private tenant.\u201d<\/p>\n<p>\u201cHistory tells us if you give people the right tools, they become more productive and discover new ways to work to their benefit,\u201d Jackson added. \u201cEmbracing this technology gives employees an unprecedented opportunity to evolve and elevate how they work and, for some, even discover new career paths.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2023\/08\/shutterstockmonopoly919-100944841-small.jpg\"\/><\/p>\n<article>\n<section class=\"page\">\n<p>As a large number of companies continue to test and deploy generative artificial intelligence (genAI) tools, many are at risk of AI errors, malicious attacks, and running afoul of regulators \u2014 not to mention the potential exposure of sensitive data.<\/p>\n<p>For example, in April, after Samsung\u2019s semiconductor division allowed engineers to use ChatGPT, workers using the platform leaked trade secrets in at least three instances, according to\u00a0<a href=\"https:\/\/mashable.com\/article\/samsung-chatgpt-leak-details\" rel=\"nofollow noopener\" target=\"_blank\">published accounts<\/a>. 
One employee pasted confidential source code into the chat to check for errors, while another worker shared code with ChatGPT and \u201crequested code optimization.\u201d<\/p>\n<p class=\"jumpTag\"><a href=\"\/article\/3705028\/why-and-how-to-create-corporate-generative-ai-policies.html#jump\">To read this article in full, please click here<\/a><\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,13431,11063,11070,29835,714],"class_list":["post-22743","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-chatbots","tag-data-privacy","tag-emerging-technology","tag-generative-ai","tag-security"],"_links":{"self":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/22743","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=22743"}],"version-history":[{"count":0,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/22743\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=22743"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=22743"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=22743"}],"curies":[{"name
":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}