{"id":22842,"date":"2023-09-05T02:30:04","date_gmt":"2023-09-05T10:30:04","guid":{"rendered":"http:\/\/www.palada.net\/index.php\/2023\/09\/05\/news-16572\/"},"modified":"2023-09-05T02:30:04","modified_gmt":"2023-09-05T10:30:04","slug":"news-16572","status":"publish","type":"post","link":"http:\/\/www.palada.net\/index.php\/2023\/09\/05\/news-16572\/","title":{"rendered":"GenAI in productivity apps: What could possibly go wrong?"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2023\/08\/person-at-laptop-using-generative-ai-interface-by-amperespy44-via-shutterstock-100945121-small.jpg\"\/><\/p>\n<p>We\u2019re in the \u201ciPhone moment\u201d for <a href=\"https:\/\/www.infoworld.com\/article\/3689973\/what-is-generative-ai-artificial-intelligence-that-creates.html\" rel=\"noopener\" target=\"_blank\">generative AI<\/a>, with every company rushing to figure out its strategy for dealing with this disruptive technology.<\/p>\n<p>According to a <a href=\"https:\/\/info.kpmg.us\/news-perspectives\/technology-innovation\/kpmg-usexecutives-genai-2023.html\" rel=\"noopener nofollow\" target=\"_blank\">KPMG survey<\/a> conducted this June, 97% of US executives at large companies expect their organizations to be impacted highly by generative AI in the next 12 to 18 months, and 93% believe it will provide value to their business. Some 35% of companies have already started to deploy AI tools and solutions, while 83% say that they will increase their generative AI investments by at least 50% in the next six to twelve months.<\/p>\n<p>Companies have been using machine learning and AI for years now, said Kalyan Veeramachaneni, principal research scientist at MIT\u2019s Laboratory for Information and Decision Systems, which is working on developing custom generative models to use for tabular data. 
What\u2019s different now, he said, is that generative AI tools are accessible to people who are not data scientists.<\/p>\n<p>\u201cIt opens new doors,\u201d he said. \u201cThis will enhance the productivity of a lot of people.\u201d<\/p>\n<p>According to a recent <a href=\"https:\/\/valoir.com\/blog-1\/assessing-the-value-of-ai-and-automation\" rel=\"noopener nofollow\" target=\"_blank\">study by analyst firm Valoir<\/a>, 40% of the average workday can be automated with AI, with the highest potential for automation in IT, followed by finance, operations, customer service, and sales.<\/p>\n<p>It can take years for enterprises to build their own generative AI models and integrate them into their workflows, but one area where generative AI can make an immediate and dramatic business impact is when it\u2019s embedded into popular productivity apps. According to David McCurdy, chief enterprise architect and CTO at Insight, a Tempe-based solutions integrator, 99% of companies that adopt generative AI will start by using genAI tools embedded into core business apps built by someone else.<\/p>\n<p><a href=\"https:\/\/www.computerworld.com\/article\/3700709\/m365-copilot-microsofts-generative-ai-tool-explained.html\">Microsoft 365<\/a>, <a href=\"https:\/\/www.computerworld.com\/article\/3705372\/googles-duet-ai-now-available-for-workspace-enterprise-customers.html\">Google Workspace<\/a>, <a href=\"https:\/\/www.computerworld.com\/article\/3696976\/adobe-brings-generative-ai-to-photoshop.html\">Adobe Photoshop<\/a>, <a href=\"https:\/\/www.computerworld.com\/article\/3695733\/slack-gpt-brings-native-generative-ai-to-chat-app.html\">Slack<\/a>, and <a href=\"https:\/\/www.computerworld.com\/article\/3690270\/grammarlygo-and-the-coming-wave-of-generative-ai-productivity.html\">Grammarly<\/a> are among the many popular productivity software tools that now offer a generative AI component. (Some are still in private beta testing.) 
Employees already know and use these tools every day, so when the vendors add generative AI features, it immediately makes the new technology widely accessible.<\/p>\n<p>In fact, according to a recent <a href=\"https:\/\/go.grammarly.com\/forrester-report-23\" rel=\"nofollow noopener\" target=\"_blank\">study conducted by Forrester on behalf of Grammarly<\/a>, 70% of employees are already using generative AI for some or all of their writing \u2014 but 80% of them are doing this at companies that haven\u2019t officially implemented it yet.<\/p>\n<p>Embedding AIs like OpenAI\u2019s ChatGPT into productivity apps is one quick way for vendors to add generative AIs to their platforms. Grammarly, for instance, <a href=\"https:\/\/www.grammarly.com\/business\/learn\/enterprise-grade-generative-ai\/\" rel=\"nofollow noopener\" target=\"_blank\">added genAI capabilities to its writing assistance platform<\/a> in March, using ChatGPT in a private Azure cloud environment. But soon vendors will be able to build their own custom models as well.<\/p>\n<p>It doesn\u2019t take millions of dollars and billions of training data records to train a <a href=\"https:\/\/www.computerworld.com\/article\/3697649\/what-are-large-language-models-and-how-are-they-used-in-generative-ai.html\">large language model<\/a> (LLM), the foundation for a genAI chatbot, if a company starts with a pre-trained foundational model and then fine-tunes it, said Omdia analyst Bradley Shimmin. \u201cThe amount of data required for that type of training is dramatically smaller.\u201d<\/p>\n<p>Commercially licensed LLMs are already available, the biggest recent release being Meta\u2019s <a href=\"https:\/\/www.infoworld.com\/article\/3702732\/meta-lets-loose-second-generation-of-llama-ai-models.html\" rel=\"noopener\" target=\"_blank\">Llama 2<\/a>. This means that the amount of AI built into popular productivity tools is about to explode. 
\u201cThe genie is out of the bottle,\u201d said Juan Orlandini, Insight&#8217;s CTO for North America.<\/p>\n<p>Generative AI can also be useful for vendors whose products aren\u2019t focused on creating new text or images. For instance, it can be used as a natural-language interface to complex back-end systems. According to Doug Ross, VP and head of insights and data at Sogeti, part of Capgemini, there are already hundreds \u2014 if not thousands \u2014 of companies adding conversational interfaces to their products.<\/p>\n<p>\u201cThat would indicate that there\u2019s value there,\u201d he said. \u201cIt\u2019s a different way of interacting with various databases or back ends that can help you explore data in ways that were more difficult before.\u201d<\/p>\n<p>While generative AI may be a groundbreaking technology that brings a new set of risks, the traditional SaaS playbook can work when it comes to getting it under control: educating employees on the risks and benefits, setting up security guardrails to prevent employees from accessing malicious apps or sites or accidentally sharing sensitive data, and offering corporate-approved technologies that follow security best practices.<\/p>\n<p>But first, let\u2019s talk about what can go wrong.<\/p>\n<p>ChatGPT, Bard, Claude, and other genAI tools \u2014 as well as every productivity app that\u2019s now adding generative AI as a feature \u2014 all share a few problems that could pose risks to companies.<\/p>\n<p>The first and most obvious risk is the accuracy issue. Generative AI is designed to generate content \u2014 text, images, video, audio, computer code, and so on \u2014 based on patterns in the data it\u2019s been trained on. Its ability to provide answers to legal, medical, and technical questions is a bonus.<\/p>\n<p>And in fact, the AIs are often accurate. The latest releases of some popular genAI chatbots have passed bar exams and medical licensing tests. 
But this can give some users a false sense of security, as when <a href=\"https:\/\/www.forbes.com\/sites\/lanceeliot\/2023\/05\/29\/lawyers-getting-tripped-up-by-generative-ai-such-as-chatgpt-but-who-really-is-to-blame-asks-ai-ethics-and-ai-law\/?sh=68adf5373212\" rel=\"nofollow noopener\" target=\"_blank\">a couple of lawyers got in trouble<\/a> by relying on ChatGPT to find relevant case law \u2014 only to discover that it had invented the cases it cited.<\/p>\n<p>That\u2019s because generative AIs are not search engines, nor are they calculators. They don\u2019t always give the right answer, and they don\u2019t give the same answer every time.<\/p>\n<p>For generating code, for example, large language models can have extremely high error rates, said Andy Thurai, an analyst at Constellation Research. \u201cLLMs can have rates as high as 50% of code that is useless, wrong, vulnerable, insecure, and can be exploited by hackers,\u201d he said. \u201cAfter all, these models are trained based on the GitHub repository, which is notoriously error-prone.\u201d<\/p>\n<p>As a result, while coding assistants can improve productivity, they can also sometimes create even more work, as someone has to check that all the code passes corporate standards.<\/p>\n<p>The picture gets even more complicated when you move beyond the big generative AI tools like ChatGPT to vendors adding proprietary AI models into their productivity tools.<\/p>\n<p>\u201cIf you put bad data into the models, you\u2019re not going to have very happy customers,\u201d said Vrinda Khurjeka, senior director of Americas business at technology consulting firm Searce. \u201cIf you\u2019re really just going to use it for the sake of having a feature, and not think about whether it will really help your customers, you will be in a lose-lose situation.\u201d<\/p>\n<p>Then there\u2019s the risk of bias, she said. 
\u201cYou are only going to get the outputs based on what your input data is.\u201d For example, if a tool that helps you generate customer emails is trained on your internal communications, and company culture includes a lot of swearing, then outward-bound emails created by the tool can have the same language, she said.<\/p>\n<p>This kind of bias can have more significant implications, as well, if it results in employment discrimination or biased lending practices. \u201cIt\u2019s a very real problem,\u201d she said. \u201cWhat we are recommending to all of our customers is that it\u2019s not just about implementing the model once and being done with it. You need to have audits and checks and balances.\u201d<\/p>\n<p>According to the KPMG survey, accuracy and reliability are among the top ten concerns that companies have about generative AI.<\/p>\n<p>But that\u2019s just the start of the problems that generative AI can create.<\/p>\n<p>For example, some AIs get ongoing training based on interactions with users. The publicly available version of ChatGPT, for example, uses conversations with its users for its ongoing training unless users specifically opt out. So, for example, if an employee uploads their company\u2019s secret plans and asks for the AI to write some text for a presentation about these plans, the AI will then know those plans. Then, if another person, possibly at a competing company, asks about those plans, the AI might well answer them and provide all the details.<\/p>\n<p>Other information that could potentially leak out this way includes personally identifiable information, financial and legal data, and proprietary code.<\/p>\n<p>According to the KPMG survey, 63% of executives say that data and privacy concerns are a top priority, followed by cybersecurity at 62%.<\/p>\n<p>\u201cIt\u2019s a very real risk,\u201d said Forrester analyst Chase Cunningham. 
\u201cAnytime you\u2019re leveraging these types of systems, they\u2019re reliant on data to improve their models, and you might not necessarily have control or knowledge of what\u2019s being used.\u201d<\/p>\n<p>(Note that OpenAI has just announced an <a href=\"https:\/\/www.computerworld.com\/article\/3705551\/openai-launches-enterprise-grade-chatgpt.html\">enterprise version of ChatGPT<\/a> that it says does not use customers\u2019 data to train its models.)<\/p>\n<p>Another potential liability created by generative AI is the legal risk associated with improperly sourced training data. There are several lawsuits currently making their way through the courts having to do with the fact that some AI companies have \u2014 allegedly \u2014 used pirate sites to read copyrighted books and scraped images from the web without artists\u2019 permission.<\/p>\n<p>This means that an enterprise that heavily uses these AIs might also inherit some of this liability. \u201cI think you\u2019re exposing yourself to risk of being tied to some sort of litigious action,\u201d said Cunningham.<\/p>\n<p>Indeed, legal exposure was cited as a top barrier to implementing generative AI by 20% of the KPMG survey respondents.<\/p>\n<p>Plus, in theory, generative AI creates original, new works, inspired by the content it was trained on \u2014 but sometimes the results can wind up almost identical to the training data. So an enterprise might accidentally wind up using content that comes too close to copyright infringement.<\/p>\n<p>Here\u2019s how to address these potential risks.<\/p>\n<p>It is not too early to start running generative AI training for employees. Employees need to understand both the capabilities and the limitations of generative AI. And they need to know which tools are safe to use.<\/p>\n<p>\u201cThere needs to be education at the enterprise level,\u201d said IDC analyst Wayne Kurtzman. 
\u201cIt\u2019s incumbent on companies to set up specific guidelines, and they need <a href=\"https:\/\/www.computerworld.com\/article\/3705028\/why-and-how-to-create-corporate-generative-ai-policies.html\">an AI policy to guide users in this<\/a>.\u201d<\/p>\n<p>For instance, genAI output should always be treated as a starting draft that employees review closely and amend as needed, not as a final product ready to send out into the world.<\/p>\n<p>Enterprises need to help their employees develop critical thinking skills around AI, Kurtzman said, and to set up a feedback loop that includes an array of users who can flag some of the issues that crop up.<\/p>\n<p>\u201cWhat companies want to see is productivity improvements,\u201d he said. \u201cBut they also hope that the productivity improvements are greater than the time necessary to fix any challenges that may have occurred in the adoption. This will not go as smoothly as everyone would like, and we all know that.\u201d<\/p>\n<p>Companies have already started on this journey, elevating data literacy among their employees as part of their push to becoming a data-driven enterprise, said Omdia\u2019s Shimmin. \u201cThis is really no different,\u201d he said, \u201cexcept that the stakes are higher.\u201d<\/p>\n<p>At Insight, for example, IT and corporate leaders have created a generative AI policy for the company\u2019s 14,000 global employees. The starting point is a safe, company-approved generative AI tool that everyone can use \u2014 an instance of ChatGPT running on a private Azure cloud.<\/p>\n<p>This lets employees know, \u201cHere\u2019s a safe place,\u201d said Orlandini. \u201cGo ahead and use it, because we verified that this is a secure environment to do it in.\u201d<\/p>\n<p>For any other tool Insight employees use that has recently added generative AI capabilities, the company cautions them to be careful not to share any proprietary information. 
\u201cUnless we\u2019ve given you permission, treat every one of those like you would Twitter, or Reddit, or Facebook,\u201d Orlandini said. \u201cBecause you don\u2019t know who\u2019s going to see it.\u201d<\/p>\n<p>The unsanctioned use of generative AI is just part of the broader <a href=\"https:\/\/www.computerworld.com\/article\/3541829\/3-big-saas-challenges-for-it.html\">unsanctioned SaaS problem<\/a>, with many of the same challenges. It\u2019s hard for companies to track what apps employees are using and the security implications of all the different apps.<\/p>\n<p>According to the <a href=\"https:\/\/www.bettercloud.com\/monitor\/the-2023-state-of-saasops-report\/\" rel=\"nofollow noopener\" target=\"_blank\">2023 BetterCloud State of SaaSOps report<\/a>, 65% of all SaaS apps used in the enterprise are unsanctioned. But there are cybersecurity products that track \u2014 or block \u2014 employee access to particular SaaS applications or websites and that block sensitive data from being uploaded to outside sites and apps.<\/p>\n<p><a href=\"https:\/\/www.gartner.com\/en\/documents\/3992205\" rel=\"nofollow noopener\" target=\"_blank\"><strong>CASB (cloud access security broker) tools<\/strong><\/a> can help companies protect themselves from unsanctioned SaaS use. In 2020, the top vendors in this space included Netskope, Microsoft, Bitglass, and McAfee \u2014 now SkyHigh Security. There are standalone CASB vendors, but CASB features are also included in security service edge (SSE) and secure access service edge (SASE) platforms.<\/p>\n<p>This is a good time for companies to talk to their CASB vendors and ask about how they track and block both standalone generative AI tools and those embedded into SaaS applications.<\/p>\n<p>\u201cOur advice to security folks is to make sure they apply these web tracking tools to understand where people are going, and potentially blocking them,\u201d said Gartner\u2019s Wong. 
\u201cYou also don\u2019t want to lock it down too much and inhibit productivity,\u201d he added.<\/p>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,11098,29835,20885,714],"class_list":["post-22842","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-enterprise-applications","tag-generative-ai","tag-productivity-software","tag-security"],"_links":{"self":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/22842","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=22842"}],"version-history":[{"count":0,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/22842\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=22842"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=22842"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=22842"}],"curies":[{
"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}