{"id":22982,"date":"2023-09-25T04:30:12","date_gmt":"2023-09-25T12:30:12","guid":{"rendered":"http:\/\/www.palada.net\/index.php\/2023\/09\/25\/news-16712\/"},"modified":"2023-09-25T04:30:12","modified_gmt":"2023-09-25T12:30:12","slug":"news-16712","status":"publish","type":"post","link":"http:\/\/www.palada.net\/index.php\/2023\/09\/25\/news-16712\/","title":{"rendered":"Q&amp;A: How one CSO secured his environment from generative AI risks"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2023\/08\/person-at-laptop-using-generative-ai-interface-by-amperespy44-via-shutterstock-100945121-small.jpg\"\/><\/p>\n<p>In February, travel and expense management company <a href=\"https:\/\/navan.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Navan<\/a> (formerly TripActions) chose to go all-in on generative AI technology for a myriad of business and customer assistance uses.<\/p>\n<p>The Palo Alto, CA company turned to ChatGPT from <a href=\"https:\/\/openai.com\/blog\/chatgpt\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI<\/a> and coding assistance tools from <a href=\"https:\/\/github.com\/features\/copilot\/\" rel=\"nofollow noopener\" target=\"_blank\">GitHub Copilot<\/a> to write, test, and fix code; the decision has boosted Navan\u2019s operational efficiency and reduced overhead costs.<\/p>\n<p>GenAI tools have also been used to build a conversational experience for the company\u2019s client virtual assistant, <a href=\"https:\/\/navan.com\/blog\/meet-ava-ai-travel-expense\" rel=\"nofollow noopener\" target=\"_blank\">Ava<\/a>. Ava, a travel and expense chatbot assistant, offers customers answers to questions and a conversational booking experience. 
It can also offer data to business travelers, such as company travel spend, volume, and granular carbon emissions details.<\/p>\n<p>Through genAI, many of Navan\u2019s 2,500 employees have been able to eliminate redundant tasks and create code far faster than if they\u2019d generated it from scratch. However, genAI tools are not without security and regulatory risks. For example, 11% of data employees paste into ChatGPT is confidential, according to <a href=\"https:\/\/www.cyberhaven.com\/blog\/4-2-of-workers-have-pasted-company-data-into-chatgpt\/\" rel=\"nofollow noopener\" target=\"_blank\">a report\u00a0from cybersecurity provider Cyberhaven<\/a>.<\/p>\n<p>Navan CSO Prabhath Karanth has had to deal with the security risks posed by genAI, including data security leaks, malware, and potential regulatory violations.<\/p>\n<p>Navan has a license for ChatGPT, but the company has allowed employees to use their own public instances of the technology \u2014 potentially\u00a0leaking data outside company walls. That led the company to curb leaks and other threats through the use of monitoring tools in conjunction with a clear set of corporate guidelines.<\/p>\n<p>One SaaS tool, for example, flags an employee when they&#8217;re about to violate company policy, which has led to greater awareness about security among workers, according to Karanth.<\/p>\n<p><em>Computerworld<\/em> spoke to Karanth about how he secured his organization from misuse and intentional or unintentional threats related to genAI. The following are excerpts from that interview.<\/p>\n<p><strong>For what purposes does your company use ChatGPT? <\/strong>&#8220;AI has been around a long time, but the adoption of AI in business to solve specific problems \u2014 this year it has gone to a whole different level. Navan was one of the early adopters. We were one of the first companies in the travel and expense space that realized this tech is going to be disruptive. 
We adopted it very early on in our product workflows\u2026and also in our internal operations.&#8221;<\/p>\n<p><strong>Product workflows and internal operations. Is that chatbots to help employees answer questions and help customers to do the same?\u00a0<\/strong>&#8220;There are a few applications on [the] product side. We do have a workflow assistant called Ava, which is a chatbot powered by this technology. There are a ton of features in our product. For example, there\u2019s a dashboard where an admin can look up information around travel and expenses related to their company. And internally, to power our operations, we\u2019ve looked at how we can expedite software development from a development organization perspective. Even from a security perspective, I\u2019m looking very closely at all my tooling where I want to leverage this technology.<\/p>\n<p>&#8220;This applies across the business.&#8221;<\/p>\n<p><strong>I\u2019ve read about some developers who used genAI technology and think it\u2019s terrible. They say the code it generates is sometimes nonsensical. What are your developers telling you about the use of AI for writing code?\u00a0<\/strong>&#8220;That\u2019s not been the experience here. We\u2019ve had very good adoption in the developer community here, especially in two areas. One is operational efficiency; developers don\u2019t have to write code from scratch anymore, at least for standard libraries and development stuff. We\u2019re seeing some very good results. Our developers are able to get to a certain percentage of what they need and then build on top of that.<\/p>\n<p>&#8220;In some cases, we do use open-source libraries \u2014 every developer does \u2014 and so in order to get that open-source library to the point where we have to build on top of that, it\u2019s another avenue where this technology helps.<\/p>\n<p>&#8220;I think there are certain ways to adopt it. You can\u2019t just blindly adopt it. You can\u2019t adopt it in every context. 
The context is key.&#8221;<\/p>\n<p>[Navan has a group it calls \u201ca start-up within a start-up\u201d where new technologies are carefully integrated into existing operations under close oversight.]<\/p>\n<p><strong>Do you use tools other than ChatGPT?\u00a0<\/strong>&#8220;Not really in the business context. On the developer\u2019s side of the house, we also use GitHub Copilot to a certain extent. But in a non-developer context, it\u2019s mostly OpenAI.&#8221;<\/p>\n<p><strong>How would you rank AI in terms of a potential security threat to your organization?\u00a0<\/strong>&#8220;I wouldn\u2019t characterize it as lowest to highest, but I would categorize it as a net new threat vector that you need an overall strategy to mitigate. It\u2019s about risk management.<\/p>\n<p>&#8220;Mitigation is not just from a technology perspective. Technology and tooling are one aspect, but there also must be governance and policies in terms of how you use this technology internally and productize it. You need a people, process, technology risk assessment and then mitigate that. Once you have that mitigation policy in place, then you\u2019ve reduced the risk.<\/p>\n<p>&#8220;If you don\u2019t do all of that, then yes, AI is the highest-risk vector.&#8221;<\/p>\n<p><strong>What kinds of problems did you run into with employees using ChatGPT? Did you catch them copying and pasting sensitive corporate information into prompt windows?\u00a0<\/strong>&#8220;We always try to stay ahead of things at Navan; it\u2019s just the nature of our business. When the company decided to adopt this technology, as a security team we had to do a holistic risk assessment&#8230;. So I sat down with my leadership team to do that. 
The way my leadership team is structured is, I have a leader who runs product platform security, which is on the engineering side; then we have SecOps, which is a combination of enterprise security, DLP \u2013 detection and response; then there\u2019s a governance, risk and compliance and trust function, and that\u2019s responsible for risk management, compliance and all of that.<\/p>\n<p>&#8220;So, we sat down and did a risk assessment for every avenue of the application of this technology. We did put in place some controls, such as data loss prevention, to make sure there is no exploitation of this technology, even unintentionally, to pull out data \u2014 both IP and customer [personally identifiable information].<\/p>\n<p>&#8220;So, I\u2019d say we stayed ahead of this.&#8221;<\/p>\n<p><strong>Did you still catch employees intentionally trying to paste sensitive data into ChatGPT?\u00a0<\/strong>&#8220;The way we do DLP here is it\u2019s based on context. We don\u2019t do blanket blocking. We always catch things and we run it like an incident. It could be insider risk or external, then we involve legal and HR counterparts. This is part and parcel of running a security team. We\u2019re here to identify threats and build protections against them.&#8221;<\/p>\n<p><strong>Were you surprised at the number of employees pasting corporate data into ChatGPT prompts?\u00a0<\/strong>&#8220;Not really. We were expecting it with this technology. There\u2019s a huge push across the company overall to generate awareness around this technology for developers and others. So, we weren\u2019t surprised. We expected it.&#8221;<\/p>\n<p><strong>Are you concerned about genAI running afoul of copyright law as you use it for content creation?\u00a0<\/strong>&#8220;It\u2019s an area of risk that needs to be addressed. You need some legal expertise there for that area of risk. 
Our in-house counsel and legal team have fully leaned into this and there is guidance, and we have all of our legal programs in place. We\u2019ve tried to manage the risk there.&#8221;<\/p>\n<p>[Navan has focused on communication between its privacy, security and legal teams and its product and content teams on new guidelines and restrictions as they arise, and there has been additional training for employees around those issues.]<\/p>\n<p><strong>Are you aware of the issue around ChatGPT creating malware, intentionally or unintentionally? And have you had to address that?\u00a0<\/strong>&#8220;I\u2019m a career security guy, so I keep a very close watch on everything going on in the offensive side of the house. There are all kinds of applications there. There\u2019s malware, there\u2019s social engineering that\u2019s happening through generative AI. I think the defense has to constantly catch up and keep up. I\u2019m definitely aware of this.&#8221;<\/p>\n<p><strong>How do you monitor for malware if an employee is using ChatGPT to create code; how do you stop something like that from slipping through? Do you have software tools, or do you require a second set of eyes on all newly created code?\u00a0<\/strong>&#8220;There are two avenues. One [is] around making sure whatever code we ship to production is secure. And then the other is the insider risk \u2014 making sure any code that is generated doesn\u2019t leave Navan\u2019s corporate environment. For the first piece, we have a continuous integration, continuous deployment \u2014 CI\/CD \u2014 automated code-deployment pipeline, which is completely secured. Any code that gets shipped to production, we have static code analysis running on that at the integration point, before developers merge it to a branch. We also have software composition analysis for any third-party code that\u2019s injected into the environment. 
In addition to that, we also harden the CI\/CD pipeline itself; everything from merge to branch to deployment is hardened.<\/p>\n<p>&#8220;In addition to all of this, we also have runtime API testing and build-time API testing. We also have a product security team that [does] threat modeling and design review for all the critical features that get shipped to production.<\/p>\n<p>&#8220;The second part \u2014 the insider risk piece \u2014 goes back to our DLP strategy, which is data detection and response. We don\u2019t do blanket blocking, but we do block based on context \u2014 based on a lot of context areas&#8230;. We\u2019ve had highly accurate detections and we\u2019ve been able to protect Navan\u2019s IT environment.&#8221;<\/p>\n<p><strong>Can you talk about any particular tools you\u2019ve been using to bolster your security profile against AI threats?\u00a0<\/strong>&#8220;<a href=\"https:\/\/www.cyberhaven.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Cyberhaven<\/a>, definitely. I\u2019ve used traditional DLP technologies in the past and sometimes the noise-to-signal ratio can be high. What Cyberhaven allows us to do is put a lot of context around the monitoring of data movement across the company \u2014 anything leaving an endpoint. That includes endpoint to SaaS, endpoint to storage, so much context. This has significantly improved our protection and also significantly improved our monitoring of data movement and insider risk.<\/p>\n<p>&#8220;[It&#8217;s] also hugely important in the context of OpenAI\u2026 this technology has helped us tremendously.&#8221;<\/p>\n<p><strong>Speaking of Cyberhaven, a recent report of theirs showed about one in 20 employees has pasted confidential company data into ChatGPT alone, never mind other in-house AI tools. 
When you\u2019ve caught employees doing it, what kinds of data were they typically copying and pasting that would be considered sensitive?\u00a0<\/strong>&#8220;To be honest, in the context of OpenAI, I haven\u2019t really identified anything significant. When I say significant, I\u2019m referring to customer [personally identifiable information] or product-related information. Of course there have been several other insider risk instances where we had to triage, get legal involved, and do all the investigations. Specifically with OpenAI, I\u2019ve seen it here and there where we blocked it based on context, but I cannot remember any massive data leak there.&#8221;<\/p>\n<p><strong>Do you think general-purpose genAI tools will eventually be overtaken by smaller, domain-specific, internal tools that are better suited to specific uses and more easily secured?\u00a0<\/strong>&#8220;There\u2019s a lot of that going on right now \u2014 smaller models. But I don\u2019t think OpenAI will be overtaken. If you look at how OpenAI is positioning their technology, they want it to be a platform on which these smaller or larger models can be built.<\/p>\n<p>&#8220;So, I feel like there will be a lot of these smaller models created because of the compute resources larger models consume. Compute will become a challenge, but I don\u2019t think OpenAI will be overtaken. They\u2019re a platform that offers you flexibility over how you want to develop and what size model you want to use. That\u2019s how I see this continuing.&#8221;<\/p>\n<p><strong>Why should organizations trust that OpenAI or other SaaS providers of AI won\u2019t be using the data for purposes unknown to you, such as training their own large language models?\u00a0<\/strong>&#8220;We have an enterprise agreement with them, and we\u2019ve opted out of [data being used for training]. We got ahead of that from a legal perspective. 
That\u2019s very standard with any cloud provider.&#8221;<\/p>\n<p><strong>What steps would you advise other CSOs to take in securing their organizations against the potential risks posed by generative AI technology?\u00a0<\/strong>&#8220;Start with the people, process, technology approach. Do a risk assessment from a people, process, technology perspective. Start with an overall, holistic risk assessment. And what I mean by that is look at your overall adoption: Are you going to use it in your product workflows? If you are, then you have to have your CTO and engineering organization as key stakeholders in this risk assessment.<\/p>\n<p>&#8220;You, of course, need to have legal involved. You need to have your security and privacy counterparts involved.<\/p>\n<p>&#8220;There are also several frameworks already offered to do these risk assessments. NIST published a framework [the AI Risk Management Framework] to do a risk assessment around adoption of this, which addresses just about every risk you need to be considering. 
Then you can figure out which one is applicable to your environment.<\/p>\n<p>&#8220;Then have a process to monitor these controls on an ongoing basis, so you\u2019re covering this end-to-end.&#8221;<\/p>\n<p><a href=\"https:\/\/www.computerworld.com\/article\/3706894\/qanda-how-one-cso-secured-his-environment-from-generative-ai-risks.html#tk.rss_security\" target=\"bwo\" >http:\/\/www.computerworld.com\/category\/security\/index.rss<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2023\/08\/person-at-laptop-using-generative-ai-interface-by-amperespy44-via-shutterstock-100945121-small.jpg\"\/><\/p>\n<article>\n<section class=\"page\">\n<p>In February, travel and expense management company <a href=\"https:\/\/navan.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Navan<\/a> (formerly TripActions) chose to go all-in on generative AI technology for a myriad of business and customer assistance uses.<\/p>\n<p>The Palo Alto, CA company turned to ChatGPT from <a href=\"https:\/\/openai.com\/blog\/chatgpt\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI<\/a> and coding assistance tools from <a href=\"https:\/\/github.com\/features\/copilot\/\" rel=\"nofollow noopener\" target=\"_blank\">GitHub Copilot<\/a> to write, test, and fix code; the decision has boosted Navan\u2019s operational efficiency and reduced overhead costs.<\/p>\n<p>GenAI tools have also been used to build a conversational experience for the company\u2019s client virtual assistant, <a href=\"https:\/\/navan.com\/blog\/meet-ava-ai-travel-expense\" rel=\"nofollow noopener\" target=\"_blank\">Ava<\/a>. Ava, a travel and expense chatbot assistant, offers customers answers to questions and a conversational booking experience. 
It can also offer data to business travelers, such as company travel spend, volume, and granular carbon emissions details.<\/p>\n<p class=\"jumpTag\"><a href=\"\/article\/3706894\/qanda-how-one-cso-secured-his-environment-from-generative-ai-risks.html#jump\">To read this article in full, please click here<\/a><\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[18063,11113,13431,11063,11070,29835,714,14247],"class_list":["post-22982","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-5g","tag-artificial-intelligence","tag-chatbots","tag-data-privacy","tag-emerging-technology","tag-generative-ai","tag-security","tag-software-development"],"_links":{"self":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/22982","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=22982"}],"version-history":[{"count":0,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/22982\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=22982"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=22982"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=22982"}],"curies":[{"n
ame":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}