{"id":22046,"date":"2023-05-19T08:30:20","date_gmt":"2023-05-19T16:30:20","guid":{"rendered":"https:\/\/www.palada.net\/index.php\/2023\/05\/19\/news-15776\/"},"modified":"2023-05-19T08:30:20","modified_gmt":"2023-05-19T16:30:20","slug":"news-15776","status":"publish","type":"post","link":"http:\/\/www.palada.net\/index.php\/2023\/05\/19\/news-15776\/","title":{"rendered":"Apple bans employees from using ChatGPT. Should you?"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2023\/03\/28\/16\/cybersecurity_main-100939073-small.jpg\"\/><\/p>\n<p>Reflecting <a href=\"https:\/\/www.computerworld.com\/article\/3692358\/samsung-shows-we-need-an-apple-approach-to-generative-ai.html\">warnings given earlier<\/a>, Apple is now among the growing number of businesses banning employees from using <a href=\"https:\/\/openai.com\/\" rel=\"noopener nofollow\" target=\"_blank\">OpenAI&#8217;s ChatGPT<\/a> and other similar cloud-based generative AI services in a bid to protect data confidentiality. The<em> <a href=\"https:\/\/www.wsj.com\/articles\/apple-restricts-use-of-chatgpt-joining-other-companies-wary-of-leaks-d44d7d34\" rel=\"noopener nofollow\" target=\"_blank\">Wall Street Journal<\/a><\/em> reports that Apple has also barred staff from using <a href=\"https:\/\/blog.gitguardian.com\/crappy-code-crappy-copilot\/\" rel=\"noopener nofollow\" target=\"_blank\">GitHub\u2019s Copilot<\/a> tool, which some developers use to help write software.<\/p>\n<p>A <a href=\"https:\/\/www.applemust.com\/so-many-mac-developers-are-using-chatgpt-setapp-survey-says\/\" rel=\"noopener nofollow\" target=\"_blank\">recent survey<\/a> found that 39% of Mac developers are using the tech.<\/p>\n<p>While a ban may seem extreme, it shows the company is paying attention to the flood of warnings emanating from security professionals regarding the use of these services. 
The concern is that their use could lead to the disclosure of sensitive or confidential data. Samsung banned the tools earlier this year when it discovered that staff had uploaded confidential source code to ChatGPT.<\/p>\n<p>Security professionals are very aware of the problem. Wicus Ross, senior security researcher at\u00a0Orange Cyberdefense, <a href=\"https:\/\/www.orangecyberdefense.com\/global\/news\/orange-cyberdefense\/ncsc-on-chatgpt\" rel=\"noopener nofollow\" target=\"_blank\">warns<\/a>:<\/p>\n<p>\u201cWhile AI-powered chatbots are trained and further refined by their developers, it isn\u2019t out of the question for staff to access the data that\u2019s being inputted into them. And, considering that humans are often the weakest element of a business\u2019 security posture, this opens the information to a range of threats, even if the risk is accidental.\u201d<\/p>\n<p>While OpenAI does sell a more confidential (and expensive to run) self-hosted version of the service to enterprise clients, the risk is that under the public use agreement there is very little obligation to respect data confidentiality.<\/p>\n<p>That\u2019s bad enough for confidential code and internal documentation, but deeply dangerous when handling information from heavily regulated industries such as banking and healthcare. We have already seen at least one incident in which ChatGPT queries were <a href=\"https:\/\/www.theverge.com\/2023\/3\/21\/23649806\/chatgpt-chat-histories-bug-exposed-disabled-outage?ref=blog.gitguardian.com\" rel=\"noopener nofollow\" target=\"_blank\">exposed to unrelated users<\/a>.<\/p>\n<p>While Apple\u2019s decision may feel like an over-reaction, it is essential that enterprises convince staff to be wary of what data they share. 
The issue is that when using a cloud-based service to process the data, it is very likely the information will be retained by the service for grading, assessment, or even future use.<\/p>\n<p>In essence, the questions you ask a service of this kind become data points for future answers. Information supplied to a cloud-based service may be accessed by humans, either from inside the company or through outside attack. We\u2019ve already seen it happen. OpenAI had to take ChatGPT offline following <a href=\"https:\/\/www.cshub.com\/data\/news\/openai-confirms-chatgpt-data-breach\" rel=\"noopener nofollow\" target=\"_blank\">a data breach earlier this year<\/a>.<\/p>\n<p>Advice from the <a href=\"https:\/\/www.ncsc.gov.uk\/blog-post\/chatgpt-and-large-language-models-whats-the-risk\" rel=\"noopener nofollow\" target=\"_blank\">UK National Cyber Security Centre<\/a> (NCSC) explains the nature of the risk. It points out that queries will be visible to the service provider, stored, and \u201calmost certainly\u201d used to develop the service at some point.<\/p>\n<p>In this context, the terms of use and privacy policy for a service need to be scrutinized closely before anyone makes a sensitive query. The other challenge is that once a question is asked, it, too, becomes data.<\/p>\n<p>As the NCSC explains: \u201cAnother risk, which increases as more organizations produce LLMs, is that queries stored online may be hacked, leaked, or more likely accidentally made publicly accessible. This could include potentially user-identifiable information.\u201d<\/p>\n<p>There is another layer of risk as <a href=\"https:\/\/www.computerworld.com\/article\/3696232\/if-you-worry-about-big-tech-what-do-you-expect-from-big-ai.html\">consolidation across the AI industry accelerates<\/a>. 
A user might ask a sensitive question of a verified secure LLM service that meets all the requirements of enterprise security protocols on Tuesday, but that service could be purchased by a third party with weaker policies the following week.<\/p>\n<p>That purchaser would then also take possession of the sensitive enterprise data previously supplied to the service, but protect it less effectively.<\/p>\n<p>These concerns aren\u2019t the fever dreams of security professionals; they reflect events we\u2019ve already seen. For example, one recent report revealed that over 10 million confidential items, such as API keys and credentials, were <a href=\"https:\/\/www.gitguardian.com\/state-of-secrets-sprawl-report-2023?ref=blog.gitguardian.com\" rel=\"noopener nofollow\" target=\"_blank\">exposed in public repositories such as GitHub last year<\/a>.<\/p>\n<p>Much of the time, such confidential data is shared through personal accounts using these services, which is what has already happened to information from <a href=\"https:\/\/www.bleepingcomputer.com\/news\/security\/toyota-discloses-data-leak-after-access-key-exposed-on-github\/\" rel=\"noopener nofollow\" target=\"_blank\">Toyota<\/a>, <a href=\"https:\/\/www.computerworld.com\/article\/3692358\/samsung-shows-we-need-an-apple-approach-to-generative-ai.html\">Samsung<\/a>, and, presumably given the ChatGPT usage ban, Apple.<\/p>\n<p>With this in mind, most security professionals I follow are united in warning users not to include sensitive or confidential information in queries made to public services such as ChatGPT.<\/p>\n<p>Users should never ask questions that could cause problems if they became public, and within the context of any major company attempting to secure its data, a blanket ban on use is by far the easiest solution, at least for now.<\/p>\n<p>It won\u2019t always be this way.<\/p>\n<p>The most logical path forward is toward small LLM systems capable 
of being hosted on the edge device. We know it is possible, as Stanford University has already been able to make one\u00a0<a href=\"https:\/\/gizmodo.com\/stanford-ai-alpaca-llama-facebook-taken-down-chatgpt-1850247570\" rel=\"noopener nofollow\" target=\"_blank\">run on a Google Pixel phone<\/a>. This makes it plausible to anticipate that at some point, no one will be using the recently introduced ChatGPT app on an iPhone, <a href=\"https:\/\/www.computerworld.com\/article\/3696281\/apples-neural-engine-and-the-generative-ai-game.html\">because they\u2019ll be using similar technology supplied with the phone itself.<\/a><\/p>\n<p>But the bottom line for anybody using cloud-based LLM services is that security is not guaranteed, and you should never share confidential or sensitive data with them. It\u2019s a testament to the <a href=\"https:\/\/www.computerworld.com\/article\/3692358\/samsung-shows-we-need-an-apple-approach-to-generative-ai.html\">value of edge processing and privacy protection<\/a> in a connected age.<\/p>\n<p><em>Please follow me on\u00a0<a href=\"https:\/\/social.vivaldi.net\/@jonnyevans\" rel=\"nofollow noopener\" target=\"_blank\">Mastodon<\/a>, or join me in the\u00a0<a href=\"https:\/\/mewe.com\/join\/appleholics_bar_and_grill\" rel=\"nofollow noopener\" target=\"_blank\">AppleHolic\u2019s bar &amp; grill<\/a>\u00a0and\u00a0<\/em><a href=\"https:\/\/mewe.com\/join\/apple_discussions\" rel=\"nofollow noopener\" target=\"_blank\"><em style=\"font-weight: inherit;\">Apple<\/em>\u00a0<em style=\"font-weight: inherit;\">Discussions<\/em><\/a><em style=\"font-weight: inherit;\">\u00a0groups on MeWe.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" 
src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2023\/03\/28\/16\/cybersecurity_main-100939073-small.jpg\"\/><\/p>\n<article>\n<section class=\"page\">\n<p>Reflecting <a href=\"https:\/\/www.computerworld.com\/article\/3692358\/samsung-shows-we-need-an-apple-approach-to-generative-ai.html\">warnings given earlier<\/a>, Apple is now among the growing number of businesses banning employees from using <a href=\"https:\/\/openai.com\/\" rel=\"noopener nofollow\" target=\"_blank\">OpenAI&#8217;s ChatGPT<\/a> and other similar cloud-based generative AI services in a bid to protect data confidentiality. The<em> <a href=\"https:\/\/www.wsj.com\/articles\/apple-restricts-use-of-chatgpt-joining-other-companies-wary-of-leaks-d44d7d34\" rel=\"noopener nofollow\" target=\"_blank\">Wall Street Journal<\/a><\/em> reports that Apple has also barred staff from using <a href=\"https:\/\/blog.gitguardian.com\/crappy-code-crappy-copilot\/\" rel=\"noopener nofollow\" target=\"_blank\">GitHub\u2019s Copilot<\/a> tool, which some developers use to help write software.<\/p>\n<p class=\"jumpTag\"><a href=\"\/article\/3697000\/apple-bans-employees-from-using-chatgpt-should-you.html#jump\">To read this article in full, please click 
here<\/a><\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[2211,11113,13431,10480,10554,714,24580],"class_list":["post-22046","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-apple","tag-artificial-intelligence","tag-chatbots","tag-ios","tag-mobile","tag-security","tag-small-and-medium-business"],"_links":{"self":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/22046","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=22046"}],"version-history":[{"count":0,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/22046\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=22046"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=22046"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=22046"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}