{"id":21855,"date":"2023-04-26T18:31:52","date_gmt":"2023-04-27T02:31:52","guid":{"rendered":"http:\/\/www.palada.net\/index.php\/2023\/04\/26\/news-15586\/"},"modified":"2023-04-26T18:31:52","modified_gmt":"2023-04-27T02:31:52","slug":"news-15586","status":"publish","type":"post","link":"http:\/\/www.palada.net\/index.php\/2023\/04\/26\/news-15586\/","title":{"rendered":"ChatGPT learns to forget: OpenAI implements data privacy controls"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2022\/06\/23\/10\/eye_circuits_system_artificial_intelligence_machine_learning_privacy_by_vijay_patel_gettyimages-936718998_1200x800-100768000-small-100929427-small.jpg\"\/><\/p>\n<p>OpenAI, the Microsoft-backed firm behind the groundbreaking <a href=\"https:\/\/www.computerworld.com\/article\/3682143\/chatgpt-finally-an-ai-chatbot-worth-talking-to.html\">ChatGPT<\/a> <a href=\"https:\/\/www.infoworld.com\/article\/3689973\/what-is-generative-ai-the-evolution-of-artificial-intelligence.html\">generative AI<\/a> system, announced this week that it would allow users to turn off the chat history feature for its flagship chatbot, in what\u2019s being seen as a partial answer to critics concerned about the security of data provided to ChatGPT.<\/p>\n<p>The \u201chistory disabled\u201d feature means that conversations marked as such won\u2019t be used to train OpenAI\u2019s underlying models, and won\u2019t be displayed in the history sidebar. They will still be stored on the company\u2019s servers, but will only be reviewed on an as-needed basis for abuse, and will be deleted after 30 days.<\/p>\n<p>\u201cWe hope this provides an easier way to manage your data than our existing opt-out process,\u201d the company said in an official blog post.<\/p>\n<p>OpenAI also said that the company is working on a new ChatGPT business subscription model, aimed at organizational users who may need more direct control over their data. 
ChatGPT Business will adhere to the company\u2019s API data usage policies, meaning that user data will not, by default, be used for model training. OpenAI said that it hopes to debut this subscription model \u201cin the coming months.\u201d<\/p>\n<p>The news comes in the wake of <a href=\"http:\/\/www.computerworld.com\/cms\/article\/Regulators%20set%20sights%20on%20OpenAI\">a move by the European Data Protection Board, earlier this month, to investigate ChatGPT,<\/a> after complaints from privacy watchdogs that the chatbot did not comply with the EU\u2019s <a href=\"https:\/\/www.csoonline.com\/article\/3202771\/general-data-protection-regulation-gdpr-requirements-deadlines-and-facts.html\">General Data Protection Regulation<\/a>. Italy, in March, temporarily banned the use of ChatGPT due to alleged violations of user privacy. That country\u2019s data protection authority, the Garante, demanded that the service demonstrate compliance with applicable privacy laws, and provide improved transparency into how the system handles user data.<\/p>\n<p>It\u2019s clear that privacy and data governance were not top-of-mind at the outset for OpenAI, according to Gartner vice president and analyst Nader Henein \u2013 who noted that that\u2019s nothing new for a startup focused on getting a workable product out into the market.<\/p>\n<p>\u201cThey are continuing to build the airplane mid-flight,\u201d he said. 
\u201cI imagine most of the development underway at Microsoft on Copilot is focused on wrapping that governance and enterprise support around the OpenAI [large language model.]\u201d<\/p>\n<p>It\u2019s a step in the right direction, Henein added, but reflects that the design decisions underlying much of ChatGPT may have treated privacy as an afterthought, not as a core component.<\/p>\n<p>\u201cThere is no doubt in my mind that the team at OpenAI are working feverishly to retrofit governance to their architecture,\u201d he said. \u201cIt\u2019s a matter of how much can be done after the fact. The analogy that we have seen used time and time again is that of baking a cake and trying to add sugar or baking powder after you\u2019ve taken it out of the oven.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2022\/06\/23\/10\/eye_circuits_system_artificial_intelligence_machine_learning_privacy_by_vijay_patel_gettyimages-936718998_1200x800-100768000-small-100929427-small.jpg\"\/><\/p>\n<article>\n<section class=\"page\">\n<p>OpenAI, the Microsoft-backed firm behind the groundbreaking <a href=\"https:\/\/www.computerworld.com\/article\/3682143\/chatgpt-finally-an-ai-chatbot-worth-talking-to.html\">ChatGPT<\/a> <a href=\"https:\/\/www.infoworld.com\/article\/3689973\/what-is-generative-ai-the-evolution-of-artificial-intelligence.html\">generative AI<\/a> system, announced this week that it would allow users to turn off the chat history feature for its flagship chatbot, in what\u2019s being seen as a partial answer to critics concerned about the security of data provided to ChatGPT.<\/p>\n<p>The \u201chistory disabled\u201d feature means 
that conversations marked as such won\u2019t be used to train OpenAI\u2019s underlying models, and won\u2019t be displayed in the history sidebar. They will still be stored on the company\u2019s servers, but will only be reviewed on an as-needed basis for abuse, and will be deleted after 30 days.<\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,11063],"class_list":["post-21855","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-data-privacy"],"_links":{"self":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/21855","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=21855"}],"version-history":[{"count":0,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/21855\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=21855"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=21855"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?
post=21855"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}