{"id":15557,"date":"2019-06-12T10:45:26","date_gmt":"2019-06-12T18:45:26","guid":{"rendered":"http:\/\/www.palada.net\/index.php\/2019\/06\/12\/news-9306\/"},"modified":"2019-06-12T10:45:26","modified_gmt":"2019-06-12T18:45:26","slug":"news-9306","status":"publish","type":"post","link":"http:\/\/www.palada.net\/index.php\/2019\/06\/12\/news-9306\/","title":{"rendered":"The Next Big Privacy Hurdle? Teaching AI to Forget"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/media.wired.com\/photos\/5d003443f5d72cd088c02c58\/master\/pass\/Make-Ai-Forget-146104159.jpg\"\/><\/p>\n<p><strong>Credit to Author: Darren Shou| Date: Wed, 12 Jun 2019 12:00:00 +0000<\/strong><\/p>\n<p><span class=\"lede\">When the European <\/span>Union enacted the <a href=\"https:\/\/eugdpr.org\/\" target=\"_blank\">General Data Protection Regulation<\/a> (GDPR) a year ago, one of the most revolutionary aspects of the regulation was the \u201c<a href=\"https:\/\/gdpr-info.eu\/issues\/right-to-be-forgotten\/\" target=\"_blank\">right to be forgotten<\/a>\u201d\u2014an often-hyped and debated right, sometimes perceived as empowering individuals to request the erasure of their information on the internet, most commonly from search engines or social networks.<\/p>\n<p name=\"inset-left\" class=\"inset-left-component__el\">Darren Shou is vice president of research at Symantec.<\/p>\n<p class=\"paywall\">Since then, the issue of digital privacy has rarely been far from the spotlight. There is widespread debate in governments, boardrooms, and the media on how data is collected, stored, and used, and what ownership the public should have over their own information. 
But as we continue to grapple with this crucial issue, we\u2019ve largely failed to address one of the most important aspects\u2014how do we control our data once it\u2019s been fed into the artificial intelligence (AI) and machine-learning algorithms that are becoming omnipresent in our lives?<\/p>\n<p class=\"paywall\">Virtually every modern enterprise is in some way or another collecting data on its customers or users, and that data is stored, sold, brokered, analyzed, and used to train AI systems. For instance, this is how recommendation engines work\u2014the next video we should watch online, the next purchase, and so on, are all driven by this process.<\/p>\n<p class=\"paywall\">At present, when data is sucked into this complex machinery, there\u2019s no efficient way to reclaim it and its influence on the resulting output. When we think about exerting the right to be forgotten, we recognize that reclaiming specific data from a vast number of private businesses and data brokers poses its own unique challenge. However, we need to realize that even if we can succeed there, we\u2019ll still be left with a difficult question\u2014how do we teach a machine to \u201cforget\u201d something?<\/p>\n<p class=\"paywall\">This question is even more pressing for children and adolescents coming of age in this world\u2014the \u201cAI Generation.\u201d They have gone through the largest \u201cbeta test\u201d of all time, and it\u2019s one that did not account for the fact that children make mistakes, that they make choices, and that society gives them space to collectively learn from those choices and evolve. 
Algorithms may not offer this leniency, meaning that data collected on a youthful transgression may be given the same weight (and remembered the same) as any other data\u2014potentially resulting in the reinforcement of bad behavior, or limited opportunities down the line as this data becomes more embedded into our lives.<\/p>\n<p class=\"paywall\">For instance, today a college admissions counselor may be able to stumble upon incriminating photos of an applicant on a social media platform\u2014in the future, they may be able to <a href=\"https:\/\/www.apnews.com\/f062c28ae72144b3b22146d9d4c6fab3\" target=\"_blank\">hear recordings<\/a> of that applicant as a 12-year-old taken by a voice assistant in the child\u2019s home.<\/p>\n<p class=\"paywall\">The AI Generation needs a right to be forgiven.<\/p>\n<p class=\"paywall\">Historically, we have worked hard to create protections for children\u2014whether that\u2019s <a href=\"https:\/\/www.ftc.gov\/public-statements\/1997\/07\/abcs-ftc-marketing-and-advertising-children\" target=\"_blank\">laws about advertising<\/a>, the expunging of juvenile criminal records, the <a href=\"https:\/\/www.ftc.gov\/enforcement\/rules\/rulemaking-regulatory-reform-proceedings\/childrens-online-privacy-protection-rule\" target=\"_blank\">Children&#x27;s Online Privacy Protection Act<\/a>, or other initiatives. All of these align with a common belief in our society that there\u2019s a dividing line between adulthood and childhood, and that standards and accountability need to be separate and more forgiving for youth.<\/p>\n<p class=\"paywall\">Children coming of age today are not always enjoying that privilege. This prolific data collection and the infusion of AI into their daily lives has happened with minimal oversight, and seemingly little serious thought has been given to what the consequences could be. 
Society engaged in far more rigorous debate over advancements that would seem trivial today\u2014the introduction of car radios, for example, drew much more <a href=\"http:\/\/mentalfloss.com\/article\/29631\/when-car-radio-was-introduced-people-freaked-out\" target=\"_blank\">concern<\/a> from the United States government. The moral panics of the mid-20th century seem quaint in comparison to today\u2019s digital free-for-all.<\/p>\n<p class=\"paywall\">The lack of debate on what data collection and analysis will mean for kids coming of age in an AI-driven world leaves us to imagine its implications for the future. Mistakes, accidents, teachable moments\u2014this is how children learn in the physical world. But in the digital world, when every click, view, interaction, engagement, and purchase is recorded, collected, shared, and analyzed through the AI behemoth, can algorithms recognize a mistake and understand remorse? Or will bad behavior be compounded by algorithms that are nudging our every action and decision for their own purposes?<\/p>\n<p class=\"paywall\">What makes this even more serious is that the massive amount of data we\u2019re feeding these algorithms has enabled them to make decisions experientially or intuitively like humans. This is a huge break from the past, in which computers would simply execute human-written instructions. Now, advanced AI systems can analyze the data they\u2019ve internalized in order to arrive at a solution that humans may not even be able to understand\u2014meaning that many AI systems have become \u201cblack boxes,\u201d even to the developers who built them, and it may be impossible to reason about how an algorithm arrived at a certain decision.<\/p>\n<p class=\"paywall\">On a basic level, people understand that there are trade-offs when they use digital services, but many are oblivious to the <em>amount<\/em> of information captured, how it is used, and whom it is shared with. 
It\u2019s easy to view an email address or a birth date as a single, discrete puzzle piece, but when small bits of information are continually given to an ever-consuming, ever-calculating algorithm, they add up to a shockingly complete picture.<\/p>\n<p class=\"paywall\">One of the starkest examples of this dates back to 2012, when <em><a href=\"https:\/\/www.nytimes.com\/2012\/02\/19\/magazine\/shopping-habits.html\" target=\"_blank\">The New York Times<\/a><\/em> published the story of how a major retailer\u2019s customer prediction model ended up informing a father that his teenage daughter was pregnant through the targeted advertisements she received in the mail. That was seven years ago\u2014not only has technology made great progress since then, but the meter has been running.<\/p>\n<p class=\"paywall\">In 2019, data profiles of everyone who has gone through the system are seven years richer. The teenager in this example is now an adult, and the data surrounding her pregnancy is forever attached to her. Who has the right to know that? And who\u2014or \u201cwhat,\u201d when we consider AI systems\u2014has the right to make judgments based on that?<\/p>\n<p class=\"paywall\">This is where the problem lies\u2014all this data collection and personalization seems benign, even beneficial, until it isn\u2019t. The fault line between the two is time. Looking further into the future raises more questions. What rights do human beings have to their data after they die? Should AI be able to train on an individual\u2019s choices or behaviors once that person is dead?<\/p>\n<p class=\"paywall\">A person\u2019s organs can be donated only if they consented to be an organ donor while alive. If they have a safe deposit box at a bank, they can specify who gains ownership of its contents after their death. In the physical world, we\u2019re given choices and have control over our own possessions. The reverse would be preposterous. 
Imagine the outrage if, upon dying, our bodies, thoughts, and possessions could be taken and used in perpetuity by private enterprises. But that\u2019s essentially what we\u2019ve allowed the digital world to do.<\/p>\n<p class=\"paywall\">Lacking readily applicable laws, rules that set boundaries, or technology that changes the \u201cart of the possible,\u201d we\u2019re left with a decentralized system without a human at the controls. The algorithms can\u2019t choose what to unlearn, and those in charge of them may have no reason, ability, or desire to address the problem.<\/p>\n<p class=\"paywall\">AI began in academia, and those behind its development had altruistic purposes. The advancements made by AI were going to cure the sick and feed the hungry. As businesses have deployed AI, it\u2019s been used to make products and services better, often through learning what the customer wants. The combination of cheap storage and AI\u2019s seemingly endless capacity has made it an incredibly attractive tool, but it has also resulted in mass data collection with no easy way to \u201cforget\u201d data.<\/p>\n<p class=\"paywall\">While AI systems may have the memory of an elephant, they are not infallible\u2014researchers have recently discovered that AI can be \u201c<a href=\"https:\/\/www.cs.cmu.edu\/~mfredrik\/papers\/fjr2015ccs.pdf\" target=\"_blank\">tortured<\/a>\u201d into giving up secrets and data. This discovery means that the inability to forget doesn\u2019t only impact personal privacy\u2014it could also lead to real problems for our global security.<\/p>\n<p class=\"paywall\">It\u2019s not too late to address this crucial issue, but the time to act is now. People, not artificial intelligence, constructed this problem, and it\u2019s time for them to take ownership of solving it. 
There are no simple answers when it comes to privacy, but there are guardrails, safety nets, and limits that can be put into place to restore order and give the public power over their own information.<\/p>\n<p class=\"paywall\">While initial <a href=\"https:\/\/patents.justia.com\/patent\/10225277\" target=\"_blank\">research<\/a> has already begun investigating potential solutions, a true shift will require partnership among the private entities at the cutting edge of AI development, as well as technologists, ethicists, researchers, academics, sociologists, policymakers, and governments. Together, these entities must work to create safeguards and frameworks to guide the development of AI systems for decades to come. As artificial intelligence becomes increasingly prevalent, the need for governance grows more and more urgent.<\/p>\n<p class=\"paywall\">To fall short in this effort would be unforgivable.<\/p>\n<p class=\"paywall\"><strong>WIRED Opinion<\/strong> <em>publishes essays by outside contributors, representing a wide range of viewpoints. Read more opinions <a href=\"https:\/\/www.wired.com\/opinion\">here<\/a>. 
Submit an op-ed at\u00a0opinion@wired.com<\/em><\/p>\n<p class=\"related-cne-video-component__dek\">Sinovation Ventures CEO Kai-Fu Lee and Stanford AI Lab Director Fei Fei Li spoke with WIRED\u2019s Maria Streshinsky as part of WIRED25, WIRED\u2019s 25th anniversary celebration in San Francisco.<\/p>\n<p><a href=\"https:\/\/www.wired.com\/story\/the-next-big-privacy-hurdle-teaching-ai-to-forget\" target=\"bwo\" >https:\/\/www.wired.com\/category\/security\/feed\/<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/media.wired.com\/photos\/5d003443f5d72cd088c02c58\/master\/pass\/Make-Ai-Forget-146104159.jpg\"\/><\/p>\n<p><strong>Credit to Author: Darren Shou| Date: Wed, 12 Jun 2019 12:00:00 +0000<\/strong><\/p>\n<p>Opinion: The inability to forget doesn\u2019t only impact personal privacy\u2014it could also lead to real problems for our global security.<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[10378,10607],"tags":[234,714,21382],"class_list":["post-15557","post","type-post","status-publish","format-standard","hentry","category-security","category-wired","tag-opinion","tag-security","tag-security-privacy"],"_links":{"self":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/15557","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=15557"}],"version-history":[{"count":0,"href":"http:\/\/www.palada.net\/index.
php\/wp-json\/wp\/v2\/posts\/15557\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=15557"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=15557"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=15557"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}