{"id":15277,"date":"2019-05-08T10:45:28","date_gmt":"2019-05-08T18:45:28","guid":{"rendered":"http:\/\/www.palada.net\/index.php\/2019\/05\/08\/news-9026\/"},"modified":"2019-05-08T10:45:28","modified_gmt":"2019-05-08T18:45:28","slug":"news-9026","status":"publish","type":"post","link":"https:\/\/www.palada.net\/index.php\/2019\/05\/08\/news-9026\/","title":{"rendered":"Artificial Intelligence May Not &#8216;Hallucinate&#8217; After All"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/media.wired.com\/photos\/5ccccce48cb4955f51aeb0ea\/master\/pass\/Ai-Image-Detection_1.gif\"\/><\/p>\n<p><strong>Credit to Author: Louise Matsakis| Date: Wed, 08 May 2019 14:39:59 +0000<\/strong><\/p>\n<p><span class=\"lede\">Thanks to advances <\/span>in <a href=\"https:\/\/www.wired.com\/story\/guide-artificial-intelligence\/\">machine learning<\/a>, computers have gotten really good at identifying what\u2019s in photographs. They started <a href=\"https:\/\/www.theguardian.com\/global\/2015\/may\/13\/baidu-minwa-supercomputer-better-than-humans-recognising-images\" target=\"_blank\">beating humans<\/a> at the task years ago, and can now even generate <a href=\"https:\/\/www.wired.com\/story\/is-this-photo-real-ai-getting-better-faking-images\/\">fake images<\/a> that look eerily real. While the technology has come a long way, it\u2019s still not entirely foolproof. In particular, researchers have found that image detection algorithms remain susceptible to a class of problems called <a href=\"https:\/\/www.wired.com\/story\/researcher-fooled-a-google-ai-into-thinking-a-rifle-was-a-helicopter\/\">adversarial examples<\/a>.<\/p>\n<p>Adversarial examples are like optical (or <a href=\"https:\/\/nicholas.carlini.com\/code\/audio_adversarial_examples\/\" target=\"_blank\">audio<\/a>) illusions for AI. By altering a handful of pixels, a computer scientist can fool a machine learning classifier into thinking, say, a picture of a rifle is actually <a href=\"https:\/\/www.wired.com\/story\/researcher-fooled-a-google-ai-into-thinking-a-rifle-was-a-helicopter\/\">one of a helicopter<\/a>. But to you or me, the image still would look like a gun\u2014it almost seems like the algorithm is <a href=\"https:\/\/www.wired.com\/story\/ai-has-a-hallucination-problem-thats-proving-tough-to-fix\/\">hallucinating<\/a>. As image recognition technology is used in more places, adversarial examples may present a troubling security risk. Experts have shown they can be used to do things like cause a self-driving car to ignore a <a href=\"https:\/\/www.wired.com\/story\/machine-learning-backdoors\/\">stop sign<\/a>, or make a facial recognition system <a href=\"https:\/\/www.vice.com\/en_us\/article\/ne43pz\/ai-fooling-glasses-could-be-good-enough-to-trick-facial-recognition-at-airports\" target=\"_blank\">falsely identify<\/a> someone.<\/p>\n<p class=\"paywall\">Organizations like <a href=\"https:\/\/ai.googleblog.com\/2018\/09\/introducing-unrestricted-adversarial.html\" target=\"_blank\">Google<\/a> and the <a href=\"https:\/\/arxiv.org\/pdf\/1602.02697v2.pdf?loc=contentwell&amp;lnk=that-latter-research&amp;dom=section-10\" target=\"_blank\">US Army<\/a> have studied adversarial examples, but what exactly causes them is still largely a mystery. Part of the problem is that the visual world is incredibly complex, and photos can contain millions of pixels. Another issue is deciphering whether adversarial examples are a product of the original photographs, or how an AI is trained to look at them. 
Organizations like Google and the US Army have studied adversarial examples, but what exactly causes them is still largely a mystery. Part of the problem is that the visual world is incredibly complex, and photos can contain millions of pixels. Another issue is deciphering whether adversarial examples are a product of the original photographs or of how an AI is trained to look at them. Some researchers have hypothesized that they are a high-dimensional statistical phenomenon, or that they arise when the AI isn't trained on enough data.

Now a group of researchers at MIT has found a different answer, in a paper presented earlier this week (https://arxiv.org/abs/1905.02175): adversarial examples only look like hallucinations to people. In reality, the AI is picking up on tiny details that are imperceptible to the human eye. While you might look at an animal's ears to differentiate a dog from a cat, an AI detects minuscule patterns in the photo's pixels and uses those to classify it. "The only thing that makes these features special is that we as humans are not sensitive to them," says Andrew Ilyas, a PhD student at MIT and one of the lead authors of the work, which has yet to be peer-reviewed.

The explanation makes intuitive sense, but it is difficult to document, because it's hard to untangle which features an AI uses to classify an image. To conduct their study, the researchers used a novel method to separate the "robust" characteristics of images, which humans can often perceive, from the "non-robust" ones that only an AI can detect. Then, in one experiment, they trained a classifier on an intentionally mismatched dataset of images. According to the robust features (that is, what the pictures looked like to the human eye), the photos were of dogs. But according to the non-robust features, invisible to us, the photos were in fact of cats, and that's how the classifier was trained: to think the photos were of kitties.

The researchers then showed the classifier new, ordinary pictures of cats it hadn't seen before. It identified the kitties correctly, indicating that the AI was relying on the hidden, non-robust features embedded in the training set. That suggests these invisible characteristics represent real patterns in the visual world, just ones that humans can't see. Adversarial examples are instances where those patterns don't line up with how we view the world.

When algorithms fall for an adversarial example, they're not hallucinating; they're seeing something that people don't. "It's not something that the model is doing weird, it's just that you don't see these things that are really predictive," says Shibani Santurkar, a PhD student at MIT and another lead author on the paper. "It's about humans not being able to see these things in the data."
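In code, the mismatched-dataset experiment might look roughly like the sketch below. This is a schematic reading of the setup described above, not the paper's actual implementation; `pgd_toward` and `dog_images` are hypothetical stand-ins for a targeted adversarial attack and a data loader.

```python
def build_mismatched_dataset(trained_model, dog_images, cat_label, pgd_toward):
    """Images that look like dogs to humans but carry cats' non-robust features."""
    relabeled = []
    for img in dog_images:
        # Perturb just enough that the model's tiny pixel-level cues say "cat",
        # while to a human eye the picture is still plainly a dog.
        adv = pgd_toward(trained_model, img, target=cat_label)
        relabeled.append((adv, cat_label))  # label follows the invisible cues
    return relabeled

# Train a fresh classifier on `relabeled`, then test it on ordinary cat photos.
# If it identifies real cats correctly, as it did in the study, the hidden
# patterns must be genuine, generalizing signal rather than noise.
```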
The study calls into question whether computer scientists can really explain how their algorithms make decisions. "If we know that our models are relying on these microscopic patterns that we don't see, then we can't pretend that they are interpretable in a human fashion," says Santurkar. That may be problematic if, say, someone needs to prove in court that a facial recognition algorithm identified them incorrectly. There might not be a way to account for why the algorithm thought they were a person they're not.

Engineers may ultimately need to choose between building automated systems that are the most accurate and ones that behave the most like humans. If you force an algorithm to rely solely on robust features, there's a chance it will make more mistakes than if it also used the hidden, non-robust ones. But if the AI leans on those invisible characteristics, it may be more susceptible to attacks like adversarial examples. As image recognition technology is increasingly used for tasks like identifying hate speech and scanning luggage at airports, deciding how to navigate these kinds of trade-offs will only become more important.
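One common way to push a model toward the robust, human-perceptible features is adversarial training, in which the classifier is fit on worst-case perturbed inputs rather than clean ones. A minimal sketch, reusing the hypothetical `fgsm_example` helper from earlier:

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    # Swap each clean image for its adversarially perturbed version.
    adv_images = fgsm_example(model, images, labels, epsilon)
    optimizer.zero_grad()  # clear gradients left over from crafting the attack
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Models trained this way tend to ignore the invisible cues, and, as the trade-off above suggests, usually give up some clean-data accuracy in exchange.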