{"id":19535,"date":"2022-07-07T02:30:04","date_gmt":"2022-07-07T10:30:04","guid":{"rendered":"https:\/\/www.palada.net\/index.php\/2022\/07\/07\/news-13268\/"},"modified":"2022-07-07T02:30:04","modified_gmt":"2022-07-07T10:30:04","slug":"news-13268","status":"publish","type":"post","link":"http:\/\/www.palada.net\/index.php\/2022\/07\/07\/news-13268\/","title":{"rendered":"Microsoft backs off facial recognition analysis, but big questions remain"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2017\/11\/facial_recognition_system_identification_digital_id_security_scanning_thinkstock_858236252_3x3-100740902-large.3x2.jpg?auto=webp&amp;quality=85,70\"\/><\/p>\n<p><strong>Credit to Author: Evan Schuman| Date: Thu, 07 Jul 2022 03:00:00 -0700<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">Microsoft is backing away from its public support for some AI-driven features, including facial recognition, and acknowledging the discrimination and accuracy issues these offerings create. But the company had years to fix the problems and didn\u2019t. That&#8217;s akin to a car manufacturer recalling a vehicle rather than fixing it.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Despite concerns that facial recognition technology can be discriminatory, the real issue is that results are inaccurate. (The discriminatory argument plays a role, though, due to the assumptions Microsoft developers made when crafting these apps.)<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Let\u2019s start with what Microsoft did and said. 
<\/span><span style=\"font-weight: 400;\">Sarah Bird, the principal group product manager for Microsoft&#8217;s Azure AI, summed up the pullback last month\u00a0<\/span><a href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/responsible-ai-investments-and-safeguards-for-facial-recognition\/\" rel=\"nofollow noopener\" target=\"_blank\"><span style=\"font-weight: 400;\">in a Microsoft blog<\/span><\/a><span style=\"font-weight: 400;\">.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u201c<\/span><span style=\"font-weight: 400;\">Effective today (June 21), new customers need to apply for access to use facial recognition operations in Azure Face API, Computer Vision, and Video Indexer. Existing customers have one year to apply and receive approval for continued access to the facial recognition services based on their provided use cases. By introducing Limited Access, we add an additional layer of scrutiny to the use and deployment of facial recognition to ensure use of these services aligns with Microsoft\u2019s Responsible AI Standard and contributes to high-value end-user and societal benefit. This includes introducing use case and customer eligibility requirements to gain access to these services. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;Facial detection capabilities\u2013including detecting blur, exposure, glasses, head pose, landmarks, noise, occlusion, and facial bounding box \u2014 will remain generally available and do not require an application.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Look at that second sentence, where Bird highlights this additional hoop for users to jump through \u201cto ensure use of these services aligns with Microsoft\u2019s Responsible AI Standard and contributes to high-value end-user and societal benefit.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This certainly sounds nice, but is that truly what this change does? 
Or will Microsoft simply lean on it as a way to stop people from using the app where the inaccuracies are the biggest?\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">One of the situations Microsoft discussed involves speech recognition, where it found that \u201c<\/span><span style=\"font-weight: 400;\">speech-to-text technology across the tech sector produced error rates for members of some Black and African American communities that were nearly double those for white users,\u201d said Natasha Crampton, Microsoft\u2019s Chief Responsible AI Officer. \u201cWe stepped back, considered the study\u2019s findings, and learned that our pre-release testing had not accounted satisfactorily for the rich diversity of speech across people with different backgrounds and from different regions.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Another issue Microsoft identified is that people of all backgrounds tend to speak differently in formal versus informal settings. Really? The developers didn\u2019t know that before? I bet they did, but failed to think through the implications of not doing anything.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">One way to address this is to reexamine the data collection process. By its very nature, people being recorded for voice analysis are going to be a bit nervous, and they are likely to speak stiffly and formally. One way to deal with this is to hold much longer recording sessions in as relaxed an environment as possible. After a few hours, some people may forget that they are being recorded and settle into casual speaking patterns.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">I&#8217;ve seen this play out with how people interact with voice recognition. At first, they speak slowly and tend to over-enunciate. 
Over time, they slowly fall into what I\u2019ll call &#8220;Star Trek&#8221; mode and speak as they would to another person.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A similar problem was discovered with emotion-detection efforts.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">More from Bird: \u201c<\/span><span style=\"font-weight: 400;\">In another change, we will retire facial analysis capabilities that purport to infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup. We collaborated with internal and external researchers to understand the limitations and potential benefits of this technology and navigate the tradeoffs. In the case of emotion classification specifically, these efforts raised important questions about privacy, the lack of consensus on a definition of emotions and the inability to generalize the linkage between facial expression and emotional state across use cases, regions, and demographics. API access to capabilities that predict sensitive attributes also opens up a wide range of ways they can be misused\u2014including subjecting people to stereotyping, discrimination, or unfair denial of services. To mitigate these risks, we have opted to not support a general-purpose system in the Face API that purports to infer emotional states, gender, age, smile, facial hair, hair, and makeup. Detection of these attributes will no longer be available to new customers beginning June 21, 2022, and existing customers have until June 30, 2023, to discontinue use of these attributes before they are retired.<\/span><span style=\"font-weight: 400;\">\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">On emotion detection, facial analysis has historically proven to be much less accurate than simple voice analysis. 
Voice recognition of emotion has proven quite effective in call center applications, where a customer who sounds very angry can get immediately transferred to a senior supervisor.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">To a limited extent, that helps make Microsoft\u2019s point that it is the way the data is used that needs to be restricted. In that call center scenario, if the software is wrong and that customer was <\/span><i><span style=\"font-weight: 400;\">not <\/span><\/i><span style=\"font-weight: 400;\">in fact angry, no harm is done. The supervisor simply completes the call normally. Note: the only common emotion-detection error with voice I&#8217;ve seen is where the customer is angry at the phone tree and its inability to truly understand simple sentences. The software thinks the customer is angry at the company. A reasonable mistake.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">But again, if the software is wrong, no harm is done.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Bird made a good point that some use cases can still rely on these AI functions responsibly. \u201c<\/span><span style=\"font-weight: 400;\">Azure Cognitive Services customers can now take advantage of the open-source Fairlearn package and Microsoft\u2019s Fairness Dashboard to measure the fairness of Microsoft\u2019s facial verification algorithms on their own data \u2014 allowing them to identify and address potential fairness issues that could affect different demographic groups before they deploy their technology.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Bird also said technical issues played a role in some of the inaccuracies. \u201cIn working with customers using our Face service, we also realized some errors that were originally attributed to fairness issues were caused by poor image quality. If the image someone submits is too dark or blurry, the model may not be able to match it correctly. 
We acknowledge that this poor image quality can be unfairly concentrated among demographic groups.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Among demographic groups? Isn\u2019t that everyone, given that everyone belongs to some demographic group? That sounds like a coy way of saying that matching can perform poorly for non-white users. This is why law enforcement\u2019s use of these tools is so problematic. A key question for IT to ask: What are the consequences if the software is wrong? Is the software one of 50 tools being used, or is it being relied upon solely?\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Microsoft said it&#8217;s working to fix that issue with a new tool. \u201cThat is why Microsoft is offering customers a new Recognition Quality API that flags problems with lighting, blur, occlusions, or head angle in images submitted for facial verification,\u201d Bird said. \u201cMicrosoft also offers a reference app that provides real-time suggestions to help users capture higher-quality images that are more likely to yield accurate results.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In a <\/span><a href=\"https:\/\/www.nytimes.com\/2022\/06\/21\/technology\/microsoft-facial-recognition.html\" rel=\"nofollow noopener\" target=\"_blank\"><i><span style=\"font-weight: 400;\">New York Times <\/span><\/i><span style=\"font-weight: 400;\">interview<\/span><\/a><span style=\"font-weight: 400;\">, Crampton pointed to another issue: \u201c<\/span><span style=\"font-weight: 400;\">the system\u2019s so-called gender classifier was binary \u2018and that\u2019s not consistent with our values.\u2019\u201d <\/span><\/p>\n<p><span style=\"font-weight: 400;\">In short, she\u2019s saying that the system not only thinks in terms of just male and female, but also couldn\u2019t easily label people who identify in other ways. 
In this case, Microsoft simply opted to stop trying to guess gender, which is likely the right call.<\/span><\/p>\n<p><a href=\"https:\/\/www.computerworld.com\/article\/3665109\/microsoft-backs-off-facial-recognition-analysis-but-big-questions-remain.html#tk.rss_security\" target=\"bwo\" >http:\/\/www.computerworld.com\/category\/security\/index.rss<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2017\/11\/facial_recognition_system_identification_digital_id_security_scanning_thinkstock_858236252_3x3-100740902-large.3x2.jpg?auto=webp&amp;quality=85,70\"\/><\/p>\n<p><strong>Credit to Author: Evan Schuman| Date: Thu, 07 Jul 2022 03:00:00 -0700<\/strong><\/p>\n<article>\n<section class=\"page\">\n<p><span style=\"font-weight: 400;\">Microsoft is backing away from its public support for some AI-driven features, including facial recognition, and acknowledging the discrimination and accuracy issues these offerings create. But the company had years to fix the problems and didn\u2019t. That&#8217;s akin to a car manufacturer recalling a vehicle rather than fixing it.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Despite concerns that facial recognition technology can be discriminatory, the real issue is that results are inaccurate. (The discriminatory argument plays a role, though, due to the assumptions Microsoft developers made when crafting these apps.)<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Let\u2019s start with what Microsoft did and said. 
<\/span><span style=\"font-weight: 400;\">Sarah Bird, the principal group product manager for Microsoft&#8217;s Azure AI, summed up the pullback last month\u00a0<\/span><a href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/responsible-ai-investments-and-safeguards-for-facial-recognition\/\" rel=\"nofollow noopener\" target=\"_blank\"><span style=\"font-weight: 400;\">in a Microsoft blog<\/span><\/a><span style=\"font-weight: 400;\">.\u00a0<\/span><\/p>\n<p class=\"jumpTag\"><a href=\"\/article\/3665109\/microsoft-backs-off-facial-recognition-analysis-but-big-questions-remain.html#jump\">To read this article in full, please click here<\/a><\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,10516,5897],"class_list":["post-19535","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-microsoft","tag-privacy"],"_links":{"self":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/19535","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=19535"}],"version-history":[{"count":0,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/19535\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=19535"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\
/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=19535"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=19535"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}