{"id":22577,"date":"2023-07-31T10:30:03","date_gmt":"2023-07-31T18:30:03","guid":{"rendered":"https:\/\/www.palada.net\/index.php\/2023\/07\/31\/news-16307\/"},"modified":"2023-07-31T10:30:03","modified_gmt":"2023-07-31T18:30:03","slug":"news-16307","status":"publish","type":"post","link":"http:\/\/www.palada.net\/index.php\/2023\/07\/31\/news-16307\/","title":{"rendered":"EEOC Commissioner: AI system audits might not comply with federal anti-bias laws"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2023\/07\/shutterstock_2125941194-100944017-small.jpg\"\/><\/p>\n<p>Keith Sonderling, commissioner of the US Equal Employment Opportunity Commission (EEOC), has for years been sounding the alarm about the potential for artificial intelligence (AI) to run afoul of federal anti-discrimination laws such as the Civil Rights Act of 1964.<\/p>\n<p>It was not until the advent of ChatGPT, Bard, and other popular generative AI tools, however, that local, state and national lawmakers began taking notice \u2014 and companies became aware of the pitfalls posed by a technology that can automate efficiencies in the business process.<\/p>\n<p>Instead of speeches he&#8217;d typically make to groups of chief human resource officers or labor employment lawyers, Sonderling has found himself in recent months talking more and more about AI. His focus has been on how companies can stay compliant as they hand over more of the responsibility for hiring and other aspects of corporate HR to algorithms that are vastly faster and capable of parsing thousands of resumes in seconds.<\/p>\n<p><em>Computerworld<\/em> spoke with Sonderling about how companies can deal with the collection of local, state, federal, and international laws that have emerged to ensure AI&#8217;s potential biases are exposed and eliminated. 
The following are excerpts from that interview:<\/p>\n<p>EEOC Commissioner Keith Sonderling<\/p>\n<p><strong>How have you and the EEOC been involved in addressing AI\u2019s use in human resources and hiring? <\/strong>&#8220;I\u2019ve been talking about this for years, but now everyone wants to hear about what I\u2019ve been talking about.<\/p>\n<p>&#8220;We\u2019re the regulating body for HR. Usually, the demands on the EEOC commissioner are to talk about workplace trends, workplace discrimination and all those issues. With AI impacting HR specifically, now there\u2019s a lot of interest in that \u2014 not just in terms of the traditional legal or government affairs aspect but more broadly in terms of the technology as a whole.<\/p>\n<p>&#8220;It&#8217;s a technology most laypeople can understand because everyone\u2019s applied for a job, everyone\u2019s been in the workforce. If you\u2019re going to be in the workforce you\u2019re going to be subject to this technology, whether it\u2019s through resume screening\u2026or more advanced programs that determine what kind of worker you are or what positions you should be in. This extends all the way to automating the performance management side of the house. Really, it\u2019s impacted all aspects of HR, so there\u2019s a lot of demand in that.<\/p>\n<p>&#8220;More broadly, because I was one of the government officials to talk about this early on, now I talk about broad AI governance for corporations and what they can be doing to implement best practices, policies and procedures internally.<\/p>\n<p><strong>What is your opinion on how various nations and localities are addressing AI regulation? China has moved quickly because it sees both the threat posed by AI and its potential. They want to get their hooks into the tech. Who\u2019s doing the best job? 
<\/strong>\u201cThat\u2019s why it\u2019s so interesting thinking about how AI is going to be regulated and the different approaches different countries are taking; [there&#8217;s] the approach the United States broadly is taking, and also you\u2019re seeing cities and states try to address this on the local level.\u00a0The biggest one is the EU and their proposed AI Act and its risk-based approach.<\/p>\n<p>&#8220;To your point in the debate about regulating AI, will anyone build systems there? The UK is saying come to us because we\u2019re not going to overregulate it. Or are tech companies just going to go develop it in China and forget about all the others?&#8221;<\/p>\n<p><strong>Why is <a href=\"https:\/\/www.computerworld.com\/article\/3701908\/nyc-law-governing-ai-based-hiring-tools-goes-live.html\">New York\u2019s Local Law 144<\/a> important? <\/strong>\u201cTaking a step back, for cities, states, foreign countries \u2014 for anyone who wants to take up the very complex area of algorithmic decision-making laws and try to regulate it \u2014 obviously they should be committed, because not only does it take a certain level of expertise in the underlying use of the tool, but also being able to understand how it works and how it will apply to their citizens.<\/p>\n<p>&#8220;What we\u2019re starting to see is a patchwork of different regulatory frameworks that can sometimes cause more confusion than clarity for employers who operate on a national or even international level. I think with a lot of these HR tools, and you see who the early adopters are or who they\u2019re marketed to, it\u2019s generally for larger companies with bigger workforces. Now, I\u2019m not saying there aren\u2019t AI tools made for smaller and mid-sized businesses, because there certainly are. But a lot of it is designed for [those who] need hiring scaled or promotions scaled and need to make employment decisions for a larger workforce. 
So, they\u2019re going to be subject to these other various requirements if they\u2019re operating in various jurisdictions.&#8221;<\/p>\n<p><strong>How should companies approach compliance considering some laws are local, some are state, and some are federal?\u00a0<\/strong>&#8220;What I\u2019m trying to tell companies using these products \u2014 when it comes to compliance with these laws, or if they are in places where there are no laws on the books because legislators don\u2019t understand AI \u2014 is to take a step back. The laws we enforce here at the EEOC have been around since the 1960s. They deal with all aspects of employment decisions, from hiring, firing, promotions, wages, training, benefits \u2014 basically, all the terms and conditions of employment. Those laws protect against the big-ticket items: race, sex, national origin, pregnancy, religion, LGBT, disability, age.<\/p>\n<p>&#8220;They have been regulated and they\u2019ll continue to be regulated by federal law. So you can\u2019t lose sight of the fact that no matter where you are and regardless of whether your state or city has engaged or will be engaging in algorithmic discrimination standards or laws, you still have federal law requirements.<\/p>\n<p>&#8220;New York is the first to come out and broadly regulate employment decisions by AI, but then it\u2019s limited to hiring and promotion. And then it\u2019s limited to sex, race and ethnicity for doing those audits before requiring consent from employees or doing an audit and publishing those audits. All those requirements will only be for hiring and promotions.<\/p>\n<p>&#8220;Now, there\u2019s a lot of hiring and promotion going on using these AI tools, but that doesn\u2019t mean that if you\u2019re an employer that\u2019s not subject to New York\u2019s Local Law 144 you shouldn\u2019t be doing audits to begin with. 
Or if you\u2019re saying, &#8216;OK, I have to do this because New York is requiring me to do [a] pre-deployment audit for race, sex and ethnicity,&#8217; well, the EEOC is still going to require compliance with all the laws I just mentioned across the board, regardless.&#8221;<\/p>\n<p><strong>So, if your AI-assisted applicant tracking system is audited, should you feel secure that you&#8217;re fully compliant?\u00a0<\/strong>&#8220;You shouldn\u2019t be lulled into a false sense of security that your AI in employment is going to be completely compliant with federal law simply by complying with local laws. We saw this first in <a href=\"https:\/\/www.natlawreview.com\/article\/employers-take-heed-follow-illinois-biometric-privacy-rules-or-risk-losing-battle\" rel=\"nofollow noopener\" target=\"_blank\">Illinois in 2020<\/a> when they came out with the <a href=\"https:\/\/www.ilga.gov\/legislation\/ilcs\/ilcs3.asp?ActID=3004&amp;ChapterID=57\" rel=\"nofollow noopener\" target=\"_blank\">facial recognition act<\/a> in employment, which basically said if you\u2019re going to use facial recognition technology during an interview to assess whether candidates are smiling or blinking, then you need to get consent. They made it more difficult to do [so] for that purpose.<\/p>\n<p>&#8220;You can see how fragmented the laws are, where Illinois is saying we\u2019re going to worry about this one aspect of an application for facial recognition in an interview setting. New York is saying our laws are designed for hiring and promotion in these categories. So, OK, I\u2019m not going to use facial recognition technology in Illinois, and I\u2019ll audit for hiring and promotion in New York. But, look, the federal government says you still have to be compliant with all the civil rights laws.<\/p>\n<p>&#8220;You could have been doing this since the 1960s, because all these tools are doing is scaling employment decisions. 
Whether the AI technology is making all the employment decisions or is one of many factors in an employment decision, or whether it\u2019s simply assisting you with information about a candidate or employee that you otherwise wouldn\u2019t have been able to ascertain without advanced machine learning looking for patterns faster than a human could \u2014 at the end of the day, it\u2019s an employment decision, and only an employer can make an employment decision.&#8221;<\/p>\n<p><strong>So, where does the liability for ensuring AI-infused or machine learning tools are compliant lie?\u00a0<\/strong>&#8220;All the liability rests with the employer in the same way it rested with HR using a pencil and paper back in the 1960s. You cannot lose sight that these are just employment decisions being made faster, more efficiently, with more data, and potentially with more transparency. But [hiring] has been regulated for a long time.<\/p>\n<p>&#8220;With the uncertain future of federal AI legislation and where it may go, where the EU\u2019s legislation may go, and as more states take this on \u2014 California, New Jersey, and New York State want to get involved \u2014 you can\u2019t just sit back and say, well, there\u2019s no certainty yet in AI law. You can\u2019t say there\u2019s no AI regulatory body yet that a senator wants to create, there\u2019s no EU law yet that will require me to do one, two, three before using it, and think, &#8216;We can just wait and implement this software like we do other software.&#8217; That\u2019s just not true.<\/p>\n<p>&#8220;When you&#8217;re dealing with HR, you\u2019re dealing with civil rights in the workplace. You\u2019re dealing with a person\u2019s ability to enter and thrive in the workforce and provide for their family, which is different from other uses. 
I\u2019m telling you there are laws in existence \u2014 and they will continue to be in existence \u2014 that employers are familiar with; we just need to apply them to these HR tools in the same way we would with any other employment decision.&#8221;<\/p>\n<p><strong>Do you believe New York\u2019s Local Law 144 is a good baseline or foundation for other laws to mimic?\u00a0<\/strong>&#8220;I think Local Law 144 is raising awareness of the ability for employers to do employment audits. I think it\u2019s a good thing, in the sense that now employers in New York who are hiring are being forced to do an audit. It raises awareness that whether or not you\u2019re being forced to do it, it\u2019s good compliance.<\/p>\n<p>&#8220;Just because a local government isn\u2019t forcing you to do an audit doesn\u2019t mean you cannot do it yourself. In the sense that employers are now recognizing and investing in how to get AI compliant before it makes a decision involving someone\u2019s livelihood, it\u2019s developing this framework of how to audit AI pre-deployment, post-deployment and how [to] test it. How do we create the framework for AI broadly, whether it\u2019s being used in employment, housing, or credit? It gets companies more familiar not only with spending the resources needed to build or buy these systems, but also with the implementation side, which has a compliance aspect to it.<\/p>\n<p>&#8220;I think it\u2019s raising awareness in a positive way of performing audits to prevent discrimination. 
If you find the job candidate recommendation algorithm has a factor in there that\u2019s not necessary for the job, but instead is eliminating a certain class of workers who are qualified but excluded because of age, or race, or national origin, or whatever the algorithm is picking up \u2014 if you can see that and prevent it and tweak it, whether by changing the job description or doing more recruiting in certain areas to ensure you have an inclusive job applicant pool or just ensuring the job parameters are necessary \u2014 that\u2019s preventing discrimination.<\/p>\n<p>&#8220;A big part of our mission here at the EEOC, even though people look at us as an enforcement agency \u2014 which we are \u2014 is to prevent discrimination and promote equal opportunity in the workplace. Doing these audits in the first place can prevent that.&#8221;<\/p>\n<p><strong>What makes an AI applicant tracking system problematic in the first place?\u00a0<\/strong>&#8220;A true ATS system is just going to be a repository of applications and how you look at them. It\u2019s what you\u2019re doing with that data set that can lead to problems, and how you\u2019re implementing the AI on that data set, and what characteristics you\u2019re looking for within that pool, and how it gets you the flow of candidates. That funnel from the ATS to who you\u2019re going to select for the job is where AI can be helpful. Many times when we\u2019re looking at a job description or a job recommendation, or the requirements for that job, those in some cases haven\u2019t been updated in years or even decades. Or a lot of times they\u2019ve just been copied and pasted from a competitor. That has the potential to discriminate because you don\u2019t know if you\u2019re copying a job description that may have historical biases.<\/p>\n<p>&#8220;The EEOC is going to look at that and simply say, what were the results? 
If the results were discrimination, you have the burden of going through every aspect of the characteristics you put into that ATS and proving each is necessary for the job in that location based upon the applicant pool.<\/p>\n<p>&#8220;So, it\u2019s not so much the ATS systems that can be problematic, but what machine learning tools are scanning the ATS systems and whether it was a diverse pool of applicants in the first place. That\u2019s a long-winded way of asking: how are you getting into that ATS system, and then once the applicant is in that system, what are they being rated on? You can see how historical biases can prevent some [people] from getting into those systems in the first place; and then once you\u2019re in the ATS system, the next level is: what skills or recommendations are not necessary but are discriminatory?&#8221;<\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2023\/07\/shutterstock_2125941194-100944017-small.jpg\"\/><\/p>\n<article>\n<section class=\"page\">\n<p>Keith Sonderling, commissioner of the US Equal Employment Opportunity Commission (EEOC), has for years been sounding the alarm about the potential for artificial intelligence (AI) to run afoul of federal anti-discrimination laws such as the Civil Rights Act of 1964.<\/p>\n<p>It was not until the advent of ChatGPT, Bard, and other popular generative AI tools, however, that local, state and national lawmakers began taking notice \u2014 and companies became aware of the pitfalls posed by a technology that can automate efficiencies in the business process.<\/p>\n<p>Instead of speeches he&#8217;d typically make to groups of 
chief human resource officers or labor employment lawyers, Sonderling has found himself in recent months talking more and more about AI. His focus has been on how companies can stay compliant as they hand over more of the responsibility for hiring and other aspects of corporate HR to algorithms that are vastly faster and capable of parsing thousands of resumes in seconds.<\/p>\n<p class=\"jumpTag\"><a href=\"\/article\/3703650\/eeoc-chief-ai-system-audits-might-comply-with-local-discrimination-laws-but-not-federal-ones.html#jump\">To read this article in full, please click here<\/a><\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,11063,11070,29835,11067,18384],"class_list":["post-22577","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-data-privacy","tag-emerging-technology","tag-generative-ai","tag-government-it","tag-it-leadership"],"_links":{"self":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/22577","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=22577"}],"version-history":[{"count":0,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/22577\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=22577"}],"wp:term":[{"taxonomy":"categ
ory","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=22577"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=22577"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}