{"id":22022,"date":"2023-05-16T14:30:07","date_gmt":"2023-05-16T22:30:07","guid":{"rendered":"http:\/\/www.palada.net\/index.php\/2023\/05\/16\/news-15752\/"},"modified":"2023-05-16T14:30:07","modified_gmt":"2023-05-16T22:30:07","slug":"news-15752","status":"publish","type":"post","link":"https:\/\/www.palada.net\/index.php\/2023\/05\/16\/news-15752\/","title":{"rendered":"Senate hearings see a clear and present danger from AI \u2014 and opportunities"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2023\/02\/20\/09\/pcworld-the-complete-chatgpt-artificial-intelligence-openai-training-bundle-100937678-small.jpg\"\/><\/p>\n<p>There are vital national interests in advancing artificial intelligence (AI) to streamline public services and automate mundane tasks performed by government employees. But the government lacks in both IT talent and systems to support those efforts.<\/p>\n<p>\u201cThe federal government as a whole continues to face barriers in hiring, managing, and retaining staff with advanced technical skills \u2014 the very skills needed to design, develop, deploy, and monitor AI systems,\u201d said Taka Ariga, chief data scientist at the US Government Accountability Office.<\/p>\n<p>Daniel Ho, associate director for Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University, agreed, saying that by one estimate the federal government would need to hire about 40,000 IT workers to address cybersecurity issues posed by AI.<\/p>\n<p>Artificial intelligence tools were the subject of two separate hearings on Capitol Hill.\u00a0Before the Homeland Security and Governmental Affairs Committee, a panel of five AI experts testified that while adoption of AI technology is inevitable, removing human oversight of it poses enormous risks. 
And at a hearing of the Senate Judiciary subcommittee on privacy, technology, and the law, OpenAI CEO Sam Altman was joined by IBM executive Christina Montgomery and New York University professor emeritus Gary Marcus in giving testimony.<\/p>\n<p>The overlapping hearings covered a variety of issues and concerns about the rapid rise and evolution of AI-based tools. Beyond the need for more skilled workers in the US government, officials raised\u00a0concerns about government agencies grappling with\u00a0biases caused by faulty or corrupted data in AI algorithms, fears about election disinformation, and the need for better transparency about how AI tools \u2014 and the underlying large language models \u2014 actually work.\u00a0<\/p>\n<p>In opening remarks, Homeland Security and Governmental Affairs Committee Chairman Sen. Gary Peters (D-MI) said the US must take the global lead in AI development and regulation by setting standards that can \u201caddress potential risks and harms.\u201d<\/p>\n<p>One of the most obvious threats? The data used by AI chatbots such as OpenAI&#8217;s ChatGPT to produce answers is often inaccessible to anyone outside the vendor community \u2014 and even the engineers who design AI systems don&#8217;t always understand how those systems reach conclusions.<\/p>\n<p>In other words, AI systems can be black boxes built on proprietary technology, often fed bad data, that produce flawed results.<\/p>\n<p>Peters pointed to <a href=\"https:\/\/drive.google.com\/file\/d\/1kA7CG3cLq6eWmwBVgTDOIMhxuGZwRJ5O\/view\" rel=\"nofollow noopener\" target=\"_blank\">a recent study\u00a0by Stanford University<\/a>\u00a0that uncovered a flawed Internal Revenue Service AI algorithm used to determine who should be audited. 
The system chose Black taxpayers at five times the rate of taxpayers of other races.<\/p>\n<p>Peters also referenced AI-driven systems deployed by at least a dozen states to determine eligibility for disability benefits, \u201cwhich resulted in the system denying thousands of recipients this critical assistance that helps them live independently,\u201d Peters said.<\/p>\n<p>Because the disability benefits system was considered \u201cproprietary technology\u201d by the states, citizens were unable to learn why they were denied benefits or to appeal the decision, according to Peters. Privacy laws that kept the data and process hidden weren&#8217;t designed to handle AI applications and issues.<\/p>\n<p>\u201cAs agencies use more AI tools, they need to ensure they\u2019re securing and appropriately using any data inputs to avoid accidental disclosures or unintended uses that harm Americans&#8217; rights or civil liberties,\u201d Peters said.<\/p>\n<p>Richard Eppink, a lawyer with the American Civil Liberties Union of Idaho Foundation, noted that\u00a0<a href=\"https:\/\/www.aclu.org\/news\/privacy-technology\/pitfalls-artificial-intelligence-decisionmaking-highlighted-idaho-aclu-case\" rel=\"nofollow noopener\" target=\"_blank\">a class action lawsuit<\/a>\u00a0has been brought by the ACLU representing about 4,000 Idahoans with developmental and intellectual disabilities who were denied funds by the state&#8217;s Medicaid program because of an AI-based system. \u201cWe can\u2019t allow proprietary AI to hold due process rights hostage,\u201d Eppink said.<\/p>\n<p>At the other hearing on AI, Altman was asked whether citizens should be concerned that elections could be gamed by large language models (LLMs) such as GPT-4 and its chatbot application, ChatGPT.<\/p>\n<p>\u201cIt\u2019s one of my areas of greatest concern,\u201d he said. 
\u201cThe more general ability of these models to manipulate, persuade, to provide one-on-one interactive disinformation \u2014 given we\u2019re going to face an election next year and these models are getting better, I think this is a significant area of concern.\u201d<\/p>\n<p>Regulation, Altman said, would be \u201cwise\u201d because people need to know if they\u2019re talking to an AI system or looking at content \u2014 images, videos or documents \u2014 generated by a chatbot.\u00a0\u201cI think we\u2019ll also need rules and guidelines about what is expected in terms of disclosure from a company providing a model that could have these sorts of abilities we\u2019re talking about. So, I\u2019m nervous about it.\u201d<\/p>\n<p>People, however, will adapt quickly, he added, pointing to Adobe\u2019s Photoshop software as something that at first fooled many people until its capabilities became widely understood. \u201cAnd then pretty quickly [people] developed an understanding that images might have been Photoshopped,\u201d Altman said. \u201cThis will be like that, but on steroids.\u201d<\/p>\n<p>Lynne Parker, director of the AI Tennessee Initiative at the University of Tennessee, said one way to identify content generated by AI tools is to include watermarks, which would let users trace the content\u2019s provenance.<\/p>\n<p>Committee member Sen. Maggie Hassan (D-NH) said there would be a future hearing on the topic of watermarking AI content.<\/p>\n<p>Altman also suggested the US government adopt a three-point AI oversight plan.<\/p>\n<p>Altman, however, didn\u2019t address transparency concerns about how LLMs are trained, something Sen. Marsha Blackburn (R-TN) and other committee members have raised.<\/p>\n<p>Parker, too, called for federal action \u2014 guidelines that would allow the US government to responsibly leverage AI. 
She\u00a0then listed 10 of them, including the protection of citizen rights, the use of established rules such as NIST\u2019s proposed <a href=\"https:\/\/www.nist.gov\/itl\/ai-risk-management-framework\" rel=\"nofollow noopener\" target=\"_blank\">AI Risk Management Framework<\/a>, and the creation of a federal AI council.<\/p>\n<p>Onerous or heavy-handed oversight that hinders the development and deployment of AI systems isn\u2019t needed, Parker argued. Instead, existing proposed guidelines, such as the Office of Science and Technology Policy\u2019s <a href=\"https:\/\/www.whitehouse.gov\/ostp\/ai-bill-of-rights\/\" rel=\"nofollow noopener\" target=\"_blank\">Blueprint for an AI Bill of\u00a0Rights<\/a>, would address high-risk issues.<\/p>\n<p>Defining the responsible use of AI is also important, a task for which agencies like the Office of Management and Budget should be given responsibility.<\/p>\n<p>One concern: vendors of chatbot and other AI technologies are working hard to obtain public information, such as cell phone records and citizen addresses, from state and federal agencies to assist in developing new applications. Those applications could track people and their online habits to better market to them.<\/p>\n<p>The Senate committee also heard concerns that China is leading in both AI development and standards. \u201cWe seem to be caught in a trap,\u201d said\u00a0Jacob\u00a0Siegel, senior editor of news at <em>Tablet Magazine<\/em>. \u201cThere\u2019s a vital national interest in promoting the advancement of AI, yet at present the government\u2019s primary use of AI appears to be as a political weapon to censor information that it or its third-party partners deem harmful.\u201d<\/p>\n<p>Siegel, whose online magazine focuses on Jewish news and culture, is a veteran of the wars in Iraq and Afghanistan, where he served as an intelligence officer.<\/p>\n<p>American AI governance to date, he argued, is emulating the Chinese model of top-down, party-driven social control. 
\u201cContinuing in this direction will mean the end of our tradition of self-government and the American way of life.\u201d<\/p>\n<p>Siegel said his experiences in the war on terror provided him with a \u201cglimpse of the AI revolution.\u201d He said the technology is already \u201cremaking America\u2019s political system and culture in ways that have already proved incompatible with our system of democracy and self-government and may soon become irreversible.\u201d<\/p>\n<p>He pointed to testimony given earlier this month by Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA), who said China has already established guardrails to ensure AI represents its values. \u201cAnd the US should do the same,\u201d Siegel said.<\/p>\n<p>The Judiciary Committee held a hearing in March to discuss the transformative potential of AI as well as its risks. Today\u2019s hearing focused on how AI can help the government offer services more efficiently while avoiding bias and intrusions on privacy and free speech.<\/p>\n<p>Sen. Rand Paul (R-KY) painted a particularly ominous, Orwellian scenario in which AI such as ChatGPT not only acts on the erroneous data it\u2019s fed, but can also knowingly produce disinformation and censor free speech based on what the government determines is for the greater good.<\/p>\n<p>For example, Paul described how, during the COVID-19 pandemic, a public-private partnership worked in concert with private companies, such as Twitter, to use AI to automate the discovery of controversial posts about vaccine origins and unapproved treatments and delete them.<\/p>\n<p>\u201cThe purpose, so they claimed, was to combat foreign malign influence. But, in reality, the government wasn\u2019t suppressing foreign misinformation or disinformation. It was working to censor domestic speech by Americans,\u201d Paul said. 
\u201cGeorge Orwell would be proud.\u201d<\/p>\n<p>Since 2020, Paul said, the federal government has awarded more than 500 contracts for proprietary AI systems. The senator claimed the contracts went to companies whose technology is used to \u201cmine the internet, identify conversations indicative of harmful narratives, track those threats, and develop countermeasures before messages go viral.\u201d<\/p>\n<p><a href=\"https:\/\/www.computerworld.com\/article\/3696317\/senate-hearings-see-a-clear-and-present-danger-from-ai-and-opportunities.html#tk.rss_security\" target=\"bwo\" >http:\/\/www.computerworld.com\/category\/security\/index.rss<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2023\/02\/20\/09\/pcworld-the-complete-chatgpt-artificial-intelligence-openai-training-bundle-100937678-small.jpg\"\/><\/p>\n<article>\n<section class=\"page\">\n<p>There are vital national interests in advancing artificial intelligence (AI) to streamline public services and automate mundane tasks performed by government employees. 
But the government lacks both the IT talent and the systems needed to support those efforts.<\/p>\n<p>\u201cThe federal government as a whole continues to face barriers in hiring, managing, and retaining staff with advanced technical skills \u2014 the very skills needed to design, develop, deploy, and monitor AI systems,\u201d said Taka Ariga, chief data scientist at the US Government Accountability Office.<\/p>\n<p>Daniel Ho, associate director of the Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University, agreed, saying that by one estimate the federal government would need to hire about 40,000 IT workers to address cybersecurity issues posed by AI.<\/p>\n<p class=\"jumpTag\"><a href=\"\/article\/3696317\/senate-hearings-see-a-clear-and-present-danger-from-ai-and-opportunities.html#jump\">To read this article in full, please click here<\/a><\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,8746,13431,11063,11070,1328,11067,5897,12747],"class_list":["post-22022","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-careers","tag-chatbots","tag-data-privacy","tag-emerging-technology","tag-government","tag-government-it","tag-privacy","tag-technology-industry"],"_links":{"self":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/22022","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embedda
ble":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=22022"}],"version-history":[{"count":0,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/22022\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=22022"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=22022"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=22022"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}