{"id":23253,"date":"2023-10-30T12:30:06","date_gmt":"2023-10-30T20:30:06","guid":{"rendered":"https:\/\/www.palada.net\/index.php\/2023\/10\/30\/news-16983\/"},"modified":"2023-10-30T12:30:06","modified_gmt":"2023-10-30T20:30:06","slug":"news-16983","status":"publish","type":"post","link":"https:\/\/www.palada.net\/index.php\/2023\/10\/30\/news-16983\/","title":{"rendered":"Biden lays down the law on AI"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2023\/10\/shutterstock_2322880179-100947895-small.jpg\"\/><\/p>\n<p>In a sweeping <a href=\"https:\/\/www.whitehouse.gov\/briefing-room\/statements-releases\/2023\/10\/30\/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence\/\" rel=\"nofollow noopener\" target=\"_blank\">executive order<\/a>, US President Joseph R. Biden Jr. on Monday set up a comprehensive series of standards, safety and privacy protections, and oversight measures for the development and use of artificial intelligence (AI).<\/p>\n<p>Among more than two dozen initiatives, Biden\u2019s &#8220;Safe, Secure, and Trustworthy Artificial Intelligence&#8221; order was a long time coming, according to many observers who\u2019ve been watching the AI space \u2014 especially with the rise of generative AI (genAI) in the past year.<\/p>\n<p>Along with security and safety measures, Biden\u2019s edict addresses Americans\u2019 privacy and genAI problems revolving around bias and civil rights. 
GenAI-based automated hiring systems, for example, have been found to have <a href=\"https:\/\/www.computerworld.com\/article\/3695508\/ai-deep-fakes-mistakes-and-biases-may-be-unavoidable-but-controllable.html\">baked-in biases<\/a> that can give some job applicants advantages based on their race or gender.<\/p>\n<p>Using existing guidance under the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Defense_Production_Act_of_1950\" rel=\"nofollow noopener\" target=\"_blank\">Defense Production Act<\/a>, a\u00a0Cold War\u2013era law that gives the president significant emergency authority to control domestic industries, the order requires leading genAI developers to share safety test results and other information with the government. The National Institute of Standards and Technology (NIST) is to create standards to ensure AI tools are safe and secure\u00a0before public release.<\/p>\n<p>\u201cThe order underscores a much-needed shift in global attention toward regulating AI, especially after the generative AI boom we have all witnessed this year,\u201d said Adnan Masood, chief AI architect at digital transformation services company UST. \u201cThe most salient aspect of this order is its clear acknowledgment that AI isn\u2019t just another technological advancement; it\u2019s a paradigm shift that can redefine societal norms.&#8221;<\/p>\n<p>Recognizing the ramifications of unchecked AI is a start, Masood noted, but the details matter more.<\/p>\n<p><iframe loading=\"lazy\" title=\"YouTube video player\" src=\"https:\/\/www.youtube.com\/embed\/U1QcmY973Rc?si=bXN0ZYacKd7J6Skw\" width=\"100%\" height=\"420\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\" data-mce-fragment=\"1\" style=\"\"> <\/iframe><\/p>\n<p>\u201cIt\u2019s a good first step, but we as AI practitioners are now tasked with the heavy lifting of filling in the intricate details.
[It] requires developers to create standards, tools, and tests to help ensure that AI systems are safe and share the results of those tests with the public,\u201d Masood said.<\/p>\n<p>The order calls for the US government to establish an \u201cadvanced cybersecurity program\u201d to develop AI tools to find and fix vulnerabilities in critical software. Additionally, the National Security Council must coordinate with the White House chief of staff to ensure the military and intelligence community uses AI safely and ethically in any mission.<\/p>\n<p>And the US Department of Commerce was tasked with developing guidance for content authentication and watermarking to clearly label AI-generated content, a problem that\u2019s quickly growing as genAI tools become proficient at mimicking art and other content. \u201cFederal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic \u2014 and set an example for the private sector and governments around the world,\u201d the order stated.<\/p>\n<p>To date, independent software developers and university computer science departments have led the charge against AI\u2019s intentional or unintentional theft of intellectual property and art. Increasingly, developers have been building tools that can watermark unique content or even <a href=\"https:\/\/www.computerworld.com\/article\/3709609\/data-poisoning-anti-ai-theft-tools-emerge-but-are-they-ethical.html\">poison data ingested by genAI systems<\/a>, which scour the internet for information on which to train.<\/p>\n<p>Today, officials from the Group of Seven (G7) major industrial nations also agreed to <a href=\"https:\/\/www.firstpost.com\/tech\/news-analysis\/g7-industrial-countries-to-agree-to-a-basic-ai-code-of-conduct-for-tech-companies-13318122.html\" rel=\"nofollow noopener\" target=\"_blank\">an 11-point set of AI safety principles<\/a> and a voluntary code of conduct for AI developers. 
That agreement is similar to the \u201cvoluntary\u201d set of principles the Biden Administration <a href=\"https:\/\/www.computerworld.com\/article\/3703231\/white-house-promises-on-ai-regulation-called-vague-and-disappointing.html\">issued earlier this year<\/a>; the latter was criticized as too vague and generally disappointing.<\/p>\n<p>\u201cAs we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,\u201d Biden&#8217;s executive order stated. \u201cThe Administration has already consulted widely on AI governance frameworks over the past several months \u2014 engaging with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.\u201d<\/p>\n<p>Biden\u2019s order also targets companies developing <a href=\"https:\/\/www.computerworld.com\/article\/3697649\/what-are-large-language-models-and-how-are-they-used-in-generative-ai.html\">large language models<\/a>\u00a0(LLMs) that could pose a serious risk to national security, economic security, or public health; they will be required to notify the federal government when training the model and must share the results of all safety tests.<\/p>\n<p>Avivah Litan, a vice president and distinguished analyst at Gartner Research, said while the new rules start off strong, with clarity and safety tests targeted at the largest AI developers, the mandates still fall short; that fact reflects the limitations of enforcing rules under an executive order and the need for Congress to set laws in place.<\/p>\n<p>She sees the new mandates falling short in several areas:<\/p>\n<p>\u201cAlso, it\u2019s not clear to me what the enforcement mechanisms will look like even when they do exist. Which agency will monitor and enforce these actions?
What are the penalties for non-compliance?\u201d Litan said.<\/p>\n<p>Masood agreed, saying even though the White House took a &#8220;significant stride forward,&#8221;\u00a0the executive order only scratches the surface of an enormous challenge. &#8220;By design it implores us to have more questions than answers \u2014 what constitutes a safety threat?&#8221; Masood said. &#8220;Who takes on the mantle of that decision-making? How exactly do we test for potential threats? More critically, how do we quash the hazardous capabilities at their inception?&#8221;<\/p>\n<p>One area of critical concern the order attempts to address is the use of AI in bioengineering. The mandate creates standards to help ensure AI is not used to engineer biological organisms \u2014 like deadly viruses or medicines that end up killing people \u2014 that could harm human populations.<\/p>\n<p>\u201cThe order will enforce this provision only by using the emerging standards as a baseline for federal funding of life-science projects,\u201d Litan said. \u201cIt needs to go further and enforce these standards for private capital or any non-federal government funding bodies and sources (like venture capital). It also needs to go further and explain who and how these standards will be enforced and what the penalties are for non-compliance.\u201d<\/p>\n<p>Ritu Jyoti, a vice president analyst at research firm IDC, said what stood out to her is the clear acknowledgement from Biden \u201cthat\u00a0we have an obligation to harness the power of AI for good, while protecting people from its potentially profound risks.&#8221;<\/p>\n<p>Earlier this year, the EU Parliament approved <a href=\"https:\/\/www.computerworld.com\/article\/3699311\/eu-parliament-approves-ai-act-moving-it-closer-to-becoming-law.html\">a draft of the AI Act<\/a>.
The proposed law requires generative AI systems like ChatGPT to comply with transparency requirements by disclosing whether content was AI-generated and to distinguish\u00a0<a href=\"https:\/\/www.infoworld.com\/article\/3574949\/what-are-deepfakes-ai-that-deceives.html\" rel=\"noopener\" target=\"_blank\">deep-fake\u00a0images<\/a>\u00a0from real ones.<\/p>\n<p>While the US may have followed Europe in creating rules to govern AI, Jyoti said that does not mean the American government is behind its allies or that Europe has done a better job at setting up guardrails. \u201cI think there is an opportunity for countries across the globe to work together on AI governance for social good,\u201d she said.<\/p>\n<p>Litan disagreed, saying the EU&#8217;s AI Act is ahead of the president\u2019s executive order because the European rules clarify the scope of companies the law applies to, \u201cwhich it can do as a regulation \u2014 i.e.,\u00a0it applies to any AI systems that are placed on the market, put into service or used in the EU,\u201d she said.<\/p>\n<p>Caitlin Fennessy, vice president and chief knowledge officer of the\u00a0<a href=\"https:\/\/iapp.org\" rel=\"noopener nofollow\" target=\"_blank\">International Association of Privacy Professionals<\/a> (IAPP), a nonprofit advocacy group, said the White House mandates will set market expectations for responsible AI through the testing and transparency requirements.<\/p>\n<p>Fennessy also applauded US government efforts on digital watermarking for AI-generated content and AI safety standards for government procurement, among many other measures.<\/p>\n<p>\u201cNotably, the President paired the order with a call for Congress to pass bipartisan privacy legislation, highlighting the critical link between privacy and AI governance,\u201d Fennessy said.
\u201cLeveraging the Defense Production Act to regulate AI makes clear the significance of the national security risks contemplated and the urgency the Administration feels to act.\u201d<\/p>\n<p>The White House argued the order will help promote a \u201cfair, open, and competitive AI ecosystem,\u201d\u00a0ensuring small developers and entrepreneurs get access to technical assistance and resources, helping small businesses commercialize AI breakthroughs, and encouraging the Federal Trade Commission to exercise its authorities.<\/p>\n<p>Immigration and worker visas were also addressed by the White House, which said it will use existing immigration authorities to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the US, \u201cby modernizing and streamlining visa criteria, interviews, and reviews.\u201d<\/p>\n<p>The US government, Fennessy said, is leading by example by rapidly hiring professionals to build and govern AI and providing AI training across government agencies.<\/p>\n<p>\u201cThe focus on AI governance professionals and training will ensure AI safety measures are developed with the deep understanding of the technology and use context necessary to enable innovation to continue at pace in a way we can trust,\u201d she said.<\/p>\n<p>Jaysen Gillespie, head of analytics and data science at Poland-based AI-enabled advertising firm RTB House, said Biden is starting from a favorable position because even most AI business leaders agree that some regulation is necessary.
He is likely also to benefit, Gillespie said, from any cross-pollination from the conversations Senate Majority Leader Chuck Schumer (D-NY) has held, and continues to hold, with key business leaders.<\/p>\n<p>\u201cAI regulation also appears to be one of the few topics where a bipartisan approach could be truly possible,\u201d said Gillespie, whose company uses AI in targeted advertising, including re-targeting and real-time bidding strategies. \u201cGiven the context behind his potential Executive Order, the President has a real opportunity to establish leadership \u2014 both personal and for the United States \u2014 on what may be the most important topic of this century.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2023\/10\/shutterstock_2322880179-100947895-small.jpg\"\/><\/p>\n<article>\n<section class=\"page\">\n<p>In a sweeping <a href=\"https:\/\/www.whitehouse.gov\/briefing-room\/statements-releases\/2023\/10\/30\/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence\/\" rel=\"nofollow noopener\" target=\"_blank\">executive order<\/a>, US President Joseph R. Biden Jr.
on Monday set up a comprehensive series of standards, safety and privacy protections, and oversight measures for the development and use of artificial intelligence (AI).<\/p>\n<p>Among more than two dozen initiatives, Biden\u2019s &#8220;Safe, Secure, and Trustworthy Artificial Intelligence&#8221; order was a long time coming, according to many observers who\u2019ve been watching the AI space \u2014 especially with the rise of generative AI (genAI) in the past year.<\/p>\n<p class=\"jumpTag\"><a href=\"\/article\/3709451\/biden-lays-down-the-law-on-ai.html#jump\">To read this article in full, please click here<\/a><\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,11063,11070,29835,1328,29256,5897,8698,714,14247],"class_list":["post-23253","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-data-privacy","tag-emerging-technology","tag-generative-ai","tag-government","tag-healthcare-industry","tag-privacy","tag-regulation","tag-security","tag-software-development"],"_links":{"self":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/23253","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=23253"}],"version-history":[{"count":0,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/23253\/revisions
"}],"wp:attachment":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=23253"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=23253"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=23253"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}