{"id":23226,"date":"2023-10-30T08:41:37","date_gmt":"2023-10-30T16:41:37","guid":{"rendered":"https:\/\/www.palada.net\/index.php\/2023\/10\/30\/news-16956\/"},"modified":"2023-10-30T08:41:37","modified_gmt":"2023-10-30T16:41:37","slug":"news-16956","status":"publish","type":"post","link":"http:\/\/www.palada.net\/index.php\/2023\/10\/30\/news-16956\/","title":{"rendered":"White House to issue AI rules for federal employees"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2023\/10\/shutterstock_2166567463-100947718-small.jpg\"\/><\/p>\n<p>After earlier efforts to reign in generative artificial intelligence (genAI) were <a href=\"https:\/\/www.computerworld.com\/article\/3703231\/white-house-promises-on-ai-regulation-called-vague-and-disappointing.html\">criticized as too vague\u00a0and ineffective<\/a>, the Biden Administration is now expected to announce new, more restrictive rules for use of the technology by federal employees.<\/p>\n<p>The executive order, expected to be unveiled Monday, would also change immigration standards to allow a greater influx of technology workers to help accelerate US development efforts.<\/p>\n<p>On Tuesday night, the White House sent invitations for a \u201cSafe, Secure, and Trustworthy Artificial Intelligence\u201d event Monday hosted by President Joseph R. 
Biden Jr., according to <em><a href=\"https:\/\/www.washingtonpost.com\/technology\/2023\/10\/25\/artificial-intelligence-executive-order-biden\/\" rel=\"nofollow noopener\" target=\"_blank\">The Washington Post<\/a><\/em>.<\/p>\n<p>Generative AI, which has been advancing at breakneck speeds and\u00a0<a href=\"https:\/\/www.computerworld.com\/article\/3695568\/qa-googles-geoffrey-hinton-humanity-just-a-passing-phase-in-the-evolution-of-intelligence.html\">setting off alarm bells among industry experts<\/a>, spurred Biden <a href=\"https:\/\/www.computerworld.com\/article\/3695731\/white-house-unveils-ai-rules-to-address-safety-and-privacy.html\">to issue \u201cguidance\u201d<\/a> last May. Vice President Kamala Harris also met with the CEOs of Google, Microsoft, and OpenAI \u2014 the creator of the popular ChatGPT chatbot \u2014 to discuss potential issues with genAI, which include security, privacy, and <a href=\"https:\/\/www.computerworld.com\/article\/3691639\/tech-bigwigs-hit-the-brakes-on-ai-rollouts.html\">control problems<\/a>.<\/p>\n<p>Even before the launch of ChatGPT in November 2022, the\u00a0administration had unveiled a\u00a0<a href=\"https:\/\/www.whitehouse.gov\/ostp\/ai-bill-of-rights\/\" rel=\"nofollow noopener\" target=\"_blank\">blueprint for a so-called \u201cAI Bill of Rights\u201d<\/a>\u00a0as well as an AI Risk Management Framework; it also pushed a roadmap for standing up a National AI Research Resource.<\/p>\n<p>The new executive order is expected to elevate national cybersecurity defenses by requiring <a href=\"https:\/\/www.computerworld.com\/article\/3697649\/what-are-large-language-models-and-how-are-they-used-in-generative-ai.html\">large language models<\/a>\u00a0(LLMs) \u2014 the foundation of generative AI \u2014 to undergo assessments before they can be used by US government agencies. 
Those agencies include the US Defense Department, Energy Department and intelligence agencies, according to the <em>Post<\/em>.<\/p>\n<p>The new rules will bolster what had been a voluntary commitment by 15 AI development companies to evaluate their genAI systems in a manner consistent with responsible use.<\/p>\n<p>&#8220;I\u2019m afraid we don\u2019t have a very good track record there; I mean, see Facebook for details,\u201d Tom Siebel, CEO of enterprise AI application vendor\u00a0<a href=\"https:\/\/c3.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">C3 AI<\/a>\u00a0and founder of Siebel Systems, told an audience at MIT\u2019s EmTech Conference last May. \u201cI\u2019d like to believe self-regulation would work, but power corrupts, and absolute power corrupts absolutely.&#8221;<\/p>\n<p>While genAI offers extensive benefits with its ability to automate tasks and create sophisticated text responses, images, video and even software code, the technology has also been known to go rogue \u2014 producing anomalies known as hallucinations.<\/p>\n<p>\u201cHallucinations happen because LLMs, in their most vanilla form, don\u2019t have an internal state representation of the world,&#8221; said Jonathan Siddharth, CEO of Turing, a Palo Alto, CA company that uses AI to find, hire, and onboard software engineers remotely. &#8220;There\u2019s no concept of fact. They\u2019re predicting the next word based on what they\u2019ve seen so far \u2014 it\u2019s a statistical estimate.&#8221;<\/p>\n<p>GenAI can also unexpectedly expose sensitive or personally identifiable data. At its most basic level, the tools can gather and analyze massive quantities of data from the Internet, corporations, and even government sources in order to offer users more accurate and detailed content. The drawback is that the information gathered by AI isn\u2019t necessarily stored securely. 
AI applications and networks can make that sensitive information vulnerable to data exploitation by third parties.<\/p>\n<p>Smartphones and self-driving cars, for example, track users\u2019 locations and driving habits. While that tracking software is meant to help the technology better understand habits to more efficiently serve users, it also gathers personal information as part of big data sets used for training AI models.<\/p>\n<p>For companies developing AI, the executive order might necessitate an overhaul in how they approach their practices, according to Adnan Masood, chief AI architect at digital transformation services company UST. The new rules may also\u00a0drive up operational costs initially.<\/p>\n<p>&#8220;However, aligning with national standards could also streamline federal procurement processes for their products and foster trust among private consumers,&#8221; Masood said. &#8220;Ultimately, while regulation is necessary to mitigate AI\u2019s risks, it must be delicately balanced with maintaining an environment conducive to innovation.<\/p>\n<p>&#8220;If we tip the scales too far towards restrictive oversight, particularly in research, development, and open-source initiatives, we risk stifling innovation and conceding ground to more lenient jurisdictions globally,&#8221; Masood continued. &#8220;The key lies in making regulations that safeguard public and national interests while still fueling the engines of creativity and advancement in the AI sector.&#8221;<\/p>\n<p>Masood said the upcoming\u00a0regulations from the White House have been &#8220;a long time coming, and it\u2019s a good step [at] a critical juncture in the US government&#8217;s approach to harnessing and containing AI technology.<\/p>\n<p>&#8220;I hold reservations about extending regulatory reach into the realms of research and development,&#8221; Masood said. 
&#8220;The nature of AI research requires a level of openness and collective scrutiny that can be stifled by excessive regulation. Particularly, I oppose any constraints that could hamper open-source AI initiatives, which have been a driving force behind most innovations in the field. These collaborative platforms allow for rapid identification and remediation of flaws in AI models, fortifying their reliability and security.&#8221;<\/p>\n<p>GenAI is also\u00a0<a href=\"https:\/\/www.computerworld.com\/article\/3695508\/ai-deep-fakes-mistakes-and-biases-may-be-unavoidable-but-controllable.html\">vulnerable to baked-in biases<\/a>, such as AI-assisted hiring applications that tend to choose men over women, or white candidates over minorities. And, as genAI tools get better at mimicking natural language, images and video, it will soon be impossible to discern fake results from real ones; that&#8217;s prompting companies to set up \u201cguardrails\u201d against the worst outcomes, whether they be accidental or intentional efforts by bad actors.<\/p>\n<p>US efforts to rein in AI followed similar efforts by European countries to ensure the technology isn&#8217;t generating content that violates EU laws; that could include child pornography or, in some EU countries, denial of the Holocaust. Italy <a href=\"https:\/\/www.bbc.com\/news\/technology-65139406\" rel=\"nofollow noopener\" target=\"_blank\">temporarily banned\u00a0ChatGPT<\/a>\u00a0over privacy concerns after the natural language processing app experienced\u00a0a data breach involving user conversations and payment information.<\/p>\n<p>The European Union\u2019s \u201c<a href=\"https:\/\/artificialintelligenceact.eu\/\" rel=\"nofollow noopener\" target=\"_blank\">Artificial Intelligence Act<\/a>\u201d (AI Act) was the first of its kind among Western nations. 
The proposed legislation relies heavily on existing rules, such as the General Data Protection Regulation (GDPR), the Digital Services Act, and the Digital Markets Act. The AI Act was originally proposed by the European Commission in April 2021.<\/p>\n<p>States and municipalities <a href=\"https:\/\/www.computerworld.com\/article\/3691819\/legislation-to-rein-in-ais-use-in-hiring-grows.html\">are eyeing restrictions\u00a0of their own<\/a>\u00a0on the use of AI-based bots to find, screen, interview, and hire job candidates because of privacy and bias issues. Some states have already put laws on the books.<\/p>\n<p>The White House is also expected to lean on the National Institute of Standards and Technology to tighten industry guidelines on testing and evaluating AI systems \u2014 provisions that would build on the voluntary commitments on safety, security and trust that the Biden administration extracted from\u00a0<a href=\"https:\/\/www.whitehouse.gov\/briefing-room\/statements-releases\/2023\/09\/12\/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-eight-additional-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai\/\" rel=\"nofollow noopener\" target=\"_blank\">15 major tech companies<\/a>\u00a0this year on AI.<\/p>\n<p>Biden&#8217;s move is especially critical as genAI experiences an ongoing boom, leading to unprecedented capabilities in creating content, deepfakes, and potentially new forms of cyber threats, Masood said.<\/p>\n<p>&#8220;This landscape makes it evident that the government\u2019s role isn&#8217;t just a regulator, but [also as] a facilitator and consumer of AI technology,&#8221; he added. 
&#8220;By mandating federal assessments of AI and emphasizing its role in cybersecurity, the US government acknowledges the dual nature of AI as both a strategic asset and a potential risk.&#8221;<\/p>\n<p>Masood said he&#8217;s a staunch advocate for a nuanced approach to AI regulation, as overseeing the deployment of AI products is essential to ensure they meet safety and ethical standards.<\/p>\n<p>&#8220;For instance, advanced AI models used in healthcare or autonomous vehicles must undergo rigorous testing and compliance checks to protect public well-being,&#8221; he said.<\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2023\/10\/shutterstock_2166567463-100947718-small.jpg\"\/><\/p>\n<article>\n<section class=\"page\">\n<p>After earlier efforts to rein in generative artificial intelligence (genAI) were <a href=\"https:\/\/www.computerworld.com\/article\/3703231\/white-house-promises-on-ai-regulation-called-vague-and-disappointing.html\">criticized as too vague\u00a0and ineffective<\/a>, the Biden Administration is now expected to announce new, more restrictive rules for use of the technology by federal employees.<\/p>\n<p>The executive order, expected to be unveiled Monday, would also change immigration standards to allow a greater influx of technology workers to help accelerate US development efforts.<\/p>\n<p>On Tuesday night, the White House sent invitations for a \u201cSafe, Secure, and Trustworthy Artificial Intelligence\u201d event Monday hosted by President Joseph R. 
Biden Jr., according to <em><a href=\"https:\/\/www.washingtonpost.com\/technology\/2023\/10\/25\/artificial-intelligence-executive-order-biden\/\" rel=\"nofollow noopener\" target=\"_blank\">The Washington Post<\/a><\/em>.<\/p>\n<p class=\"jumpTag\"><a href=\"\/article\/3709528\/white-house-to-issue-ai-rules-for-federal-employees.html#jump\">To read this article in full, please click here<\/a><\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,13431,11070,29835,1328,11067,8698,714],"class_list":["post-23226","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-chatbots","tag-emerging-technology","tag-generative-ai","tag-government","tag-government-it","tag-regulation","tag-security"],"_links":{"self":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/23226","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=23226"}],"version-history":[{"count":0,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/23226\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=23226"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=23226"},{"taxonomy":"post_tag","embeddable":true,"href":"ht
tp:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=23226"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}