{"id":22154,"date":"2023-06-05T02:30:06","date_gmt":"2023-06-05T10:30:06","guid":{"rendered":"http:\/\/www.palada.net\/index.php\/2023\/06\/05\/news-15884\/"},"modified":"2023-06-05T02:30:06","modified_gmt":"2023-06-05T10:30:06","slug":"news-15884","status":"publish","type":"post","link":"https:\/\/www.palada.net\/index.php\/2023\/06\/05\/news-15884\/","title":{"rendered":"Governments worldwide grapple with regulation to rein in AI dangers"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2019\/11\/man_concerned_artificial_intelligence_ai_sign_by_dny59_gettyimages_959737582-100817807-small.jpg\"\/><\/p>\n<p>Ever since <a href=\"https:\/\/www.infoworld.com\/article\/3689973\/what-is-generative-ai-the-evolution-of-artificial-intelligence.html\" rel=\"noopener\" target=\"_blank\">generative AI<\/a> exploded into public consciousness with the launch of <a href=\"https:\/\/www.computerworld.com\/article\/3682143\/chatgpt-finally-an-ai-chatbot-worth-talking-to.html\">ChatGPT<\/a> at the end of last year, calls to regulate the technology to stop it from causing undue harm have risen to fever pitch around the world. 
The stakes are high \u2014 just last week, technology leaders signed an <a href=\"https:\/\/www.computerworld.com\/article\/3697738\/chatgpt-creators-plead-to-reduce-risk-of-global-extinction-from-their-tech.html\">open public letter<\/a> saying that if government officials get it wrong, the consequence could be the extinction of the human race.<\/p>\n<p>While most consumers are just having fun testing the limits of <a href=\"https:\/\/www.computerworld.com\/article\/3697649\/what-are-large-language-models-and-how-are-they-used-in-generative-ai.html\">large language models<\/a> such as ChatGPT, a number of worrying stories have circulated about the technology making up supposed facts (also known as &#8220;hallucinating&#8221;) and making inappropriate suggestions to users, as when an AI-powered version of Bing <a href=\"https:\/\/www.nytimes.com\/2023\/02\/16\/technology\/bing-chatbot-microsoft-chatgpt.html\" rel=\"nofollow noopener\" target=\"_blank\">told a New York Times reporter<\/a> to divorce his spouse.<\/p>\n<p>Tech industry insiders and legal experts also note a raft of other concerns, including the ability of generative AI to <a href=\"https:\/\/www.csoonline.com\/article\/3694931\/5-ways-threat-actors-can-use-chatgpt-to-enhance-attacks.html\" rel=\"noopener\" target=\"_blank\">enhance the attacks of threat actors on cybersecurity defenses,\u00a0<\/a>the possibility of copyright and data-privacy violations \u2014 since large language models are trained on all sorts of information \u2014 and the potential for discrimination as humans encode their own biases into algorithms.\u00a0<\/p>\n<p>Possibly the biggest area of concern is that generative AI programs are essentially self-learning, demonstrating increasing capability as they ingest data, and that their creators don&#8217;t know exactly what is happening within them. 
This may mean, as ex-Google AI leader Geoffrey Hinton has said, that <a href=\"https:\/\/www.computerworld.com\/article\/3695568\/qa-googles-geoffrey-hinton-humanity-just-a-passing-phase-in-the-evolution-of-intelligence.html\">humanity may just be a passing phase in the evolution of intelligence<\/a> and that AI systems could develop their own goals that humans know nothing about.<\/p>\n<p>All this has prompted <a href=\"https:\/\/www.computerworld.com\/article\/3697154\/g7-leaders-warn-of-ai-dangers-say-the-time-to-act-is-now.html\">governments around the world to call for protective regulations<\/a>. But, as with most technology regulation, there is rarely a one-size-fits-all approach, with different governments looking to regulate generative AI in a way that best suits their own political landscape.<\/p>\n<p>\u201c[When it comes to] tech issues, even though every country is free to make its own rules, in the past what we have seen is there\u2019s been some form of harmonization between the US, EU, and most Western countries,\u201d said Sophie Goossens, a partner at law firm Reed Smith who specializes in AI, copyright, and IP issues. \u201cIt&#8217;s rare to see legislation that completely contradicts the legislation of someone else.\u201d<\/p>\n<p>While the details of the legislation put forward by each jurisdiction might differ, there is one overarching theme that unites all governments that have so far outlined proposals: how the benefits of AI can be realized while minimizing the risks it presents to society. 
Indeed, EU and US lawmakers are <a href=\"https:\/\/www.computerworld.com\/article\/3698474\/eu-us-lawmakers-propose-ai-code-of-conduct-in-absence-of-regulation.html\">drawing up an AI code of conduct<\/a> to bridge the gap until legislation is formally passed.<\/p>\n<p>Generative AI is an umbrella term for any kind of automated process that uses algorithms to produce, manipulate, or synthesize data, often in the form of images or human-readable text. It\u2019s called generative because it creates something that didn\u2019t previously exist. It&#8217;s not a new technology, and conversations around regulation are not new either.<\/p>\n<p>Generative AI has arguably been around (in a very basic chatbot form, at least) since the mid-1960s, when <a href=\"https:\/\/www.csail.mit.edu\/news\/eliza-wins-peabody-award\" rel=\"nofollow\">an MIT professor created ELIZA<\/a>, an application programmed to use pattern matching and language substitution methodology to issue responses fashioned to make users feel like they were talking to a therapist. 
But generative AI&#8217;s recent advent into the public domain has allowed people who might not have had access to the technology before to create sophisticated content on just about any topic, based on a few basic prompts.<\/p>\n<p>As generative AI applications become more powerful and prevalent, there is growing pressure for regulation.<\/p>\n<p>\u201cThe risk is definitely higher because now these companies have decided to release extremely powerful tools on the open internet for everyone to use, and I think there is definitely a risk that technology could be used with bad intentions,\u201d Goossens said.<\/p>\n<p>Although discussions by the European Commission around an AI regulatory act began in 2019, the UK government was one of the first to announce its intentions, <a href=\"https:\/\/www.computerworld.com\/article\/3691901\/uk-governments-ai-strategy-to-rely-on-existing-regulations-instead-of-new-laws.html\">publishing a white paper<\/a> in March this year that outlined five principles it wants companies to follow: safety, security, and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.<\/p>\n<p>In an effort to avoid what it called \u201cheavy-handed legislation,\u201d however, the UK government has called on existing regulatory bodies to use current regulations to ensure that AI applications adhere to guidelines, rather than draft new laws.<\/p>\n<p>Since then, the European Commission has published the <a href=\"https:\/\/www.computerworld.com\/article\/3695009\/eu-closes-in-on-ai-act-with-last-minute-chatgpt-related-adjustments.html\">first draft of its AI Act<\/a>, which was delayed due to the need to include provisions for regulating the more recent generative AI applications. 
The draft legislation includes requirements for generative AI models to reasonably mitigate foreseeable risks to health, safety, fundamental rights, the environment, democracy, and the rule of law, with the involvement of independent experts.<\/p>\n<p>The legislation proposed by the EU would forbid the use of AI when it could become a threat to safety, livelihoods, or people\u2019s rights, with stipulations around the use of artificial intelligence becoming less restrictive based on the perceived risk it might pose to someone coming into contact with it \u2014 for example, interacting with a chatbot in a customer service setting would be considered low risk. AI systems that present such limited and minimal risks may be used with few requirements. AI systems posing higher levels of bias or risk, such as those used for government social-scoring systems and biometric identification systems, will generally not be allowed, with few exceptions.<\/p>\n<p>However, even before the legislation had been finalized, ChatGPT in particular had already come under scrutiny from a number of individual European countries for possible <a href=\"https:\/\/www.csoonline.com\/article\/3202771\/general-data-protection-regulation-gdpr-requirements-deadlines-and-facts.html\" rel=\"noopener\" target=\"_blank\">GDPR<\/a>\u00a0data protection violations. 
The Italian data regulator <a href=\"https:\/\/www.csoonline.com\/article\/3692432\/italian-privacy-regulator-bans-chatgpt-over-collection-storage-of-personal-data.html\" rel=\"noopener\" target=\"_blank\">initially banned ChatGPT<\/a> over alleged privacy violations relating to the chatbot\u2019s collection and storage of personal data, but reinstated use of the technology after Microsoft-backed OpenAI, the creator of ChatGPT, clarified its privacy policy and made it more accessible, and offered a new tool to verify the age of users.<\/p>\n<p>Other European countries, including France and Spain, have filed complaints about ChatGPT similar to those issued by Italy, although no decisions relating to those grievances\u00a0have been made.<\/p>\n<p>All regulation reflects the politics, ethics, and culture of the society you\u2019re in, said Martha Bennett, vice president and principal analyst at Forrester, noting that in the US, for instance, there\u2019s an instinctive reluctance to regulate unless there is tremendous pressure to do so, whereas in Europe there is a much stronger culture of regulation for the common good.<\/p>\n<p>\u201cThere is nothing wrong with having a different approach, because yes, you do not want to stifle innovation,\u201d Bennett said. 
Alluding to the comments made by the UK government, Bennett said it is understandable to not want to stifle innovation, but she doesn\u2019t agree with the idea that by relying largely on current laws and being less stringent than the EU AI Act, the UK government can provide the country with a competitive advantage \u2014 particularly if this comes at the expense of data protection laws.<\/p>\n<p>\u201cIf the UK gets a reputation of playing fast and loose with personal data, that\u2019s also not appropriate,\u201d she said.<\/p>\n<p>While Bennett believes that differing legislative approaches can have their benefits, she notes that AI regulations <a href=\"https:\/\/www.computerworld.com\/article\/3693017\/us-and-china-take-first-steps-toward-regulating-generative-ai.html\">implemented by the Chinese government<\/a> would be completely unacceptable in North America or Western Europe.<\/p>\n<p>Under Chinese law, AI firms will be required to submit security assessments to the government before launching their AI tools to the public, and any content generated by generative AI must be in line with the country\u2019s core socialist values. 
Failure to comply with the rules will result in providers being fined, having their services suspended, or facing criminal investigations.<\/p>\n<p>Although a number of countries have begun to draft AI regulations, such efforts are hampered by the reality that lawmakers constantly have to play catch-up to new technologies, trying to understand their risks and rewards.<\/p>\n<p>\u201cIf we refer back to most technological advancements, such as the internet or artificial intelligence, it\u2019s like a double-edged sword, as you can use it for both lawful and unlawful purposes,\u201d said Felipe Romero Moreno, a principal lecturer at the University of Hertfordshire\u2019s Law School whose work focuses on legal issues and regulation of emerging technologies, including AI.<\/p>\n<p>AI systems may also do harm inadvertently, since humans who program them can be biased, and the data the programs are trained with may contain bias or inaccurate information. \u201cWe need artificial intelligence that has been trained with unbiased data,\u201d Romero Moreno said. \u201cOtherwise, decisions made by AI will be inaccurate as well as discriminatory.\u201d<\/p>\n<p>Accountability on the part of vendors is essential, he said, stating that users should be able to challenge the outcome of any artificial intelligence decision and compel AI developers to explain the logic or the rationale behind the technology\u2019s reasoning. 
(A recent example of a related case is a class-action <a href=\"https:\/\/www.boston.com\/news\/the-boston-globe\/2023\/05\/22\/milton-residents-lawsuit-cvs-ai-lie-detectors\/?p1=hp_secondary\" rel=\"nofollow noopener\" target=\"_blank\">lawsuit filed by a US man who was rejected from a job<\/a> because AI video software judged him to be untrustworthy.)<\/p>\n<p>Tech companies need to make artificial intelligence systems auditable so that they can be subject to independent and external checks from regulatory bodies \u2014 and users should have access to legal recourse to challenge the impact of a decision made by artificial intelligence, with final oversight always being given to a human, not a machine, Romero Moreno said.<\/p>\n<p>Another major regulatory issue that needs to be navigated is copyright. The EU\u2019s AI Act includes a provision that would make creators of generative AI tools disclose any copyrighted material used to develop their systems.<\/p>\n<p>\u201cCopyright is everywhere, so when you have a gigantic amount of data somewhere on a server, and you\u2019re going to use that data in order to train a model, chances are that at least some of that data will be protected by copyright,\u201d Goossens said, adding that the most difficult issues to resolve will be around the training sets on which AI tools are developed.<\/p>\n<p>When this problem first arose, lawmakers in countries including Japan, Taiwan, and Singapore made an exception for copyrighted material that found its way into training sets, stating that copyright should not stand in the way of technological advancements.<\/p>\n<p>However, Goossens said, a lot of these copyright exceptions are now almost seven years old. 
The issue is further complicated by the fact that in the EU, while these same exceptions exist, anyone who is a rights holder can opt out of having their data used in training sets.<\/p>\n<p>Currently, because there is no incentive to have your data included, huge swathes of people are now opting out, meaning the EU is a less desirable jurisdiction for AI vendors to operate from.<\/p>\n<p>In the UK, an exception currently exists for research purposes, but the plan to introduce an exception that includes commercial AI technologies was scrapped, with the government yet to announce an alternative plan.<\/p>\n<p>So far, China is the only country that has passed laws and launched prosecutions relating to generative AI \u2014 in May, <a href=\"https:\/\/www.scmp.com\/news\/china\/politics\/article\/3219764\/china-announces-first-known-chatgpt-arrest-over-alleged-fake-train-crash-news\" rel=\"nofollow noopener\" target=\"_blank\">Chinese authorities detained a man<\/a> in northern China for allegedly using ChatGPT to write fake news articles.<\/p>\n<p>Elsewhere, the UK government has said that regulators will issue practical guidance to organizations, setting out how to implement the principles outlined in its white paper over the next 12 months, while the EU Commission is expected to vote imminently to finalize the text of its AI Act.<\/p>\n<p>By comparison, the US still appears to be in the fact-finding stages, although President Joe Biden and Vice President Kamala Harris recently <a href=\"https:\/\/www.computerworld.com\/article\/3695731\/white-house-unveils-ai-rules-to-address-safety-and-privacy.html\">met with executives<\/a> from leading AI companies to discuss the potential dangers of AI.<\/p>\n<p>Last month, two <a href=\"https:\/\/www.computerworld.com\/article\/3696317\/senate-hearings-see-a-clear-and-present-danger-from-ai-and-opportunities.html\">Senate committees also met<\/a> with industry experts, including OpenAI CEO Sam Altman. 
Speaking to lawmakers, Altman said regulation would be \u201cwise\u201d because people need to know if they\u2019re talking to an AI system or looking at content \u2014 images, videos, or documents \u2014 generated by a chatbot.<\/p>\n<p>\u201cI think we\u2019ll also need rules and guidelines about what is expected in terms of disclosure from a company providing a model that could have these sorts of abilities we\u2019re talking about,\u201d Altman said.<\/p>\n<p>This is a sentiment Forrester\u2019s Bennett agrees with, arguing that the biggest danger generative AI presents to society is the ease with which misinformation and disinformation can be created.<\/p>\n<p>\u201c[This issue] goes hand in hand with ensuring that providers of these large language models and generative AI tools are abiding by existing rules around copyright, intellectual property, personal data, etc. and looking at how we make sure those rules are really enforced,\u201d she said.<\/p>\n<p>Romero Moreno argues that education holds the key to tackling the technology\u2019s ability to create and spread disinformation, particularly among young people or those who are less technologically savvy. Pop-up notifications that remind users that content might not be accurate would encourage people to think more critically about how they engage with online content, he said, adding that something like the current cookie disclaimer messages that show up on web pages would not be suitable, as they are often long and convoluted and therefore rarely read.<\/p>\n<p>Ultimately, Bennett said, irrespective of what final legislation looks like, regulators and governments across the world need to act now. 
Otherwise we\u2019ll end up in a situation where the technology has been exploited to such an extreme that we\u2019re fighting a battle we can never win.<\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2019\/11\/man_concerned_artificial_intelligence_ai_sign_by_dny59_gettyimages_959737582-100817807-small.jpg\"\/><\/p>\n<article>\n<section class=\"page\">\n<p>Ever since <a href=\"https:\/\/www.infoworld.com\/article\/3689973\/what-is-generative-ai-the-evolution-of-artificial-intelligence.html\" rel=\"noopener\" target=\"_blank\">generative AI<\/a> exploded into public consciousness with the launch of <a href=\"https:\/\/www.computerworld.com\/article\/3682143\/chatgpt-finally-an-ai-chatbot-worth-talking-to.html\">ChatGPT<\/a> at the end of last year, calls to regulate the technology to stop it from causing undue harm have risen to fever pitch around the world. 
The stakes are high \u2014 just last week, technology leaders signed an <a href=\"https:\/\/www.computerworld.com\/article\/3697738\/chatgpt-creators-plead-to-reduce-risk-of-global-extinction-from-their-tech.html\">open public letter<\/a> saying that if government officials get it wrong, the consequence could be the extinction of the human race.<\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,13431,11063,8698],"class_list":["post-22154","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-chatbots","tag-data-privacy","tag-regulation"],"_links":{"self":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/22154","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=22154"}],"version-history":[{"count":0,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/22154\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=22154"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=22154"},{"t
axonomy":"post_tag","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=22154"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}