{"id":22063,"date":"2023-05-22T12:30:04","date_gmt":"2023-05-22T20:30:04","guid":{"rendered":"https:\/\/www.palada.net\/index.php\/2023\/05\/22\/news-15793\/"},"modified":"2023-05-22T12:30:04","modified_gmt":"2023-05-22T20:30:04","slug":"news-15793","status":"publish","type":"post","link":"http:\/\/www.palada.net\/index.php\/2023\/05\/22\/news-15793\/","title":{"rendered":"G7 leaders warn of AI dangers, say the time to act is now"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2023\/05\/shutterstock_1960378384-100941278-small.jpg\"\/><\/p>\n<p>Leaders of the Group of Seven (G7) nations on Saturday called for the creation of technical standards to keep artificial intelligence (AI) in check, saying AI has outpaced oversight for safety and security.<\/p>\n<p>Meeting in Hiroshima, Japan, the leaders\u00a0said nations must <a href=\"https:\/\/www.geo.tv\/latest\/488413-g7-meeting-decides-to-set-up-working-group-to-tackle-ai-issues\" rel=\"noopener nofollow\" target=\"_blank\">come together on a common vision and goal of trustworthy AI<\/a>, even while those solutions may vary. But any solution for digital technologies such as AI should be \u201cin line with our shared democratic values,\u201d they <a href=\"https:\/\/www.consilium.europa.eu\/en\/press\/press-releases\/2023\/05\/20\/g7-hiroshima-leaders-communique\/\" rel=\"noopener nofollow\" target=\"_blank\">said in a statement<\/a>.<\/p>\n<p>The G7, which includes include the U.S., Japan, Germany, Britain, France, Italy, Canada and the EU, stressed that efforts to create trustworthy AI need to include \u201cgovernance, safeguard of intellectual property rights including copyrights, promotion of transparency, [and] response to foreign information manipulation, including disinformation.<\/p>\n<p>&#8220;We recognize the need to immediately take stock of the opportunities and challenges of generative AI, which is increasingly prominent across countries and sectors,&#8221; the G7 leaders said. More specifically, they\u00a0called for the creation of a G7 working group by the end of the year to tackle possible generative AI solutions.<\/p>\n<p>The G7 summit followed a \u201cdigital ministers\u201d meeting last month, where members called for &#8220;risk-based&#8221; AI rules.<\/p>\n<p>AI poses a number of threats to humanity, so it&#8217;s important to ensure it continues to serve humans and not the other way around, according to Avivah\u00a0Litan, a vice president and distinguished analyst at Gartner Research.<\/p>\n<p>Everyday threats include a lack of transparency in generative AI models, which makes them unpredictable; even vendor \u201cdon\u2019t understand everything about how they work internally,\u201d Litan said in <a href=\"https:\/\/blogs.gartner.com\/avivah-litan\/2023\/05\/17\/regulating-ai-requires-international-cooperation-and-pragmatic-actions\/\" rel=\"nofollow noopener\" target=\"_blank\">a blog post last week<\/a>. And, because there\u2019s no verifiable data governance or protection assurances, generative AI can steal content at will and reproduce it, violating intellectual property and copyright laws.<\/p>\n<p>Additionally, chatbots and other AI-based tools can produce inaccurate or fabricated \u201challucinations\u201d because their output is only as good as the data input, and that ingestion process is often tied to the internet. 
The result: disinformation, \u201cmalinformation\u201d and misinformation, Litan noted.<\/p>\n<p>\u201cRegulators should set timeframes by which AI model vendors must use standards to authenticate provenance of content, software, and other digital assets used in their systems. See standards from\u00a0<a href=\"https:\/\/c2pa.org\/\" rel=\"nofollow noopener\" target=\"_blank\">C2PA<\/a>,\u00a0<a href=\"https:\/\/scitt.io\/\" rel=\"nofollow noopener\" target=\"_blank\">Scitt.io,<\/a>\u00a0<a href=\"https:\/\/www.ietf.org\/\" rel=\"nofollow noopener\" target=\"_blank\">IETF<\/a>\u00a0for examples,\u201d Litan said.<\/p>\n<p>\u201cWe just need to act, and act soon,\u201d she said.<\/p>\n<p>Even AI experts such as Max Tegmark, an MIT physicist, cosmologist and machine learning researcher, and Geoffrey Hinton, the so-called \u201cgodfather of AI,\u201d are stumped to find a workable solution to the existential threat to humanity, Litan said.<\/p>\n<p>At an AI conference at MIT earlier this month, Hinton warned that because AI can be self-learning, it will become exponentially smarter over time and will begin thinking for itself. Once that happens, there\u2019s little to stop what Hinton believes is inevitable \u2014\u00a0the extinction of humans.<\/p>\n<p>\u201cThese things will have learned from us by reading all the novels that ever were and everything Machiavelli ever wrote [about] how to manipulate people,\u201d Hinton told a packed house during <a href=\"https:\/\/www.computerworld.com\/article\/3695568\/qa-googles-geoffrey-hinton-humanity-just-a-passing-phase-in-the-evolution-of-intelligence.html\">a Q&amp;A exchange<\/a>. \u201cAnd if they\u2019re much smarter than us, they\u2019ll be very good at manipulating us. You won\u2019t realize what\u2019s going on. You\u2019ll be like a two-year-old who\u2019s being asked, \u2018Do you want the peas or the cauliflower,&#8217; and doesn\u2019t realize you don\u2019t have to have either. And you\u2019ll be that easy to manipulate.&#8221;<\/p>\n<p>The G7 statement came after the European Union agreed on <a href=\"https:\/\/www.computerworld.com\/article\/3695009\/eu-closes-in-on-ai-act-with-last-minute-chatgpt-related-adjustments.html\">the creation of the AI Act<\/a>, which would rein in generative tools such as ChatGPT, DALL-E, and Midjourney in terms of design and deployment, to align with EU law and fundamental rights, including the need for AI makers to disclose any copyrighted material used to develop their systems.<\/p>\n<p>\u201cWe want AI systems to be accurate, reliable, safe and non-discriminatory, regardless of their origin,\u201d European Commission President Ursula von der Leyen said Friday.<\/p>\n<p>Earlier this month, the <a href=\"https:\/\/www.computerworld.com\/article\/3695731\/white-house-unveils-ai-rules-to-address-safety-and-privacy.html\">White House also unveiled AI rules<\/a> to address safety and privacy. The latest effort by the Biden Administration built on previous attempts to promote some form of responsible innovation, but to date Congress has not advanced any laws that would regulate AI. 
Last October, the\u00a0administration unveiled a\u00a0<a href=\"https:\/\/www.whitehouse.gov\/ostp\/ai-bill-of-rights\/\" rel=\"nofollow noopener\" target=\"_blank\">blueprint for an \u201cAI Bill of Rights\u201d<\/a>\u00a0as well as an AI Risk Management Framework; more recently, it pushed for a roadmap for standing up a National AI Research Resource.<\/p>\n<p>The measures, however, don\u2019t have any legal teeth &#8220;and they\u2019re not what we need now,&#8221; according to Litan.<\/p>\n<p>The United States has been something of a follower in developing AI rules. China has led the world in rolling out\u00a0<a href=\"https:\/\/carnegieendowment.org\/2022\/01\/04\/china-s-new-ai-governance-initiatives-shouldn-t-be-ignored-pub-86127\" rel=\"nofollow\">several initiatives\u00a0for AI governance<\/a>, though most of those initiatives relate to citizen privacy and not necessarily safety.<\/p>\n<p>\u201cWe need clear guidelines on development of safe, fair and responsible AI from the US regulators,\u201d Litan said in an earlier interview. \u201cWe need meaningful regulations such as we see being developed in the\u00a0EU with the AI Act. While they are not getting it all perfect at once, at least they are moving forward and are willing to iterate. US regulators need to step up their game and pace.&#8221;<\/p>\n<p>In March, Apple co-founder and former chief engineer Steve Wozniak, SpaceX CEO Elon Musk, hundreds of AI experts and thousands of others put their names on <a href=\"https:\/\/www.computerworld.com\/article\/3691639\/tech-bigwigs-hit-the-brakes-on-ai-rollouts.html\">an open letter<\/a> calling for a six-month pause in developing more powerful AI systems, citing potential risks to society. A month later, EU lawmakers urged world leaders to find ways to control AI technologies, saying the technology is developing faster than expected.<\/p>\n<p>Last week, the US Senate held two separate hearings during which members and experts who testified said they see AI as <a href=\"https:\/\/www.computerworld.com\/article\/3696317\/senate-hearings-see-a-clear-and-present-danger-from-ai-and-opportunities.html\">a clear and present danger<\/a> to security, privacy and copyrights. Generative AI technology such as ChatGPT can and does use data and information from any number of sometimes unchecked sources.<\/p>\n<p>Sam Altman, CEO of ChatGPT-creator OpenAI, was joined by IBM executive Christina Montgomery and New York University professor emeritus Gary Marcus in <a href=\"https:\/\/www.computerworld.com\/article\/3696317\/senate-hearings-see-a-clear-and-present-danger-from-ai-and-opportunities.html\">testifying before the Senate<\/a> on the threats and opportunities chatbots present. \u201cIt\u2019s one of my areas of greatest concern,\u201d Altman said. \u201cThe more general ability of these models to manipulate, persuade, to provide one-on-one interactive disinformation \u2014 given we\u2019re going to face an election next year and these models are getting better, I think this is a significant area of concern.\u201d<\/p>\n<p>Regulation, Altman said, would be \u201cwise\u201d because people need to know if they\u2019re talking to an AI system or looking at content \u2014 images, videos or documents \u2014 generated by a chatbot.\u00a0\u201cI think we\u2019ll also need rules and guidelines about what is expected in terms of disclosure from a company providing a model that could have these sorts of abilities we\u2019re talking about. 
So, I\u2019m nervous about it.&#8221;<\/p>\n<p>Altman suggested the US government craft a three-point AI oversight plan.<\/p>\n<p>The Senate also heard testimony that the use of &#8220;watermarks&#8221; could help users identify where content generated by chatbots comes from. Lynne Parker, director of the AI Tennessee Initiative at the University of Tennessee, said requiring AI creators to insert metadata breadcrumbs in content would allow users to better understand the content\u2019s provenance.<\/p>\n<p>The Senate plans a future hearing on the topic of watermarking AI content.<\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2023\/05\/shutterstock_1960378384-100941278-small.jpg\"\/><\/p>\n<article>\n<section class=\"page\">\n<p>Leaders of the Group of Seven (G7) nations on Saturday called for the creation of technical standards to keep artificial intelligence (AI) in check, saying AI has outpaced oversight for safety and security.<\/p>\n<p>Meeting in Hiroshima, Japan, the leaders\u00a0said nations must <a href=\"https:\/\/www.geo.tv\/latest\/488413-g7-meeting-decides-to-set-up-working-group-to-tackle-ai-issues\" rel=\"noopener nofollow\" target=\"_blank\">come together on a common vision and goal of trustworthy AI<\/a>, even while those solutions may vary. But any solution for digital technologies such as AI should be \u201cin line with our shared democratic values,\u201d they <a href=\"https:\/\/www.consilium.europa.eu\/en\/press\/press-releases\/2023\/05\/20\/g7-hiroshima-leaders-communique\/\" rel=\"noopener nofollow\" target=\"_blank\">said in a statement<\/a>.<\/p>\n<p class=\"jumpTag\"><a href=\"\/article\/3697154\/g7-leaders-warn-of-ai-dangers-say-the-time-to-act-is-now.html#jump\">To read this article in full, please click 
here<\/a><\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,13431,11070,1328,5897,714],"class_list":["post-22063","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-chatbots","tag-emerging-technology","tag-government","tag-privacy","tag-security"],"_links":{"self":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/22063","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=22063"}],"version-history":[{"count":0,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/22063\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=22063"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=22063"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=22063"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}