{"id":23274,"date":"2023-10-30T22:30:06","date_gmt":"2023-10-31T06:30:06","guid":{"rendered":"https:\/\/www.palada.net\/index.php\/2023\/10\/30\/news-17004\/"},"modified":"2023-10-30T22:30:06","modified_gmt":"2023-10-31T06:30:06","slug":"news-17004","status":"publish","type":"post","link":"https:\/\/www.palada.net\/index.php\/2023\/10\/30\/news-17004\/","title":{"rendered":"What exactly will the UK government&#039;s global AI Safety Summit achieve?"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2022\/09\/07\/10\/artificial-intelligence-698122_1280-100698891-small-100932024-small.jpg\"\/><\/p>\n<p>From tomorrow, the UK government is hosting the first global AI Safety Summit, bringing together about 100 people from industry and government to develop a shared understanding of the emerging risks of leading-edge AI while unlocking its benefits.\u00a0<\/p>\n<p>The <a href=\"https:\/\/www.computerworld.com\/article\/3705488\/uk-government-confirms-november-global-ai-summit.html\">event<\/a> will be held at Bletchley Park, a site in Milton Keynes that became the home of code breakers during World War II and saw the development of\u00a0Colossus, the world\u2019s first programmable digital electronic computer, used to decrypt the German High Command\u2019s Lorenz cipher, shortening the war by at least two years.<\/p>\n<p>\u201cAI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought beyond us. 
But it also brings new dangers and new fears,\u201d said UK Prime Minister Rishi Sunak in a speech last week, adding that one of the aims of the summit will be an attempt to agree on the first ever international statement about the nature of the risks posed by AI.<\/p>\n<p>In September, the UK government <a href=\"https:\/\/www.computerworld.com\/article\/3706051\/uk-government-outlines-five-objectives-for-ai-safety-summit.html\">released an agenda<\/a> ahead of the summit, which included the development of a shared understanding of the risks posed by frontier AI, alongside calls for a process of international collaboration on AI safety, including how best to support national and international frameworks.<\/p>\n<p>These talking points were reinforced by a discussion paper that was <a href=\"https:\/\/www.computerworld.com\/article\/3709510\/uk-govt-outlines-ai-risks-in-new-report-ahead-of-ai-safety-summit.html\">published by the government<\/a> last week, due to be distributed to attendees of the summit with the aim of informing discussions.<\/p>\n<p>\u201cThe UK wants to be seen as an innovation hub and [AI technologies are] clearly going to be a massive area of growth and development, both for the economy and the workforce,\u201d said Philip Blows, CEO of StreaksAI, a UK-based developer of AI technology.<\/p>\n<p>However, while the general consensus seems to be in favor of an event where the risks of the technology are discussed, the format of the AI Safety Summit has faced some criticism. 
While some high-profile attendees have been announced, such as US Vice President Kamala Harris, confirmation of the full guest list has not yet been made public.<\/p>\n<p>Who gets to sit at the table and make decisions about the most important safety issues and potential harms is really critical, said Michael Bak, executive director of the Forum on Information and Democracy.<\/p>\n<p>\u201cIf that&#8217;s a close-knit group of people, dominated by the private sector\u2026 that would concern me,\u201d Bak said. \u201cMy desire would be that there would be recognition of the value that civil society brings to the table, in addition to the benefit of technologists who are developing these products for private interests.\u201d<\/p>\n<p>Hosting an AI Safety Summit is a \u201cpositive first step\u201d as it means governments are \u201cacknowledging that there are risks attached to this technology,\u201d said Shweta Singh, assistant professor at the University of Warwick, whose research includes ethical and responsible AI.<\/p>\n<p>There\u2019s a concern, however, that ahead of the summit the talking points have focused on some of the more headline-grabbing existential threats of AI, threats the government itself has said are very unlikely to happen, with less discussion of harms such as bias and disinformation, which we\u2019re already seeing in real time.<\/p>\n<p>For example, when <a href=\"https:\/\/www.computerworld.com\/article\/3697649\/what-are-large-language-models-and-how-are-they-used-in-generative-ai.html\">Large Language Models (LLMs)<\/a> behind popular generative AI tools scrape the internet to form the building blocks of their learning, they&#8217;re bringing with them the biases that already exist within that content. 
In one instance, an Asian woman <a href=\"https:\/\/twitter.com\/ronawang\/status\/1679867848741765122?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1679867848741765122%7Ctwgr%5E3e68c8aba727a69d5b749d896f8b6dfe917b9ed5%7Ctwcon%5Es1_&amp;ref_url=https%3A%2F%2Ffuturism.com%2Fthe-byte%2Fasian-woman-ai-generator-white\" rel=\"nofollow\">posted on social media<\/a> that when she asked AI image generator Playground AI to turn a selfie she\u2019d taken into \u201ca professional LinkedIn profile photo,\u201d it made her look like a white woman.<\/p>\n<p>The current <a href=\"https:\/\/www.computerworld.com\/article\/3698191\/governments-worldwide-grapple-with-regulation-to-rein-in-ai-dangers.html\">lack of any global consensus<\/a> on how to regulate AI demonstrates just how complex an issue it is.<\/p>\n<p>When it comes to regulating technology, getting the balance right is really important, said Sarah Pearce, partner at Hunton Andrews Kurth, who has seen a tripling of the number of incoming requests related to AI governance in the last year.<\/p>\n<p>\u201cWhen you hear people like [Prime Minister] Rishi Sunak say that the legislators need to learn to understand the technology more in order to be able to put together the appropriate regulation, for me, that makes sense at this moment in time and I think it&#8217;s the right approach.\u201d<\/p>\n<p>In March, the UK government published a white paper\u00a0<a href=\"https:\/\/www.computerworld.com\/article\/3691901\/uk-governments-ai-strategy-to-rely-on-existing-regulations-instead-of-new-laws.html\">outlining its AI strategy<\/a>, stating it was seeking to avoid what it called \u201cheavy-handed legislation,\u201d and would instead call on existing regulatory bodies to use current regulations to ensure that AI applications adhere to guidelines, rather than draft new laws.<\/p>\n<p>However, Pearce adds that\u2019s also not to say that the EU\u2019s \u201cmore advanced\u2026 and more prescriptive\u201d approach to 
AI regulation \u2014 as set out in its <a href=\"https:\/\/www.computerworld.com\/article\/3699311\/eu-parliament-approves-ai-act-moving-it-closer-to-becoming-law.html\">draft AI Act<\/a> \u2014 is wrong or will stifle innovation.<\/p>\n<p>\u201cI&#8217;m a realist as well as an idealist and while I think that having global regulation would be the ideal, if I were to put on my realist hat, I can acknowledge that that might not be possible. However, that\u2019s where this kind of summit could prove highly useful and I do hope we see a lot more harmony as a result,\u201d she said.<\/p>\n<p>There has to be more cooperation and more coordination across tech companies and governments, said the University of Warwick&#8217;s Singh, arguing that while we wait for the law to catch up, there needs to be more of a consensus around developing a set of ethical principles and guidelines that focus on the prevention of harm.<\/p>\n<p>With elections set to take place in the UK and US next year, Singh said tackling this issue is \u201cthe need of the hour,\u201d adding that the kind of harms this technology might be capable of regarding the disruption of the democratic process is something that should worry us all.<\/p>\n<p>However, while more clearly needs to be done, Singh said that the fact that the summit is even happening is itself an acknowledgment that the risk exists and something needs to be done about it, noting that it\u2019s likely that the role played by deepfakes and disinformation during an election campaign will be a turning point for many politicians getting serious about tackling this issue.<\/p>\n<p>While no one is under any illusions that global governments are going to suddenly announce a unified regulatory framework for AI in the aftermath of the summit, there does seem to be widespread consensus that this summit shouldn\u2019t be a &#8220;one and done&#8221; event.<\/p>\n<p>This is something that could kickstart a series of summits and ultimately lead to a 
form of regulation, Pearce said, adding that she\u2019d like to see it paving the way for future global summits that ensure there is some kind of global alignment on our approach to AI development and use.<\/p>\n<p>Future summits that focus on growth and innovation would also be welcome, said Blows, who acknowledged that while discussions around the risks and concerns of AI are valid, it would be nice to see the conversation balance out in the future via events and media headlines that focus on the technology\u2019s potential for good.<\/p>\n<p>\u201cWe do need to look at what impact AI is going to have on the current economy and the jobs that we currently do, and hopefully balance that with what opportunities, new industries, and new jobs AI is going to create,\u201d Blows said.<\/p>\n<p>Leadership in this space also needs to emerge in the coming months, said the Forum on Information and Democracy&#8217;s Bak, who added that while he applauded the UK government for trying to grasp this particular nettle, any future policy or regulatory work that takes place to address the impact of these frontier technologies needs to reflect more than just the views of those who can afford a seat at the table, and focus on the power imbalances that exist between civil society and the corporate world.<\/p>\n<p>\u201cWe need to understand that even though the technology may be developed in the global north, its impacts are felt across the world and there&#8217;s an added responsibility for those who are creating it, those who are implementing it, and therefore governments who want to take an active role in it,\u201d he said.<\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" 
src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2022\/09\/07\/10\/artificial-intelligence-698122_1280-100698891-small-100932024-small.jpg\"\/><\/p>\n<article>\n<section class=\"page\">\n<p>From tomorrow, the UK government is hosting the first global AI Safety Summit, bringing together about 100 people from industry and government to develop a shared understanding of the emerging risks of leading-edge AI while unlocking its benefits.\u00a0<\/p>\n<p>The <a href=\"https:\/\/www.computerworld.com\/article\/3705488\/uk-government-confirms-november-global-ai-summit.html\">event<\/a> will be held at Bletchley Park, a site in Milton Keynes that became the home of code breakers during World War II and saw the development of\u00a0Colossus, the world\u2019s first programmable digital electronic computer, used to decrypt the German High Command\u2019s Lorenz cipher, shortening the war by at least two years.<\/p>\n<p class=\"jumpTag\"><a href=\"\/article\/3709749\/what-exactly-will-the-uk-governments-global-ai-safety-summit-achieve.html#jump\">To read this article in full, please click 
here<\/a><\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,1328,8698,714],"class_list":["post-23274","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-government","tag-regulation","tag-security"],"_links":{"self":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/23274","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=23274"}],"version-history":[{"count":0,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/23274\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=23274"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=23274"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=23274"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}