{"id":22124,"date":"2023-05-30T12:30:06","date_gmt":"2023-05-30T20:30:06","guid":{"rendered":"https:\/\/www.palada.net\/index.php\/2023\/05\/30\/news-15854\/"},"modified":"2023-05-30T12:30:06","modified_gmt":"2023-05-30T20:30:06","slug":"news-15854","status":"publish","type":"post","link":"http:\/\/www.palada.net\/index.php\/2023\/05\/30\/news-15854\/","title":{"rendered":"ChatGPT creators and others plead to reduce risk of global extinction from their tech"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2023\/05\/shutterstock_508983325-100941569-small.jpg\"\/><\/p>\n<p>Hundreds of tech industry leaders, academics, and other public figures signed <a href=\"https:\/\/www.safe.ai\/statement-on-ai-risk\" rel=\"nofollow noopener\" target=\"_blank\">an open letter<\/a> warning that artificial intelligence (AI) evolution could lead to an extinction event and saying that controlling the tech should be a top global priority.<\/p>\n<p>\u201cMitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,\u201d read the statement published by the San Francisco-based <a href=\"https:\/\/www.safe.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">Center for AI Safety<\/a>.<\/p>\n<p>The brief statement in the letter reads almost like a mea culpa for the technology about which its creators are now joining together to warn the world.<\/p>\n<p>Ironically, the most prominent signatories at the top of the letter included Sam Altman, CEO of OpenAI, the company that created the wildly popular generative AI chatbot ChatGPT, as well as Kevin Scott, CTO of Microsoft, OpenAI\u2019s biggest investor. 
A number of OpenAI founders and executives signed as well, joined by executives, engineers, and scientists from Google\u2019s AI research lab, <a href=\"https:\/\/www.deepmind.com\/\" rel=\"nofollow noopener\" target=\"_blank\">DeepMind<\/a>.<\/p>\n<p>Geoffrey Hinton, considered the father of AI for his contributions to the tech over the past 40 years or so, also signed today\u2019s letter. During <a href=\"https:\/\/www.computerworld.com\/article\/3695568\/qa-googles-geoffrey-hinton-humanity-just-a-passing-phase-in-the-evolution-of-intelligence.html\">a Q&amp;A at MIT earlier this month<\/a>, Hinton went so far as to say humans are nothing more than a passing phase in the development of AI. He also said it was perfectly reasonable back in the \u201970s and \u201980s to do research on how to make artificial neural networks. But today\u2019s technology, he said, is as if genetic engineers had decided to improve grizzly bears, allowing them to speak English and raising their \u201cIQ to 210.\u201d<\/p>\n<p>Hinton, however, said he felt no regrets over being instrumental in creating AI. \u201cIt wasn\u2019t really foreseeable \u2014 this stage of it wasn\u2019t foreseeable. Until very recently, I thought this existential crisis was a long way off. So, I don\u2019t really have any regrets over what I did,\u201d Hinton said.<\/p>\n<p>Earlier this month, leaders of the Group of Seven (G7) nations <a href=\"https:\/\/www.computerworld.com\/article\/3697154\/g7-leaders-warn-of-ai-dangers-say-the-time-to-act-is-now.html\">called for the creation of technical standards<\/a> to keep artificial intelligence in check, saying AI has outpaced oversight for safety and security. 
US Senate hearings earlier this month, which included testimony from OpenAI\u2019s Altman, also <a href=\"https:\/\/www.computerworld.com\/article\/3696317\/senate-hearings-see-a-clear-and-present-danger-from-ai-and-opportunities.html\">illustrated many clear and present dangers<\/a> emerging from AI evolution.<\/p>\n<p>\u201cThe statement signed by the Center for AI Safety is indeed ominous and without precedent in the tech industry. When have you ever heard of tech entrepreneurs telling the public that the technology they are working on can wipe out the human race if left unchecked?\u201d said Avivah Litan, a vice president and distinguished analyst at Gartner. \u201cYet they continue to work on it because of competitive pressures.\u201d<\/p>\n<p>While tertiary compared to the risk of extinction, Litan pointed out that businesses also face \u201cshort-term and imminent\u201d risks from the use of AI. \u201cThey involve risks in misinformation and disinformation and the potential of cyberattacks or societal manipulations that scale much more quickly than what we saw in the past decade with social media and online commerce,\u201d she said. \u201cThese short-term risks can easily spin out of control if left unchecked.\u201d<\/p>\n<p>The shorter-term risks posed by AI can be addressed and mitigated with guardrails and technical solutions. The longer-term existential risks can be addressed through international government cooperation and regulation, she noted.<\/p>\n<p>\u201cGovernments are moving very slowly, but technical innovation and solutions \u2014 where possible \u2014 are moving at lightning speed, as you would expect,\u201d Litan said. \u201cSo, it\u2019s anyone\u2019s guess what lies ahead.\u201d<\/p>\n<p>Today\u2019s letter follows <a href=\"https:\/\/www.computerworld.com\/article\/3691639\/tech-bigwigs-hit-the-brakes-on-ai-rollouts.html\">a similar one released in March<\/a> by the Future of Life Institute. 
<a href=\"https:\/\/futureoflife.org\/open-letter\/pause-giant-ai-experiments\/\" rel=\"nofollow noopener\" target=\"_blank\">That letter<\/a>, which was signed by Apple co-founder Steve Wozniak, SpaceX CEO Elon Musk, and nearly 32,000 others, called for a six-month pause in the development of AI systems more powerful than GPT-4 to allow better controls to be put in place.<\/p>\n<p>The March letter called for oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic content and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.<\/p>\n<p>Dan Hendrycks, director of the Center for AI Safety, wrote in <a href=\"https:\/\/twitter.com\/DanHendrycks\/status\/1663474795865059329\" rel=\"nofollow noopener\" target=\"_blank\">a follow-on tweet thread<\/a> today that there are \u201cmany ways AI development could go wrong, just as pandemics can come from mismanagement, poor public health systems, wildlife, etc. 
Consider sharing your initial thoughts on AI risk with a tweet thread or post to help start the conversation and so that we can collectively explore these risk sources.\u201d<\/p>\n<p>Hendrycks also quoted <a href=\"https:\/\/www.history.co.uk\/articles\/robert-oppenheimer-father-of-the-atomic-bomb\" rel=\"nofollow noopener\" target=\"_blank\">Robert Oppenheimer<\/a>, theoretical physicist and father of the atomic bomb: \u201cWe knew the world would not be the same.\u201d Hendrycks, however, didn\u2019t mention that the atomic bomb was created to stop the tyranny the world was facing from dominance by the Axis powers of World War II.<\/p>\n<p>The Center for AI Safety is a San Francisco-based nonprofit research organization whose stated mission is \u201cto ensure the safe development and deployment of AI.\u201d<\/p>\n<p>\u201cWe believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely,\u201d the group\u2019s web page states.<\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2023\/05\/shutterstock_508983325-100941569-small.jpg\"\/><\/p>\n<article>\n<section class=\"page\">\n<p>Hundreds of tech industry leaders, academics, and other public figures signed <a href=\"https:\/\/www.safe.ai\/statement-on-ai-risk\" rel=\"nofollow noopener\" target=\"_blank\">an open letter<\/a> warning that artificial intelligence (AI) evolution could lead to an extinction event and saying that controlling the tech should be a top global priority.<\/p>\n<p>\u201cMitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics 
and nuclear war,\u201d read the statement published by San Francisco-based <a href=\"https:\/\/www.safe.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">Center for AI Safety<\/a>.<\/p>\n<p>The brief statement in the letter reads almost like a mea culpa for the technology about which its creators are now joining together to warn the world.<\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,13431,11070,714],"class_list":["post-22124","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-chatbots","tag-emerging-technology","tag-security"],"_links":{"self":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/22124","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=22124"}],"version-history":[{"count":0,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/22124\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=22124"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=22124"},{"tax
onomy":"post_tag","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=22124"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}