{"id":21697,"date":"2023-04-10T12:30:05","date_gmt":"2023-04-10T20:30:05","guid":{"rendered":"https:\/\/www.palada.net\/index.php\/2023\/04\/10\/news-15428\/"},"modified":"2023-04-10T12:30:05","modified_gmt":"2023-04-10T20:30:05","slug":"news-15428","status":"publish","type":"post","link":"https:\/\/www.palada.net\/index.php\/2023\/04\/10\/news-15428\/","title":{"rendered":"Tech bigwigs: Hit the brakes on AI rollouts"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2022\/01\/25\/07\/artificial_intelligence_virtual_digital_identity_binary_stream_thinkstock-100528010-small-100917127-small.jpg\"\/><\/p>\n<p>More than 1,100 technology luminaries, leaders, and scientists have issued a warning against labs performing large-scale experiments with artificial intelligence (AI) more powerful than ChatGPT, saying the technology poses a grave threat to humanity.<\/p>\n<p>In an <a href=\"https:\/\/futureoflife.org\/open-letter\/pause-giant-ai-experiments\/\" rel=\"noopener nofollow\" target=\"_blank\">open letter<\/a> published by<a href=\"https:\/\/futureoflife.org\/\" rel=\"nofollow noopener\" target=\"_blank\">\u00a0Future\u00a0of\u00a0Life\u00a0Institute<\/a>, a nonprofit\u00a0organization with the mission to reduce global catastrophic and existential risks to humanity, Apple co-founder Steve Wozniak and SpaceX and Tesla CEO Elon Musk joined other signatories in agreeing AI poses \u201cprofound risks to society and humanity, as shown by extensive research\u00a0and acknowledged by top AI labs.\u201d<\/p>\n<p>The petition called for a six-month pause on upgrades to generative AI platforms, such as\u00a0<a href=\"https:\/\/www.computerworld.com\/article\/3690323\/openai-unveils-gpt-4-a-new-foundation-for-chatgpt.html\">GPT-4<\/a>, which is the large language model (LLM) powering the popular <a 
href=\"https:\/\/www.computerworld.com\/article\/3687614\/how-enterprises-can-use-chatgpt-and-gpt-3.html\">ChatGPT<\/a> natural language processing chatbot. The letter, in part, depicted a dystopian future reminiscent of those imagined in science fiction movies such as <em>The Terminator<\/em> and <em>The Matrix<\/em>. It pointedly questioned whether advanced AI could lead to a \u201closs of control of our civilization.\u201d<\/p>\n<p><iframe loading=\"lazy\" title=\"YouTube video player\" src=\"https:\/\/www.youtube.com\/embed\/z_0X0W6-6p0\" width=\"100%\" height=\"420\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\" data-mce-fragment=\"1\" style=\"\"> <\/iframe><\/p>\n<p>The missive also warned of political disruptions \u201cespecially to democracy\u201d from AI: chatbots acting as humans could flood social media and other networks with propaganda and untruths. And it warned that AI could \u201cautomate away all the jobs, including the fulfilling ones.\u201d<\/p>\n<p>The letter called on civic leaders \u2014 not the technology community \u2014 to take charge of decisions around the breadth of AI deployments.<\/p>\n<p>The letter said policymakers should work with the AI community to dramatically accelerate development of robust AI governance systems that, at a minimum, include new AI regulatory authorities, oversight, and tracking of highly capable AI systems and large pools of computational capability. It also suggested provenance and watermarking systems be used to help distinguish real from synthetic content and to track model leaks, along with a robust auditing and certification ecosystem.<\/p>\n<p>\u201cContemporary AI systems are now becoming human-competitive at general tasks,\u201d the letter said. \u201cShould\u00a0we develop nonhuman minds that might eventually outnumber, outsmart,\u00a0obsolete and replace\u00a0us?\u00a0Should\u00a0we risk loss of control of our civilization? 
Such decisions must not be delegated to unelected tech leaders.\u201d<\/p>\n<p>(The UK government <a href=\"https:\/\/www.computerworld.com\/article\/3691901\/uk-government-s-ai-strategy-to-rely-on-existing-regulations-instead-of-new-laws.html\">today published a\u00a0white paper<\/a>\u00a0outlining plans to regulate general-purpose AI, saying it would \u201cavoid heavy-handed legislation which could stifle innovation,\u201d and instead rely on existing laws.)<\/p>\n<p>Avivah Litan, a vice president and distinguished analyst at Gartner Research, said the warning from tech leaders is spot on: there is currently no technology to ensure the authenticity or accuracy of the information generated by AI tools such as GPT-4.<\/p>\n<p>The greater concern, she said, is that OpenAI already plans to release GPT-4.5 in about six months, and GPT-5 about six months after that. &#8220;So, I\u2019m guessing that\u2019s the six-month urgency mentioned in the letter,&#8221; Litan said.\u00a0&#8220;They\u2019re just moving full steam ahead.&#8221;<\/p>\n<p>One <a href=\"https:\/\/www.digitaltrends.com\/computing\/gpt-5-artificial-general-intelligence\/\" rel=\"nofollow\">expectation for GPT-5<\/a> is that it will be an <a href=\"https:\/\/en.wikipedia.org\/wiki\/Artificial_general_intelligence\" rel=\"nofollow\">artificial general intelligence<\/a>, or AGI, in which the AI becomes sentient and can start thinking for itself; at that point, the argument goes, it would grow exponentially smarter over time.\u00a0<\/p>\n<p>&#8220;Once you get to AGI, it\u2019s like game over for human beings, because once the AI is as smart as a human, it\u2019s as smart as [Albert] Einstein, then once it becomes as smart as Einstein, it becomes as smart as 100 Einsteins in a year,&#8221; Litan said. &#8220;It escalates completely out of control once you get to AGI. So that\u2019s the big fear. At that point, humans have no control. 
It\u2019s just out of our hands.&#8221;<\/p>\n<p>Anthony Aguirre, a professor of physics at UC Santa Cruz and executive vice president of the Future of Life Institute, said only the labs themselves know what computations they are running.<\/p>\n<p>&#8220;But the trend is unmistakable,&#8221; he said in an email reply to <em>Computerworld<\/em>. &#8220;The largest-scale computations are increasing size by about 2.5 times per year. GPT-4\u2019s parameters were not disclosed by OpenAI, but there is no reason to think this trend has stopped or even slowed.&#8221;<\/p>\n<p>The Future of Life Institute argued that AI labs are locked in an out-of-control race to develop and deploy \u201cever more powerful digital minds that no one \u2014 not even their creators \u2014 can understand, predict, or reliably control.\u201d<\/p>\n<p>Signatories included scientists at <a href=\"https:\/\/www.deepmind.com\/\" rel=\"nofollow noopener\" target=\"_blank\">DeepMind Technologies<\/a>, a British AI research lab and a subsidiary of Google parent firm Alphabet. Google recently announced Bard, an AI-based conversational chatbot it developed using the LaMDA family of LLMs.<\/p>\n<p>LLMs are deep learning algorithms \u2014 computer programs for natural\u00a0language\u00a0processing \u2014\u00a0<a href=\"https:\/\/www.computerworld.com\/article\/3688920\/bings-ai-chatbot-came-to-work-for-me-i-had-to-fire-it.html\">that can produce human-like responses to queries<\/a>. The generative AI technology can also produce computer code, images, video and sound.<\/p>\n<p>Microsoft, which has invested <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2023-01-23\/microsoft-makes-multibillion-dollar-investment-in-openai#xj4y7vzkg\" rel=\"nofollow noopener\" target=\"_blank\">more than $10 billion\u00a0in ChatGPT and GPT-4 creator OpenAI<\/a>, said it had no comment at this time. 
OpenAI and Google also did not immediately respond to a request for comment.<\/p>\n<p>Jack Gold, principal analyst with industry research firm J. Gold Associates, believes the biggest risk is training the LLMs with biases. A developer could, for example, purposely train a model to be biased against \u201cwokeness\u201d or against conservatism, or to be socialist-friendly or supportive of white supremacy.<\/p>\n<p>&#8220;These are extreme examples, but it certainly is possible (and probable) that the models will have biases,&#8221; Gold said in an email reply to <em>Computerworld<\/em>. &#8220;I see that as a bigger short-to-middle-term risk than job loss \u2014 especially if we assume the Gen AI is accurate and to be trusted. So the fundamental question around trusting the model is, I think, critical to how to use the outputs.&#8221;<\/p>\n<p>Andrzej Arendt, CEO of IT consultancy Cyber Geeks, said while generative AI tools are not yet able to deliver the highest quality software as a final product on their own, \u201ctheir assistance in generating pieces of code, system configurations or unit tests can significantly speed up the programmer&#8217;s work.<\/p>\n<p>\u201cWill it make the developers redundant? Not necessarily \u2014 partly because the results served by such tools cannot be used without question; programmer verification is necessary,\u201d Arendt continued. \u201cIn fact, changes in working methods have accompanied programmers since the beginning of the profession. Developers&#8217; work will simply shift to interacting with AI systems to some extent.\u201d<\/p>\n<p>The biggest changes will come with the introduction of full-scale AI systems, Arendt said, which can be compared to the industrial revolution in the 1800s, which replaced an economy based on crafts and agriculture with one based on manufacturing.<\/p>\n<p>\u201cWith AI, the technological leap could be just as great, if not greater. 
At present, we cannot predict all the consequences,\u201d he said.<\/p>\n<p>Vlad Tushkanov, lead data scientist at Moscow-based cybersecurity firm Kaspersky, said integrating LLM algorithms into more services can bring new threats. In fact, LLM technologists are already investigating attacks, such as <a href=\"https:\/\/en.wikipedia.org\/wiki\/Prompt_engineering#Prompt_injection\" rel=\"nofollow\">prompt injection<\/a>,\u00a0that can be used against LLMs and the services they power.<\/p>\n<p>\u201cAs the situation changes rapidly, it is hard to estimate what will happen next and whether these LLM peculiarities turn out to be the side effect of their immaturity or if they are their inherent vulnerability,\u201d Tushkanov said. \u201cHowever, businesses might want to include them into their threat models when planning to integrate LLMs into consumer-facing applications.\u201d<\/p>\n<p>That said, LLMs and AI technologies are useful and already automate an enormous amount of \u201cgrunt work\u201d that is needed but neither enjoyable nor interesting for people to do. Chatbots, for example, can sift through millions of alerts, emails, probable phishing web pages and potentially malicious executables daily.<\/p>\n<p>\u201cThis volume of work would be impossible to do without automation,&#8221; Tushkanov said. &#8220;&#8230;Despite all the advances and cutting-edge technologies, there is still an acute shortage of cybersecurity talent. According to estimates, the industry needs millions more professionals, and in this very creative field, we cannot waste the people we have on monotonous, repetitive tasks.&#8221;<\/p>\n<p>Generative AI and machine learning won\u2019t replace all IT jobs, including those of the people who fight cybersecurity threats, Tushkanov said. 
Solutions for those threats are being developed in an adversarial environment, where cybercriminals work against organizations to evade detection.<\/p>\n<p>\u201cThis makes it very difficult to automate them, because cybercriminals adapt to every new tool and approach,\u201d Tushkanov said. \u201cAlso, with cybersecurity precision and quality are very important, and right now large language models are, for example, prone to hallucinations (as our tests show, cybersecurity tasks are no exception).\u201d\u00a0<\/p>\n<p>The Future of Life Institute said in its letter that with guardrails, humanity can enjoy a flourishing future with AI.\u00a0<\/p>\n<p>\u201cEngineer these systems for the clear benefit of all, and give society a chance to adapt,\u201d the letter said. \u201cSociety has hit pause on other technologies with potentially catastrophic effects on society.\u00a0We can do so here.\u00a0Let&#8217;s enjoy a long AI summer, not rush unprepared into a fall.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2022\/01\/25\/07\/artificial_intelligence_virtual_digital_identity_binary_stream_thinkstock-100528010-small-100917127-small.jpg\"\/><\/p>\n<article>\n<section class=\"page\">\n<p>More than 1,100 technology luminaries, leaders, and scientists have issued a warning against labs performing large-scale experiments with artificial intelligence (AI) more powerful than ChatGPT, saying the technology poses a grave threat to humanity.<\/p>\n<p>In an <a href=\"https:\/\/futureoflife.org\/open-letter\/pause-giant-ai-experiments\/\" rel=\"noopener nofollow\" target=\"_blank\">open letter<\/a> published by<a href=\"https:\/\/futureoflife.org\/\" rel=\"nofollow 
noopener\" target=\"_blank\">\u00a0Future\u00a0of\u00a0Life\u00a0Institute<\/a>, a nonprofit\u00a0organization with the mission to reduce global catastrophic and existential risks to humanity, Apple co-founder Steve Wozniak and SpaceX and Tesla CEO Elon Musk joined other signatories in agreeing AI poses \u201cprofound risks to society and humanity, as shown by extensive research\u00a0and acknowledged by top AI labs.\u201d<\/p>\n<p class=\"jumpTag\"><a href=\"\/article\/3691639\/tech-bigwigs-hit-the-brakes-on-ai-rollouts.html#jump\">To read this article in full, please click here<\/a><\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,13431,11070,1670,10516,5897,714],"class_list":["post-21697","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-chatbots","tag-emerging-technology","tag-google","tag-microsoft","tag-privacy","tag-security"],"_links":{"self":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/21697","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=21697"}],"version-history":[{"count":0,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/21697\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=21697"}],"wp:term":[{"taxonomy":"c
ategory","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=21697"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=21697"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}