{"id":21591,"date":"2023-03-29T10:30:16","date_gmt":"2023-03-29T18:30:16","guid":{"rendered":"https:\/\/www.palada.net\/index.php\/2023\/03\/29\/news-15322\/"},"modified":"2023-03-29T10:30:16","modified_gmt":"2023-03-29T18:30:16","slug":"news-15322","status":"publish","type":"post","link":"http:\/\/www.palada.net\/index.php\/2023\/03\/29\/news-15322\/","title":{"rendered":"Tech big wigs: Hit the brakes on AI rollouts"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2022\/01\/25\/07\/artificial_intelligence_virtual_digital_identity_binary_stream_thinkstock-100528010-small-100917127-small.jpg\"\/><\/p>\n<p>More than 1,100 technology luminaries, leaders and scientists have issued a warning against labs performing large-scale experiments with artificial intelligence (AI) more powerful than ChatGPT, saying the technology poses a grave threat to humanity.<\/p>\n<p>In an <a href=\"https:\/\/futureoflife.org\/open-letter\/pause-giant-ai-experiments\/\" rel=\"noopener nofollow\" target=\"_blank\">open letter<\/a> published by <a href=\"https:\/\/futureoflife.org\/\" rel=\"nofollow noopener\" target=\"_blank\">The\u00a0Future\u00a0of\u00a0Life\u00a0Institute<\/a>, a nonprofit\u00a0organization\u00a0that aims to reduce global catastrophic and existential risks to humanity, Apple co-founder Steve Wozniak, SpaceX and Tesla CEO Elon Musk, and MIT Future of Life Institute President Max Tegmark joined other signatories in saying\u00a0AI poses \u201cprofound risks to society and humanity, as shown by extensive research\u00a0and acknowledged by top AI labs.\u201d<\/p>\n<p>The signatories called for a six-month pause in the training of AI systems more powerful than <a href=\"https:\/\/www.computerworld.com\/article\/3690323\/openai-unveils-gpt-4-a-new-foundation-for-chatgpt.html\">GPT-4<\/a>, which is the large language model (LLM) powering the popular <a 
href=\"https:\/\/www.computerworld.com\/article\/3687614\/how-enterprises-can-use-chatgpt-and-gpt-3.html\">ChatGPT<\/a> natural language processing chatbot. The letter, in part, depicted a dystopian future reminiscent of those created by artificial neural networks in science fiction movies, such as <em>The Terminator<\/em> and <em>The Matrix<\/em>. The letter pointedly questioned whether advanced AI could lead to a \u201closs of control of our civilization.\u201d<\/p>\n<p>The missive also warned of political disruptions \u201cespecially to democracy\u201d from AI: chatbots acting as humans could flood social media and other networks with propaganda and untruths. And it warned that AI could \u201cautomate away all the jobs, including the fulfilling ones.\u201d<\/p>\n<p>The group called on civic leaders \u2014 not the technology community \u2014 to take charge of decisions around the breadth of AI deployments.<\/p>\n<p>The letter urged policymakers to work with the AI community to dramatically accelerate development of robust AI governance systems that, at a minimum, include new AI regulatory authorities, oversight, and tracking of highly capable AI systems and large pools of computational capability. It also suggested provenance and watermarking systems be used to help distinguish real from synthetic content and to track model leaks, along with a robust auditing and certification ecosystem.<\/p>\n<p>\u201cContemporary AI systems are now becoming human-competitive at general tasks,\u201d the letter said. \u201cShould\u00a0we develop nonhuman minds that might eventually outnumber, outsmart,\u00a0obsolete and replace\u00a0us?\u00a0Should\u00a0we risk loss of control of our civilization? 
Such decisions must not be delegated to unelected tech leaders.\u201d<\/p>\n<p>(The UK government <a href=\"https:\/\/www.computerworld.com\/article\/3691901\/uk-government-s-ai-strategy-to-rely-on-existing-regulations-instead-of-new-laws.html\">today published a\u00a0white paper<\/a>\u00a0outlining plans to regulate general-purpose AI, saying it would \u201cavoid heavy-handed legislation which could stifle innovation,\u201d and instead rely on existing laws.)<\/p>\n<p>Avivah Litan, a vice president and distinguished analyst at Gartner Research, said The Future of Life Institute&#8217;s letter is spot on.<\/p>\n<p>The Future of Life Institute argued that AI labs are locked in an out-of-control race to develop and deploy \u201cever more powerful digital minds that no one \u2014 not even their creators \u2014 can understand, predict, or reliably control.\u201d<\/p>\n<p>Signatories included scientists at <a href=\"https:\/\/www.deepmind.com\/\" rel=\"nofollow noopener\" target=\"_blank\">DeepMind Technologies<\/a>, a British AI research lab and a subsidiary of Google parent firm Alphabet. Google recently announced Bard, an AI-based conversational chatbot it developed using the LaMDA family of LLMs.<\/p>\n<p>LLMs are deep learning algorithms \u2014 computer programs for natural\u00a0language\u00a0processing \u2014\u00a0<a href=\"https:\/\/www.computerworld.com\/article\/3688920\/bings-ai-chatbot-came-to-work-for-me-i-had-to-fire-it.html\">that can produce human-like responses to queries<\/a>. The generative AI technology can also produce computer code, images, video and sound.<\/p>\n<p>Microsoft, which has invested <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2023-01-23\/microsoft-makes-multibillion-dollar-investment-in-openai#xj4y7vzkg\" rel=\"nofollow noopener\" target=\"_blank\">more than $10 billion\u00a0in ChatGPT and GPT-4 creator OpenAI<\/a>, said it had no comment at this time. 
OpenAI and Google also did not immediately respond to a request for comment.<\/p>\n<p>Andrzej Arendt, CEO of IT consultancy Cyber Geeks, said while generative AI tools are not yet able to deliver the highest quality software as a final product on their own, \u201ctheir assistance in generating pieces of code, system configurations or unit tests can significantly speed up the programmer&#8217;s work.<\/p>\n<p>\u201cWill it make the developers redundant? Not necessarily \u2014 partly because the results served by such tools cannot be used without question; programmer verification is necessary,\u201d Arendt continued. \u201cIn fact, changes in working methods have accompanied programmers since the beginning of the profession. Developers&#8217; work will simply shift to interacting with AI systems to some extent.\u201d<\/p>\n<p>The biggest changes will come with the introduction of full-scale AI systems, Arendt said, which can be compared to the industrial revolution of the 1800s, which replaced an economy based on crafts and agriculture with one based on manufacturing.<\/p>\n<p>\u201cWith AI, the technological leap could be just as great, if not greater. At present, we cannot predict all the consequences,\u201d he said.<\/p>\n<p>Vlad Tushkanov, lead data scientist at Moscow-based cybersecurity firm Kaspersky, said integrating LLM algorithms into more services can bring new threats. In fact, LLM technologists are already investigating attacks, such as <a href=\"https:\/\/en.wikipedia.org\/wiki\/Prompt_engineering#Prompt_injection\" rel=\"nofollow\">prompt injection<\/a>,\u00a0that can be used against LLMs and the services they power.<\/p>\n<p>\u201cAs the situation changes rapidly, it is hard to estimate what will happen next and whether these LLM peculiarities turn out to be the side effect of their immaturity or if they are their inherent vulnerability,\u201d Tushkanov said. 
\u201cHowever, businesses might want to include them into their threat models when planning to integrate LLMs into consumer-facing applications.\u201d<\/p>\n<p>That said, LLMs and AI technologies are useful and already automating an enormous amount of \u201cgrunt work\u201d that is needed but neither enjoyable nor interesting for people to do. Chatbots, for example, can sift through millions of alerts, emails, probable phishing web pages and potentially malicious executables daily.<\/p>\n<p>\u201cThis volume of work would be impossible to do without automation,&#8221; Tushkanov said. &#8220;&#8230;Despite all the advances and cutting-edge technologies, there is still an acute shortage of cybersecurity talent. According to estimates, the industry needs millions more professionals, and in this very creative field, we cannot waste the people we have on monotonous, repetitive tasks.&#8221;<\/p>\n<p>Generative AI and machine learning won\u2019t replace all IT jobs, including those of people who fight cybersecurity threats, Tushkanov said. Solutions for those threats are being developed in an adversarial environment, where cybercriminals work against organizations to evade detection.<\/p>\n<p>\u201cThis makes it very difficult to automate them, because cybercriminals adapt to every new tool and approach,\u201d Tushkanov said. \u201cAlso, with cybersecurity, precision and quality are very important, and right now large language models are, for example, prone to hallucinations (as our tests show, cybersecurity tasks are no exception).\u201d\u00a0<\/p>\n<p>The Future of Life Institute said in its letter that with guardrails, humanity can enjoy a flourishing future with AI.\u00a0<\/p>\n<p>\u201cEngineer these systems for the clear benefit of all, and give society a chance to adapt,\u201d the letter said. 
\u201cSociety has hit pause on other technologies with potentially catastrophic effects on society.\u00a0We can do so here.\u00a0Let&#8217;s enjoy a long AI summer, not rush unprepared into a fall.\u201d<\/p>\n<p><a href=\"https:\/\/www.computerworld.com\/article\/3691639\/tech-big-wigs-hit-the-brakes-on-ai-rollouts.html#tk.rss_security\" target=\"bwo\" >http:\/\/www.computerworld.com\/category\/security\/index.rss<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2022\/01\/25\/07\/artificial_intelligence_virtual_digital_identity_binary_stream_thinkstock-100528010-small-100917127-small.jpg\"\/><\/p>\n<article>\n<section class=\"page\">\n<p>More than 1,100 technology luminaries, leaders and scientists have issued a warning against labs performing large-scale experiments with artificial intelligence (AI) more powerful than ChatGPT, saying the technology poses a grave threat to humanity.<\/p>\n<p>In an <a href=\"https:\/\/futureoflife.org\/open-letter\/pause-giant-ai-experiments\/\" rel=\"noopener nofollow\" target=\"_blank\">open letter<\/a> published by <a href=\"https:\/\/futureoflife.org\/\" rel=\"nofollow noopener\" target=\"_blank\">The\u00a0Future\u00a0of\u00a0Life\u00a0Institute<\/a>, a nonprofit\u00a0organization\u00a0that aims to reduce global catastrophic and existential risks to humanity, Apple co-founder Steve Wozniak, SpaceX and Tesla CEO Elon Musk, and MIT Future of Life Institute President Max Tegmark joined other signatories in saying\u00a0AI poses \u201cprofound risks to society and humanity, as shown by extensive research\u00a0and acknowledged by top AI labs.\u201d<\/p>\n<p class=\"jumpTag\"><a href=\"\/article\/3691639\/tech-big-wigs-hit-the-brakes-on-ai-rollouts.html#jump\">To read this article in full, please click 
here<\/a><\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,13431,11070,1670,10516,5897,714],"class_list":["post-21591","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-chatbots","tag-emerging-technology","tag-google","tag-microsoft","tag-privacy","tag-security"],"_links":{"self":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/21591","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=21591"}],"version-history":[{"count":0,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/21591\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=21591"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=21591"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=21591"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}