{"id":21903,"date":"2023-05-02T14:30:59","date_gmt":"2023-05-02T22:30:59","guid":{"rendered":"http:\/\/www.palada.net\/index.php\/2023\/05\/02\/news-15634\/"},"modified":"2023-05-02T14:30:59","modified_gmt":"2023-05-02T22:30:59","slug":"news-15634","status":"publish","type":"post","link":"https:\/\/www.palada.net\/index.php\/2023\/05\/02\/news-15634\/","title":{"rendered":"Q&amp;A: At MIT event, Tom Siebel sees \u2018terrifying\u2019 consequences from using AI"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2023\/04\/28\/14\/flying-through-the-futuristic-tunnel-abstract-3d-animation-data-network-virtual-reality.jpgs1024x1024wisk20cvoxwpgc5osm-hy4rpoh304asy7r07g9gaicnfhuwvxg-100940446-small.jpg\"\/><\/p>\n<p>Speakers ranging from artificial intelligence (AI) developers to law firms grappled this week with questions about the efficacy and ethics of AI during <a href=\"https:\/\/event.technologyreview.com\/emtech-digital-2023\/\" rel=\"noopener nofollow\" target=\"_blank\">MIT Technology Review&#8217;s EmTech Digital conference<\/a>. Among those who had a somewhat alarmist view of the technology (and regulatory efforts to rein it in) was\u00a0Tom Siebel, CEO <a href=\"https:\/\/c3.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">C3 AI<\/a> and founder of CRM vendor Siebel Systems.<\/p>\n<p>Siebel was on hand to talk about how businesses can prepare for an incoming wave of AI regulations, but in his comments Tuesday he touched on various facets of the debate of generative AI, including the ethics of using it, how it could evolve, and why it could be dangerous.<\/p>\n<p>For about 30 minutes, <em>MIT\u00a0Technology\u00a0Review<\/em> Editor-in-Chief\u00a0Mat Honan and several conference attendees posed questions to Siebel, beginning with what are the ethical and unethical uses of AI. 
The\u00a0conversation quickly turned to AI\u2019s potential to cause damage on a global scale, as well as the nearly impossible task of setting up guardrails against its use for unintended and intended nefarious purposes.<\/p>\n<p>The following are excerpts from that conversation.<\/p>\n<p><strong>[Honan] What is ethical AI? What are ethical uses of AI, or even unethical uses of AI? <\/strong>&#8220;The last 15 years we\u2019ve spent a couple billion dollars building a software stack we used to design, develop, provision, and operate, at massive scale, enterprise predictive analytics applications. So, what are applications of these technologies where I don\u2019t think we have to deal with bias and we don\u2019t have ethical issues?<\/p>\n<p>&#8220;I think anytime we\u2019re dealing with physical systems, we\u2019re dealing with pressure, temperature, velocity, torque, rotational velocity. I don\u2019t think we have a problem with ethics. For example, we\u2019re\u2026using it for one of the largest commercial applications for AI, the area of predictive maintenance.<\/p>\n<p>&#8220;Whether it\u2019s for power generation and distribution assets in the power grid or predictive maintenance for offshore oil rigs, where the data sets are extraordinarily large and arriving at very rapid velocity, &#8230;we\u2019re building machine-learning models that are going to identify device failure before it happens \u2014 avoiding a failure of, say, a Shell offshore oil rig. The cost of that would be incalculable. I don\u2019t think there are any ethical issues. I think we can agree on that.<\/p>\n<p>&#8220;Now, anytime we get to the intersection of artificial intelligence and sociology, it gets pretty slippery, pretty fast. This is where we get into perpetuating cultural bias. I can give you specific examples, but it seems like it was yesterday \u2014 it was earlier this year \u2014 that this business came out of generative AI. 
And is generative AI an interesting technology? It\u2019s really an interesting technology. Are these large language models important? They\u2019re hugely important.<\/p>\n<p>&#8220;Now all of a sudden, somebody woke up and found, gee, there are ethical situations associated with AI. I mean, people, we\u2019ve had ethical situations with AI going back many, many years. I don\u2019t happen to have a smartphone in my pocket because they stripped it from me on the way in, but how about social media? Social media may be the most destructive invention in the history of mankind. And everybody knows it. We don\u2019t need ChatGPT for that.<\/p>\n<p>&#8220;So, I think that\u2019s absolutely an unethical application of AI. I mean we\u2019re using these smartphones in everybody\u2019s pocket to manipulate two to three billion people at the level of the limbic brain, where we\u2019re using this to regulate the release of dopamine. We have people addicted to these technologies. We know it causes an enormous health problem, particularly among young women. We know it causes suicide, depression, loneliness, body image issues \u2013 documented. We know these systems are the primary exchange for the slave trade in the Middle East and Asia. These systems call into question our ability to conduct a free and open democratic society.<\/p>\n<p>&#8220;Does anyone have an ethical problem with that? And that\u2019s the old stuff. Now we get into the new stuff.&#8221;<\/p>\n<p><strong>Siebel spoke about government requests made of his company.\u00a0<\/strong>&#8220;Where have I [seen] problems that we\u2019ve been posed? OK. So, I\u2019m in Washington, D.C., and I won\u2019t say in whose office or what administration, but it\u2019s a big office. 
We do a lot of work in the Beltway, in things like contested logistics, AI predictive maintenance for assets in the United States Air Force, command-and-control dashboards, what have you, for SOCOM [Special Operations Command], TransCom [Transportation Command], National Guard, things like this.<\/p>\n<p>&#8220;And, I\u2019m in this important office and this person turns his office over to his civilian advisor, who\u2019s a PhD in behavioral psychology\u2026, and she starts asking me these increasingly uncomfortable questions. The third question was, \u2018Tom, can we use your system to identify extremists in the United States population?\u2019<\/p>\n<p>&#8220;I\u2019m like holy moly; what\u2019s an extremist? Maybe a white male Christian? I just said, &#8216;I\u2019m sorry, I don\u2019t feel comfortable with this conversation. You\u2019re talking to the wrong people. And this is not a conversation I want to have.&#8217; Now, I have a competitor who will do that transaction in a heartbeat.<\/p>\n<p>&#8220;Now, to the extent we have the opportunity to do work for the United States government, we do so. I\u2019m in a meeting \u2014 not this administration \u2014 but with the Undersecretary of the Army in California, and he says, \u2018Tom, we want to use your system to build an AI-based human resource system for the Department of the Army.&#8217;<\/p>\n<p>&#8220;I said, &#8216;OK, tell me what the scale of this system is.&#8217; The Department of the Army is about a million and a half people by the time you get into the reserves. I said, &#8216;What is this system going to do?&#8217; He says we\u2019re going to make decisions about who to assign to a billet and who to promote. I said, &#8216;Mr. Secretary, this is a really bad idea. The problem is, yes we can build the system, and yes we can have it at the scale of the Department of the Army, say, in six months. The problem is we have this thing in the data called cultural bias. 
The problem is no matter what the question is, the answer is going to be: white, male, went to West Point.\u2019<\/p>\n<p>&#8220;In 2020 or 2021 \u2014 whatever year it was \u2014 that\u2019s just not going to fly. Then we\u2019ve got to read about ourselves on the front page of <em>The New York Times<\/em>; then we\u2019ve got to get dragged before Congress to testify, and I\u2019m not going with you.<\/p>\n<p>&#8220;So, this is what I\u2019d describe as the unethical use of AI.&#8221;<\/p>\n<p><em>[Siebel also spoke about AI&#8217;s use in predictive health.]<\/em><\/p>\n<p>&#8220;Let\u2019s talk about one I\u2019m particularly concerned about. The largest commercial application of AI \u2013 hard stop \u2013 will be precision health. There\u2019s no question about that.<\/p>\n<p>&#8220;There\u2019s a big project going on in the UK, right now, which may be on the order of 400 million pounds. There\u2019s a billion-dollar project going on in the [US] Veterans Administration. An example of precision medicine &#8230; [would be to] aggregate the genome sequences and the healthcare records of the population of the UK or the United States or France, or whatever nation it may be\u2026, and then build machine-learning models that will predict with very high levels of precision and recall who\u2019s going to be diagnosed with what disease in the next five years.<\/p>\n<p>&#8220;This is not really disease detection; this is disease prediction. And this gives us the opportunity to intervene clinically and avoid the diagnosis. I mean, what could go wrong? Then we combine that with the cellphone, where we can reach previously underserved communities \u2014 and, in the future, every one of us. How many people have devices emitting telemetry? Heart arrhythmia, pulse, blood glucose levels, blood chemicals, whatever it may be.<\/p>\n<p>&#8220;We have these devices today and we\u2019ll have more of them in the future. 
We\u2019ll be able to provide medical care to largely underserved [people]\u2026, so, net-net we have a healthier population, we\u2019re delivering more efficacious medicine\u2026 at a lower cost to a larger population. What could go wrong here? Let\u2019s think about it.<\/p>\n<p>&#8220;Who cares about pre-existing conditions when we know what you\u2019ll be diagnosed with in the next five years? The idea that it won\u2019t be used to set rates \u2014 get over it, because it will.<\/p>\n<p>&#8220;Even worse, it doesn\u2019t matter which side of the fence you\u2019re on, whether you believe in a single-care provider or a quasi-free market system like we have in the United States. The idea that this government entity or this private sector company is going to act beneficially \u2014 you can get over that, because they\u2019re not going to act beneficially. And these systems absolutely \u2014 hard stop \u2014 will be used to ration healthcare. They\u2019ll be used in the United States; they\u2019ll be used in the UK; they\u2019ll be used in the Veterans Administration. I don\u2019t know if you find that disturbing, but I do.<\/p>\n<p>&#8220;Now, we ration healthcare today\u2026, perhaps in an equally horrible way, but this strikes me as a particularly horrible use of AI.&#8221;<\/p>\n<p><strong>[Honan] There\u2019s a bill [in California] that would do things to try to combat algorithmic discrimination, to inform consumers that AI has been used in a decision-making process. There are other things happening in Europe with data collection. People have been talking about algorithmic bias for a long time now. Do you think this stuff will become effectively regulated, or do you think it\u2019s just going to be out there in the wild? 
These things are coming but do you think this shouldn&#8217;t be regulated?\u00a0<\/strong>&#8220;I think that when we\u2019re dealing with AI, where it is today and where it\u2019s going, we\u2019re dealing with something extraordinarily powerful. This is more powerful than the steam engine. Remember, the steam engine brought us the industrial revolution, brought us World War I, World War II, communism.<\/p>\n<p>&#8220;This is big. And, the deleterious consequences of this are just terrifying. It makes an Orwellian future look like the Garden of Eden compared to what is capable of happening here.<\/p>\n<p>&#8220;We need to discuss what the implications of this are. We need to deal with the privacy implications. I mean, pretty soon it\u2019s going to be impossible to determine the difference between fake news and real news.<\/p>\n<p>&#8220;It might be very difficult to carry on a free and open democratic society. This does need to be discussed. It needs to be discussed in the academy. It needs to be discussed in government.<\/p>\n<p>&#8220;Now, the regulatory proposals that I\u2019ve seen are kind of crazy. We\u2019ve got this current proposal that everybody\u2019s aware of from <a href=\"https:\/\/www.reuters.com\/world\/us\/senate-leader-schumer-pushes-ai-regulatory-regime-after-china-action-2023-04-13\/\" rel=\"nofollow noopener\" target=\"_blank\">a senior senator from New York<\/a>\u00a0[Senate Majority Leader Chuck Schumer, D-NY] where we\u2019re basically going to form a regulatory agency that\u2019s going to approve and regulate [AI] algorithms before they can be published. Someone tell me in this room where we draw the line between AI and not AI. I don\u2019t think there\u2019s any two of us who will agree.<\/p>\n<p>&#8220;We\u2019re going to set up something like a federal algorithm association to whom we\u2019re going to submit our algorithms for approval? How many millions of algorithms \u2014 hundreds of millions? 
\u2014 are generated in the United States every day? We\u2019re basically going to criminalize science. Or, we\u2019re forcing all science outside the United States. That\u2019s just whacked.<\/p>\n<p>&#8220;The other alternatives are \u2014 and I don\u2019t want to take any shots at this guy because I think he may be one of the smartest people on the planet \u2014 but this idea that we\u2019re going to stop research for six months? I mean c\u2019mon. You\u2019re going to stop research at MIT for six months? I don\u2019t think so. You\u2019re going to stop research in Shanghai \u2014 in Beijing \u2014 for six months? No way, no how.<\/p>\n<p>&#8220;I just haven\u2019t heard anything that makes any sense. Do we need to have dialogue? Are these dialogues we\u2019re having here important? They\u2019re critically important. We need to get in the room and we need to agree; we need to disagree; we need to fight it out. Whatever the solutions are, they\u2019re not easy.&#8221;<\/p>\n<p><strong>Before we see anything federal happening here\u2026, is there a case that the industry should be leading the charge on regulation? <\/strong>&#8220;There is a case, but I\u2019m afraid we don\u2019t have a very good track record there; I mean, see Facebook for details. I\u2019d like to believe self-regulation would work, but power corrupts and absolute power corrupts absolutely.<\/p>\n<p>&#8220;What has happened in social media in the last decade, these companies have not regulated themselves. They\u2019ve done enormous damage to billions of people around the world.&#8221;<\/p>\n<p><strong>I\u2019ve been in healthcare for a long time. You mentioned regulations around AI. Different institutions in healthcare, they don\u2019t even understand HIPAA. How are we going to [navigate] AI regulation in healthcare? <\/strong>&#8220;We can protect the data. HIPAA was one of the best data protection laws out there. 
That\u2019s not a difficult problem \u2014 to be HIPAA-compliant.&#8221;<\/p>\n<p><strong>[Audience member] Do you foresee C3 AI implementing generative AI on top of&#8230;the next [enterprise application] that\u2019s going to show up and how do I solve that?<\/strong> &#8220;We\u2019re using generative AI \u2014 <a href=\"https:\/\/en.wikipedia.org\/wiki\/Generative_pre-trained_transformer\" rel=\"nofollow noopener\" target=\"_blank\">pre-trained generative transformers<\/a> and these large language models \u2014 for a non-obvious use. We\u2019re using it to fundamentally change the nature of the human-computer interface for enterprise application software.<\/p>\n<p>&#8220;Over the last 50 years, from IBM Hollerith cards to Fortran\u2026to Windows devices to PCs, if you look at the human-computer interaction model for ERP systems, for CRM systems, for manufacturing systems&#8230;, they\u2019re all kind of equally dreadful and unusable.<\/p>\n<p>&#8220;Now, there is a user interface out there that about three billion people know how to use and that\u2019s the Internet browser. First, it came out of the University of Illinois and its most recent progeny is the Google site. 
Everybody knows how to use it.&#8221;<\/p>\n","protected":false},"excerpt":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/idge\/imported\/imageapi\/2023\/04\/28\/14\/flying-through-the-futuristic-tunnel-abstract-3d-animation-data-network-virtual-reality.jpgs1024x1024wisk20cvoxwpgc5osm-hy4rpoh304asy7r07g9gaicnfhuwvxg-100940446-small.jpg\"\/><\/p>\n<article>\n<section class=\"page\">\n<p>Speakers ranging from artificial intelligence (AI) developers to law firms grappled this week with questions about the efficacy and ethics of AI during <a href=\"https:\/\/event.technologyreview.com\/emtech-digital-2023\/\" rel=\"noopener nofollow\" target=\"_blank\">MIT Technology Review&#8217;s EmTech Digital conference<\/a>. Among those who had a somewhat alarmist view of the technology (and regulatory efforts to rein it in) was\u00a0Tom Siebel, CEO of <a href=\"https:\/\/c3.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">C3 AI<\/a> and founder of CRM vendor Siebel Systems.<\/p>\n<p>Siebel was on hand to talk about how businesses can prepare for an incoming wave of AI regulations, but in his comments Tuesday he touched on various facets of the debate over generative AI, including the ethics of using it, how it could evolve, and why it could be dangerous.<\/p>\n<p class=\"jumpTag\"><a href=\"\/article\/3695073\/qa-at-mit-tom-siebel-labels-the-consequences-of-ai-as-terrifying.html#jump\">To read this article in full, please click 
here<\/a><\/p>\n<\/section>\n<\/article>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[11062,10643],"tags":[11113,12014,13431,11063,11070,21359,1328,29256,11094,14247,12728],"class_list":["post-21903","post","type-post","status-publish","format-standard","hentry","category-computerworld","category-independent","tag-artificial-intelligence","tag-browsers","tag-chatbots","tag-data-privacy","tag-emerging-technology","tag-financial-services-industry","tag-government","tag-healthcare-industry","tag-smartphones","tag-software-development","tag-utilities"],"_links":{"self":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/21903","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/comments?post=21903"}],"version-history":[{"count":0,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/posts\/21903\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/media?parent=21903"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/categories?post=21903"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.palada.net\/index.php\/wp-json\/wp\/v2\/tags?post=21903"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}