{"id":24094,"date":"2024-03-05T15:43:00","date_gmt":"2024-03-05T23:43:00","guid":{"rendered":"http:\/\/www.palada.net\/index.php\/2024\/03\/05\/news-17824\/"},"modified":"2024-03-05T15:43:00","modified_gmt":"2024-03-05T23:43:00","slug":"news-17824","status":"publish","type":"post","link":"https:\/\/www.palada.net\/index.php\/2024\/03\/05\/news-17824\/","title":{"rendered":"Researchers, legal experts want AI firms to open up for safety checks"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/images.idgesg.net\/images\/article\/2024\/03\/shutterstock_2287556297-100962242-small.jpg\"\/><\/p>\n<p>More than 150 leading artificial intelligence (AI) researchers, ethicists and others have signed an\u00a0<a href=\"https:\/\/sites.mit.edu\/ai-safe-harbor\/\" rel=\"nofollow noopener\" target=\"_blank\">open letter<\/a>\u00a0calling on generative AI (genAI) companies to submit to independent evaluations of their systems, the lack of which has led to concerns about basic protections.<\/p>\n<p>The letter,\u00a0drafted by researchers from MIT, Princeton, and Stanford University, called for legal and technical protections for\u00a0<a href=\"https:\/\/substack.com\/redirect\/34e4ef63-73a8-4559-8c91-968546804f9d?j=eyJ1IjoiMmN3eXlpIn0.-XionA57hnMoMt9ryxqfe913wQYanc0bgQWTzBRe3Ow\" rel=\"nofollow noopener\" target=\"_blank\">good-faith\u00a0research on genAI models<\/a>, which they said is hampering safety measures that could help protect the public.<\/p>\n<p>The letter, and <a href=\"chrome-extension:\/\/efaidnbmnnnibpcajpcglclefindmkaj\/https:\/bpb-us-e1.wpmucdn.com\/sites.mit.edu\/dist\/6\/336\/files\/2024\/03\/Safe-Harbor-0e192065dccf6d83.pdf\" rel=\"nofollow noopener\" target=\"_blank\">a study behind it<\/a>, was created with the help of nearly two dozen professors and researchers who called for a legal \u201csafe harbor\u201d for independent evaluation of genAI products.<\/p>\n<p>The letter was sent to companies including OpenAI, Anthropic, Google, Meta, and\u00a0Midjourney, and asks them to allow researchers to investigate their products to ensure consumers are protected from bias, alleged copyright infringement, and non-consensual intimate imagery.<\/p>\n<p>\u201cIndependent evaluation of AI models that are already deployed is widely regarded as essential for ensuring safety, security, and trust,\u201d two of the researchers responsible for the letter <a href=\"https:\/\/knightcolumbia.org\/blog\/a-safe-harbor-for-ai-evaluation-and-red-teaming\" rel=\"nofollow noopener\" target=\"_blank\">wrote in a blog post<\/a>. 
“Independent red-teaming research of AI models has uncovered vulnerabilities related to low-resource languages, the bypassing of safety measures, and a wide range of jailbreaks.”</p>

<p>“These evaluations investigate a broad set of often unanticipated model flaws related to misuse, bias, copyright, and other issues,” they said.</p>

<p>Last April, a who’s who of technologists called for AI labs to <a href="https://www.computerworld.com/article/3691639/tech-big-wigs-hit-the-brakes-on-ai-rollouts.html">stop training the most powerful systems</a> for at least six months, citing “profound risks to society and humanity.”</p>

<p>That <a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/">open letter</a> now has more than 3,100 signatories, including Apple co-founder Steve Wozniak. Its signers singled out San Francisco-based OpenAI’s recently announced <a href="https://www.computerworld.com/article/3690323/openai-unveils-gpt-4-a-new-foundation-for-chatgpt.html">GPT-4 algorithm</a> in particular, saying the company should halt further development until oversight standards were in place.</p>

<p>The latest letter notes that AI <a href="https://openai.com/policies/sharing-publication-policy#research">companies</a>, <a href="https://arxiv.org/pdf/2307.03718.pdf">academic researchers</a>, and <a href="https://dl.acm.org/doi/10.1145/3531146.3533213">civil society</a> “agree that generative AI systems pose notable risks and that independent evaluation of these risks is an essential form of accountability.”</p>

<p>The signatories include professors from Ivy League schools and other prominent universities, including MIT, as well as executives from companies such as Hugging Face and Mozilla. The list also includes researchers and ethicists such as Dhanaraj Thakur, research director at the Center for Democracy and Technology, and Subhabrata Majumdar, president of the AI Risk and Vulnerability Alliance.</p>

<p>While the letter acknowledges, and even praises, the special programs some genAI makers run to give researchers access to their systems, it also calls the companies out for being subjective about who can and cannot see their technology.</p>

<p>In particular, the researchers cited Cohere and OpenAI as exceptions to the rule, “though some ambiguity remains as to the scope of protected activities.”</p>

<p><a href="https://email.mg2.substack.com/c/eJxMkD2P1DAUAH-N3WVlP3_FhYtDKNIhLegQEkeFXpy3G98lcbC9WsKvR4Jm25lqJmKjay5HuFUqXaF9OfgU9CR703MK0glvrTfK8DnY3hFGEkZAjJqcA-swgrdInqw0PAUQoIUSBoQEaU699v7iJgcRxigQmRbrFU71NtaG8f0U88qXMLe2V6aeGAwMhkfJYCg0pUKxMRiEvjiYLHYjge-08b5D76HTBiwqVNZeeqaGN6Y-0vFJPr_ldF4_K3pd9udNnLrXlLcn4-btnM_Nl-P3rwt5qe4vP3CLYry-fP_258NXUl_ufM-1_UxTkBoUGCvkf9KOncJG97pQa1R4CRUPrDPT4rpiWv4F1ds45RXTFjDVDd8pp4W3h8F_AwAA__9ynnX9">Cohere allows</a> “intentional stress testing of the API and adversarial attacks,” provided vulnerabilities are disclosed appropriately (though without explicit legal promises).
And OpenAI expanded its safe harbor to include “model vulnerability research” and “academic model safety research” in response to an early draft of the researchers’ proposal.</p>

<p>In other cases, genAI firms have already suspended researcher accounts and even changed their terms of service to deter <a href="https://bpb-us-e1.wpmucdn.com/sites.mit.edu/dist/6/336/files/2024/03/Safe-Harbor-0e192065dccf6d83.pdf">some types of evaluation</a>, according to the researchers. “Disempowering independent researchers is not in AI companies’ own interests,” they wrote.</p>

<p>Independent evaluators who do investigate genAI products fear account suspension (without an opportunity for appeal) and legal risks, “both of which can have chilling effects on research,” the letter argues.</p>

<p>To help protect users, the signatories want AI companies to provide two levels of protection for research:</p>
<ul>
<li>A legal safe harbor that indemnifies good-faith, independent research into AI safety, security, and trustworthiness, provided it is conducted in accordance with established vulnerability-disclosure practices.</li>
<li>A technical safe harbor that shields such research from account suspension or termination.</li>
</ul>

<p>Computerworld reached out to OpenAI and Google for a response, but neither company had an immediate comment.</p>