Researchers, legal experts want AI firms to open up for safety checks

More than 150 leading artificial intelligence (AI) researchers, ethicists and others have signed an open letter calling on generative AI (genAI) companies to submit to independent evaluations of their systems; the lack of such evaluations, they argue, has raised concerns about basic safety protections.

The letter, drafted by researchers from MIT, Princeton, and Stanford University, called for legal and technical protections for good-faith research on genAI models, the absence of which, they said, hampers safety measures that could help protect the public.

The arrival of genAI could cover critical skills gaps, reshape IT job market

Generative artificial intelligence (genAI) is likely to play a critical role in addressing skills shortages in today’s marketplace, according to a new study by London-based Kaspersky Research. It showed that 40% of 2,000 C-level executives surveyed plan to use genAI tools such as ChatGPT to cover critical skills shortages through the automation of tasks.

The European-based study found genAI to be firmly on the business agenda, with 95% of respondents regularly discussing ways to maximize value from the technology at the most senior level, even as 91% admitted they don’t really know how it works.

GenAI is highly inaccurate for business use — and getting more opaque

Large language models (LLMs), the algorithmic platforms on which generative AI (genAI) tools like ChatGPT are built, are highly inaccurate when connected to corporate databases and becoming less transparent, according to two studies.

One study by Stanford University showed that as LLMs continue to ingest massive amounts of information and grow in size, the genesis of the data they use is becoming harder to track down. That, in turn, makes it difficult for businesses to know whether they can safely build applications that use commercial genAI foundation models and for academics to rely on them for research.

Q&A: Cisco CIO sees AI embedded in every product and process

Less than a year after OpenAI’s ChatGPT was released to the public, Cisco Systems is already well into the process of embedding generative artificial intelligence (genAI) into its entire product portfolio and internal backend systems.

The plan is to use it in virtually every corner of the business, from automating network functions and monitoring security to creating new software products.

But Cisco’s CIO, Fletcher Previn, is also dealing with a scarcity of IT talent to create and tweak large language model (LLM) platforms for domain-specific AI applications. As a result, IT workers are learning as they go, while discovering new places and ways the ever-evolving technology can create value.

Biden lays down the law on AI

In a sweeping executive order, US President Joseph R. Biden Jr. on Monday set up a comprehensive series of standards, safety and privacy protections, and oversight measures for the development and use of artificial intelligence (AI).

Among more than two dozen initiatives, Biden’s “Safe, Secure, and Trustworthy Artificial Intelligence” order was a long time coming, according to many observers who have been watching the AI space, especially given the rise of generative AI (genAI) in the past year.

‘Data poisoning’ anti-AI theft tools emerge — but are they ethical?

Technologists are helping artists fight back against what they see as intellectual property (IP) theft by generative artificial intelligence (genAI) tools whose training algorithms automatically scrape the internet and other places for content.

The fight over what constitutes fair use of content found online is at the heart of an ongoing court battle. The dispute goes beyond artwork to whether genAI companies such as Microsoft and its partner, OpenAI, can incorporate software code and other published content into their models.

White House to issue AI rules for federal employees

After earlier efforts to rein in generative artificial intelligence (genAI) were criticized as too vague and ineffective, the Biden Administration is now expected to announce new, more restrictive rules for use of the technology by federal employees.

The executive order, expected to be unveiled Monday, would also change immigration standards to allow a greater influx of technology workers to help accelerate US development efforts.

On Tuesday night, the White House sent invitations to a “Safe, Secure, and Trustworthy Artificial Intelligence” event to be hosted Monday by President Joseph R. Biden Jr., according to The Washington Post.

Q&A: How one CSO secured his environment from generative AI risks

In February, travel and expense management company Navan (formerly TripActions) chose to go all-in on generative AI technology for a variety of business and customer-assistance uses.

The Palo Alto, Calif.-based company turned to OpenAI’s ChatGPT and GitHub’s Copilot coding assistant to write, test, and fix code; the decision has boosted Navan’s operational efficiency and reduced overhead costs.

GenAI tools have also been used to build a conversational experience for Ava, the company’s client virtual assistant. A travel and expense chatbot, Ava answers customer questions and offers a conversational booking experience. It can also surface data for business travelers, such as company travel spend, volume, and granular carbon emissions details.
