How to read encrypted messages from ChatGPT and other AI chatbots | Kaspersky official blog

Credit to Author: Alanna Titterington | Date: Wed, 24 Apr 2024 11:27:49 +0000

Researchers have developed a method for reading encrypted messages intercepted from OpenAI's ChatGPT, Microsoft Copilot, and other AI chatbots. We explain how it works.
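
One plausible mechanism for such an attack, offered here purely as an assumption since this summary does not spell out the technique, is a length side channel in streamed replies: if a chatbot sends each response token in its own encrypted record and the cipher adds only a fixed overhead, a passive eavesdropper can read the length of every token straight from the ciphertext sizes and feed that pattern to a model that guesses the likely wording. The Python sketch below simulates the idea; encrypt_record, FIXED_OVERHEAD, and the sample tokens are hypothetical stand-ins, not anything taken from the research.

```python
# Minimal sketch of a token-length side channel (assumption: one encrypted record
# per streamed token, with a constant per-record overhead). Illustrative only.
import os

FIXED_OVERHEAD = 16  # assumed constant per record, e.g. an authentication tag


def encrypt_record(token: str) -> bytes:
    """Stand-in for a length-preserving cipher: ciphertext size = plaintext size + overhead."""
    plaintext = token.encode("utf-8")
    return os.urandom(len(plaintext) + FIXED_OVERHEAD)  # random bytes model the ciphertext


def observed_token_lengths(records: list[bytes]) -> list[int]:
    """What a passive eavesdropper recovers: the length of every streamed token."""
    return [len(record) - FIXED_OVERHEAD for record in records]


# A chatbot reply streamed token by token, one encrypted record per token.
reply_tokens = ["Sure", ",", " here", " is", " your", " travel", " itinerary", "."]
records = [encrypt_record(t) for t in reply_tokens]

# The eavesdropper never sees the plaintext, yet learns the exact length pattern.
print(observed_token_lengths(records))                  # [4, 1, 5, 3, 5, 7, 10, 1]
print([len(t.encode("utf-8")) for t in reply_tokens])   # matches
```

A real attacker would still have to turn that length pattern back into text, for example with a language model trained on typical chatbot responses; the point of the sketch is only that encryption by itself does not hide the lengths.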

Researchers, legal experts want AI firms to open up for safety checks

More than 150 leading artificial intelligence (AI) researchers, ethicists, and others have signed an open letter calling on generative AI (genAI) companies to submit their systems to independent evaluation; the current lack of such scrutiny, they say, has led to concerns about basic protections.

The letter, drafted by researchers from MIT, Princeton, and Stanford University, called for legal and technical protections for good-faith research on genAI models, arguing that the absence of such protections is hampering safety measures that could help protect the public.

The arrival of genAI could cover critical skills gaps, reshape IT job market

Generative artificial intelligence (genAI) is likely to play a critical role in addressing skills shortages in today’s marketplace, according to a new study by London-based Kaspersky Research. It showed that 40% of 2,000 C-level executives surveyed plan to use genAI tools such as ChatGPT to cover critical skills shortages through the automation of tasks.

The Europe-based study found genAI to be firmly on the business agenda, with 95% of respondents regularly discussing ways to maximize value from the technology at the most senior level, even as 91% admitted they don’t really know how it works.

GenAI is highly inaccurate for business use — and getting more opaque

Large language models (LLMs), the algorithmic platforms on which generative AI (genAI) tools like ChatGPT are built, are highly inaccurate when connected to corporate databases and becoming less transparent, according to two studies.

One study by Stanford University showed that as LLMs continue to ingest massive amounts of information and grow in size, the provenance of the data they use is becoming harder to track down. That, in turn, makes it difficult for businesses to know whether they can safely build applications that use commercial genAI foundation models, and for academics to rely on them for research.

White House to issue AI rules for federal employees

After earlier efforts to rein in generative artificial intelligence (genAI) were criticized as too vague and ineffective, the Biden Administration is now expected to announce new, more restrictive rules for use of the technology by federal employees.

The executive order, expected to be unveiled Monday, would also change immigration standards to allow a greater influx of technology workers to help accelerate US development efforts.

On Tuesday night, the White House sent invitations to a “Safe, Secure, and Trustworthy Artificial Intelligence” event on Monday hosted by President Joseph R. Biden Jr., according to The Washington Post.

Q&A: How one CSO secured his environment from generative AI risks

In February, travel and expense management company Navan (formerly TripActions) chose to go all-in on generative AI technology for a wide range of business and customer-assistance uses.

The Palo Alto, California-based company turned to OpenAI’s ChatGPT and the GitHub Copilot coding assistant to write, test, and fix code; the decision has boosted Navan’s operational efficiency and reduced overhead costs.

GenAI tools have also been used to build the conversational experience for the company’s client-facing virtual assistant, Ava. The travel and expense chatbot answers customer questions and offers a conversational booking experience; it can also surface data for business travelers, such as company travel spend, volume, and granular carbon emissions details.
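
Navan has not published Ava’s internals, so the sketch below is purely hypothetical: every name in it (get_travel_spend, TravelSpend, the sample figures) is invented for illustration. It shows the general pattern such an assistant could follow, routing a travel-spend question to an internal data lookup and then phrasing the result conversationally; the LLM step is stubbed out with a template so the example stays runnable.

```python
# Hypothetical sketch of a tool-backed travel chatbot; not Navan's actual implementation.
from dataclasses import dataclass


@dataclass
class TravelSpend:
    quarter: str
    total_usd: float
    trips: int
    co2_kg: float


def get_travel_spend(quarter: str) -> TravelSpend:
    """Stand-in for a query against an internal expense database (assumed interface)."""
    fake_db = {"Q1": TravelSpend("Q1", 182_500.0, 314, 41_200.0)}
    return fake_db[quarter]


def answer(question: str) -> str:
    """Tiny router: detect a spend question, fetch data, and phrase the reply."""
    if "spend" in question.lower():
        spend = get_travel_spend("Q1")
        # In a real assistant the wording would come from the LLM; here it is a template.
        return (f"In {spend.quarter} the company spent ${spend.total_usd:,.0f} "
                f"across {spend.trips} trips (~{spend.co2_kg:,.0f} kg of CO2).")
    return "I can help with bookings and travel-spend questions."


print(answer("What was our travel spend this quarter?"))
```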
