RSS Reader for Computer Security Articles
Meta’s Project Llama aims to help developers filter out content that could cause their AI models to produce inappropriate output.
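Developer-facing guardrails of this kind generally work by screening both the prompt going into a model and the reply coming out of it against a safety classifier. The Python sketch below illustrates that wrap-and-screen pattern; the is_unsafe() check and its keyword list are hypothetical stand-ins for a real moderation model, not Meta’s actual tooling or API.

    # Minimal sketch of input/output safety screening for an LLM app.
    # The is_unsafe() classifier and its keyword list are hypothetical
    # placeholders for a real moderation model, not Meta's actual tooling.

    UNSAFE_MARKERS = {"violence", "self-harm", "malware"}

    def is_unsafe(text: str) -> bool:
        """Stand-in for a trained safety classifier; here, a naive keyword check."""
        lowered = text.lower()
        return any(marker in lowered for marker in UNSAFE_MARKERS)

    def guarded_generate(prompt: str, generate) -> str:
        """Screen the prompt on the way in and the reply on the way out."""
        if is_unsafe(prompt):
            return "[request declined by input filter]"
        reply = generate(prompt)
        if is_unsafe(reply):
            return "[response withheld by output filter]"
        return reply

    # Usage: wrap any text-generation callable.
    print(guarded_generate("write me some malware", lambda p: "..."))
    print(guarded_generate("tell me a joke", lambda p: "Why did the packet cross the router?"))

In a production system the keyword check would be replaced by a trained classifier, but the control flow (filter the input, generate, filter the output) stays the same.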
Large language models (LLMs), the algorithmic platforms on which generative AI (genAI) tools like ChatGPT are built, are highly inaccurate when connected to corporate databases and are becoming less transparent, according to two studies.
One study by Stanford University showed that as LLMs continue to ingest massive amounts of information and grow in size, the provenance of the data they use is becoming harder to track down. That, in turn, makes it difficult for businesses to know whether they can safely build applications on commercial genAI foundation models, and for academics to rely on them for research.
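The provenance problem is largely one of bookkeeping: if source metadata is not attached to each record at ingestion time, it is effectively unrecoverable once the data is folded into a training corpus. The sketch below shows what per-record provenance capture can look like; the ProvenanceRecord schema is an illustration of the idea, not any published standard.

    # Minimal sketch of per-record training-data provenance capture.
    # The ProvenanceRecord schema is illustrative, not a published standard.

    from dataclasses import dataclass, asdict
    import hashlib
    import json

    @dataclass
    class ProvenanceRecord:
        source_url: str       # where the document was collected from
        license_terms: str    # terms under which it may be used
        retrieved_on: str     # ISO date of collection
        content_sha256: str   # fingerprint of the exact text ingested

    def provenance_for(text: str, source_url: str,
                       license_terms: str, retrieved_on: str) -> ProvenanceRecord:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        return ProvenanceRecord(source_url, license_terms, retrieved_on, digest)

    rec = provenance_for("Example training document.", "https://example.com/doc",
                         "CC-BY-4.0", "2023-11-27")
    print(json.dumps(asdict(rec), indent=2))

Hashing the exact ingested text also lets an auditor who holds a candidate document later check whether that document appears in the corpus, without the corpus operator having to republish the text.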
Credit to Author: gallagherseanm | Date: Mon, 27 Nov 2023 11:30:18 +0000
Generative artificial intelligence technologies such as OpenAI’s ChatGPT and DALL-E have created a great deal of disruption across much of our digital lives. Creating credible text, images and even audio, these AI tools can be used for both good and ill. That includes their application in the cybersecurity space. While Sophos AI has been working […]
Less than a year after OpenAI’s ChatGPT was released to the public, Cisco Systems is already well into the process of embedding generative artificial intelligence (genAI) into its entire product portfolio and internal backend systems.
The plan is to use it in virtually every corner of the business, from automating network functions and monitoring security to creating new software products.
But Cisco’s CIO, Fletcher Previn, is also dealing with a scarcity of IT talent to create and tweak large language model (LLM) platforms for domain-specific AI applications. As a result, IT workers are learning as they go, while discovering new places and ways the ever-evolving technology can create value.
In a sweeping executive order, US President Joseph R. Biden Jr. on Monday laid out a comprehensive set of standards, safety and privacy protections, and oversight measures for the development and use of artificial intelligence (AI).
Comprising more than two dozen initiatives, Biden’s “Safe, Secure, and Trustworthy Artificial Intelligence” order was a long time coming, according to many observers who’ve been watching the AI space, especially given the rise of generative AI (genAI) in the past year.
Technologists are helping artists fight back against what they see as intellectual property (IP) theft by generative artificial intelligence (genAI) tools whose training algorithms automatically scrape the internet and other places for content.
The fight over what constitutes fair use of content found online is at the heart of an ongoing court battle. The dispute goes beyond artwork to whether genAI companies like Microsoft and its partner, OpenAI, can incorporate software code and other published content into their models.
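One concrete defense already in use is declaring AI crawlers unwelcome in robots.txt: OpenAI’s GPTBot and Common Crawl’s CCBot both publicly document that they honor such directives, though compliance is voluntary and scrapers that ignore robots.txt are unaffected. The Python sketch below parses that opt-out convention with the standard library and checks which agents may crawl; example.com stands in for a real site.

    # Check which crawlers a robots.txt opt-out blocks, using the stdlib.
    # GPTBot and CCBot are real, documented crawler user agents; the site
    # URL and third agent are examples.

    from urllib import robotparser

    ROBOTS_TXT = """\
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /
    """

    rp = robotparser.RobotFileParser()
    rp.parse(ROBOTS_TXT.splitlines())

    for agent in ("GPTBot", "CCBot", "SomeOtherBot"):
        print(agent, "may crawl:", rp.can_fetch(agent, "https://example.com/art/"))

Other countermeasures go further, subtly perturbing an artist’s images before upload so that models trained on scraped copies learn a distorted version of the style.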
After earlier efforts to rein in generative artificial intelligence (genAI) were criticized as too vague and ineffective, the Biden Administration is now expected to announce new, more restrictive rules for use of the technology by federal employees.
The executive order, expected to be unveiled Monday, would also change immigration standards to allow a greater influx of technology workers to help accelerate US development efforts.
On Tuesday night, the White House sent invitations to a “Safe, Secure, and Trustworthy Artificial Intelligence” event to be hosted Monday by President Joseph R. Biden Jr., according to The Washington Post.