Researchers, legal experts want AI firms to open up for safety checks

More than 150 leading artificial intelligence (AI) researchers, ethicists and others have signed an open letter calling on generative AI (genAI) companies to submit to independent evaluations of their systems; the absence of such evaluations, they argue, has raised concerns about basic protections.

The letter, drafted by researchers from MIT, Princeton, and Stanford University, calls for legal and technical protections for good-faith research on genAI models, the lack of which, the signatories say, is hampering safety measures that could help protect the public.

Enterprise mobility 2024: Welcome, genAI

Generative artificial intelligence (genAI) has become a focal point for many organizations over the past year, so it should come as no surprise that the technology is moving into the enterprise mobility space, including unified endpoint management (UEM).

“Generative AI is the latest trend to impact the UEM space,” says Andrew Hewitt, principal analyst, Forrester. “This has been the main topic of interest in the last year. We see generative AI having impacts in multiple areas, such as script creation, knowledge-based article creation, NLP [natural language processing]-based querying of endpoint data, and help desk chatbots. All of these are considerations for inclusion within the UEM stack.”

Microsoft, OpenAI move to fend off genAI-aided hackers — for now

Of all the potential nightmares about the dangerous effects of generative AI (genAI) tools like OpenAI’s ChatGPT and Microsoft’s Copilot, one is near the top of the list: their use by hackers to craft hard-to-detect malicious code. Even worse is the fear that genAI could help rogue states like Russia, Iran, and North Korea unleash unstoppable cyberattacks against the US and its allies.

The bad news: nation states have already begun using genAI to attack the US and its friends. The good news: so far, the attacks haven’t been particularly dangerous or especially effective. Even better news: Microsoft and OpenAI are taking the threat seriously. They’re being transparent about it, openly describing the attacks and sharing what can be done about them.

Microsoft and the Taylor Swift genAI deepfake problem

The last few weeks have been a PR bonanza for Taylor Swift in both good ways and bad. On the good side, her boyfriend Travis Kelce was on the winning team at the Super Bowl, and her reactions during the game got plenty of air time. On the much, much worse side, generative AI-created fake nude images of her have recently flooded the internet.

As you would expect, condemnation of the creation and distribution of those images followed swiftly, including from generative AI (genAI) companies and, notably, Microsoft CEO Satya Nadella. In addition to denouncing what happened, Nadella shared his thoughts on a solution: “I go back to what I think’s our responsibility, which is all of the guardrails that we need to place around the technology so that there’s more safe content that’s being produced.”

The AI data-poisoning cat-and-mouse game — this time, IT will win

Credit to Author: eschuman@thecontentfirm.com | Date: Mon, 12 Feb 2024 03:00:00 -0800

The IT community of late has been freaking out about AI data poisoning. For some, it's a stealthy mechanism that could act as a backdoor into enterprise systems: surreptitiously infect the data that large language models (LLMs) train on, and the poisoned results eventually get pulled into enterprise systems. For others, it's a way to combat LLMs that try to do an end run around trademark and copyright protections.

How OpenAI plans to handle genAI election fears

OpenAI is hoping to alleviate concerns about its technology's influence on elections, as more than a third of the world's population is gearing up to vote this year. Elections are scheduled in the United States, Pakistan, India, and South Africa, among other countries, as well as for the European Parliament.

“We want to make sure that our AI systems are built, deployed, and used safely. Like any new technology, these tools come with benefits and challenges,” OpenAI wrote Monday in a blog post. “They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.”

Will super chips disrupt the 'everything to the cloud' IT mentality?

Credit to Author: eschuman@thecontentfirm.com | Date: Wed, 10 Jan 2024 03:00:00 -0800

Enterprise IT for the last couple of years has grown disappointed in the economics — not to mention the cybersecurity and compliance impact — of corporate clouds. In general, with a few exceptions, enterprises have done little about it; most found the scalability and efficiencies too seductive to give up.

Might that change in 2024 and 2025?

Apple has begun talking about efforts to add higher-end compute capabilities to its chip, following similar efforts from Intel and NVIDIA. Although those new capabilities are aimed at enabling more large language model (LLM) capabilities on-device, anything that can deliver that level of data-crunching and analytics can also handle almost every other enterprise IT task. 

The top 10 tech stories of 2023

The top technology stories of 2023 highlight fundamental changes in culture and geopolitics as well as tech itself: It’s clear that generative AI will affect all aspects of technology and society, while geopolitical tensions are sparking cybersecurity attacks globally. General unease about the dominance of big tech, meanwhile, is pushing regulators to get tougher on monopolistic business practices and multibillion-dollar mergers.

Fired! Rehired! Sam Altman’s ouster and return to OpenAI

The ouster of Sam Altman as CEO of OpenAI, the company that sparked the modern era of generative AI when it launched ChatGPT a year earlier, was the tech industry shocker of the year. After the board issued a mysterious statement on November 17 saying that it had fired Altman for not being “consistently candid,” Microsoft announced that it would hire Altman and any other OpenAI employees who wanted to follow him out the door — which turned out to be almost all of them. OpenAI backed down and rehired Altman.
