ServiceNow embeds AI-powered customer-assist features throughout products

Why and how to create corporate genAI policies

As a large number of companies continue to test and deploy generative artificial intelligence (genAI) tools, many are at risk of AI errors, malicious attacks, and running afoul of regulators — not to mention the potential exposure of sensitive data.

For example, in April, after Samsung’s semiconductor division allowed engineers to use ChatGPT, workers using the platform leaked trade secrets on at least three occasions, according to published accounts. One employee pasted confidential source code into the chat to check for errors, while another worker shared code with ChatGPT and “requested code optimization.”
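Incidents like Samsung’s are why many genAI policies pair usage rules with technical guardrails that screen prompts before they leave the company. The sketch below is a minimal illustration of that idea, not any vendor’s actual control; the patterns, names, and threshold choices are all hypothetical, and a real deployment would rely on a proper data-loss-prevention or secret-scanning tool.

```python
import re

# Hypothetical deny-list a corporate genAI policy might enforce at a proxy
# sitting in front of approved chatbot endpoints.
BLOCKED_PATTERNS = [
    (re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"), "private key"),
    (re.compile(r"(?i)\b(?:api[_-]?key|secret|token)\s*[:=]\s*\S+"), "credential"),
    (re.compile(r"^\s*(?:def|class)\s+\w+", re.MULTILINE), "source code"),
]

def screen_prompt(prompt: str) -> list[str]:
    """Return labels for policy violations found in a prompt; empty means allow."""
    return [label for pattern, label in BLOCKED_PATTERNS if pattern.search(prompt)]

if __name__ == "__main__":
    # An employee pastes proprietary code to ask for error checking.
    prompt = "def decode_wafer_map(raw):\n    ...  # proprietary routine"
    violations = screen_prompt(prompt)
    if violations:
        print("Blocked before reaching the external service:", ", ".join(violations))
    else:
        print("Prompt allowed")
```

Run against the example prompt, the filter flags it as source code and blocks it, which is the kind of prompt behind the Samsung leaks.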

Q&A: TIAA's CIO touts top AI projects, details worker skills needed now

Artificial intelligence (AI) is already having a significant effect on businesses and organizations across a variety of industries, even as many are still just kicking the tires on the technology.

Those that have fully adopted AI claim a 35% increase in innovation and a 33% increase in sustainability over the past three years, according to research firm IDC. Organizations that have invested in AI also report a 32% improvement in customer and employee retention.

EEOC commissioner: AI system audits might comply with local anti-bias laws, but not federal ones

Keith Sonderling, commissioner of the US Equal Employment Opportunity Commission (EEOC), has for years been sounding the alarm about the potential for artificial intelligence (AI) to run afoul of federal anti-discrimination laws such as the Civil Rights Act of 1964.

It was not until the advent of ChatGPT, Bard, and other popular generative AI tools, however, that local, state, and national lawmakers began taking notice, and companies became aware of the pitfalls posed by a technology that can automate and streamline business processes.

Instead of the speeches he would typically make to groups of chief human resource officers or labor and employment lawyers, Sonderling has found himself in recent months talking more and more about AI. His focus has been on how companies can stay compliant as they hand over more of the responsibility for hiring and other aspects of corporate HR to algorithms that are vastly faster than humans and capable of parsing thousands of resumes in seconds.
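The audits in question typically test a hiring algorithm’s outcomes rather than its code. A common yardstick, drawn from the EEOC’s Uniform Guidelines, is the four-fifths rule: the selection rate for any demographic group should be at least 80% of the rate for the most-selected group. The sketch below shows that arithmetic with made-up numbers; it illustrates the statistical check only, and passing it is not by itself a finding of legal compliance.

```python
# Four-fifths (adverse impact) check: flag any group whose selection rate
# falls below 80% of the highest group's rate. All counts are hypothetical.

def impact_ratios(selections: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection-rate ratio against the top group.

    `selections` maps group name -> (candidates advanced, total applicants).
    """
    rates = {group: advanced / total for group, (advanced, total) in selections.items()}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical outcomes from a resume-screening algorithm.
    outcomes = {"group_a": (48, 120), "group_b": (30, 100)}
    for group, ratio in impact_ratios(outcomes).items():
        status = "ok" if ratio >= 0.8 else "potential adverse impact"
        print(f"{group}: ratio {ratio:.2f} -> {status}")
```

Here group_a is selected at a 40% rate and group_b at 30%, a ratio of 0.75, below the four-fifths threshold, so an audit would flag the tool for closer review.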

ChatGPT creators and others plead to reduce risk of global extinction from their tech

Hundreds of tech industry leaders, academics, and other public figures signed an open letter warning that the evolution of artificial intelligence (AI) could lead to an extinction event, saying that controlling the tech should be a top global priority.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the statement published by the San Francisco-based Center for AI Safety.

The brief statement reads almost like a mea culpa from the creators of a technology they are now joining together to warn the world about.

G7 leaders warn of AI dangers, say the time to act is now

Leaders of the Group of Seven (G7) nations on Saturday called for the creation of technical standards to keep artificial intelligence (AI) in check, saying AI has outpaced oversight for safety and security.

Meeting in Hiroshima, Japan, the leaders said nations must come together on a common vision and goal of trustworthy AI, even as their approaches to achieving it may vary. But any solution for digital technologies such as AI should be “in line with our shared democratic values,” they said in a statement.

Senate hearings see a clear and present danger from AI — and opportunities

There are vital national interests in advancing artificial intelligence (AI) to streamline public services and automate mundane tasks performed by government employees. But the government lacks both the IT talent and the systems needed to support those efforts.

“The federal government as a whole continues to face barriers in hiring, managing, and retaining staff with advanced technical skills — the very skills needed to design, develop, deploy, and monitor AI systems,” said Taka Ariga, chief data scientist at the US Government Accountability Office.

Daniel Ho, associate director of the Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University, agreed, saying that by one estimate the federal government would need to hire about 40,000 IT workers to address the cybersecurity issues posed by AI.
