ComputerWorld

Researchers build a scary Mac attack using AI and sound

A UK research team based at Durham University has identified an exploit that could allow attackers to figure out what you type on your MacBook Pro — based on the sound each keyboard tap makes.

These kinds of attacks aren’t particularly new. The researchers found work dating back to the 1950s on using acoustics to identify what people type. They also note that the first paper detailing use of such an attack surface was written for the US National Security Agency (NSA) in 1972, prompting speculation that such attacks may already be in place.

“(The) governmental origin of ASCAs [acoustic side-channel attacks] creates speculation that such an attack may already be possible on modern devices, but remains classified,” the researchers wrote.
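
The core idea the researchers describe is an audio-classification problem: record the sound of each tap, convert it to a spectral feature, and match it against labeled examples. Below is a minimal, hypothetical sketch of that idea, using synthetic damped tones in place of real recordings; the frequencies, feature choice, and nearest-template matching are illustrative assumptions, not the paper’s actual deep-learning model.

```python
# Illustrative only: a toy acoustic side-channel classifier.
# Synthetic damped tones stand in for recorded keystrokes.
import numpy as np

rng = np.random.default_rng(0)
SAMPLE_RATE = 44_100  # samples/second, a common audio capture rate

def synth_keystroke(freq, n=1024):
    """Stand-in for one recorded tap: a damped tone plus background noise."""
    t = np.arange(n) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq * t) * np.exp(-2000 * t) + 0.05 * rng.normal(size=n)

def fingerprint(clip):
    """Normalized magnitude spectrum -- the feature each tap is matched on."""
    spec = np.abs(np.fft.rfft(clip))
    return spec / np.linalg.norm(spec)

# "Training" data: pretend keys "a" and "b" resonate at different frequencies.
train = {key: fingerprint(synth_keystroke(f)) for key, f in [("a", 900.0), ("b", 1500.0)]}

def classify(clip):
    """Nearest-template match: the key whose spectrum is most similar wins."""
    fp = fingerprint(clip)
    return max(train, key=lambda k: float(fp @ train[k]))

print(classify(synth_keystroke(900.0)))   # "a"
print(classify(synth_keystroke(1500.0)))  # "b"
```

A real attack would swap the toy fingerprint for learned features (for example, spectrograms fed to a neural network) trained on recordings of the target keyboard, but the structure — match an unknown tap against labeled examples — is the same.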

Has Microsoft cut security corners once too often?

Credit to Author: eschuman@thecontentfirm.com | Date: Mon, 07 Aug 2023 10:00:00 -0700

As Microsoft revealed tidbits of its post-mortem investigation into a Chinese attack against US government agencies via Microsoft, two details stand out: the company violated its own policy and did not store security keys within a Hardware Security Module (HSM) — and the keys were successfully used by attackers even though they had expired years earlier. 

This is simply the latest example of Microsoft quietly cutting corners on cybersecurity and then only telling anyone when it gets caught. 

UK intelligence agencies seek to weaken data protection safeguards

UK intelligence agencies are campaigning for the government to weaken surveillance laws, arguing that the current safeguards limit their ability to train AI models due to the large amount of personal data required.

GCHQ, MI5, and MI6 have been increasingly using AI technologies to analyze data sets, including bulk personal data sets (BPDs), which can often contain sensitive information about people not of interest to the security services.

Currently, a judge has to approve the examination and retention of BPDs, a process that intelligence agencies have described as “disproportionately burdensome” when applied to “publicly available datasets, specifically those containing data in respect of which the subject has little or no reasonable expectation of privacy.”

EEOC Commissioner: AI system audits might not comply with federal anti-bias laws

Keith Sonderling, commissioner of the US Equal Employment Opportunity Commission (EEOC), has for years been sounding the alarm about the potential for artificial intelligence (AI) to run afoul of federal anti-discrimination laws such as the Civil Rights Act of 1964.

It was not until the advent of ChatGPT, Bard, and other popular generative AI tools, however, that local, state, and national lawmakers began taking notice — and companies became aware of the pitfalls posed by a technology that can automate business processes at scale.

Instead of the speeches he’d typically make to groups of chief human resource officers or labor and employment lawyers, Sonderling has found himself in recent months talking more and more about AI. His focus has been on how companies can stay compliant as they hand over more of the responsibility for hiring and other aspects of corporate HR to algorithms that are vastly faster than any human and capable of parsing thousands of resumes in seconds.

Apple toughens up app security with API control

Apple is at war with device fingerprinting — the use of fragments of unique device-specific information to track users online. This fall, it will put in place yet another important limitation to prevent unauthorized use of this kind of tech.

Apple at WWDC 2023 announced a new initiative designed to make apps that do track users more obvious while giving users additional transparency into such use. Now it has told developers a little more about how this will work in practice.

Was Steve Jobs right about this?

Perhaps Steve Jobs was right to limit the amount of time he let his children use iPhones and iPads — a tradition Apple maintains with its Screen Time tool, which lets parents set limits on device use. Now, an extensive UNESCO report suggests that letting kids spend too much time on these devices can be bad for them.

Baked-in inequality and a lack of social skills

That’s the headline claim, but the report goes much further, exploring data privacy, misuse of tech, and failed digital transformation experiments.

Apple: Proposed UK law is a ‘serious, direct threat’ to security, privacy

New UK government surveillance laws are so over-reaching that tech companies can’t possibly meet all of their requirements, according to Apple, which argues the measures will make the online world far less safe.

Apple, WhatsApp, Meta all threaten to quit UK messaging

The UK Home Office is pushing to extend the Investigatory Powers Act (IPA) with a range of proposals that would effectively require messaging providers such as Apple, WhatsApp, or Meta to install backdoors into their services. All three companies are now threatening to withdraw their messaging apps from the UK market if the changes move forward.
