What happened in privacy in 2022

Annual reviews of any year’s developments in privacy rarely lend themselves to pithy wrap-ups, but 2022 was different. It provided the clearest example yet for many people—American women in particular—that their privacy was not theirs to determine, and that the often-repeated refrain that privacy doesn’t matter for those who have “nothing to hide” only holds sway until the world around those words decides: “Now you do.”

The US Supreme Court’s decision to overturn Roe v. Wade and the associated Constitutional right to choose to have an abortion redefined personal privacy, as individual states can now treat previously benign health data as possible evidence in a criminal investigation. Where perhaps millions of women previously had “nothing to hide,” now, they do.

The Supreme Court’s decision was also an event with few modern equivalents, changing the behavior of three core parts of society: government, corporations, and the public.

In the wake of a leaked draft of the decision, federal legislators introduced a new, targeted data privacy bill to protect reproductive health data. Immediately following the decision, countless individuals dropped their period-tracking apps in search of one that promised to better protect their data. And months after the decision, companies are still announcing changes to what types of data they will no longer store.

In looking back at 2022, it isn’t that nothing else happened in data privacy—it’s that nothing else like this has happened for a long time.

Here’s a look at what happened in privacy in 2022.

Roe v. Wade overturned

The privacy fallout from the US Supreme Court’s decision in Dobbs v. Jackson Women’s Health Organization is both succinct and enormous: It changed what people want to keep private. 

By allowing a state-by-state approach to the legality of obtaining and providing abortion services, the US Supreme Court’s decision introduced a new uncertainty about what types of online activity—be it Google searches, Facebook posts, TikTok videos, or location histories revealing trips to an abortion clinic—could now be considered as potential evidence of a crime.

Would law enforcement, for example, be able to pull Google search requests on someone they suspected of seeking an abortion? Would text messages between friends be brought before an investigator? What about Instagram stories that included offers to pay for the travel and lodging of complete strangers seeking abortion from other states? And what about period-tracking app data? What if cops could see in a person’s data that, for a week or two, they were pregnant, and then soon after, they were not? 

In the absence of immediate best practices, individuals made their own choices. Period-tracking app users scrambled to find the most secure app online, and one period-tracking app maker promised to encrypt user data so that, even if law enforcement made a request for their data, the data would be unintelligible. (Those promises, one investigation found, were shaky.)
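The app maker’s actual implementation is not public, but the general idea behind such a promise—client-side encryption, where the key never leaves the user’s device—can be sketched in a few lines. The cipher below is a one-time pad, used here purely as a stdlib-only stand-in for whatever real cipher an app would use; the data format and variable names are illustrative assumptions, not the app’s real code.

```python
import secrets

# Toy stand-in for the app's (unknown) cipher: a one-time pad.
# The key is generated on the device and never uploaded.
entry = b'{"date": "2022-07-01", "note": "cycle day 1"}'
device_key = secrets.token_bytes(len(entry))  # stays on the device

# What the server stores -- unintelligible without the device key,
# so a legal request to the server yields only ciphertext.
ciphertext = bytes(a ^ b for a, b in zip(entry, device_key))

# Only the device, holding the key, can recover the plaintext.
recovered = bytes(a ^ b for a, b in zip(ciphertext, device_key))
assert recovered == entry
```

The weak point, as the investigation into those promises suggested, is key handling: if the provider also holds the key, the encryption protects nothing from the provider.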

But it wasn’t just period-tracking apps that reconsidered data collection and storage policies.

In July, Google announced that it would automatically delete location histories that revealed physical trips to any sensitive locations, which Google defined as “medical facilities like counseling centers, domestic violence shelters, abortion clinics, fertility centers, addiction treatment facilities, weight loss clinics, [and] cosmetic surgery clinics.” Samsung, too, recently announced that it would delete “Women’s health data” collected through Samsung Health.

To address this piecemeal, company-by-company approach to user privacy, members of Congress introduced the My Body, My Data Act, which would place new, national restrictions on how companies handle reproductive health data.

Web wins (and one wayward rollout)

When the developers of the privacy-forward browser Brave released their own search engine last year, they touted that, unlike other search engines, “Brave Search” would pull from an entirely independent index of the Internet, falling back on Google’s index only when its own results could not fill the gaps for user searches. Upon Brave Search’s launch, this “independence score,” as the company put it, was 87 percent.

One year later, that score has reportedly increased to 92 percent for all Brave Search users, and Brave Search itself has exited out of beta. But perhaps the biggest privacy draw of all for the tool is simple: It doesn’t track users or their searches.

Privacy-minded users had another choice in web browsers last year as well, as Mozilla claimed that a new feature in Firefox made it the “most private and secure major browser available across Windows, Mac, and Linux.”

The feature, called Total Cookie Protection, promises to create “cookie jars” for every website that users visit. This will reportedly put browsing activity into separate silos so that user activity cannot be stitched together across websites, which is often done to help build in-depth profiles of who to target with ads.
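Mozilla has not published a single-file version of the feature, but the partitioning idea can be sketched as a change of key: instead of storing cookies only by the site that set them, store them by the pair of (top-level site being visited, site that set the cookie). The function names and sites below are illustrative assumptions, not Firefox’s actual code.

```python
# Sketch of cookie partitioning ("cookie jars"): a tracker's cookie set
# while the user browses one site is invisible on every other site,
# because each jar is keyed by the top-level site as well as the
# cookie's own origin.
cookie_jars = {}  # (top_level_site, cookie_origin) -> {name: value}

def set_cookie(top_level_site, cookie_origin, name, value):
    jar = cookie_jars.setdefault((top_level_site, cookie_origin), {})
    jar[name] = value

def get_cookies(top_level_site, cookie_origin):
    return cookie_jars.get((top_level_site, cookie_origin), {})

# tracker.example sets an ID while the user is on shop.example...
set_cookie("shop.example", "tracker.example", "uid", "12345")

# ...but sees an empty jar when the same user visits news.example,
# so activity cannot be stitched together across the two sites.
assert get_cookies("news.example", "tracker.example") == {}
assert get_cookies("shop.example", "tracker.example") == {"uid": "12345"}
```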

But why spend so much time on cookie restrictions when, by Google’s own account several years ago, it, too, would retire third-party cookies in its browser, Chrome?

Probably because that major rollout was delayed yet again—this time to 2024.

A question of corporate commitment to privacy

If you can believe it, there was a time last year when the biggest thing happening to Twitter had nothing to do with its new owner, Elon Musk.

In August, the company confirmed a data breach that affected 5.4 million users, whose email addresses and phone numbers were linked to their accounts in a data dump found for sale on the dark web.

In May, Twitter was also hit with a $150 million fine from the US Federal Trade Commission for violating an earlier consent order that prohibited the company from “misleading consumers about the extent to which it protects the security, privacy, and confidentiality of nonpublic consumer information, including the measures it takes to prevent unauthorized access to nonpublic information and honor the privacy choices made by consumers.”

But with so broad a description, what, exactly, did Twitter do?

It used phone numbers that were specifically requested for two-factor authentication—a security measure—for targeted advertising.

(At least the company did put its services on Tor last year, a great step for familiarizing users with a more private online experience.)


To learn more about privacy and Tor’s security benefits to the Internet, listen to our Lock and Code podcast interview with Alec Muffett here.


But privacy blunders—and intentional evasions—have become a guarantee in Silicon Valley, and so Twitter shouldn’t receive all the spotlight.

Facebook, possibly angered by one browser’s decision to remove tracking parameters that Facebook inserted into URLs to help track users across pages, decided to encrypt its URLs so that the browser in question could no longer determine which part of the URL needed to be removed. The company is also facing a lawsuit alleging that it built a “secret workaround” to safeguards in the iPhone meant to prevent the social media giant from tracking users.
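The stripping technique at issue can be sketched with Python’s standard library: parse the URL’s query string, drop known tracking parameters such as `fbclid`, and rebuild the URL. Encrypting the URL defeats this because the identifier is no longer a discrete parameter that can be singled out and removed. The parameter list here is a small illustrative assumption; real blocklists are much longer.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative sample of tracking parameters commonly stripped.
TRACKING_PARAMS = {"fbclid", "gclid", "utm_source", "utm_medium"}

def strip_tracking(url: str) -> str:
    """Remove known tracking parameters from a URL's query string."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

cleaned = strip_tracking("https://example.com/page?id=7&fbclid=AbC123")
assert cleaned == "https://example.com/page?id=7"
```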

Finally, Google settled multi-state charges that the company had allegedly misled users as to how their locations were tracked across various services, agreeing to pay a whopping $391.5 million. A separate but similar lawsuit is still active against the company.

Legislation and legal violation

Compared to earlier years, the legislative battle for data privacy cooled off in 2022, but that doesn’t mean it was necessarily less spirited.

In May, a draft of the US Supreme Court’s opinion in Dobbs v. Jackson Women’s Health Organization was leaked, suggesting that the Court was positioned to soon overturn both Roe v. Wade and Planned Parenthood v. Casey, previous decisions that had guaranteed a Constitutional right to choose to have an abortion.

Before the Supreme Court could issue its final decision, legislators in Washington, DC moved fast, introducing the My Body, My Data Act in both the House of Representatives and the Senate. The bill sought to place new restrictions on how companies use “personal reproductive or sexual health information,” allowing for the collection, use, retention, and disclosure of such data only if a company was delivering a service requested by a user, or if the user gave express consent. The bill’s definition of “personal reproductive or sexual health information” is broad, including any information that could reveal someone’s attempts to “research or obtain” reproductive health services, along with any reproductive or sexual health “conditions,” including pregnancy or menstruation.

The bill has not significantly moved forward since its introduction.

Separately, last year saw the introduction of the American Data Privacy and Protection Act—the latest attempt from members of Congress to pass a federal, comprehensive data privacy law for the United States. Like many attempts before it, the American Data Privacy and Protection Act would grant Americans the rights to access, correct, and request deletion of the personal data that companies have already collected on them—similar to the rights granted to those living in the European Union through the General Data Protection Regulation (GDPR).

With a new Congress elected, it is unclear whether the bill will progress.

Aside from legislation that has only been introduced, one full-blown law scored its first-ever enforcement action. In California, the state’s attorney general announced a settlement with the makeup company Sephora over allegations that it violated the California Consumer Privacy Act, the state’s privacy law that was first passed in 2018.

According to the Office of the Attorney General, Sephora allegedly “failed to disclose to consumers that it was selling their personal information, … failed to process user requests to opt out of sale via user-enabled global privacy controls in violation of the CCPA, and … it did not cure these violations within the 30-day period currently allowed by the CCPA.”

Per the settlement, Sephora agreed to pay $1.2 million.

Pushing back against stalkerware

Rounding out a turbulent year, there was some good news in the continued fight against stalkerware.

Through a months-long investigation, the technology publication TechCrunch revealed that a network of stalkerware-type apps shared the same security vulnerability, one that could allow “near-unfettered remote access” to the data on the hundreds of thousands of devices already infected with those apps. TechCrunch worked with the Carnegie Mellon University Software Engineering Institute to both report the vulnerability and contact any party that could responsibly address it. The investigation produced what is likely the strongest global “map” yet of who provides cover for these types of apps, and how they seem to skirt any consequences.

But there’s more.

Following questions that TechCrunch sent to a company called 1Byte, which operates the server infrastructure behind several of the apps under investigation, two of the apps “appeared to cease working or shut down.” 

Today, TechCrunch offers a tool that allows people to see if their devices are infected with the apps that the publication previously investigated.


