The False Promise of “Lawful Access” to Private Data

Credit to Author: Andrew Sullivan | Date: Thu, 16 May 2019 21:00:09 +0000

A stark reality keeps confronting us: Terrible things are being done in the world. The darkest impulses of some people are honed and polished on the internet, in secret. Then those impulses are visited upon us, in violent and sickening ways. One of the most recent such tragedies, as I write, happened in Christchurch, New Zealand, on March 15, 2019, but there might be another by the time you read this. Every time, we all want to know the same thing: What is to be done about this?

Andrew Sullivan, CEO and president of the Internet Society.

Last month, at their meeting in France, the G7 Ministers of the Interior made a pact to crack down on the use of the internet for terrorist and violent extremist purposes, which included encouraging internet companies to establish "lawful access" solutions for their encrypted products and services. It would not be unreasonable to think that the accord made by the G7 Interior Ministers in April could be taken up by the G7 leaders in August. If that were to happen, it would set a dangerous precedent for weakening encryption globally.

One of the ideas that keeps coming up is something called "lawful access." The idea sounds so reasonable. Bad people often communicate with others using encryption, so it is effectively impossible to see what they are saying. They often store information using encryption too, so after they have committed their crimes it is practically impossible to find out how the crimes were planned, or even to find the incriminating evidence.

The idea is that, under appropriate legal authorization, legitimate law enforcement agencies would have the power to intercept and open up communications between terrorists or other malefactors, and to decrypt data to retrieve incriminating evidence. The thought is that everyone ought to be willing to trade a little bit of privacy for the security of knowing that terrorists can be caught.

I wish it were so simple.

This is not a question of privacy versus security. It is instead a tension between different ways of protecting citizens. Protected communications, sent through secure systems and using strong encryption, are themselves a matter of security.

Strong encryption helps prevent tampering with the operations of critical services, such as electricity and transport. Strong encryption keeps citizens' data, such as financial and health information, away from criminals and terrorists. Strong encryption ensures that law enforcement communications, civil authorities' ability to coordinate with one another, and banking transactions are all protected. Terrorists should not feel free to upload terrible images of slaughter, but neither should they be empowered to empty people's bank accounts or to tap the phones of presidents and prime ministers.

"But," people say, "what if only legitimate requests can get into the protected communications?" The trouble is that any mechanism designed to let a third party into encrypted communications is itself a weakness in the system, and weaknesses in computer systems are discovered by attackers all the time. There is simply no way to prevent such weaknesses from becoming known to those who want to attack the wider society. And once a weakness exists, the most motivated attackers, including criminals, terrorists, and hostile governments, will work harder than anyone else to find it and exploit it.

Some have acknowledged that encryption itself must not be weakened, and yet think that the same goal can be achieved by making the overall systems more vulnerable. Often, this takes the form of encouraging services that do not protect messages all the way along the path, but encrypt “hop by hop.” Communications are secured when going through the network, but they are accessible to operators of the communication service.

This system design is often linked to content filtering. Filtering content to eliminate terrorist messages requires being able to see the content. Hop-by-hop encryption exposes the content at each hop, so it becomes technically possible to filter content. The only problem is, this design does not really protect communication, because anyone who can get access to those systems in the middle can see the messages. Such access is not limited to targeted, legally sanctioned requests for one person's messages; it enables mass surveillance of all communications. It takes some kind of cynicism for governments to use the occasion of the slaughter of innocents to promote a technology that makes citizens less secure, but makes mass surveillance easier.
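To make the difference concrete, here is a minimal sketch in Python (assuming the PyNaCl library; the parties, the relay, and the message are illustrative and not any real service's protocol) of why an end-to-end design keeps a relay blind to content, while a hop-by-hop design lets whoever operates the relay read everything passing through it.

```python
# Sketch only: contrasts end-to-end encryption with hop-by-hop encryption.
# Assumes PyNaCl (pip install pynacl); "Alice", "Bob", and "relay" are hypothetical.
from nacl.public import PrivateKey, SealedBox

alice = PrivateKey.generate()   # sender's key pair (unused for sealing, shown for symmetry)
bob = PrivateKey.generate()     # recipient's key pair
relay = PrivateKey.generate()   # the service operator's key pair

message = b"meet at noon"

# End-to-end: Alice encrypts directly to Bob's public key.
# The relay only ever handles ciphertext it cannot open.
e2e_ciphertext = SealedBox(bob.public_key).encrypt(message)
assert SealedBox(bob).decrypt(e2e_ciphertext) == message

# Hop-by-hop: Alice encrypts to the relay, which decrypts, can read (and filter,
# log, or leak) the plaintext, then re-encrypts it for Bob.
hop1 = SealedBox(relay.public_key).encrypt(message)
plaintext_at_relay = SealedBox(relay).decrypt(hop1)   # the operator sees everything
hop2 = SealedBox(bob.public_key).encrypt(plaintext_at_relay)
assert SealedBox(bob).decrypt(hop2) == message
```

In the second case the messages are encrypted on every link, yet anyone who controls or compromises the relay can read all of them, which is exactly the property that makes mass surveillance easy.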

More worrisome are efforts on the part of governments to find vulnerabilities in the systems we all use and then, instead of disclosing them, to keep those vulnerabilities secret. If a government found that an electric blanket habitually electrocuted its user in normal use, we would expect the product to be recalled. In this case, the idea is instead to leave the flaw in place and use it to help law enforcement.

None of this is new. "Lawful access" is the same bad idea that people knowledgeable about network and computer security have been trying to stop for more than two decades. The desire for this access keeps coming back, because it sounds so reasonable on its surface and because it makes certain law enforcement jobs easier. But it is not reasonable. It is an idea that endangers every citizen and the whole society. Governments in the G7 and around the world need to resist this seemingly benign request from their law enforcement establishments, and resolve instead to make sure that their citizens have the best tool to protect them: strong encryption, from end to end, deployed widely, to defend society against its attackers.

WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints. Submit an op-ed at opinion@wired.com.