Breaking Encryption Hurts Our Defenses
The so-called “Five Eyes” recently pledged to force “encryption backdoors” into private systems. As I just wrote for Real Clear Policy, this has serious humanitarian implications, but it would also seriously harm those countries’ own cyber defenses.
Encryption backdoors operate on a relatively simple premise: The encryption keys used to keep information secret—which are usually only accessible to an end user, such as the recipient of a Signal message or the owner of an iPhone—are also held by a third party, which can then access systems or data as desired. (Mashable has a great explainer here.)
Right now, companies like Apple do not have access to their users’ encryption keys; as a result, they cannot quickly bypass user encryption. The argument often made by national-level policymakers is that this impedes law enforcement investigations, usually in national security or counterterrorism cases, where it might be more convenient to use the key shielding information or a device than to find another way in. National governments around the world have therefore tried to access (or have likely accessed) caches of encryption keys for private systems. This could occur by (a) having the government itself hold those keys or (b) having the company retain the keys and then hand over data in certain situations. (Or, (c), requiring companies to store encryption keys and then stealing them quietly.)
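All three arrangements share the same property, which a toy sketch can make concrete. (This uses a simple XOR pad purely for illustration; real devices use ciphers like AES, but the key logic is identical: whoever holds a copy of the key, by design or by theft, can decrypt.)

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy cipher: XOR each byte of the message with the key.
    # Illustration only -- NOT the cipher actual phones or apps use.
    return bytes(b ^ k for b, k in zip(data, key))

message = b"private chat"
user_key = secrets.token_bytes(len(message))  # held only by the end user

ciphertext = xor_cipher(message, user_key)

# Only someone holding the key can recover the plaintext...
assert xor_cipher(ciphertext, user_key) == message

# ...so a "backdoor" is simply a second copy of that same key.
# Anyone who obtains the escrowed copy -- lawfully or by theft --
# decrypts everything the original key protected.
escrowed_copy = user_key
assert xor_cipher(ciphertext, escrowed_copy) == message
```

The sketch shows why the three escrow models are equivalent from an attacker’s perspective: it does not matter whether the second copy of the key sits with a government or a company, only that it exists somewhere to be stolen.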
However, allowing anyone access to encryption keys in private systems (think iPhones, Windows laptops, WhatsApp, Google services, etc.) isn’t a nice, easy “fix” to search and seizure challenges in the digital age. The potential benefit does not come without serious potential cost. Doing so would hurt the entire country’s cyber defenses: governments, corporations, consumers, and citizens alike.
After the 2015 terrorist attack in San Bernardino, when the FBI wanted Apple to unlock the perpetrator’s iPhone on its behalf, Apple denied the request, with CEO Tim Cook then proclaiming that “you can’t have a backdoor that’s only for the good guys.” He’s right. There is no way to ensure that only certain governments would be able to use the encryption keys to access devices and information. In other words: The Five Eyes are arguing for vulnerabilities that would harm us all, on the false premise that they or some company could keep those keys entirely safe from theft.
If you give a government all the private keys to the iPhones sold in the U.S., another government or cybercrime group or even individual hacker could come along and steal those keys—thereby gaining access to everyone’s systems and information. In 2016, it became clear that the NSA couldn’t even keep its own cyber weapons safe when the “Shadow Brokers” stole an entire cache of zero-day exploits that were later used in multiple global ransomware attacks. Such exploit reuse is a growing problem, as I’ve theorized in my own writing; as Gil Baram wrote for the Council on Foreign Relations just a few months ago; and as the New York Times Editorial Board has even discussed.
To be clear, nobody should expect any organization to prevent all cyber attacks all the time. But that is exactly the point: What happens with exploit theft could—and very likely would—happen with third-party-controlled encryption keys. A malicious actor would steal the keys from a storage site, gain access to innumerable devices in sensitive places, and then wreak havoc. Secret files could be leaked; phone conversations could be spied upon; Internet of Things devices could be remotely hacked and used to turn off urban traffic systems or electrical grid sensors.
I am hardly the first to point out the “backfire” that would result from mandated encryption backdoors. Some of the world’s foremost cybersecurity experts argued this in the seminal paper “Keys Under Doormats.” Bruce Schneier (one of the authors of the Doormats paper) has repeatedly argued this point—here, here, and here, for instance—as have the IEEE, New America’s Open Technology Institute, researchers at Stanford’s Center for Internet and Society, the Chief Technology Officer of Sophos, and cryptography expert Matthew Green.
Congress’ Encryption Working Group wrote in 2016 that backdoors cause more harm than good. That same year, the European Union Agency for Network and Information Security (ENISA) published a report reaching the same conclusion. Even the group reviewing the NSA’s surveillance programs in 2013 found that the American government should “fully support and not undermine efforts to create encryption standards” and “increase the use of encryption and urge US companies to do so.” We could sit here and get a headache from all the pieces, written by recognized experts, that refute the same, repeated argument for government-mandated encryption backdoors.
Yet, time and time again, the same ludicrous argument persists: that it’s “worth” compromising encryption for the sake of law enforcement investigations or intelligence collection. I have no idea whether politicians in Congress and similar bodies read these articles or papers. But the problem seems relatively straightforward: Misunderstandings of encryption, coupled with overblown fears of terrorism (and that’s ‘terrorism’ in a narrowly defined sense), lead to illogical cost-benefit evaluations, positions where compromising everyone’s security is deemed worth it in order to investigate a case after the fact or to stop a hypothetical incident that is unlikely to begin with.
But let’s be clear: Pervasive encryption backdoors would undermine one of the most crucial security technologies in the world. Encryption protects intellectual property, commercial transactions, military communications, government secrets, corporate ledgers, consumer browsing, medical databases, connected infrastructure, leaks to journalists living under oppressive regimes, and innumerable other systems, devices, and pieces of data that are vital to the functioning of our digital age. It’s easy to implement, yet it remains remarkably difficult to break, favoring the cyber defender in critical ways. Compromising encryption is not responsible; it would seriously hurt our national security.
Justin Sherman is studying computer science and political science at Duke University. Justin researches federal cyber policy and digital strategy with the Laboratory for Analytic Sciences, an industry-intelligence-academia group focused on cyber and national security. The views expressed here are his own.