This ‘Demonically Clever’ Backdoor Hides In a Tiny Slice of a Computer Chip

Security flaws in software can be tough to find. Purposefully planted ones—hidden backdoors created by spies or saboteurs—are often even stealthier. Now imagine a backdoor planted not in an application, or deep in an operating system, but even deeper, in the hardware of the processor that runs a computer. And now imagine that silicon backdoor is invisible not only to the computer’s software, but even to the chip’s designer, who has no idea that it was added by the chip’s manufacturer, likely in some far-flung Chinese factory. And that it’s a single component hidden among hundreds of millions or billions. And that each one of those components is less than a thousandth of the width of a human hair.

In fact, researchers at the University of Michigan haven’t just imagined that computer security nightmare; they’ve built and proved it works. In a study that won the “best paper” award at last week’s IEEE Symposium on Security and Privacy, they detailed the creation of an insidious, microscopic hardware backdoor proof-of-concept. And they showed that by running a series of seemingly innocuous commands on their minutely sabotaged processor, a hacker could reliably trigger a feature of the chip that gives them full access to the operating system. Most disturbingly, they write, that microscopic hardware backdoor wouldn’t be caught by practically any modern method of hardware security analysis, and could be planted by a single employee of a chip factory.

“Detecting this with current techniques would be very, very challenging if not impossible,” says Todd Austin, one of the computer science professors at the University of Michigan who led the research. “It’s a needle in a mountain-sized haystack.” Or as Google engineer Yonatan Zunger wrote after reading the paper: “This is the most demonically clever computer security attack I’ve seen in years.”

Analog Attack

The “demonically clever” feature of the Michigan researchers’ backdoor isn’t just its size, or that it’s hidden in hardware rather than software. It’s that it violates the security industry’s most basic assumptions about a chip’s digital functions and how they might be sabotaged. Instead of a mere change to the “digital” properties of a chip—a tweak to the chip’s logical computing functions—the researchers describe their backdoor as an “analog” one: a physical hack that takes advantage of how the actual electricity flowing through the chip’s transistors can be hijacked to trigger an unexpected outcome. Hence the backdoor’s name: A2, which stands for both Ann Arbor, the city where the University of Michigan is based, and “Analog Attack.”

Here’s how that analog hack works: After the chip is fully designed and ready to be fabricated, a saboteur adds a single component to its “mask,” the blueprint that governs its layout. That single component or “cell”—of which there are hundreds of millions or even billions on a modern chip—is made out of the same basic building blocks as the rest of the processor: wires and transistors that act as the on-or-off switches that govern the chip’s logical functions. But this cell is secretly designed to act as a capacitor, a component that temporarily stores electric charge.

This diagram shows the size of the processor created by the researchers compared with the size of the malicious cell that triggers its backdoor function. (University of Michigan)

Every time a malicious program—say, a script on a website you visit—runs a certain, obscure command, that capacitor cell “steals” a tiny amount of electric charge and stores it in the cell’s wires without otherwise affecting the chip’s functions. With every repetition of that command, the capacitor gains a little more charge. Only after the “trigger” command is sent many thousands of times does that charge hit a threshold where the cell switches on a logical function in the processor to give a malicious program the full operating system access it wasn’t intended to have. “It takes an attacker doing these strange, infrequent events in high frequency for a duration of time,” says Austin. “And then finally the system shifts into a privileged state that lets the attacker do whatever they want.”

That capacitor-based trigger design means it’s nearly impossible for anyone testing the chip’s security to stumble on the long, obscure series of commands to “open” the backdoor. And over time, the capacitor also leaks out its charge again, closing the backdoor so that it’s even harder for any auditor to find the vulnerability.
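
To make that mechanism concrete, here is a minimal software model of the charge-up-and-leak behavior, written in Python. Every constant (the charge added per trigger, the leak rate, the threshold) is invented for illustration; the real values depend on the fabricated capacitor and aren’t given here.

```python
# A toy model of the A2-style analog trigger: charge accumulates each
# time the trigger wire toggles, leaks away every cycle, and flips a
# privilege bit once it crosses a threshold. All constants are made up.

class AnalogTrigger:
    CHARGE_PER_TOGGLE = 1.0   # charge added each time the trigger fires
    LEAK_PER_CYCLE = 0.05     # charge that drains away every clock cycle
    THRESHOLD = 5000.0        # level at which the privilege bit flips on

    def __init__(self):
        self.charge = 0.0
        self.privileged = False  # models the escalated-privilege output

    def clock_cycle(self, trigger_fired: bool) -> None:
        if trigger_fired:
            self.charge += self.CHARGE_PER_TOGGLE
        # The capacitor always leaks, so occasional triggers never build
        # up; only a sustained, rapid burst can cross the threshold.
        self.charge = max(0.0, self.charge - self.LEAK_PER_CYCLE)
        self.privileged = self.charge >= self.THRESHOLD


trigger = AnalogTrigger()

# Ordinary workloads rarely hit the trigger command; charge stays at zero.
for _ in range(100_000):
    trigger.clock_cycle(trigger_fired=False)
assert not trigger.privileged

# An attacker issues the obscure command on every cycle until it opens.
repetitions = 0
while not trigger.privileged:
    trigger.clock_cycle(trigger_fired=True)
    repetitions += 1
print(f"Backdoor opened after {repetitions} trigger repetitions")
# Once the attacker stops, the charge leaks back out and the door closes.
```

Because the leak runs continuously, a program that hits the trigger command only occasionally never accumulates charge, which is why routine security testing is so unlikely to open the backdoor by accident.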

New Rules

Processor-level backdoors have been proposed before. But by building a backdoor that exploits the unintended physical properties of a chip’s components—their ability to “accidentally” accumulate and leak small amounts of charge—rather than their intended logical function, the researchers say their backdoor component can be a thousandth the size of previous attempts. And it would be far harder to detect with existing techniques like visual analysis of a chip or measuring its power use to spot anomalies. “We take advantage of these rules ‘outside of the Matrix’ to perform a trick that would [otherwise] be very expensive and obvious,” says Matthew Hicks, another of the University of Michigan researchers. “By following that different set of rules, we implement a much more stealthy attack.”

The Michigan researchers went so far as to build their A2 backdoor into a simple open-source OR1200 processor to test out their attack. Since the backdoor mechanism depends on the physical characteristics of the chip’s wiring, they even tried their “trigger” sequence after heating or cooling the chip to a range of temperatures, from negative 13 degrees to 212 degrees Fahrenheit, and found that it still worked in every case.

Here you can see the experimental setup the researchers used to test their backdoored processor at different temperatures. (University of Michigan)

As dangerous as their invention sounds for the future of computer security, the Michigan researchers insist that their intention is to prevent such undetectable hardware backdoors, not to enable them. They say it’s very possible, in fact, that governments around the world may have already thought of their analog attack method. “By publishing this paper we can say it’s a real, imminent threat,” says Hicks. “Now we need to find a defense.”

But given that current techniques for detecting processor-level backdoors wouldn’t spot their A2 attack, they argue that a new method is required: Specifically, they say that modern chips need to have a trusted component that constantly checks that programs haven’t been granted inappropriate operating-system-level privileges. Ensuring the security of that component, perhaps by building it in secure facilities or making sure the design isn’t tampered with before fabrication, would be far easier than ensuring the same level of trust for the entire chip.
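
The researchers describe this trusted component only at a high level, so the sketch below is a loose illustration of the idea rather than their actual design: a small, separately verified monitor records every legitimate privilege grant and flags any process found running privileged without one. The PrivilegeMonitor class and its interface are hypothetical.

```python
# A hypothetical sketch of a trusted privilege monitor: it tracks which
# processes were legitimately elevated and flags any mismatch between
# that record and the privilege state the processor actually reports.

class PrivilegeMonitor:
    def __init__(self):
        self.granted: set[int] = set()  # PIDs legitimately given supervisor mode

    def record_grant(self, pid: int) -> None:
        # Called on the sanctioned elevation path (e.g., a syscall), so
        # the monitor knows this grant was made by the operating system.
        self.granted.add(pid)

    def record_revoke(self, pid: int) -> None:
        self.granted.discard(pid)

    def check(self, pid: int, running_privileged: bool) -> bool:
        # True if the observed state is consistent with recorded grants;
        # a privilege bit flipped by a hardware backdoor fails this test.
        return (not running_privileged) or pid in self.granted


monitor = PrivilegeMonitor()
monitor.record_grant(pid=1)  # e.g., init legitimately runs privileged
assert monitor.check(pid=1, running_privileged=True)

# A2's trigger silently flips the privilege bit for an ordinary process:
assert not monitor.check(pid=4242, running_privileged=True)  # caught
```

The appeal of this design, as the researchers frame it, is that only the small checker has to be manufactured and verified under tight control, rather than the entire chip.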

They admit that implementing their fix could take time and money. But without it, their proof-of-concept is intended to show how deeply and undetectably a computer’s security could be corrupted before it’s ever sold. “I want this paper to start a dialogue between designers and fabricators about how we establish trust in our manufactured hardware,” says Austin. “We need to establish trust in our manufacturing, or something very bad will happen.”

Here’s the Michigan researchers’ full paper.


SOURCE: Wired
