To keep hardware safe, cut out the code’s clues
Imagine you’re a chef with a highly sought-after recipe. You write your top-secret instructions in a journal to make sure you remember them, but the recipe’s location is evident from the folds and tears on the edges of that often-referenced page.
Much like recipes in a cookbook, the instructions that programs execute are stored at specific locations in a computer’s physical memory. The standard defense — referred to as “address space layout randomization” (ASLR) — scatters this precious code to different places, but hackers can now uncover its new locations. Instead of attacking the software directly, they use microarchitectural side-channel attacks, which exploit the hardware to identify which memory areas are used most frequently. From there, they can reuse snippets of the program’s own code to reveal passwords and make critical administrative changes in the system (an approach known as a code-reuse attack).
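As a quick illustration of what ASLR randomizes, consider the small C program below. It is a sketch for this article, not code from the MIT work: compiled as a position-independent executable on Linux, it prints the address of one of its own functions, and that address typically changes from run to run, which is exactly the moving target a code-reuse attacker must pin down.

```c
/* Minimal ASLR illustration (an assumption for this article, not the researchers' code).
 * Build on Linux with: gcc -fPIE -pie aslr_demo.c -o aslr_demo
 * Each run usually prints a different address for gadget(), because ASLR loads the
 * position-independent binary at a randomized base. */
#include <stdio.h>

void gadget(void) {
    /* Stand-in for a short instruction sequence an attacker would like to reuse. */
}

int main(void) {
    printf("gadget() lives at %p this run\n", (void *)gadget);
    return 0;
}
```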
To enhance ASLR’s effectiveness, researchers from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) have found a way to make these footprints vanish. Their “Oreo” method mitigates hardware attacks by removing randomized bits of addresses that lead to a program’s instructions before they’re translated to a physical location. It scrubs away traces of where code gadgets (or short sequences of instructions for specific tasks) are located before hackers can find them, efficiently enhancing security for operating systems like Linux.
Oreo has three layers, much like its tasty namesake. Between the virtual address space (which is used to reference program instructions) and the physical address space (where the code is located), Oreo adds a new “masked address space.” This re-maps code from randomized virtual addresses to fixed locations before it is executed within the hardware, making it difficult for hackers to trace the program’s original locations in the virtual address space through hardware attacks.
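The idea can be sketched in a few lines of C. Everything below is a conceptual simplification; the bit positions, names, and mask are assumptions for illustration rather than Oreo's actual hardware design. The point is simply that once the randomized offset bits are stripped, two differently randomized runs present identical addresses to the rest of the machine.

```c
/* Conceptual sketch of a "masked address space" (assumed bit layout, not Oreo's real design).
 * Suppose ASLR randomizes bits 32-47 of a code address. Clearing those bits before any
 * hardware structure sees the address means caches and predictors hold no trace of the
 * secret offset, even though software still works with the randomized virtual address. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define RANDOMIZED_BITS 0x0000FFFF00000000ULL  /* hypothetical bits chosen by ASLR */

static uint64_t to_masked_address(uint64_t virtual_addr) {
    return virtual_addr & ~RANDOMIZED_BITS;    /* same code, same masked location, every run */
}

int main(void) {
    uint64_t run1 = 0x0000555512340000ULL;     /* one run's randomized code address */
    uint64_t run2 = 0x0000777712340000ULL;     /* same code, different randomization */
    printf("run 1 masks to 0x%016" PRIx64 "\n", to_masked_address(run1));
    printf("run 2 masks to 0x%016" PRIx64 "\n", to_masked_address(run2));
    return 0;
}
```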
“We got the idea to structure it in three layers from Oreo cookies,” says Shixin Song, an MIT PhD student in electrical engineering and computer science (EECS) and CSAIL affiliate who is the lead author of a paper about the work. “Think of the white filling in the middle of that treat — our version of that is a layer that essentially whites out traces of gadget locations before they end up in the wrong hands.”
Senior author Mengjia Yan, an MIT associate professor of EECS and CSAIL principal investigator, believes Oreo’s masking abilities could make address space layout randomization more secure and reliable.
“ASLR was deployed in operating systems like Windows and Linux, but within the last decade, its security flaws have rendered it almost broken,” says Yan. “Our goal is to revive this mechanism in modern systems to defend against microarchitectural attacks, so we’ve developed a software-hardware co-design mechanism that prevents leaking the secret offsets that tell hackers where the gadgets are.”
The CSAIL researchers will present their findings about Oreo at the Network and Distributed System Security Symposium later this month.
Song and her coauthors evaluated how well Oreo could protect Linux by simulating hardware attacks in gem5, a platform commonly used to study computer architecture. The team found that it could prevent microarchitectural side-channel attacks without hampering the software it protects.
Song observes that these experiments demonstrate how Oreo is a lightweight security upgrade for operating systems. “Our method introduces marginal hardware changes by only requiring a few extra storage units to store some metadata,” she says. “Luckily, it also has a minimal impact on software performance.”
While Oreo adds an extra step to program execution by scrubbing away revealing bits of data, it doesn’t slow down applications. This efficiency makes it a worthwhile security boost to ASLR for page-table-based virtual memory systems beyond Linux, including those found on major hardware platforms from Intel, AMD, and Arm.
In the future, the team will look to address speculative execution attacks, where hackers fool computers into predicting their next tasks and then steal the hidden data this process leaves behind. A case in point: the infamous Meltdown and Spectre attacks disclosed in 2018.
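The pattern behind those attacks is well documented. The fragment below is the classic Spectre-v1-style gadget from the public disclosures, not code from this paper: a mispredicted bounds check lets the processor transiently read out of bounds and leave a secret-dependent footprint in the cache, which a side channel can later recover.

```c
/* Classic Spectre-v1-style pattern, paraphrased from the public Spectre write-ups
 * (illustrative only; not part of the Oreo work). */
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
uint8_t array2[256 * 4096];   /* one cache-line "slot" per possible secret byte value */
size_t  array1_size = 16;

void victim(size_t x) {
    if (x < array1_size) {                             /* branch the CPU may mispredict */
        uint8_t secret = array1[x];                    /* transient out-of-bounds read */
        volatile uint8_t tmp = array2[secret * 4096];  /* secret-dependent cache access */
        (void)tmp;
    }
}
```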
The team emphasizes that, to defend against speculative execution attacks, Oreo needs to be coupled with other security mechanisms, such as Spectre mitigations. The same caveat applies to deploying Oreo in larger systems.
“We think Oreo could be a useful software-hardware co-design platform for a broader range of applications,” says Yan. “In addition to targeting ASLR, we’re working on new methods that can help protect the critical cryptographic libraries widely used to safeguard information across people's network communication and cloud storage.”
Song and Yan wrote the paper with MIT EECS undergraduate researcher Joseph Zhang. The team’s work was supported, in part, by Amazon, the U.S. Air Force Office of Scientific Research, and ACE, a center within the Semiconductor Research Corporation sponsored by the U.S. Defense Advanced Research Projects Agency (DARPA).