
The Great Refactor: Securing Critical Open-Source Code

Yes, it's about Rust
September 8, 2025, 9:11 am

Introduction

We are finally done talking about (digital) supply chains. I think. In the meantime, I wanted to switch it up and take a look at a very interesting policy proposal that came across my desk. “The Great Refactor,” by authors from the Institute for Progress (IFP), is an essay proposing a way “to secure critical open-source code against memory safety exploits by automating code hardening at scale” in order to “secure US critical infrastructure and our software supply chains” (okay, I lied, this is still about supply chains). There are a lot of buzzwords in there, but the paper is essentially about:

  • Modernizing and securing critical open-source systems at scale against a type of issue that occurs way more frequently than anyone wants to admit (but everyone knows about).

    (hmm I wonder if there’s any organization other than the US government out there that would be interested in something like this?)

Who

Before we jump into the thick of the paper, let’s put it into context. First, who is saying this? The Institute for Progress is (in their words) “a non-partisan think tank focused on innovation policy.” Innovation policy is yet more buzzwords, but it’s vaguely about driving progress across technology and research (aka "how do we get the government to fund cool tech sh*t"). The specific authors from the IFP who put together the essay are Herbie Bradley (PhD student @ University of Cambridge and AI policy advisor) and Girish Sastry (formerly of OpenAI and also an AI policy advisor).

The Great Refactor (yes, it’s to Rust)

As we mentioned above, the main thrust of the piece is to create a focused effort to rewrite critical software systems from memory-unsafe languages, i.e., C and C++, into something like Rust, which is memory safe by design. This seems like an almost negligible technical difference to those who don’t program, but research shows around 70% of serious vulnerabilities in large, complex C/C++ codebases stem from memory safety bugs (Microsoft and Google’s Chromium team have both independently reported roughly this figure). Based on this, the authors propose that we “systematically rewrite key open-source libraries into the Rust programming language, which offers strong guarantees of memory safety” and “target widely used, under-resourced libraries historically responsible for severe vulnerabilities.” And yes, they propose using AI to help handle this large effort.
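To make that "negligible technical difference" concrete, here's a minimal sketch (my example, not the paper's) of the bug class at stake: an out-of-bounds read that C would happily perform, which safe Rust simply refuses:

```rust
fn main() {
    let buf = [0u8; 4];
    let i = 7; // imagine an attacker-controlled index

    // In C, `buf[i]` would silently read past the end of the array --
    // a classic out-of-bounds access, the raw material of memory exploits.
    // Safe Rust bounds-checks every access; `.get()` surfaces the failure
    // as a value instead of undefined behavior.
    match buf.get(i) {
        Some(byte) => println!("read byte: {byte}"),
        None => println!("index {i} rejected: out of bounds"),
    }
}
```

Even a plain `buf[i]` in Rust would panic with a clear error rather than hand an attacker the contents of adjacent memory.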

The Problem

I’m not sure if you’re aware, but the security of most systems and applications is held together by duct tape and prayers, including (or especially?) American ones. This is known and often exploited by different actors and organizations all over the world. Many of these holes in security stem from a lack of focus and work directed specifically at hardening and improving software security. Most developers and teams are not incentivized to create the most secure systems, but rather ones that are secure enough*. On top of that, many are built by open source communities, which would love to prioritize security but are usually maintained by like three people in their spare time (with one of them at a Furry conference at any given moment). As the authors put it, “security is limited by how much attention we can afford to give.”

*usually enough to go to market and start making $$

The Solution

With this in mind, the suggested policy recommends using the latest advancements in generative AI and coding tools to target the type of vulnerability where we can get the most bang for our buck: memory safety bugs. These let software access a computer’s memory in, uh, an unsafe way, giving hostile actors the ability to insert malicious bits of code or crash devices. Some of the most disastrous cyberattacks, with damages in the billions, trace back to memory safety issues. Since many of the original programming languages used to write critical systems have no inherent defense against these bugs, rewriting those systems in a language built for memory safety would close off a significant number of the attack vectors currently available to attackers. Simple in theory, absolutely brutal in practice (anyone who's tried to port a legacy codebase knows the pain).
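For another flavor of the same bug class, here's a tiny illustrative snippet (mine, not the proposal's) showing how Rust's ownership rules shut down use-after-free, the pattern behind many of those billion-dollar incidents:

```rust
fn main() {
    let data = String::from("session token");
    let owner = data; // ownership moves here; `data` can no longer be used

    // println!("{data}"); // <- uncommenting this is a COMPILE error:
    // "borrow of moved value". In C/C++, the analogous use-after-free
    // compiles fine and hands an attacker a dangling pointer into freed
    // memory at runtime.

    println!("{owner}");
}
```

The bug is caught before the program ever ships, which is the whole pitch: the language turns a runtime exploit into a build failure.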

The proposal puts forward both tactical solutions (using AI to rewrite critical libraries in Rust) and some strategic considerations. From a governance perspective, the authors suggest creating a "Focused Research Organization (FRO)", which is basically a fancy way of saying "let's make a new organization because the existing ones aren't working". Unlike startups or traditional academic research groups // labs, this FRO should be able to “marshal long-term funding and top-tier engineering talent while staying mission-focused and driven by the public interest.” The broad strategy the organization would follow goes something like this: identify the most important libraries to secure over a 3–5 year timeline, provide robust validation of the translated code, and, over time, systematically develop all of this into a basis for defense-focused cybersecurity and a repeatable adoption playbook (a timeline that assumes everything goes according to plan, which... gestures vaguely at Healthcare.gov, the F-35 program, and every other government tech initiative).

From a practical standpoint, the authors are right that AI-assisted code translation could work for this. Modern LLMs are surprisingly good at Rust conversion (they’re definitely better than they were at the beginning of the bubble), and the nice thing about Rust is that the language's compiler will catch a whole class of subtle bugs that automated translation might introduce. As always, the real challenge isn't technical; it’s people. Marshaling funding and labor, and convincing organizations to actually adopt the refactored code and deal with the inevitable integration headaches, is where the heavy lifting comes in.
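As a rough illustration of what "the compiler catches translation bugs" means in practice (a hypothetical `greet` function of my own invention, not anything from the paper):

```rust
// A naive line-by-line translation of this C idiom:
//
//     char *greet(void) { char buf[32]; sprintf(buf, "hi"); return buf; }
//
// would try to return a reference to a stack-local buffer. rustc rejects
// that outright at compile time ("cannot return reference to local
// variable"), so an automated translator is pushed toward the safe idiom:
// return an owned, heap-allocated String instead.
fn greet(name: &str) -> String {
    format!("hi, {name}")
}

fn main() {
    println!("{}", greet("refactor"));
}
```

The C version above is undefined behavior that most C compilers only warn about; in Rust the equivalent mistake is not expressible in safe code, which is exactly the safety net you want under an LLM doing bulk translation.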

Conclusion

The authors conclude the paper with these specific suggestions:

  1. Establish dedicated FRO funding:

    1. Either use existing agencies, e.g., DARPA and NSF, or create some sort of new program that would allocate ~$70 million over 5 years to “The Great Refactor.” To avoid, uh, grifting, the funding should be tied to specific milestones or achievements.

  2. Create an oversight and advisory structure:

    1. As governments are wont to do, they should develop an advisory and leadership board that can help align the interests and priorities of stakeholders across open-source communities, nations, and private enterprise.

  3. Integrate with existing procurement and compliance frameworks:

    1. Leverage these frameworks and some statecraft to create market-based incentives to use memory-safe options and libraries when building critical gov't systems and applications.

  4. Leverage industry co-investment:

    1. Use some good ol’ public-private partnerships to recruit the cracked Rust devs in industry into this effort, and leverage the institutional learning from their respective tech giants to make sure this grand refactoring actually gets done.

Overall, this is an ambitious policy proposal that (on paper) does seem like it would significantly improve our currently very perilous (and frightening) security posture across critical systems and open source projects. The technical approach is sound; Rust really would solve a lot of these problems. But the organizational execution? That's where I have serious doubts. Either way, I am the perfect target audience for this type of software and policy wonkism, so there's a good chance anyone who could actually make this happen is still trying to figure out how to open their mobile email // doesn't know what Rust is // thinks "memory safety" is about not forgetting where you put your keys.

So what can we mere mortals learn from this software wonk fever dream? Three things:

  • First, map your actual digital supply chain. Not the one you think you have, the one you actually have. That includes every library, every dependency, every third-party component that could bring down your system.

  • Second, from that inventory, identify where you're most exposed to memory safety issues. If you're running critical stuff on C/C++ libraries from the early 2000s, you probably already know the answer.

  • Third, maybe start listening to that one dev who's been evangelizing Rust for the past two years. They might be onto something.

