Elon Musk just dropped the source code for the X recommendation algorithm (again). We dive into the Rust code, the 75x reply multipliers, and why ‘open source’ might just be a PR stunt without the weights.

It is January 2026, and Elon Musk has once again decided that the best way to prove X (formerly Twitter) isn’t a shadowbanning hellscape is to dump its source code on GitHub.
“Transparency,” he calls it. “A distraction,” the cynics whisper. “Where are the weights?” Vitalik Buterin asks, probably.
On January 20, 2026, X released the latest iteration of its recommendation algorithm. This isn’t the 2023 “freedom of speech” release. This is the Grok-era release. It is leaner, meaner, and almost entirely run by AI.
So here is what we found in the repo, what it means for your reach, and whether this is actual transparency or just code theatre.
The Tech Stack: Rust, Python, and Grok
The biggest shift from the 2023 release is the architecture. The old Scala spaghetti is mostly gone, replaced by a high-performance Rust backend and Python for the ML pipelines.
But the real star—or villain, depending on your view—is Grok.
The core recommendation engine is now fundamentally a Grok-based Transformer model. Unlike the previous heuristic-heavy approach (where engineers manually tuned parameters like author_is_elon = true), the new system is an embedding-based “Heavy Ranker.”
It works in two main stages:
1. Candidate Generation: It pulls 1,500 tweets. 50% from people you follow (In-Network), 50% from people you don’t (Out-of-Network).
2. The Heavy Ranker: A massive neural network predicts the probability of you engaging with each tweet.
The “Out-of-Network” graph is now entirely powered by real-time embeddings. If you liked a tweet about “Rust compilers,” the graph instantly finds 10,000 other tweets near that vector and serves them up. No human rules required.
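The two stages above can be sketched in a few lines of Python. This is a toy illustration of the described flow, not code from the repo; the function names, pool structures, and the brute-force nearest-neighbor search are all assumptions (a production system would use an approximate nearest-neighbor index, not a full scan).

```python
import random

def generate_candidates(in_network_pool, out_of_network_pool, n=1500):
    """Stage 1: pull ~1,500 candidates, split 50/50 between
    followed accounts (In-Network) and everyone else (Out-of-Network)."""
    half = n // 2
    in_net = random.sample(in_network_pool, min(half, len(in_network_pool)))
    out_net = random.sample(out_of_network_pool, min(half, len(out_of_network_pool)))
    return in_net + out_net

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def out_of_network_neighbors(liked_embedding, tweet_embeddings, k=10):
    """Out-of-Network retrieval: find the k tweets whose embeddings sit
    closest to the vector of something you just liked."""
    ranked = sorted(tweet_embeddings.items(),
                    key=lambda kv: cosine_similarity(liked_embedding, kv[1]),
                    reverse=True)
    return [tweet_id for tweet_id, _ in ranked[:k]]
```

The point of the sketch: once retrieval is "nearest vectors win," one like about Rust compilers is enough to flood stage 2 with everything nearby in embedding space.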
Traffic Secrets: How to Game the System (Allegedly)
While the neural nets are a black box, the reward functions in the open-sourced code give us the cheat codes. If you want to go viral in 2026, here is the math:
1. The “Reply” is King (75x Multiplier)
This is the single most important number in the codebase.
* Like: 1 point.
* Retweet: 20 points.
* Reply on its own: barely moves the needle.
* Reply where the author replies back: 75 points.
The algorithm is desperate for conversations. A “banger” tweet with 10k likes stands no chance against a controversial tweet with 100 arguments in the comments. If you want reach, reply to your replies.
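The math above is easy to see as an expected-value calculation. A minimal sketch, assuming the ranker scores each tweet as P(action) × weight summed over actions; the weight names and probabilities are illustrative, only the 1/20/75 multipliers come from the article:

```python
# Multipliers from the open-sourced reward function (per the article).
WEIGHTS = {
    "like": 1.0,
    "retweet": 20.0,
    "reply_with_author_response": 75.0,
}

def engagement_score(predicted_probs):
    """Expected engagement value: sum of P(action) * weight."""
    return sum(predicted_probs.get(action, 0.0) * w
               for action, w in WEIGHTS.items())

# A "banger" that gets likes vs. a post that starts arguments:
banger = engagement_score({"like": 0.5, "retweet": 0.05})
# 0.5 * 1 + 0.05 * 20 = 1.5
argument = engagement_score({"like": 0.1, "reply_with_author_response": 0.04})
# 0.1 * 1 + 0.04 * 75 = 3.1
```

A mere 4% chance of an author-engaged reply beats a 50% chance of a like by a factor of two. That is the whole "reply to your replies" strategy in one line of arithmetic.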
2. Links are Reach Suicide
The code explicitly penalizes posts with external links in the body. The platform wants “Time on Site.” If you try to send a user to Substack or YouTube, the algorithm will bury you.
* Strategy: Post the link in the replies? The code seems to check for that too now (Thread-level penalties).
* New Meta: Post a screenshot of the article and say “Link in bio.” Welcome to Instagram.
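Mechanically, a thread-level link penalty might look like the sketch below. The regex, the 0.2 demotion factor, and the function names are all assumptions for illustration; the repo's actual feature names and penalty values differ.

```python
import re

LINK_PATTERN = re.compile(r"https?://\S+")
LINK_PENALTY = 0.2  # assumed multiplicative demotion, for illustration only

def apply_link_penalty(score, tweet_text, thread_texts=()):
    """Demote the score if the tweet -- or any post in the author's
    own thread (the 'link in the replies' trick) -- contains a URL."""
    texts = (tweet_text, *thread_texts)
    if any(LINK_PATTERN.search(t) for t in texts):
        return score * LINK_PENALTY
    return score
```

Checking `thread_texts` as well as the root tweet is what closes the "I'll just put the link in the first reply" loophole the article describes.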
3. The “Time on Tweet” Metric
This is a new addition. The ranking model now inputs dwell_time. If a user stops scrolling to read your long-form post (even if they don’t like it), you get points. This explains why your feed is full of 2,000-word “threads” that could have been an email.
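As a feature, `dwell_time` is just another weighted input. A hedged sketch, where the per-second weight and the cap are invented for illustration (only the feature name comes from the article):

```python
DWELL_WEIGHT = 0.005   # assumed score points per second of reading
DWELL_CAP = 120        # assumed cap, so a backgrounded tab doesn't win the feed

def dwell_bonus(dwell_seconds):
    """Reward time-on-tweet, capped at DWELL_CAP seconds."""
    return min(dwell_seconds, DWELL_CAP) * DWELL_WEIGHT
```

Note that nothing in this signal distinguishes "absorbed in a great essay" from "hate-reading a terrible take." Both read as dwell, which is exactly the problem the next section gets at.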
The “Dumb” Algorithm & The Limits of Transparency
Musk admitted in a Spaces session that the current algorithm is “dumb.” He is right, but not for the reason he thinks.
It is dumb because it optimizes for short-term dopamine, not long-term satisfaction.
And this brings us to the elephant in the room: The Weights.
Open-sourcing the code (model.forward()) without open-sourcing the weights (the 500GB file of learned parameters) is like giving someone the blueprints to a Ferrari engine but not telling them what fuel it runs on or how the ECU is mapped.
We can see how it processes data. We cannot see why it makes specific decisions.
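Here is the code-versus-weights point as a toy example. The forward pass below is fully "open," yet its behavior flips entirely depending on weight values nobody outside X can inspect. The feature names and weight vectors are hypothetical:

```python
def forward(features, weights):
    """The 'open' part: a completely transparent linear scoring function."""
    return sum(f * w for f, w in zip(features, weights))

features = [0.9, 0.1]  # e.g. [rage_bait_signal, informative_signal]

calm_weights = [-1.0, 5.0]  # one possible training outcome
rage_weights = [5.0, -1.0]  # another -- same code, opposite feed

forward(features, calm_weights)  # -0.4: rage-bait buried
forward(features, rage_weights)  #  4.4: rage-bait boosted
```

Auditing `forward()` tells you nothing about which of those two feeds you actually get. Only the weights decide, and the weights stayed home.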
* Did it show you that political rage-bait because of a code rule? No.
* Did it show you that rage-bait because the weights learned that “rage = dwell time”? Yes.
We can’t audit the weights. We can’t audit the training data. So, we technically can’t prove bias. We just know the machinery exists.
Verified Verdict: A Framework, Not a Fix
The January 2026 release is a win for engineering transparency but a neutral move for algorithmic accountability.
It confirms what we suspected: Engagement is the only god X worships.
If you are a creator:
* Start arguments in your comments.
* Stop posting links.
* Write long, captivating hooks to arrest the scroll.
If you are a user:
* Realize that your feed is a mirror of your lizard brain, reflected back at you by a Rust-powered supercomputer.
The code is open. But the black box is darker than ever.