The Loop is Back: Why HRM is the Most Exciting AI Architecture in Years
Years ago, I sat in Jeremy Howard's FastAI class, right at the dawn of a new era. He was teaching us ULMFiT, a method he (& Sebastian…
It's a story about why QKV is magic, my love for the loop, and why HRM might be the blueprint for the next generation of AI reasoning.
My post, written with the help of an LLM (the irony!), is here. I poured my heart into this one:
medium.com/@gedanken.th...
#AI #DeepLearning #RNN #Transformer #HRM
07.08.2025 08:49
The Hierarchical Reasoning Model (HRM) isn't just another model. It's a deep synthesis. It marries the iterative soul of an RNN (minus the BPTT nightmare) with the raw power of modern Attention.
I wrote a deep dive on why this is a full-circle moment for me, going back to the RNN finetuning days.
07.08.2025 08:49
What makes HRM truly special is its ability to "think fast and slow." Its ACT module isn't just a stop signal; it's a cognitive engine that learns to allocate effort.
It's the closest we've come yet to embodying Prof. Kahneman's vision of a System 1/2 mind in code.
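To make "learns to allocate effort" concrete, here is a toy sketch of an ACT-style halting loop. It is not HRM's actual implementation (HRM trains the halt/continue decision with a Q-learning objective); `reasoning_step`, `halting_head`, and the fixed 0.5 threshold are all stand-ins I've assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def reasoning_step(state):
    # Stand-in for one full reasoning segment (H- and L-module updates).
    return np.tanh(state + rng.normal(scale=0.1, size=state.shape))

def halting_head(state):
    # Stand-in for a learned head that maps the current state to a
    # probability of stopping now, i.e. "have I thought enough?".
    return 1.0 / (1.0 + np.exp(-state.mean()))

state = rng.normal(size=8)
max_segments = 16                     # hard cap on thinking time
for segment in range(max_segments):
    state = reasoning_step(state)
    if halting_head(state) > 0.5:     # easy input: stop early; hard input: keep going
        break
print(f"spent {segment + 1} reasoning segment(s)")
```

The point is only that the number of iterations becomes a learned, per-input decision instead of a fixed hyperparameter.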
07.08.2025 08:49
But how does it fix mistakes buried deep in the past? By not letting them stay in the past.
Each new "Thinking Session" (the M-loop) starts with the flawed result of the last one. It forces the model to confront its own errors until the logic is perfect.
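A toy sketch of that outer loop, with a made-up `reasoning_session` standing in for a full HRM forward pass; the key point is that each session receives the previous session's (possibly wrong) state instead of a blank slate, and in the real model gradients are not carried across sessions, which is what avoids BPTT.

```python
import numpy as np

def reasoning_session(state, puzzle):
    # Stand-in for one full forward pass: it refines the previous answer
    # instead of starting over. (Here: move halfway toward the target.)
    return state + 0.5 * (puzzle - state)

def solved(state, puzzle, tol=1e-3):
    return np.max(np.abs(state - puzzle)) < tol

puzzle = np.linspace(-1.0, 1.0, 8)   # toy "correct answer"
state = np.zeros_like(puzzle)        # the first session starts from scratch

for session in range(32):
    state = reasoning_session(state, puzzle)   # confront last session's flawed result
    # In HRM the state would be detached here, so each session is trained
    # with a local loss rather than backpropagating through all sessions.
    if solved(state, puzzle):
        break
print(f"needed {session + 1} thinking session(s)")
```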
07.08.2025 08:49
So how does HRM work? Imagine a tiny, 2-person company.
🧠 A strategic CEO (H-module) who thinks slow, sees the big picture, and sets the overall strategy.
⚡️ A diligent Worker (L-module) who thinks fast, executing the details of the CEO's plan.
This separation allows for truly deep, iterative thought.
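In code, that division of labour is just two nested loops running at different timescales. This is a toy numpy sketch, assuming random linear maps in place of HRM's real attention blocks and made-up loop counts `N_HIGH` / `N_LOW`:

```python
import numpy as np

rng = np.random.default_rng(0)
W_h = rng.normal(scale=0.1, size=(8, 8))   # stand-in for the CEO's (H-module) weights
W_l = rng.normal(scale=0.1, size=(8, 8))   # stand-in for the Worker's (L-module) weights

z_h = np.zeros(8)                          # slow, strategic state
z_l = np.zeros(8)                          # fast, tactical state
x = rng.normal(size=8)                     # the problem / input embedding

N_HIGH, N_LOW = 4, 8                       # the CEO steps once per N_LOW Worker steps
for _ in range(N_HIGH):
    for _ in range(N_LOW):
        # Worker: fast updates conditioned on the CEO's current plan and the input.
        z_l = np.tanh(W_l @ z_l + z_h + x)
    # CEO: slow update that digests what the Worker produced and revises the plan.
    z_h = np.tanh(W_h @ z_h + z_l)

print(np.round(z_h, 3))   # the CEO's state after N_HIGH * N_LOW total steps
```

Because the Worker runs many fast steps for every single CEO step, the effective computational depth is N_HIGH * N_LOW without the network literally being that many layers deep.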
07.08.2025 08:49
Then, last month, a paper dropped that changes everything.
This is the architecture I've been waiting for since 2018. A thread on HRM. 🧵
07.08.2025 08:49
For years, I died a little inside every time I taught the Transformer model, grudgingly accepting that the elegant loop of the RNN was dead.
07.08.2025 08:49
You're supposed to what? Swallow the toothpaste?
30.03.2025 04:52
🔥🔥
MCTS rollout pruning, a Python interpreter as verifier, and iterative self-improvement of intermediate steps during each round of training.
Brilliant stuff, this 💪
rStar-Math is the kind of paper I wish to see more of!
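Just to gesture at the recipe: a heavily simplified "propose, run it through a Python interpreter, prune what fails" loop might look like the sketch below. This is not rStar-Math's actual algorithm (real MCTS tracks visit counts and value estimates, and the paper's proposer and verifier are learned models); every name here is made up.

```python
import random

random.seed(0)

def python_verifier(candidate_step: str) -> bool:
    # Generic stand-in: keep a candidate reasoning step only if it
    # executes cleanly as Python code.
    try:
        exec(candidate_step, {})
        return True
    except Exception:
        return False

def rollout(prefix, propose, depth=4, width=3):
    # One rollout: propose several next steps, prune the ones the verifier
    # rejects, and recurse on a surviving branch.
    if depth == 0:
        return prefix
    candidates = [propose(prefix) for _ in range(width)]
    survivors = [c for c in candidates if python_verifier(c)]
    if not survivors:
        return prefix                  # dead end: the whole branch is pruned
    return rollout(prefix + [random.choice(survivors)], propose, depth - 1, width)

# Toy proposer that sometimes emits broken code, which the verifier prunes.
def toy_propose(prefix):
    return random.choice(["x = 1 + 1", "y = undefined_name + 2", "z = sum(range(5))"])

print(rollout([], toy_propose))
```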
09.01.2025 23:45
For a while we've been working on an ambitious problem: The National Archive of Mexico #AGN holds 58 linear km of documents. Only a drop of this "ocean" has been studied due to many challenges. But great news: we are now unlocking this information! A thread 🧵 (1/8) #HTR #AI #CulturalHeritage
17.12.2024 14:15
Computer Vision: Fact & Fiction is now available on YouTube 👇🏼 I made a playlist for it with the seven chapters. Enjoy this time capsule from two decades ago!
19.12.2024 16:50
I like how the new Gemini 2.0 thinking model insists like a child... lol
19.12.2024 18:38
Taking a time machine within a time machine... stealing someone's consciousness... the ideas were next level!
The guy is a beast.
It's a shame Shane Carruth couldn't carry on making more amazing films.
07.12.2024 20:01
Yooo... a Primer fan?
There are so many incredible moments in this film.
Wow... have you seen 'Upstream Color' as well?
07.12.2024 19:57
Wow!
I should read this!
05.12.2024 15:11
Ah...
03.12.2024 18:22
What does "fuch" mean?
03.12.2024 14:56
Diffusion transformer (DiT) ftw!!
03.12.2024 08:32
6. V is not rotated. Only Q and K are rotated relative to each other. Farther tokens now have a larger angle between them.
7. The encoding signal is not going to die out. It can be preserved by doing it as part of the softmax dot-product attention.
8. What a gorgeous idea...
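A minimal numpy sketch of the rotate-Q-and-K-but-not-V idea for a single 2-D feature pair (real RoPE applies this across many frequency pairs per head; the angle scale `theta` here is arbitrary):

```python
import numpy as np

def rotate(vec2, pos, theta=0.1):
    # Rotate a 2-D feature pair by an angle proportional to its position.
    angle = pos * theta
    rot = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
    return rot @ vec2

q = np.array([1.0, 0.5])    # query features of a token at position m
k = np.array([0.8, -0.2])   # key features of a token at position n
m, n = 7, 3

# Only Q and K are rotated; V would be left untouched.
score = rotate(q, m) @ rotate(k, n)

# Shifting both positions by the same offset leaves the score unchanged,
# so the attention score depends only on (m - n): a relative encoding.
shifted = rotate(q, m + 5) @ rotate(k, n + 5)
print(np.isclose(score, shifted))   # True
```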
03.12.2024 06:32
4. RoPE takes this operation from the beginning of the input to inside the attention operation itself.
5. There are two benefits. First, the semantic meaning of the token is not corrupted: we only rotate the vector, preserving its magnitude.
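A two-line numpy check of that "rotate, don't stretch" claim (toy 2-D rotation, arbitrary angle):

```python
import numpy as np

theta = 0.3                               # arbitrary rotation angle
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
v = np.array([1.0, 2.0])                  # a toy 2-D feature pair

# Same length before and after: the token's "meaning" (magnitude) is untouched;
# only its orientation (the positional signal) changes.
print(np.linalg.norm(v), np.linalg.norm(rot @ v))
```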
03.12.2024 06:32
TL;DR:
1. We need a way to encode token positions when feeding them as input into the transformer
2. We could just concatenate 1, 2, 3, etc., but this doesn't scale for variable lengths.
3. Noam Shazeer showed how sin and cos waves can produce a beautiful pattern that encodes relative positions between tokens.
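Point 3 refers to the sinusoidal scheme from the original Transformer paper; a compact sketch of that standard formulation (nothing RoPE-specific yet):

```python
import numpy as np

def sinusoidal_positions(seq_len, d_model):
    # pos / 10000^(2i/d_model): each pair of dimensions is a sin/cos wave at a
    # different frequency, so relative offsets show up as phase shifts.
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / (10000 ** (2 * i / d_model))
    enc = np.zeros((seq_len, d_model))
    enc[:, 0::2] = np.sin(angles)
    enc[:, 1::2] = np.cos(angles)
    return enc

pe = sinusoidal_positions(seq_len=16, d_model=8)
print(pe.shape)   # (16, 8), added to the token embeddings before the first layer
```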
03.12.2024 06:32
RoPE has been the one 💯 genuine upgrade to the vanilla Vaswani transformer.
This beautiful blog post by Chris Fleetwood explains the significance and how rotating Q & K preserves meaning (magnitude) while encoding relative positions (angle shift) 🔥🔥
03.12.2024 06:32
Why does ChatGPT refuse to say "David Mayer"?? 🤔
I have tried a bunch of ways and it refuses to!!
01.12.2024 06:38
😂😂
30.11.2024 03:04
🤔 Can you turn your vision-language model from a great zero-shot model into a great-at-any-shot generalist?
Turns out you can, and here is how: arxiv.org/abs/2411.15099
Really excited to share this work on multimodal pretraining for my first Bluesky entry!
🧵 A short and hopefully informative thread:
28.11.2024 14:32
SIGGRAPH'25 (form): 48 days.
RSS'25 (abs): 49 days.
SIGGRAPH'25 (paper-md5): 55 days.
RSS'25 (paper): 56 days.
ICML'25: 62 days.
RLC'25 (abs): 77 days.
RLC'25 (paper): 84 days.
ICCV'25: 97 days.
29.11.2024 10:00
We should give this place a serious try...
It may work.
29.11.2024 10:07