Radiant is open-sourcing its compiler toolchain and launching code.radiant.computer today.
Congrats!
Some more details about compiler bootstrapping, fixed points and trust.
Though originally the fixed point was reached after 3 stages, it is now reached in 2!
The problem with agents is they don't know what they don't know. We humans do have an intuition for it.
The Radiance compiler has reached a fixed point.
This means it can now compile itself and generate identical output to itself.
user> just fix all the bugs I'm tired and going to bed.
llm> ok, I'll fix all the bugs.
... 8 hours later ...
llm> Wait, the issue might be... Actually... blah blah
llm> But wait! Let me just.. blah blah blah
user> I'm going back to bed.
🪵 A new log entry was posted: "A.I. and the Future of Computing"
radiant.computer/notes/ai-and...
The bootstrapping stage is a bit of a mindfuck.
R0 = C implementation compiled with clang.
R1 = Radiance implementation compiled with R0.
R2 = Radiance implementation compiled with R1. ⬅️ We're here.
1. Writing a compiler in C to compile Radiance to RV64 ✅
2. Porting the C compiler in (1) to Radiance ✅
3. Compiling the ported compiler in (2) with the compiler in (1) ✅
4. Compiling the self-hosting Radiance compiler (3) with itself 🔥😵‍💫
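The fixed point above can be sketched as an iteration: keep compiling the compiler's own source with the previous stage's binary until the output stops changing. A minimal toy sketch (all names here are illustrative, not the actual Radiant build; the "compiler" is a stand-in function):

```python
# Hypothetical sketch of a bootstrap fixed point: compile the compiler's own
# source with successive stage binaries until two stages produce identical
# output. Names (toy_compile, "r0-from-clang") are made up for illustration.
def bootstrap(compile, source, seed_binary, max_stages=5):
    """Iterate compile(binary, source); return (stage, binary) at the fixed point."""
    stage, binary = 0, seed_binary
    while stage < max_stages:
        out = compile(binary, source)
        if out == binary:
            return stage, binary  # fixed point: output identical to the compiler itself
        stage, binary = stage + 1, out
    raise RuntimeError("no fixed point reached")

# Toy stand-in compiler: once the compiler is itself a "bin(...)" artifact,
# its output depends only on the source, so R1 and R2 come out identical.
def toy_compile(binary, source):
    return "bin(" + source + ")"

print(bootstrap(toy_compile, "compiler.rad", "r0-from-clang"))
# → (1, 'bin(compiler.rad)')
```

In practice the check is just a byte-for-byte comparison of the stage-N and stage-N+1 binaries; GCC's bootstrap does the same kind of stage comparison.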
I can't think of anything more soul-crushing in the UX space than trying to make a keyboard work on a smartphone's touchscreen. It is simply the wrong interface.
In fact touchscreens are the wrong interface for most things.
ios-countdown.win
I haven't, looks very interesting!
Yes, this is a web-based Git repository browser created from scratch in a couple of hours using an LLM.
Great read, thanks! You might find @radiant.computer interesting, it is very Wirth-inspired.
Great talk about hardware/software co-design and why serious software developers should think about hardware. This is one of the core principles of @radiant.computer
h/t @lorenz.leutgeb.xyz
www.youtube.com/watch?v=v0Jj...
Incompatibility allows true progress.
🪵 A new log entry was posted: "Radiance Intermediate Language"
radiant.computer/log/011-radi...
Agreed. Having used both extensively I think the reason is simply that the Claude Code CLI is much better, and Claude is faster at coding.
"On Being a Computer Scientist in the Time of Collapse" is a really excellent and thought provoking read. I'm one of those optimists that is heavily criticized in this essay.
web.cs.ucdavis.edu/~rogaway/pap...
I was wondering what that was
“What Remains of Edith Finch” puts every other game I've played recently to shame. What a crazy experience.
It's a bit like power tools: they're faster but less precise, and may not give the same results in the end, because the process is different.
It's generally still quicker to do certain kinds of edits by hand, if you want something very specific. There's also the fact that writing code is a way to form thoughts and ideas that can be superior to prompting, i.e. as purely a thinking tool to explore a design space.
And "by hand" doesn't mean typing every character manually, but using Claude in a piecemeal fashion, i.e. telling it specifically which functions to write, vs. telling it what the end state should be.
The problem is identifying how Claude will perform early enough in the process, and that's hard, even with experience using LLMs. I think in the future I will limit this kind of workflow to a maximum of 2K LOC; anything over that should be written by hand or broken up into pieces somehow.
It does seem like focusing on specific code leads to better results, e.g. if I ask it to simplify the function `lowerFieldRef`, it will find opportunities to simplify the code that it wouldn't if I asked it to do the same for a set of functions that includes that one.
I've done multiple passes (even with Gemini 3), essentially asking it to find patterns and factor them out, or maximize code reuse, and yet through manual review I've noticed dozens of obvious simplification opportunities it did not point out to me. This is quite unfortunate.
Another issue is that it seems to forget code it's written, so it doesn't diligently factor out similar behavior the way a human would. You would think that it's good at noticing patterns in the code, but it isn't, even when specifically prompted to review its code and simplify it...
Sometimes, adding 10 lines of code in another module (something it wasn't instructed to do) would allow for removing 50 lines in the module it was working on. It's not clear whether this is solvable via prompting due to unknown unknowns at the time of prompting.
... an interesting exercise. Could I have written this myself in less time? Maybe, I'm not sure. The biggest issue with getting Claude to write code like this is that it doesn't *think*. For instance, it's happy to work within the given constraints without asking "why do these constraints exist?" --
... a 6000 LOC Radiance module which I've been reviewing and simplifying for the last two weeks.
The code it generated is mostly correct and well tested, but I've been able to reduce it to 4300 LOC while keeping functionality intact and simplifying things that could be simplified. It's been...