Agreed! I write '|> as.data.frame()' more often than I write my name
I came to Minneapolis to report on what's going on, and one of the main questions I showed up with is "just what is the scale of the resistance?" After all, we're all used to the news calling Portland a "war zone" or whatever when it's just some protests in one part of town.
We introduce epiplexity, a new measure of information that provides a foundation for how to select, generate, or transform data for learning systems. We have been working on this for almost 2 years, and I cannot contain my excitement! arxiv.org/abs/2601.03220 1/7
Yep. This has been my approach so far
TITANS & MIRAS: real continual learning
MIRAS = a unifying theory of transformers (attention) and state space models (SSM, e.g. Mamba, RNNs)
TITANS = an optimal MIRAS implementation that's "halfway between" SSM & transformer with a CL memory module
let's dive in!
research.google/blog/titans-...
I use R at work for lots of things that aren't really analysis, but within that subset I'd say it's probably 70% causal vs 30% predictive
And turkey sucks too, but try the deep fried bird one day (with proper precautions) and you'll never want to go back
Thanksgiving food is easy to make but hard to master
You're right and wrong. Stuffing sucks, but I've seen people elevate it to delicious specifically as a reaction to that. Cranberry was a terrible tradition until my mom found a recipe with cranberry + jello + cream cheese that is now the thing we look forward to. Pumpkin pie -> sweet potato
no wonder g_d was pissed when they ate that fruit
It's time for us to reconnect with the radical, system-changing spirit that was once at the heart of our field. #publichealth #episky #medsky #activism www.thenation.com/article/acti...
Want a peek inside a 1,200 player RPG?
Read our interview with @samsorensen.bsky.social about OVER/UNDER, the massive play-by-post game that's a part of Mothership Month ( @mothership.bsky.social )
manysidednewsletter.substack.com/p/inside-a-1...
Do AI agents ask good questions? We built "Collaborative Battleship" to find out, and discovered that weaker LMs + Bayesian inference can beat GPT-5 at 1% of the cost.
Paper, code & demos: gabegrand.github.io/battleship
Here's what we learned about building rational information-seeking agents... 🧵👇
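For intuition on the "weaker LMs + Bayesian inference" idea: a classic way to pick rational questions is to choose the one with the highest expected information gain over a posterior of candidate board states. This is a minimal toy sketch of that scoring rule, not code from the paper; the function and hypothesis names are illustrative.

```python
import math

def entropy(weights):
    """Shannon entropy (bits) of an unnormalized weight list."""
    total = sum(weights)
    probs = [w / total for w in weights if w > 0]
    return -sum(p * math.log2(p) for p in probs)

def expected_information_gain(hypotheses, question):
    """hypotheses: list of (state, weight); question: state -> bool (yes/no).
    EIG = prior entropy minus expected posterior entropy over answers."""
    prior = entropy([w for _, w in hypotheses])
    total = sum(w for _, w in hypotheses)
    yes = [w for s, w in hypotheses if question(s)]
    no = [w for s, w in hypotheses if not question(s)]
    eig = prior
    for group in (yes, no):
        p_answer = sum(group) / total
        if p_answer > 0:
            eig -= p_answer * entropy(group)
    return eig

# Toy example: 4 equally likely ship positions, 0..3.
hyps = [(i, 1.0) for i in range(4)]
# "Is the ship in the left half?" splits the posterior evenly: 1 bit gained.
print(expected_information_gain(hyps, lambda s: s < 2))  # 1.0
```

An agent can score every candidate question this way and ask the argmax; a small LM only needs to propose candidate questions and simulate answers, with the Bayesian bookkeeping doing the heavy lifting.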
my latest investigation for @consumerreports.org is based on months of reporting and 60+ lab tests of leading protein supplements
we found that most protein powders and shakes have more lead in one serving than our experts say is safe to have in a day (🧵)
www.consumerreports.org/lead/protein...
The core was essentially taking shame as the means to gain supernal power (like how vamps drain vitae). It's hard to separate character embarrassment from OOC - you wind up feeling it. You need that shame to be useful, the more genuine the more effective, and the more thoughts of revenge creep in
This weird fuckin broken thing.
Somehow we homebrewed the rules enough to make it work, and the result was this amazing synthesis of role play and system that made me realize how those things could feed into each other. Also how a game that wasn't 'fun' could still be very much fun.
Over the past year, my lab has been working on fleshing out theory + applications of the Platonic Representation Hypothesis.
Today I want to share two new works on this topic:
Eliciting higher alignment: arxiv.org/abs/2510.02425
Unpaired learning of unified reps: arxiv.org/abs/2510.08492
1/9
if you're curious about the architecture and mechanics of LLMs, this site has a really excellent explorable interactive visualization. it helps build intuition for how massive these models are, what 'interpretability' means, and the complexity involved here
bbycroft.net/llm
"[D]ooming is itself a liberation from the burden of choice. If everything is ruined forever, if your allies have already forsaken you, if the battle is already lost, you aren't responsible for your choices. They can't affect the outcome. You're free."
The 2500th, 250th, 50th, and 25th largest model families on Hugging Face. They show varying numbers of generations (between 3 and 8) and different edge types, including adapters, finetunes, merges, and quantizations.
In a new paper with @didaoh and Jon Kleinberg, we mapped the family trees of 1.86 million AI models on Hugging Face, the largest open-model ecosystem in the world.
AI evolution looks kind of like biology, but with some strange twists. 🧬🤖
I've always wondered about this too and would prefer the alternative most of the time, but I do feel like for many systems you need at least a few sessions before everyone is familiar enough with it to let it shine. Also, people like gradual progression
okay AI controversy aside, just talking as a nerd for a second: this is such a cool paper
LLMs becoming superhuman at all games & competitions but not even normal-expert good at anything that's not a game or competition continues to raise deep questions about whether games & competitions are a secret natural kind
Smoking Kills
youtu.be/HkgV_-nJOuE?...
See this one yet? What's one more hobby, right?
Solar luminosity around 3.5 billion years ago was only about 70% of what it is right now.
Which means that when life arose on Earth, it was outside the Sun's habitable zone: liquid water should not have been able to exist.
Faint Young Sun Paradox
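The paradox falls out of a back-of-envelope calculation: an airless planet's equilibrium temperature scales as L^(1/4), so 70% luminosity drops Earth well below freezing. This sketch uses my own textbook-style numbers (solar constant, albedo), not figures from the post:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # present-day solar constant at Earth, W m^-2
ALBEDO = 0.3       # rough Earth albedo (illustrative assumption)

def equilibrium_temp(solar_constant, albedo=ALBEDO):
    """Equilibrium temperature of an airless planet:
    T_eq = (S * (1 - A) / (4 * sigma)) ** (1/4)."""
    return (solar_constant * (1 - albedo) / (4 * SIGMA)) ** 0.25

print(round(equilibrium_temp(S0)))        # ~255 K today (sub-freezing even now;
                                          #  greenhouse warming closes the gap)
print(round(equilibrium_temp(0.7 * S0)))  # ~233 K at 70% luminosity
```

Since 0.7^(1/4) ≈ 0.91, the faint young Sun knocks roughly 20 K off an already sub-freezing baseline, which is why the geological evidence for early liquid water is paradoxical without a much stronger early greenhouse.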
Exactly. To some extent that's always been exaggerated for plot, but not that much, and we lived in a world where those stakes were believable just last year. Some Black Mirror episodes come to mind. SCP Foundation and co. Random, but what prompted this for me was the cover art for Jurassic 5's new album
It seems clear that the current historical moment will have a major impact on culture. Even recent art feels anachronistic. Movies involving govt/cops more obviously, but also just the undertones of social stability. It feels like it meant something different to make music last year. Any art really.
I really enjoyed this. Even just reading it. Well done