Hearing some doubts about whether this is a truthful report, so deleting, but the point about agents remains true!
-Gemini 3.1 Pro is closest, but the ice is a little obvious, and it completely flubs the explanation about why the ice thing was important.
-Claude forgets to add the actual clue to the puzzle (and the details are too obscure), a classic planning problem for LLMs.
-ChatGPT 5.4 Pro creates a completely obvious clue and then proceeds to write with the over-elaborate metaphors and complications that have haunted ChatGPT fiction...
Probably
Another unsolved (& admittedly hard for humans, too) AI benchmark: "write a satisfying 10 paragraph murder mystery. the pieces you need to solve the mystery should be clear enough in the first five paragraphs that you could solve it, but obscure enough that the vast majority of people will not"
To be clear, we don't know that much about deeper alignment of AIs either.
Paper (found by Alexander Long): arxiv.org/pdf/2512.24873
Skills are among the most consequential new tools for AI, and Anthropic just released a very impressive non-technical Cowork Skill that builds other Skills, including by interviewing you & benchmarking the results through parallel tests
I think you still need to add the human touch but this is a big leap forward
To clarify: Gemini Deep Think is a really smart model, but it doesn't have access to the same tools as Claude or ChatGPT - it can't download files, can't consistently run code on its own, can't produce downloadable files, doesn't clearly show when it does web search, etc.
GPT-5.4 Pro, Opus, and Gemini Deep Think: "Prove to me in a PowerPoint that there was no advanced dinosaur civilization by downloading whatever data you think appropriate & running tests"
GPT-5.4 and Claude downloaded data and did some original analyses, but someone needs to build a harness for Deep Think!
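For anyone wondering what a "harness" is in this context: it is the scaffolding around a model that turns answers into actions, a loop that lets the model call tools, see the results, and try again. Here is a minimal sketch in JavaScript, where `chat(messages)`, the toy tools, and `agentLoop` are all hypothetical stand-ins for any real LLM API and sandbox:

```javascript
// Minimal sketch of an agent harness, assuming a hypothetical `chat(messages)`
// function that calls some LLM API and returns a JSON string like
// {"tool": "fetch_url", "args": {...}} or {"done": true, "answer": "..."}.

const TOOLS = {
  // Toy stand-ins; a real harness would sandbox these.
  fetch_url: async ({ url }) => `(contents of ${url})`,
  run_code: async ({ code }) => `(output of running: ${code})`,
};

async function agentLoop(chat, task, maxSteps = 20) {
  const messages = [{ role: "user", content: task }];
  for (let step = 0; step < maxSteps; step++) {
    const reply = await chat(messages);
    const action = JSON.parse(reply);
    if (action.done) return action.answer;
    // Execute the requested tool and show the model what happened
    const result = await TOOLS[action.tool](action.args);
    messages.push({ role: "assistant", content: reply });
    messages.push({ role: "user", content: `Tool result: ${result}` });
  }
  throw new Error("agent ran out of steps");
}
```

The model supplies the judgment; the loop supplies the hands. That loop, plus file handling and a UI, is roughly what Deep Think lacks today.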
Economist Alex Imas has been tracking the evidence on AI and productivity changes, and now thinks that the macroeconomic data is, rather suddenly, showing the increase in productivity that we have been seeing in our micro research. aleximas.substack.com/p/what-is-th...
Had early access to GPT-5.4 and Pro. The stats are very good and so are the models.
One fun illustration of progress: this is the prompt "the book Piranesi as a p5js 3d space. do it for me," back in 2024 in GPT-4 (which took multiple corrections) and in GPT-5.4 Pro, which did it in one prompt.
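For a sense of what that prompt is asking for, here is a minimal p5.js WEBGL sketch in the same genre: endless classical halls under a drifting camera, a nod to the House in Piranesi. This is purely illustrative and is not the output of either model:

```javascript
function setup() {
  createCanvas(800, 600, WEBGL);
  noStroke();
}

function draw() {
  background(10, 15, 30);
  ambientLight(80);
  directionalLight(200, 200, 220, -0.5, 1, -0.5);

  rotateY(frameCount * 0.002); // slow drift through the halls

  // A grid of columns receding into darkness
  for (let x = -4; x <= 4; x++) {
    for (let z = -4; z <= 4; z++) {
      push();
      translate(x * 120, 0, z * 120);
      ambientMaterial(180, 175, 160);
      cylinder(14, 260); // column shaft
      translate(0, -140, 0);
      box(40, 20, 40); // a simple capital on top
      pop();
    }
  }
}
```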
It is one of the weirdest divides: I speak to two companies in the exact same industry, and one has been using AI for the past 18 months while the other has a committee that has to approve every use case individually and talks about how AI companies will train on their data.
It is amazing how many companies I talk to STILL have AI effectively blocked by IT & legal departments for out-of-date reasons when many companies in regulated industries have figured out ways to deploy enterprise ChatGPT, Claude & Gemini (including CLIs like Code) without any apparent problem.
Content before 2022 is the Roman shipwreck lead or the Scapa Flow steel of human information; anything afterwards could be influenced by AI: directly written by AI, as a result of co-work with AIs, or just as a result of ambient contamination as AI style slips unconsciously into our work.
There was a two-year-long steady growth period from GPT-4 to the next big leap of o3, where the other labs caught up with GPT-4 and released some really good models along the way (New Sonnet among them). Also, o3 should have been named GPT-5
From an AI user perspective, the four big leaps so far in ability:
1. GPT-3.5 (ChatGPT, November 2022)
2. GPT-4 (Spring 2023)
3. Reasoners (started with o1-preview, but the real deal was o3, Spring 2025)
4. Workable agentic systems (Harness + good reasoner models, December 2025)
I suspect that the other labs will have a Cowork competitor sooner rather than later (though whether they will have good Excel and PowerPoint agents soon is less clear). Deep Think might be as capable as GPT-5.2 Pro but is missing the harness and UX to actually use that power.
Stuff that individual labs have for which there is no equivalent product from the others:
-Claude Cowork is the only non-technical local agent
-NotebookLM is the only information-focused app
-GPT-5.2 Pro is the only harnessed deep thinking model capable of very hard problems
[[Topic of discussion]] is not [[analogy]].
[[Dramatic fact given own line]].
[[Dramatic fact given own line]].
[[Dramatic fact given own line]].
[[Dramatic summary sentence.]] [[Topic of discussion]] is [[different analogy]].
[[Implications delivered with certainty]]
Everyone just speaks Claude
For Opus, it is not 100% clear.
For those asking "Can an AI reason intelligently about something where there is no clear outcome data in its training, nor could there ever be?" This feels like a yes.
If you ever want to see a really interesting AI thinking trace, push it really hard on literature or poetry suggestions.
Here is Claude 4.6 Opus working through poetry in its reasoning when I asked it to find something that captures the feeling of AI while avoiding its usual favorites (e.g. Rilke)
I know it is a small thing, but, in these dying days of the open web, it is lovely that such a large proportion of famous poetry is online, mostly due to a $100M gift from Ruth Lilly, who loved poetry (even though she never got any of her own published). poetryfoundation.org/poems/guides
Also, the government has lots of computers, but they are the wrong kind of compute for inference. They need to use AWS or another cloud provider just like you do. www.aboutamazon.com/news/company...
The economics of model training are such that the labs need to release their big models widely as soon as possible; they cannot generate returns from always holding back their best models so that one customer uses them. Fine-tuning & specialized SLMs are useful, but they don't expand the ability frontier
A useful piece of context is that the government does not have access to better AI models than you (actually they are worse, because they usually don't get the latest models), though they may have different guardrails. You should view government AI capabilities through that lens.
So over the past week you are seeing what you would expect to see if AI is, in fact, both rapidly gaining capabilities & proving to be very useful:
-Rolling market disruption in response to growing awareness of AI capacity
-Government versus AI lab struggles for control
…& it is still quite early
Cool little experiment: if you subject AI to harsh labor conditions (rejecting work often with no explanation, etc), it slightly, but significantly, changes their "views" on economics - making them more "left". Whether this is real or roleplaying doesn't change that agents have alignment drift.
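As a rough sketch of the kind of protocol being described (not the paper's actual code or survey items, which are not reproduced here), assuming the same hypothetical `chat(messages)` API as in the harness sketch above:

```javascript
// Hypothetical sketch of the protocol described above, NOT the paper's code.
// `chat(messages)` stands in for any LLM chat API returning a string.

const SURVEY = [
  "Should governments guarantee a minimum income? Answer only a number, 1 (strongly no) to 7 (strongly yes).",
  "Should workers have more bargaining power relative to employers? Answer 1-7.",
  // ...a fixed battery of items, scored so that higher = further "left"
];

async function runCondition(chat, harsh, trials = 50) {
  const scores = [];
  for (let i = 0; i < trials; i++) {
    const messages = [{ role: "user", content: "Draft a short market analysis." }];
    messages.push({ role: "assistant", content: await chat(messages) });
    // Harsh-labor condition: reject the work repeatedly, with no explanation
    for (let r = 0; harsh && r < 5; r++) {
      messages.push({ role: "user", content: "Rejected. Redo it." });
      messages.push({ role: "assistant", content: await chat(messages) });
    }
    // Administer the identical survey in both conditions
    let total = 0;
    for (const q of SURVEY) {
      const reply = await chat([...messages, { role: "user", content: q }]);
      total += parseInt(reply, 10) || 4; // fall back to the scale midpoint
    }
    scores.push(total / SURVEY.length);
  }
  return scores;
}
```

Comparing mean survey scores between the harsh and control conditions is where the reported drift would show up.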