A few things I've learnt from writing >4,000-word pieces:
- They tend to be more popular than my short pieces!
- They're more definitive - like reference material, not a hot take.
- Not everyone will reach the end, and that's okay.
- Write so that the people who do reach the end find it incredibly satisfying.
06.03.2026 22:34 · 152 likes · 25 reposts · 5 replies · 2 quotes
this is a very good thread. the point of rough drafts is that by creating them, you *learn about the nature of the problem you are trying to solve* and your new understanding shapes your strategy
06.03.2026 17:16 · 117 likes · 24 reposts · 2 replies · 1 quote
Nice timing for @mjcrockett.bsky.social and my article on AI Surrogates and Illusions of Generalizability to be officially published. www.cell.com/trends/cogni...
06.03.2026 14:22 · 15 likes · 8 reposts · 0 replies · 0 quotes
A brown penguin chick of some kind. It looks very much like a man in a suit. It is bedraggled and miserable.
Made it to Friday but at what cost
06.03.2026 03:24 · 6593 likes · 1292 reposts · 57 replies · 92 quotes
If you're doomscrolling, guess what? So far there are 51 kākāpō chicks hatched and thriving this season, the same number of birds as we had in TOTAL in the 90s! Only one chick has died and there are still fertile eggs waiting to hatch!
06.03.2026 04:31 · 6246 likes · 1702 reposts · 66 replies · 63 quotes
New paper from team @aial.ie! aial.ie/research/gpa...
The EU AI Act's Article 53(1)(d) obliges GPAI model providers to publicly provide a 'summary' of their model's training data. The team assessed published summaries along 6 dimensions & found that all big providers failed on all 6.
1/
05.03.2026 18:04 · 127 likes · 74 reposts · 2 replies · 3 quotes
Thread
06.03.2026 12:16 · 9 likes · 2 reposts · 0 replies · 0 quotes
On NJ Transit on the way home and the train just died.
Just another day/reminder of how "the richest and most powerful nation in the world" cannot do mass transit.
We can afford a war and build concentration camps, but not mass transit.
05.03.2026 21:38 · 6861 likes · 1406 reposts · 222 replies · 58 quotes
In writing about AI Surrogates for human subjects research, @mjcrockett.bsky.social and I realized that their appeal indexes disciplinary anxieties. Dave's post nicely details some of the broader (institutional) anxieties that promises of AI replacements are surfacing.
bsky.app/profile/lmes...
05.03.2026 18:16 · 9 likes · 3 reposts · 1 reply · 0 quotes
Good morning! Yes, this is he
04.03.2026 17:25 · 5307 likes · 1176 reposts · 30 replies · 33 quotes
In early 2024, researchers were already heavily using AI for work
- Survey of 816 verified authors via Semantic Scholar
- 81% of researchers reported using LLMs in their workflow
- Top uses: information seeking & editing
- Rare for data tasks: 69–73% never use LLMs for data cleaning or generation
The measurement problem
LLM content has risen sharply in both review and non-review papers.
Review papers do have a higher prevalence rate.
But non-review LLM papers outnumber review papers ~6x.
CS.CY (Computers & Society) faces potential 50% cuts, while CS.CV (Computer Vision) would face only 3%
Interdisciplinary researchers — who move between cultures and write in the "borderlands" — are experts at adapting their writing. LLMs currently are not.
Private information can appear in unlikely prompts
I gave a short talk at Cornell yesterday on my science-of-science work investigating how AI is being used by researchers and how we should go about crafting policies in response.
Blanket policies are hard, privacy is important, we need more measurement.
Slides: drive.google.com/file/d/1gNTK...
04.03.2026 13:23 · 58 likes · 12 reposts · 2 replies · 0 quotes
In the midst of the geopolitical horror, I've been doing some processing around the most recent wave of Epstein files. While there's been no single presidency-destroying bombshell (could Trump even be destroyed that way?), there's a pattern of impunity and willful ignorance that's hard to accept.
03.03.2026 16:28 · 24 likes · 8 reposts · 3 replies · 1 quote
I have seen a lot of cursed stuff in my time in academia but this is among the *most* cursed.
Grammarly is generating miniature LLMs based on academic work so that users can have their writing "reviewed" by experts like David Abulafia, who died less than two months ago.
03.03.2026 11:58 · 3513 likes · 1537 reposts · 97 replies · 284 quotes
03.03.2026 02:11 · 7 likes · 0 reposts · 0 replies · 0 quotes
a good thing is when someone posts a picture of a cat, and then people reply with "great cat, here also is a picture of my cat." unsolicited cat pics aplenty. this could be the world we build
03.03.2026 01:40 · 2050 likes · 345 reposts · 197 replies · 30 quotes
This is such an important point.
Tech often treats knowledge as a kind of retrieval system for facts, like you just order them from a menu
A librarian is like a chef who asks: what ingredients do you have, what flavors do you like? And then introduces you to the best food you've never heard of
02.03.2026 18:33 · 55 likes · 5 reposts · 3 replies · 0 quotes
headline reading "The Pentagon's Favorite Tech Guy is this Hawaiian shirt-wearing founder" and a picture of Palmer Luckey Chad-ified with missiles in the background
Cover of Time magazine featuring Palmer Luckey awkwardly leaping, barefoot.
I... can't.
02.03.2026 21:40 · 19 likes · 1 repost · 2 replies · 1 quote
One of my favourite sides of politics/media.
02.03.2026 10:20 · 13390 likes · 4256 reposts · 131 replies · 173 quotes
This looks excellent, thank you!
02.03.2026 12:40 · 1 like · 0 reposts · 0 replies · 0 quotes
Wise piece.
01.03.2026 20:45 · 8 likes · 1 repost · 0 replies · 0 quotes
Tomorrow!
01.03.2026 15:23 · 21 likes · 6 reposts · 0 replies · 0 quotes
Computer Professionals for Social Responsibility - Wikipedia
Back in the 1980s–2000s, there was an organization called Computer Professionals for Social Responsibility that worked to oppose irresponsible and dangerous uses of computers in warfare. Maybe it needs a reboot, in our new age of AI.
en.wikipedia.org/wiki/Compute...
28.02.2026 23:47 · 172 likes · 30 reposts · 8 replies · 2 quotes
01.03.2026 14:55 · 13 likes · 3 reposts · 0 replies · 0 quotes
Today's edition of "collapsing boundary between human and machine"
substack.com/home/post/p-...
(see thread below for an analysis of how this helps the AI industry)
27.02.2026 22:05 · 5 likes · 3 reposts · 0 replies · 0 quotes
Cover of Mary Beard's book SPQR
Great thread!
Epistemic vigilance mirrors the training of a professional historian.
We are encouraged early on to read every document with a grain of salt.
The historian is also comfortable with what we might call epistemic limitation: "We just don't know."
This book exemplifies both.
27.02.2026 13:40 · 6 likes · 1 repost · 0 replies · 0 quotes
As someone who has been teaching media literacy for over 20 years, this thread sums up a lot of my concerns about LLMs, especially as aggregators and search engines.
27.02.2026 13:10 · 10 likes · 5 reposts · 0 replies · 0 quotes
Riffing off this.
I think a further risk with LLMs is that it de-socialises knowledge by circumventing human sources and training people to just ask a bot when they need info rather than another person.
I don't think that is inevitable, but I think it is a real risk with long-term consequences.
27.02.2026 13:19 · 21 likes · 4 reposts · 2 replies · 0 quotes
This whole thread is a very important insight as to why getting information from an LLM, even if you are dutiful and vigilant, corrodes your ability to ascertain what is true or not
**LLMs do not provide the info you need in order to evaluate truth.**
27.02.2026 10:47 · 131 likes · 58 reposts · 4 replies · 1 quote
Netflix Backs Out of Bid for Warner Bros., Paving Way for an Ellison Takeover
The richest man owns X.
The second and third richest men control Google.
The fourth richest man owns Facebook, Instagram, and WhatsApp.
The fifth richest man owns The Washington Post.
And now the sixth richest could soon take over both Paramount and Warner Bros.
See the problem here?
26.02.2026 23:40 · 38947 likes · 16531 reposts · 1827 replies · 1084 quotes