Box 1 – Implications of intractability
I had a realisation
Context: In our Reclaiming AI paper we argued that AI systems cannot scale up to human-level cognition without consuming astronomical amounts of resources
My realisation: The AI industry is determined to burn through the earth's resources just to prove us right *empirically*
25.02.2026 21:48
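The intractability claim above can be made concrete with a toy computation. This is a minimal sketch and my own illustration, not anything from the Reclaiming AI paper: it uses the space of Boolean functions over n input bits as a stand-in for the candidate behaviours a brute-force search would have to distinguish, and shows how fast that space explodes.

```python
# Toy illustration (my own, not from the paper): the number of distinct
# Boolean functions f: {0,1}^n -> {0,1} is 2**(2**n), so brute-force search
# over candidate input-output behaviours grows doubly exponentially.

def candidate_behaviours(n_bits: int) -> int:
    """Count the distinct Boolean functions over n_bits binary inputs."""
    return 2 ** (2 ** n_bits)

for n in range(1, 9):
    print(n, candidate_behaviours(n))

# By n = 8 the count is 2**256 (> 10**77), within a few orders of magnitude
# of common estimates of the number of atoms in the observable universe.
```

This is only an intuition pump for "astronomical resources"; the paper's actual argument is a formal intractability proof, not this naive count.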
I have ADHD and I'm pancreatically impaired and I do not want people trying to forward those technofash slop generators on my behalf
04.03.2026 18:29
Ad for a conference session. There is a black and teal gradient in the background. It reads:
AWP 2026 Conference
How to Resist AI in Writing & Teaching
Then three images:
A femme with dark hair wearing a black blazer and a dark blouse, looking towards the camera, with an arm up on a table. Below it reads, Carmen Maria Machado.
A black and white picture of a man with dark skin, slightly long black hair, and dark stubble. Below it reads, Umair Kazi.
A brown trans woman with slightly longer black curly hair, wearing a black sweater, with her arms crossed. She is standing in front of a brick wall. Below it reads, Dr. Alex Hanna.
A fourth picture is on the right: a woman with brown skin, shoulder-length black hair, is smiling and looking at the camera. She is wearing a chunky necklace and a black t-shirt. Below it reads, Moderated by Vauhini Vara.
Below the images, it reads:
Thursday, March 5, 12:10 PM. Room 310. Sponsored by The Authors Guild. The Authors Guild is represented by its logo.
Thursday @ AWP 2026! Join Carmen Maria Machado, Umair Kazi, @vauhinivara.bsky.social, and myself as we discuss how to Resist AI in Writing and Teaching. Sponsored by @authorsguild.org.
12:10 PM in Room 310. See you in Baltimore!
authorsguild.org/event/awp-20...
04.03.2026 23:04
A weathered leather-bound book with a design tooled in gold reading "Are we a stupid people?", with a large gold question mark and a lion holding a flag and a heraldic shield underneath. The bottom reads "By One of Them".
Today's research deep dive brought me this leather-bound book cover
05.03.2026 07:26
And doctors used to prescribe cigarettes or whatever? Who cares? The tide goes in and out, evil genies get stuffed back into the bottle, and mathematical and ethical truth bends my way FYI
olivia.science/before
05.03.2026 06:22
Also because guard rails are a scam. Sadly.
05.03.2026 06:02
🫩
Just normal stuff
bsky.app/profile/geom...
4/n
05.03.2026 06:14
Also, tobacco, petroleum, and AI basically share the same playbook...
bsky.app/profile/oliv...
olivia.science/before
3/n
05.03.2026 06:12
Long story short on the relevant parts: the tobacco industry jumped on "stress" to divert from the fact that cigarettes cause cancer, much like AI companies will inevitably do with psychosis or whatever to divert from the fact that their bots cause harm. No user is causing this.
& importantly: bsky.app/profile/oliv...
2/
05.03.2026 06:09
Inevitably they will blame psychosis. And we've seen this before, with companies and academics claiming lung cancer is caused by stress, not smoking!
Remember Hans Eysenck? www.theguardian.com/science/2019...
> This research programme has led to one of the worst scientific scandals of all time
1/n
05.03.2026 06:09
Also see bsky.app/profile/oliv...
04.03.2026 06:06
Might have to do a thread based on my paper with @andreaeyleen.bsky.social doi.org/10.1037/rev0... because the claim is just not a good argument & is mathematically false. At the moment it's all explained in the paper, if anybody is interested. But the misunderstanding of proofs by professionals is sad. Sorry.
04.03.2026 05:17
These people just want to destroy academic work, from research to education, while pretending to understand what they want to bulldoze
bsky.app/profile/oliv...
04.03.2026 05:48
Search engines already exist and we use them.
The bot can't read the papers for you.
What exactly is the value proposition here?
03.03.2026 19:40
email to me with a title: 2027 MSc in Artificial Intelligence Application – Research Interest in Trustworthy Generative AI & Multi-Agent Safety
email body: I have been deeply inspired by your pioneering work on AI accountability, algorithmic harm governance, and ethical alignment of generative multi-modal systems. As Geoffrey Hinton has repeatedly warned the global community about the existential and structural risks of unregulated AI systems, I have long been searching for actionable, ethical frameworks to translate these high-level warnings into practical, safe AI design – and your research has been the definitive guide for me. In particular, your 2023 paper in Nature Machine Intelligence on the structural risks of large-scale generative models, as well as your AI Accountability Framework developed at the Mozilla Foundation, have fundamentally shaped my core belief: capable AI systems must be built on the premise of safety, transparency, and consistent alignment with human values, rather than pursuing functionality alone.
I never published in Nature Machine Intelligence, nor do I have work on an "AI Accountability Framework".
I know this is now normal, but I want you all to stop & reflect on how much the future is fucked, & that the only way to mitigate this disaster is to ban/limit this damned technology
04.03.2026 12:11
As I say at the top, the most useful message is that AI products cannot promise that guardrails work: by definition, unless the internals of the system stop being the type of LLMs currently used, you need a human between the toy and the child/user. Defeating the point 100%, of course!
6/n
bsky.app/profile/mari...
17.11.2025 06:11
04.03.2026 19:29
It is kind of suspicious that the only people I see actively defending LLMs as morally neutral seem to have very specific career incentives to do so. Especially in the academy!
04.03.2026 16:34
This relates to something I've been trying to say: the idea that AI is "fine to use for research", that it's "fine, you just need to check the output", is ridiculous when the AI can generate as much text as you have money to shove into the machine, and there are only 16 billion eyes to read it.
04.03.2026 17:25
Against the Uncritical Adoption of 'AI' Technologies in Academia
Under the banner of progress, products have been uncritically adopted or even imposed on users – in past centuries with tobacco and combustion engines, and in the 21st with social media. For these col...
I commented here (will Donald Knuth read it?):
news.ycombinator.com/item?id=4723...
I think it's wrong to criticise LLMs with "it can't do that" (from what I understood from the first paragraph, this was Donald's criticism).
If it can, does it make a difference in relation to all the other +++
04.03.2026 12:22
We are smarter, in that respect, than Donald Knuth bsky.app/profile/adol...
04.03.2026 17:33
i think enthusiastic LLM use is mostly a stack of cognitive biases, unacknowledged plagiarism, and unmet needs in a trenchcoat
but also my main objections aren't about them being bad at tasks so i don't care if you think they've gotten better at it
04.03.2026 16:53
Getting Past Past-Tense
[ANNs] are not perfect: they are not really explainable, they are not pliable, i.e., they cannot be easily modified to correct any errors observed, and they are not efficient due to the overhead of decoding. In contrast, rule-based methods are more transparent to subject matter experts; they are amenable to having a human in the loop through intervention, manipulation and incorporation of domain knowledge; and further the resulting systems tend to be lightweight and fast. (Chiticariu et al. 2023, p. iii)
In what is known in the literature as the past-tense debate (e.g., Elman et al., 1996; Pinker & Ullman, 2002), cognition and its underpinning substrates were discussed in terms of whether hard-wired capacities, such as grammatical rules for English past-tense formation, are encoded in the genes or otherwise without learning. Furthermore, claims were made about connectionist systems, such as, ANN "models cannot deal with languages such as Hebrew, where regular and irregular nouns are intermingled in the same phonological neighborhoods" (Pinker & Ullman, 2002, p. 459). While it may have been true for models at the time that certain data sets were unlearnable, or specific nondeep ANNs had limited learning abilities due to their architecture or training set or regimen, this both does not hold in the present day for certain data sets (discussed below) and continues to hold in the sense that there are data sets that are inaccessible to modeling endeavors using ANNs (see proof in van Rooij et al., 2024). Work such as Zhang et al. (2016, 2017) can serve to neutralize the claim that ANNs might struggle with certain unstructured data sets, for example, "where regular and irregular nouns are intermingled" (Pinker & Ullman, 2002, p. 459), by demonstrating that ANNs can learn utterly random mappings between inputs and outputs. Of course, such a finding about ANNs is also problematic to C-connectionists, who propose that in many cases similar input–output…
The relevant section is "Getting Past Past-Tense", on page 10; see the PDF here. It's not that long, but longer than the extract: olivia.science/doc/GuestMar...
Guest, O. & Martin, A. E. (2025). A Metatheory of Classical and Modern Connectionism. Psychological Review. doi.org/10.1037/rev0...
04.03.2026 06:05
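The Zhang et al. point quoted in the extract above, that ANNs can learn utterly random input–output mappings, can be sketched in a few lines. A minimal sketch under my own assumptions, not Zhang et al.'s deep-network experiments: it uses an over-parameterised linear model, the simplest setting where having more parameters than examples lets a model memorise random labels exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Over-parameterised setting: 50 weights, only 20 training examples.
n_samples, n_features = 20, 50
X = rng.standard_normal((n_samples, n_features))
y = rng.integers(0, 2, size=n_samples).astype(float)  # utterly random labels

# Least-squares fit: with more features than samples, an exact interpolant
# exists almost surely for Gaussian inputs, so even structureless labels
# are fit perfectly.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

train_accuracy = float(np.mean((X @ w > 0.5) == (y > 0.5)))
print(train_accuracy)  # 1.0: the random mapping is memorised
```

The same memorisation happens, with far more machinery, in the deep networks Zhang et al. trained on randomly labelled data; perfect training fit on random labels says nothing about generalisation, which is the force of the point in the extract.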
cool pincer movement if you truly grasp:
AI & any concept relating to it, like so-called guardrails, are a scam in the deepest sense, like a perpetual motion machine or a Ouija board – and not only a scam like a pyramid scheme, which is a possible way to make money if you are first in, first out
🧵
1/n
17.11.2025 05:51