Read this!!
bsky.app/profile/oliv...
Got inspired by some questions from PhD students the other day, and also really irritated by senior academics who seem to refuse to get the issues and continue to blame students. Hope it's useful for you!
olivia.science/cheating
bsky.app/profile/maar...
Bug or feature?
I came across "vibe citing" the other day…
bsky.app/profile/iris...
Stop with the "vibe coding" and start doing some algorithm design, foundations of computing, and computational complexity analyses.
Even just doing playful computer science unplugged is better than all the current hyped nonsense.
www.csunplugged.org/en/
Have they never heard of the Halting Problem? Or the Frame Problem? There is no such thing as automated coding *for everything*.
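For readers who haven't met it: the Halting Problem says no program can decide, for every program and input, whether that program halts. A minimal sketch of the classic diagonalization argument, where the function names are purely illustrative and not a real API:

```python
# Illustrative sketch of the diagonalization argument behind the Halting
# Problem. `halts` is a *hypothetical* decider; no such total, correct
# function can exist, which is the whole point.

def halts(program, argument):
    """Hypothetical oracle: returns True iff program(argument) halts."""
    raise NotImplementedError("no such decider can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about program(program):
    # loop forever if it says "halts", halt immediately if it says "loops".
    if halts(program, program):
        while True:
            pass
    return "halted"

# Asking about paradox(paradox) is contradictory either way: if the oracle
# says it halts, it loops forever; if it says it loops, it halts. So halts()
# cannot exist, and "fully automated coding for everything" hits a hard
# mathematical wall.
```

The same diagonal trick underlies why no tool can verify arbitrary code for all inputs, which is the relevant point against "automated coding for everything".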
We are deskilling a whole generation of computer scientists. Sigh.
Box 1: Implications of intractability
I had a realisation
Context: In our Reclaiming AI paper we argued that AI systems cannot scale up to human-level cognition without consuming astronomical amounts of resources.
My realisation: The AI industry is determined to burn through the earth's resources just to prove us right *empirically*
"vibe learning"
*eye roll*
I have ADHD and I'm pancreatically impaired, and I do not want people trying to forward those technofash slop generators on my behalf
Ad for a session. There is a black and teal gradient in the back. It reads AWP 2026 Conference How to Resist AI in Writing & Teaching Then three images: A femme with dark hair wearing a black blazer and a dark blouse, looking towards the camera, with an arm up on a table. Below it reads, Carmen Maria Machado. A black and white picture of a man with dark skin, slightly long black hair, and dark stubble. Below it reads, Umair Kazi. A brown trans woman with slightly longer black curly hair, wearing a black sweater, with her arms crossed. She is standing in front of a brick wall. Below it reads, Dr. Alex Hanna. A fourth picture is on the right: a woman with brown skin, shoulder-length black hair, is smiling and looking at the camera. She is wearing a chunky necklace and a black t-shirt. Below it reads, Moderated by Vauhini Vara. Below the images, it reads: Thursday, March 5, 12:10 PM. Room 310. Sponsored by The Author's Guild. The Author's Guild is represented by its logo.
Thursday @ AWP 2026! Join Carmen Maria Machado, Umair Kazi, @vauhinivara.bsky.social, and myself as we discuss how to Resist AI in Writing and Teaching. Sponsored by @authorsguild.org.
12:10 PM in Room 310. See you in Baltimore!
authorsguild.org/event/awp-20...
A weathered leather-bound book with a design embossed in gold reading "Are we a stupid people?" With a large gold question mark and a lion holding a flag and a heraldic shield underneath. The bottom reads "By One of Them"
Today's research deep dive brought me this leather-bound book cover
And doctors used to prescribe cigarettes or whatever? Who cares? The tide goes in and out, evil genies get stuffed back into the bottle, and mathematical and ethical truth bends my way FYI
olivia.science/before
Also because guard rails are a scam. Sadly.
The answer is probably either in this paper:
doi.org/10.1007/s421...
Or this:
philsci-archive.pitt.edu/25289
Or both! If you search the doi link on here you'll find threads by me and @irisvanrooij.bsky.social on these two, but I suspect the papers offer the detail you want on what we think!
Just normal stuff
bsky.app/profile/geom...
4/n
Also, tobacco, petroleum, and AI basically share the same playbook...
bsky.app/profile/oliv...
olivia.science/before
3/n
Long story short on the relevant parts: the tobacco industry jumped on "stress" to divert from the fact that cigarettes cause cancer, much like AI companies will inevitably do the same with psychosis or whatever to divert from the fact that their bots cause harm. No user is causing this.
& importantly: bsky.app/profile/oliv...
2/n
Inevitably they will blame psychosis. And we've seen this before with companies and academics claiming lung cancer is caused by stress, not smoking!
Remember Hans Eysenck? www.theguardian.com/science/2019...
> This research programme has led to one of the worst scientific scandals of all time
1/n
Daily reminder that calling AI dead labor and stolen labor is literal.
Also see bsky.app/profile/oliv...
Might have to do a thread based on my paper with @andreaeyleen.bsky.social doi.org/10.1037/rev0... because it's just not a good argument & mathematically false. At the moment it's all explained in the paper, if anybody is interested. But the misunderstanding of proofs by professionals is sad. Sorry.
These people just want to destroy academic work from research to education while pretending they understand what they want to bulldoze
bsky.app/profile/oliv...
Search engines already exist and we use them.
The bot can't read the papers for you.
What exactly is the value proposition here?
Email to me with the title: 2027 MSc in Artificial Intelligence Application: Research Interest in Trustworthy Generative AI & Multi-Agent Safety. Email body: I have been deeply inspired by your pioneering work on AI accountability, algorithmic harm governance, and ethical alignment of generative multi-modal systems. As Geoffrey Hinton has repeatedly warned the global community about the existential and structural risks of unregulated AI systems, I have long been searching for actionable, ethical frameworks to translate these high-level warnings into practical, safe AI design, and your research has been the definitive guide for me. In particular, your 2023 paper in Nature Machine Intelligence on the structural risks of large-scale generative models, as well as your AI Accountability Framework developed at the Mozilla Foundation, have fundamentally shaped my core belief: capable AI systems must be built on the premise of safety, transparency, and consistent alignment with human values, rather than pursuing functionality alone.
I never published in Nature Machine Intelligence, nor do I have any work on an "AI Accountability Framework"
I know this is now normal, but I want you all to stop & reflect on how much the future is fucked & how the only way to mitigate this disaster is to ban/limit this damned technology
As I say at the top, the most useful message is that AI products cannot promise that guardrails work: by definition, unless the internals of the system stop being the kind of LLMs currently used, you need a human between the toy and the child/user. Which defeats the point 100%, of course!
6/n
bsky.app/profile/mari...
It is kind of suspicious that the only people I see actively defending LLMs as morally neutral seem to have very specific career incentives to do so. Especially in the academy!