
Iris van Rooij πŸ’­

@irisvanrooij

Professor of Computational Cognitive Science | @AI_Radboud | @Iris@scholar.social on 🦣 | http://cognitionandintractability.com | she/they πŸ³οΈβ€πŸŒˆ

17,605
Followers
1,253
Following
2,592
Posts
29.05.2023
Joined

Latest posts by Iris van Rooij πŸ’­ @irisvanrooij

Read this!! πŸ‘‡

bsky.app/profile/oliv...

07.03.2026 18:10 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Why does cheating matter? A personal anecdote and an appeal to junior colleagues.

got inspired by some questions from phd students the other day β€” and also really irritated by senior academics who seem to refuse to get the issues and continue to blame students; hope useful for you! 🩷

olivia.science/cheating

07.03.2026 17:56 πŸ‘ 23 πŸ” 7 πŸ’¬ 1 πŸ“Œ 1

😭

bsky.app/profile/maar...

07.03.2026 18:09 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Bug or feature?

07.03.2026 17:55 πŸ‘ 6 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

I came across β€œvibe citing” the other day…

07.03.2026 17:48 πŸ‘ 3 πŸ” 1 πŸ’¬ 1 πŸ“Œ 2

bsky.app/profile/iris...

07.03.2026 17:55 πŸ‘ 7 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Stop with the β€œvibe coding” and start doing some algorithm design, foundations of computing, and computational complexity analyses.

Even just doing playful computer science unplugged is better than all the current hyped nonsense.

www.csunplugged.org/en/

07.03.2026 17:54 πŸ‘ 17 πŸ” 6 πŸ’¬ 3 πŸ“Œ 1

Have they never heard of the Halting Problem? Or the Frame Problem? There is no such thing as automated coding *for everything*.

We are deskilling a whole generation of computer scientists β€” sigh

07.03.2026 17:49 πŸ‘ 24 πŸ” 4 πŸ’¬ 4 πŸ“Œ 0
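The Halting Problem point rests on the classic diagonalization argument: assume a decider `halts(prog)` exists, and you can construct a program that does the opposite of whatever the decider predicts. A minimal, runnable sketch in Python (the names `make_diagonal` and `always_no` are illustrative, not from the post):

```python
def make_diagonal(halts):
    """Given any claimed halting-decider, construct a program it must get wrong."""
    def diagonal():
        # Ask the decider about ourselves, then do the opposite.
        if halts(diagonal):
            while True:       # decider said "halts" -> loop forever
                pass
        return "halted"       # decider said "loops" -> halt immediately
    return diagonal

# Refute a concrete (trivially wrong) decider that always answers "never halts":
always_no = lambda prog: False
d = make_diagonal(always_no)
print(d())  # prints "halted" — yet the decider claimed d never halts: contradiction
```

The same contradiction hits a decider that always answers "halts" (its diagonal program loops forever), and by the same construction any decider at all, which is why "automated coding *for everything*" runs into a hard mathematical limit rather than an engineering one.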
Box 1 β€” Implications of intractability


I had a realisation

Context: In our Reclaiming AI paper we argued that AI systems cannot scale up to human-level cognition without consuming astronomical amounts of resources

My realisation: The AI industry is determined to burn through the earth’s resources just to prove us right *empirically*

25.02.2026 21:48 πŸ‘ 120 πŸ” 38 πŸ’¬ 7 πŸ“Œ 3
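The resource-scaling claim is grounded in computational complexity: for intractable (e.g. NP-hard) problems, brute-force search grows exponentially with input size. A toy illustration only (subset sum, chosen here for brevity; this is not the Reclaiming AI paper's actual argument):

```python
from itertools import combinations

def subsets_checked(weights, target):
    """Brute-force subset sum: count the candidates examined.

    When no subset hits the target, every one of the 2**n subsets is scanned.
    """
    checked = 0
    for r in range(len(weights) + 1):
        for combo in combinations(weights, r):
            checked += 1
            if sum(combo) == target:
                return checked
    return checked

# With an unreachable target, the full search space is traversed: 2**n candidates.
print(subsets_checked([3, 9, 4, 7, 1], 999))                  # 32   (2**5)
print(subsets_checked([3, 9, 4, 7, 1, 8, 2, 6, 5, 11], 999))  # 1024 (2**10)
```

Each added item doubles the search space; at the scale of human cognition, that doubling is what makes "just add more compute" consume astronomical resources.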

β€œvibe learning”

*eye roll*

07.03.2026 17:41 πŸ‘ 27 πŸ” 4 πŸ’¬ 4 πŸ“Œ 1

I have ADHD and I'm pancreatically impaired and I do not want people trying to forward those technofash slop generators on my behalf

04.03.2026 18:29 πŸ‘ 29 πŸ” 3 πŸ’¬ 0 πŸ“Œ 1
Ad for a session. There is a black and teal gradient in the back. It reads
AWP 2026 Conference
How to Resist AI in Writing & Teaching

Then three images:
A femme with dark hair wearing a black blazer and a dark blouse, looking towards the camera, with an arm up on a table. Below it reads, Carmen Maria Machado.

A black and white picture of a man with dark skin, slightly long black hair, and dark stubble. Below it reads, Umair Kazi.

A brown trans woman with slightly longer black curly hair, wearing a black sweater, with her arms crossed. She is standing in front of a brick wall. Below it reads, Dr. Alex Hanna.

A fourth picture is on the right: a woman with brown skin, shoulder-length black hair, is smiling and looking at the camera. She is wearing a chunky necklace and a black t-shirt. Below it reads, Moderated by Vauhini Vara.

Below the images, it reads:
Thursday, March 5, 12:10 PM. Room 310. Sponsored by The Author's Guild. The Author's Guild is represented by its logo.


Thursday @ AWP 2026! Join Carmen Maria Machado, Umair Kazi, @vauhinivara.bsky.social, and myself as we discuss how to Resist AI in Writing and Teaching. Sponsored by @authorsguild.org.

12:10 PM in Room 310. See you in Baltimore!

authorsguild.org/event/awp-20...

04.03.2026 23:04 πŸ‘ 96 πŸ” 31 πŸ’¬ 4 πŸ“Œ 0
A weathered leather bound book with a design devised in gold reading β€œAre we a stupid people?” With a large gold question mark and a lion holding a flag and a heraldic shield underneath. The bottom reads β€œBy One of Them”


Today’s research deep dive brought me this leather bound book cover

05.03.2026 07:26 πŸ‘ 36 πŸ” 6 πŸ’¬ 1 πŸ“Œ 2

And doctors used to prescribe cigarettes or whatever? Who cares? The tide goes in and out, evil genies get stuffed back into the bottle, and mathematical and ethical truth bends my way FYI 😌

olivia.science/before

05.03.2026 06:22 πŸ‘ 37 πŸ” 9 πŸ’¬ 1 πŸ“Œ 2

Also because guard rails are a scam. Sadly.

05.03.2026 06:02 πŸ‘ 60 πŸ” 9 πŸ’¬ 2 πŸ“Œ 0
Reclaiming AI as a Theoretical Tool for Cognitive Science - Computational Brain & Behavior
The idea that human cognition is, or can be understood as, a form of computation is a useful conceptual tool for cognitive science. It was a foundational assumption during the birth of cognitive scien...

The answer is probably either in this paper:

doi.org/10.1007/s421...

Or this:

philsci-archive.pitt.edu/25289

Or both! If you search the doi link on here you'll find threads by me and @irisvanrooij.bsky.social on these two, but I suspect the papers offer the detail you want on what we think!

05.03.2026 04:48 πŸ‘ 7 πŸ” 3 πŸ’¬ 1 πŸ“Œ 0

πŸ‘€

🫩

Just normal stuff

bsky.app/profile/geom...

4/n

05.03.2026 06:14 πŸ‘ 26 πŸ” 8 πŸ’¬ 1 πŸ“Œ 0

Also the playbook between tobacco and AI as well as petroleum is basically shared...

bsky.app/profile/oliv...

olivia.science/before

3/n

05.03.2026 06:12 πŸ‘ 28 πŸ” 8 πŸ’¬ 2 πŸ“Œ 0

Long story short on the relevant parts: the tobacco industry jumped on "stress" to divert from the fact that cigarettes cause cancer, much like AI companies will inevitably do the same for psychosis or whatever to divert from the fact that their bots cause harm. No user is causing this.

& importantly: bsky.app/profile/oliv...

2/

05.03.2026 06:09 πŸ‘ 43 πŸ” 10 πŸ’¬ 1 πŸ“Œ 1

Inevitably they will blame psychosis. And we've seen this before with companies and academics claiming lung cancer is caused by stress not smoking!

Remember Hans Eysenck? www.theguardian.com/science/2019...

> This research programme has led to one of the worst scientific scandals of all time

1/n

05.03.2026 06:09 πŸ‘ 73 πŸ” 17 πŸ’¬ 3 πŸ“Œ 2
We've been here before! Parallels between AI and tobacco, and other warnings.

Bingo!

olivia.science/before

04.03.2026 23:28 πŸ‘ 20 πŸ” 2 πŸ’¬ 3 πŸ“Œ 1
Grammarly Offering Manuscript Reviews by AI Versions of Recently Deceased Professors
The Grammarly "Expert Review" feature uses AI to provide feedback on papers using the name and work of real professors, dead or alive.

Daily reminder that calling AI dead labor and stolen labor is literal.

04.03.2026 22:26 πŸ‘ 326 πŸ” 128 πŸ’¬ 8 πŸ“Œ 24

Also see bsky.app/profile/oliv...

04.03.2026 06:06 πŸ‘ 4 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Might have to do a thread based on my paper with @andreaeyleen.bsky.social doi.org/10.1037/rev0... because it's just not a good argument & mathematically false. At the moment it's all explained in the paper, if anybody is interested. But the misunderstanding of proofs by professionals is sad. Sorry.

04.03.2026 05:17 πŸ‘ 12 πŸ” 2 πŸ’¬ 2 πŸ“Œ 0

These people just want to destroy academic work from research to education while pretending they understood what they want to bulldoze

bsky.app/profile/oliv...

04.03.2026 05:48 πŸ‘ 79 πŸ” 7 πŸ’¬ 1 πŸ“Œ 1

Search engines already exist and we use them.

The bot can't read the papers for you.

What exactly is the value proposition here.

03.03.2026 19:40 πŸ‘ 190 πŸ” 9 πŸ’¬ 2 πŸ“Œ 1
email to me with a title: 2027 MSc in Artificial Intelligence Application – Research Interest in Trustworthy Generative AI & Multi-Agent Safety

email body: I have been deeply inspired by your pioneering work on AI accountability, algorithmic harm governance, and ethical alignment of generative multi-modal systems. As Geoffrey Hinton has repeatedly warned the global community about the existential and structural risks of unregulated AI systems, I have long been searching for actionable, ethical frameworks to translate these high-level warnings into practical, safe AI design β€” and your research has been the definitive guide for me. In particular, your 2023 paper in Nature Machine Intelligence on the structural risks of large-scale generative models, as well as your AI Accountability Framework developed at the Mozilla Foundation, have fundamentally shaped my core belief: capable AI systems must be built on the premise of safety, transparency, and consistent alignment with human values, rather than pursuing functionality alone.


never published in Nature Machine Intelligence & neither do i have work on "AI Accountability Framework"

i know this is now normal but i want you all to stop & reflect on how much the future is fucked & the only way to mitigate this disaster is to ban/limit this damned technology

04.03.2026 12:11 πŸ‘ 130 πŸ” 39 πŸ’¬ 8 πŸ“Œ 3

As I say at the top, the most useful message is that AI products cannot promise guardrails work because by definition, unless the internals of the system stop being the type of LLMs used, you need a human between toy and child/user. Defeating the point 100% of course!

6/n

bsky.app/profile/mari...

17.11.2025 06:11 πŸ‘ 125 πŸ” 24 πŸ’¬ 3 πŸ“Œ 1
04.03.2026 19:29 πŸ‘ 27 πŸ” 8 πŸ’¬ 0 πŸ“Œ 0

It is kind of suspicious that the only people I see actively defending LLMs as morally neutral seem to have very specific career incentives to do so. Especially in the academy!

04.03.2026 16:34 πŸ‘ 150 πŸ” 24 πŸ’¬ 5 πŸ“Œ 3