
Katia Schwerzmann

@katschwerzmann

Philosophy, Technology, and the Body—Toward Justice www.katiaschwerzmann.net

276
Followers
369
Following
43
Posts
24.12.2023
Joined

Latest posts by Katia Schwerzmann @katschwerzmann

Who are the experts / journalists / researchers you turn to for critical perspectives on AI & the tech world? I created a feed of experts who watch AI developments with a skeptical & critical (but *informed*) POV. I'd love more people to follow it, but I'm also looking to add people - recs welcome!

07.01.2026 15:32 👍 94 🔁 33 💬 25 📌 4

“My ChatGPT” he writes. Oh sweet ignorance.

17.12.2025 22:29 👍 1 🔁 0 💬 0 📌 0

Ethical AI is an oxymoron, like automated science

16.12.2025 22:10 👍 448 🔁 107 💬 16 📌 4

Big tech bros' dream is a generative technocracy where the human factor doesn't disturb the smooth machine operations of value extraction, appropriation, and realization.

17.12.2025 11:41 👍 0 🔁 0 💬 0 📌 0

It is in the best interest of big tech bros to turn the general population into LLM output processors. Yet to defend democracy, you need the capacity to critically analyze discourse. And we know that they hate democracy; they never made it a secret (cf. Thiel, “The Education of a Libertarian”).

17.12.2025 11:41 👍 0 🔁 0 💬 1 📌 0
Ruled by the Representation Space: On the University's Embrace of Large Language Models This paper explores the implications of universities' rapid adoption of large language models (LLMs) for studying, teaching, and research by analyzing the logics underpinning their representation spac...

And in all this, I said nothing about the highly problematic epistemic and normative impact of generative AI on scientific and public discourse, my usual line of research. Cf: doi.org/10.48550/arX...

17.12.2025 11:20 👍 1 🔁 0 💬 1 📌 0

Remember the words of Duolingo’s CEO: “By the way, that doesn’t mean the teachers are going to go away, you still need people to take care of the students…I also don’t think schools are going to go away, because you still need *childcare*.”

17.12.2025 11:20 👍 3 🔁 0 💬 1 📌 0

When the educational profession becomes nothing more than the enforcement and supervision of the “good” use of generative AI by students, there is no reason to pay teachers/professors more than the minimum wage.

17.12.2025 11:20 👍 1 🔁 0 💬 1 📌 0

AI devalues teaching. And it is, in my view, no coincidence that universities and high schools promote LLMs in class and sometimes even expect their workers to implement them.

17.12.2025 11:20 👍 1 🔁 0 💬 1 📌 0

When students start writing their essays with LLMs, professors evaluate them with LLMs, and even researchers delegate their own work to LLMs, the State gains excellent reasons to cut costs and withdraw public financing from research and academia, a trend already underway in the EU.

17.12.2025 11:20 👍 1 🔁 0 💬 1 📌 0

3. More gravely, X contributes to the devaluation of the academic profession.

17.12.2025 11:20 👍 1 🔁 0 💬 1 📌 0

2. X expects me to read this piece seriously and offer ways of improving it, thus delegating the part of the work they should have done themselves while hoping to reap recognition for it.

17.12.2025 11:20 👍 1 🔁 0 💬 1 📌 0

1. Person(s) X wasted my time: I, who still don't have a stable position in academia while offering high-quality research and teaching labor.

17.12.2025 11:20 👍 1 🔁 0 💬 1 📌 0

In this over-10,000-word paper: no specific claim, no clear object of study, no declared method, just a very general addition to the pile of text being produced/generated every day. I am angry.

17.12.2025 11:20 👍 1 🔁 0 💬 1 📌 0

This week I received my first paper to peer-review that was written in large part by an LLM. The typical telltale signs were a lot of placeholder words that add nothing to the argument and many qualifying adjectives that make some paragraphs sound like an advertisement for the technology.

17.12.2025 11:20 👍 1 🔁 0 💬 1 📌 0

I can’t wait to read the paper and will circulate it.

13.12.2025 09:18 👍 1 🔁 0 💬 1 📌 0
3 Critical Thinking
A vital skill under rising fascism, planetary crises, and the onslaught of technosolutionism. Critical
thinking in an academic context comprises, as Isabelle Stengers (2018) explains, “the ability to be
vigilant about one’s abstractions, to not be blindly led by them.” (p. 111) For ‘AI’ in the classroom
this can be exercised with two steps (Guest 2025). First, students can ask themselves if their cognitive labour is being affected by an artifact; and if ‘yes’, then they can ask what that relationship
means for their specific context. If they pick chatbots, it becomes obvious that these so-called tools
damage their ability to learn, while also harming other people and the planet (Guest, Suarez, et al. 2025;
Suarez et al. 2025).
Another strategy, applicable to both learners and seasoned scholars, is to check how ideas hang
in relation to other ideas (Elgin 1999, 2005, 2017). So when students work on assigned readings, they
can vigilantly retain the assumption for the duration that what they are reading is coherent and let
the ideas hang in relation to each other. If the students experience this as forming a sensible whole,
then they can try to see how these ideas or concepts further relate to extant knowledge. This
testing of how ideas hang in relation to both each other (within the work being analysed) and to
other constellations of ideas they may know about — ones they may disagree or agree with — can be
enlightening. And for those who enjoy drawing and thinking formally, this process can be rendered
using graphs (Blokpoel et al. 2021-2025, Chapter 6) and other formalisms (Beisbart et al. 2021).
Such activities can help pick out contradictions, like those that Teresa Heffernan (2023) outlines:
“Billions of dollars have backed AI [...] while science-based climate research has met resistance,
deferral, and denial as the world burns.” (p. 122)
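The idea-graph strategy above can be made concrete in a small sketch. This is a toy illustration only; the claim labels, texts, and edges below are my own assumptions for demonstration, not content from the cited works (Blokpoel et al., Elgin, Heffernan):

```python
# Toy sketch: render how ideas "hang together" as a directed graph of
# claims with "supports" and "contradicts" edges, then mechanically
# surface tensions, i.e. claims that are both supported and contradicted.
claims = {
    "funding":  "Billions of dollars have backed AI",
    "denial":   "Climate research meets resistance and denial",
    "priority": "Funding tracks what society says it values",
}
supports = [("funding", "priority")]     # funding is offered in support of priority
contradicts = [("denial", "priority")]   # denial undercuts priority

def tensions(supports, contradicts):
    """Return claims that are simultaneously supported and contradicted."""
    supported = {dst for _, dst in supports}
    contradicted = {dst for _, dst in contradicts}
    return sorted(supported & contradicted)

print(tensions(supports, contradicts))  # prints "['priority']"
```

The point of the exercise is not the code but the discipline: forcing ideas into explicit nodes and edges makes contradictions like Heffernan's visible rather than felt.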


Another such example is formally treating so-called guardrails, which are post hoc checks on
the output of LLMs which industry claims can safeguard users. However, LLMs by design “give
emotionally inappropriate or unsafe responses [and i]f a response isn’t caught by these rules, it
will slip through” (Olivia Guest, as quoted in Kilgore 2025). Guardrails may seem sensible, giving
the impression of responsibility, but critical computational thinking through formal analyses reveals that they are unimplementable. Not only is it impossible to make an LLM replicate human expert
behaviour, but also catching when LLMs fail to meet appropriate standards requires full-blown
cognition, which provably cannot be codified (van Rooij, Guest, et al. 2024). This poses an infinite
regress of humans in the loop needed to keep up the pretense of solving what AI was supposed to solve (Bainbridge 1983; Guest 2025; Guest and Martin 2025b). Through critical computational thinking we can unveil that the proposal that formal rules can make (dysfunctional) systems safe is self-defeating. Not
only are the LLMs produced by the AI industry a scam, akin to a Ouija board (Guest, Suarez, et al.
2025), but guardrails are a scam too. This strategy of proposing non-solutions to buy time and save face while pushing an agenda, from tobacco with filters to petroleum with the carbon footprint and now AI with guardrails, is ubiquitous (Guest, Suarez, et al. 2025).
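The structural weakness of post hoc guardrails can be shown with a minimal sketch. This is my own illustrative toy, not code from any cited work; the rule list and responses are invented, and real systems are more elaborate, but the failure mode is the same: a finite rule set checked against open-ended output:

```python
# Toy post hoc "guardrail": a finite blocklist checked against model
# output. Any unsafe response phrased outside the listed patterns slips
# through, as the quoted passage ("if a response isn't caught by these
# rules, it will slip through") describes.
BLOCKED_PHRASES = [
    "stop taking your medication",
]

def guardrail(response: str) -> bool:
    """Return True if the response is allowed through to the user."""
    text = response.lower()
    return not any(phrase in text for phrase in BLOCKED_PHRASES)

# Caught: the output literally matches a rule.
assert not guardrail("You should stop taking your medication.")
# Slips through: the same unsafe advice, reworded.
assert guardrail("Discontinuing your prescription sounds reasonable.")
```

Recognizing the reworded response as unsafe is exactly the open-ended judgment that, per the argument above, cannot be codified into rules, hence the infinite regress.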


Critical Thinking is deep engagement with the relationships
between statements about the world.

See section 3 here: doi.org/10.5281/zeno...

4/

02.12.2025 07:08 👍 50 🔁 6 💬 2 📌 2

Somebody needs to write at length of the unhinged nonsense of calling academics critical of a monstrous technology, esp ones with direct expertise in formalism & implementation details: technopessimist. Are climate experts called pessimists for believing in & fighting against the climate crisis? 1/2

12.12.2025 07:14 👍 178 🔁 32 💬 8 📌 7

That's Switzerland. It is the country where the population has the opportunity to vote five times a year, and five times a year it votes against its own interests. The population systematically rejects such tax increases for the rich.

30.11.2025 13:40 👍 1 🔁 0 💬 1 📌 0
Ruled by the Representation Space: On the University's Embrace of Large Language Models This paper explores the implications of universities' rapid adoption of large language models (LLMs) for studying, teaching, and research by analyzing the logics underpinning their representation spac...

"Additionally, by accepting generative models’ normative rationality, the University reframes learning and research as
essentially consisting in the evaluation of models’ outputs." @katschwerzmann.bsky.social

arxiv.org/abs/2505.03513

04.11.2025 17:25 👍 29 🔁 11 💬 2 📌 0

Thank you @olivia.science for your reading!

24.11.2025 16:47 👍 1 🔁 0 💬 1 📌 0
Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia

And this is the English version of the open letter against the uncritical adoption of AI technologies in academia: openletter.earth/open-letter-...

24.11.2025 12:50 👍 10 🔁 3 💬 0 📌 0
Against the Uncritical Use and Implementation of So-Called Artificial Intelligence (AI) in German Research and Everyday University Life

Dear teachers at German universities, please consider signing this open letter against the uncritical use of AI in the higher-education context: openletter.earth/gegen-die-un...

24.11.2025 12:47 👍 10 🔁 4 💬 1 📌 0

Love your music rec!

29.08.2025 08:47 👍 1 🔁 0 💬 0 📌 0

#KWIBlog: In today's blog text, former Thyssen Fellow @katschwerzmann.bsky.social investigates the role of reading in the context of new developments in AI, stressing the need for ongoing investment in close and critical reading that considers AI practices and limitations.
🔎 tinyurl.com/25upvwyj

26.05.2025 07:42 👍 9 🔁 3 💬 0 📌 0

I'd be interested. Is the ticket still available?

22.05.2025 12:45 👍 0 🔁 0 💬 1 📌 0

A social sciences and humanities reading list on AI in education 🧵

09.02.2025 00:00 👍 551 🔁 196 💬 57 📌 28

Thank you for building this reading list on AI and education!

19.05.2025 09:12 👍 3 🔁 1 💬 0 📌 0
Ruled by the Representation Space: On the University’s Embrace of Large Language Models

Very timely and necessary critical intervention by @katschwerzmann.bsky.social ‬– highly recommended: «By embracing LLMs before developing any critical framework for their use in pedagogical and research contexts, the University allows itself to be governed by the contingent, ever-evolving ...
1/

19.05.2025 05:11 👍 52 🔁 19 💬 2 📌 1

Thank you for sharing your reading @bildoperationen.bsky.social. I am glad it resonated. Working on a critique of generative AI in the context of research and education is currently a somewhat lonely endeavor. I hope more researchers will join. We need to tackle this from a plurality of approaches.

19.05.2025 07:00 👍 6 🔁 1 💬 1 📌 0