
@lucyavraamidou

126 Followers
47 Following
3 Posts
Joined 18.11.2024

Latest posts by @lucyavraamidou

Hi, thanks for reading/engaging!

26.01.2026 22:03 👍 3 🔁 0 💬 0 📌 0
Lucy Avraamidou: The big AI companies are a modern example of technofascism and centralisation. How are social media "rotting" the human brain? What are the effects on mental health? What can artificial…

Great interview of @lucyavraamidou.bsky.social in Greek Cypriot newspaper titled:

Οι μεγάλες εταιρείες ΑΙ αποτελούν ένα σύγχρονο παράδειγμα τεχνοφασισμού και συγκεντρωτισμού

The big AI companies are a modern example of technofascism and centralisation

www.philenews.com/politismos/p...

26.01.2026 09:11 👍 37 🔁 14 💬 2 📌 0

Coming out of 3 week migraine (status migrainosus, look it up) & seeing my work — w/ @marentierra.bsky.social @irisvanrooij.bsky.social @altibel.bsky.social @jedbrown.org @felienne.bsky.social @lucyavraamidou.bsky.social — in Rolling Stone. Guess I shouldn't be as depressed anymore based on facts 😌

17.12.2025 22:11 👍 65 🔁 13 💬 2 📌 3

🤣

29.10.2025 09:46 👍 2 🔁 0 💬 1 📌 0
Critical AI Literacy: Empowering people to resist hype and harms in the age of AI -- Symposium 2025 (YouTube video by Iris van Rooij)

Critical AI Literacy Symposium, with Dagmar Monett, @lucyavraamidou.bsky.social & Miquel Pérez Torres, Linda Mannila, and @olivia.science

🎬 Video recording: www.youtube.com/watch?v=Fxyg...

📢 Symposium website: www.ru.nl/en/about-us/...

🤔 Critical AI Literacy website: www.ru.nl/en/research/...

14.10.2025 22:02 👍 72 🔁 41 💬 2 📌 4

Great piece, thank you!

26.09.2025 03:59 👍 1 🔁 0 💬 0 📌 0

Getting close to 50k views and I'm wondering is it just everybody is scared to say this and pleased I did? Because if there's so many of us who agree, trust me I'd know if 1k people disagreed with me let alone 50k, why are we letting AI ruin our universities?

Together we can turn back the tide.

21.09.2025 11:07 👍 329 🔁 103 💬 19 📌 2

Can you all explain yourselves what the actual f is 23k views and about half as many downloads on a preprint right now come off it 🥺

08.09.2025 16:51 👍 72 🔁 15 💬 10 📌 0
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.


Figure 1. A cartoon set-theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI's ChatGPT and Apple's Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

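The figure's set-theoretic point — that these terms overlap rather than partition AI — can be sketched with Python sets. The memberships below are a minimal illustration drawn only from the caption (GAN and BM sit in the intersection of generative models and ANNs; ELIZA, A.L.I.C.E., and Jabberwacky are classic rule-based chatbots); they are not a complete taxonomy.

```python
# Illustrative sets of model/system names from the Figure 1 caption.
ann = {"AlexNet", "BERT", "GAN", "BM"}          # artificial neural networks
generative = {"GAN", "BM"}                      # generative models
chatbots = {"ELIZA", "A.L.I.C.E.", "Jabberwacky"}  # rule-based chatbots

# The "purple subset": systems that are both generative AND ANNs.
purple = ann & generative
assert purple == {"GAN", "BM"}

# The classic chatbots listed here are not ANNs, so the sets are disjoint.
assert not (chatbots & ann)
```

The point of the exercise is that no single term ("LLM", "chatbot", "generative model") exclusively picks out the products one may wish to critique, which is exactly what Table 1 untangles.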

Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.


Protecting the Ecosystem of Human Knowledge: Five Principles


Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry's marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n

06.09.2025 08:13 👍 3786 🔁 1897 💬 110 📌 390