
Tereza Blazkova

@terezablazek

PhD Student in Social Data Science, University of Copenhagen AI & Society | Algorithmic Fairness | ML | Education Data Science https://tereza-blazkova.github.io/

116
Followers
185
Following
5
Posts
29.12.2024
Joined

Latest posts by Tereza Blazkova @terezablazek

Banner image: screenshot of a scientific article from Nature Medicine, with two panels showing the study's method and results

⚠️ Despite all the hype, chatbots still make terrible doctors. Out today is the largest user study of language models for medical self-diagnosis. We found that chatbots provide inaccurate and inconsistent answers, and that people are better off using online searches or their own judgment.

09.02.2026 17:07 👍 358 🔁 168 💬 7 📌 33
SODAS Data Discussion 4 (Fall 2025) SODAS is delighted to host Yani Kartalis and Tereza Blazkova for the Fall 2025 Data Discussion series!

Join us for a Data Discussion on December 12! 📅

Yani Kartalis will discuss content plurality in Greece’s fragmented media system, while Tereza Blažková will present on participatory design in predictive modeling for student success.

Event🔗: sodas.ku.dk/events/sodas...

09.12.2025 07:41 👍 4 🔁 2 💬 0 📌 1

Check out @isabelcorpus.bsky.social's fantastic thread on our paper studying the effects of a "write with AI" button on change.org! ✍️ Spoiler: the effects of AI aren't always positive.

01.12.2025 20:42 👍 23 🔁 6 💬 0 📌 0
SODAS Data Discussion 3 (Fall 2025) SODAS is delighted to host Daniel Juhász Vigild and Stephanie Brandl for the Fall 2025 Data Discussion series!

Join us for a Data Discussion on Friday, November 7! 📅

Daniel Juhász Vigild will start by exploring how government use of AI impacts its trustworthiness, while Stephanie Brandl will examine whether LLMs can identify and classify fine-grained forms of populism.

Event🔗: sodas.ku.dk/events/sodas...

31.10.2025 13:12 👍 3 🔁 3 💬 0 📌 1
From the ChatGPT community on Reddit: ChatGPT asked if I wanted a diagram of what’s going on inside my pregnant belly. Explore this post and more from the ChatGPT community

time to implement it into healthcare systems www.reddit.com/r/ChatGPT/co...

26.08.2025 05:40 👍 4 🔁 1 💬 0 📌 0
Post image

Cheers from Learning@Scale poster 033 😋

22.07.2025 11:56 👍 1 🔁 0 💬 0 📌 0

Loved working on this with @kizilcec.bsky.social, Magnus Lindgaard Nielsen, David Dreyer Lassen, and @andbjn.bsky.social. Thank you for the collaboration; looking forward to future work!

21.07.2025 08:15 👍 1 🔁 0 💬 0 📌 0

To learn more, check out our paper and TikTok-style video, or see my poster and talk this week in Palermo!
Paper and video: dl.acm.org/doi/10.1145/...
Poster session: learningatscale.acm.org/las2025/inde...
Lightning talk: fair4aied.github.io/2025/

21.07.2025 08:15 👍 0 🔁 0 💬 1 📌 0

Depending on the time point and fairness metric, we observe both alarming disparities and confidence intervals that include zero.

21.07.2025 08:03 👍 0 🔁 0 💬 1 📌 0
Post image

While human behavior and the data describing it evolve over time, fairness is often evaluated at a single snapshot. Yet, as we show in our newly published paper, fairness is dynamic: studying dropout prediction across the enrollment period, we find that fairness shifts over time.

21.07.2025 08:02 👍 4 🔁 2 💬 1 📌 0
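The idea in the thread above — evaluating a fairness metric at several time points rather than one snapshot, with confidence intervals that may or may not include zero — can be sketched roughly as follows. This is a minimal illustration with made-up data, not the paper's method: the metric (demographic parity difference), the bootstrap CI, and the two hypothetical snapshots are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between group 1 and group 0."""
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def bootstrap_ci(y_pred, group, n_boot=1000, alpha=0.05):
    """Percentile bootstrap confidence interval for the parity difference."""
    n = len(y_pred)
    stats = [
        demographic_parity_diff(y_pred[idx], group[idx])
        for idx in (rng.integers(0, n, n) for _ in range(n_boot))
    ]
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Hypothetical model predictions at two snapshots during enrollment
group = rng.integers(0, 2, 500)
early = rng.binomial(1, np.where(group == 1, 0.30, 0.28))  # near-zero gap
late = rng.binomial(1, np.where(group == 1, 0.45, 0.25))   # large gap

for name, preds in [("early", early), ("late", late)]:
    d = demographic_parity_diff(preds, group)
    lo, hi = bootstrap_ci(preds, group)
    print(f"{name}: DPD={d:+.3f}, 95% CI [{lo:+.3f}, {hi:+.3f}]")
```

With data like this, the "early" snapshot tends to yield a CI that straddles zero while the "late" one shows a clear disparity — mirroring how a single-snapshot audit can reach a different conclusion depending on when it is run.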
Preview
Large language models act as if they are part of a group - Nature Computational Science An extensive audit of large language models reveals that numerous models mirror the ‘us versus them’ thinking seen in human behavior. These social prejudices are likely captured from the biased conten...

Happy to write this News & Views piece on the recent audit showing LLMs picking up "us versus them" biases: www.nature.com/articles/s43... (Read-only version: rdcu.be/d5ovo)

Check out the amazing (original) paper here: www.nature.com/articles/s43...

02.01.2025 14:11 👍 13 🔁 7 💬 0 📌 1