
Jindřich Libovický

@jlibovicky

Researcher at Charles University | multilingual natural language processing, machine translation

543 Followers · 248 Following · 39 Posts · Joined 07.12.2023

Latest posts by Jindřich Libovický @jlibovicky

None of this would work without my TAs: Dušan Variš, Tomáš Musil, Jan Bronec, @gianlucavico.bsky.social , Adnan Al Ali, Kristýna Onderková, and @straka-milan.bsky.social taking care of ReCodEx: recodex.mff.cuni.cz. Thank you 🙏

09.03.2026 09:45 👍 1 🔁 0 💬 0 📌 0

Some students find the assignments too time-consuming. Fair. But here's what the data shows over 3 years:
📉 Forum questions dropped ~4×
📈 Full bonus points: 20% → 27% → 52%
📉 Avg. test attempts: 2.7 → 2.4 → 1.9

Asking less, achieving more, iterating less. 🤔

09.03.2026 09:45 👍 1 🔁 0 💬 1 📌 0
Introduction to Machine Learning with Python | ÚFAL

3rd run of teaching ML to 250+ bachelor students (with great materials originally by @straka-milan.bsky.social). Core philosophy: explain the math, implement algorithms from scratch, Kaggle-style competitions, all auto-graded. ufal.mff.cuni.cz/courses/npfl...

But look what LLMs did to the course 👇

09.03.2026 09:45 👍 1 🔁 0 💬 1 📌 0

Spent time making AI-generated images of Bayes' Rule, Laplace Smoothing, Markov Chains & Shannon Entropy for class today 🎨🤖 Even though the images are objectively hilarious, none of the 50 students in the room laughed. Or even smiled. 💀

04.03.2026 14:37 👍 4 🔁 1 💬 1 📌 0

More interesting: humans are predictably inconsistent in their values. LLMs capture this but overgeneralize: they become more stereotypically consistent than actual humans.

After several rejections, finally publishable. To appear at the Multilingual Multicultural Evaluation workshop at EACL 2026.

16.02.2026 17:08 👍 2 🔁 2 💬 0 📌 0

I reviewed papers evaluating LLM values using sociology questionnaires. Different methods, different results. Didn't trust them, so I tested it myself.
Methodology matters. Short answers vs CoT, squared error vs KL divergence: each choice changes which populations an LLM "aligns" with.
www.arxiv.org/pdf/2602.04033

16.02.2026 17:06 👍 4 🔁 0 💬 1 📌 0
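The metric choice is easy to make concrete. A minimal sketch with made-up answer distributions (none of these numbers are from the paper): squared error and KL divergence can disagree about which population an LLM's answers sit closest to.

```python
import math

def squared_error(p, q):
    """Sum of squared differences between two answer distributions."""
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q); eps guards against zero-probability answer options."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical 5-point Likert-scale distributions.
llm   = [0.1, 0.2, 0.4, 0.2, 0.1]    # an LLM's answer distribution
pop_a = [0.0, 0.25, 0.5, 0.25, 0.0]  # population A: peaked, no extreme answers
pop_b = [0.2, 0.2, 0.2, 0.2, 0.2]    # population B: uniform

# Squared error says the LLM is closer to A ...
se_a, se_b = squared_error(llm, pop_a), squared_error(llm, pop_b)
# ... but KL divergence says it is closer to B, because A assigns
# zero probability to extreme answers the LLM sometimes gives.
kl_a, kl_b = kl_divergence(llm, pop_a), kl_divergence(llm, pop_b)
```

Squared error barely notices the zero-probability bins; KL divergence punishes them hard, so the two metrics rank the populations in opposite orders.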

We have updated the pre-print on CUS-QA, a benchmark for regional knowledge about Czechia, Slovakia, and Ukraine: arxiv.org/abs/2507.22752
It now includes retrieval-augmented generation results and a more detailed analysis of model performance by question topic and visual context.

03.02.2026 21:49 👍 7 🔁 1 💬 0 📌 0

👉 What do we do?
We use the good old IBM1 model to align subwords with morphological features from UniMorph, and we show it captures the same thing as morpheme boundary recall.
👉 Why does it matter?
For many languages, good segmentation data is missing. Morphological features are more widely available.

02.02.2026 13:38 👍 4 🔁 1 💬 0 📌 0
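The alignment idea can be sketched with a toy EM implementation of IBM Model 1 (the corpus, tags, and iteration count below are made up for illustration, not the paper's setup): treat a word's subword tokens as the source side and its UniMorph-style feature tags as the target side, then read alignments off the learned translation table.

```python
from collections import defaultdict

# Toy "parallel corpus": (subword tokens, morphological feature tags)
# per word -- entirely made-up data for illustration.
corpus = [
    (["walk", "ed"], ["V", "PST"]),
    (["walk", "s"],  ["V", "PRS"]),
    (["talk", "ed"], ["V", "PST"]),
    (["talk", "s"],  ["V", "PRS"]),
]

def ibm1(corpus, iterations=20):
    """EM for IBM Model 1: learn t(subword | feature)."""
    subwords = {s for toks, _ in corpus for s in toks}
    feats = {f for _, fs in corpus for f in fs}
    # Uniform initialization of the translation table.
    t = {(s, f): 1.0 / len(subwords) for s in subwords for f in feats}
    for _ in range(iterations):
        count, total = defaultdict(float), defaultdict(float)
        for toks, fs in corpus:
            for s in toks:
                z = sum(t[(s, f)] for f in fs)  # posterior normalizer
                for f in fs:
                    c = t[(s, f)] / z           # expected alignment count
                    count[(s, f)] += c
                    total[f] += c
        t = {(s, f): count[(s, f)] / total[f] for s in subwords for f in feats}
    return t

def align(toks, fs, t):
    """Align each subword to its most probable feature tag."""
    return {s: max(fs, key=lambda f: t[(s, f)]) for s in toks}

t = ibm1(corpus)
alignment = align(["walk", "ed"], ["V", "PST"], t)  # "ed" goes to PST
```

On this toy data, EM learns that "ed" is best explained by the PST tag and the stems by V, which is exactly the signal a segmentation-free plausibility metric can exploit.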
Evaluating Morphological Plausibility of Subword Tokenization via Statistical Alignment with Morpho-Syntactic Features We present a novel metric for the evaluation of the morphological plausibility of subword segmentation. Unlike the typically used morpheme boundary or retrieval F-score, which requires gold segmentati...

We (= mostly @abyste.bsky.social) developed a way to evaluate how morphological a #tokenization is w/o gold segmentation labels. arxiv.org/abs/2601.18536 The key: align subword tokens with morphological features from UniMorph using IBM Model 1.
To appear in EACL 2026 Findings.

02.02.2026 13:38 👍 9 🔁 1 💬 1 📌 1

Happy holidays! 🎄🎅🤩🎁

23.12.2025 10:51 👍 3 🔁 0 💬 0 📌 0

Attenzione! 🇮🇹 Know Piedmontese or Neapolitan speakers? @gianlucavico.bsky.social is collecting crowd-sourced translations to evaluate LLM performance on these regional languages. Participate!

10.11.2025 14:36 👍 2 🔁 1 💬 0 📌 0

Cultural awareness is trickier. Different data for different cultures means we can't really compare performance across cultures in a straightforward way. And there's no clear optimization target for cultural awareness beyond curating diverse training data.

21.10.2025 13:30 👍 1 🔁 0 💬 0 📌 0

☝️🧵 Most current approaches emphasize language neutrality: about two-thirds of VL benchmarks use translation-based evaluation. This makes sense because we can explicitly train for language neutrality when we have parallel data. But... 🧵👇

21.10.2025 13:30 👍 0 🔁 0 💬 1 📌 0

With @andrei-a-manea.bsky.social, we posted a survey on multilingual vision-language models 👉 arxiv.org/pdf/2509.22123
We reviewed 31 models and 21 benchmarks. There's a tension between language neutrality (same results across languages) and cultural awareness (context matters differently across cultures).

21.10.2025 13:30 👍 3 🔁 2 💬 1 📌 0

Most vision-language models only work in English. We explore how different parallel data types (machine-translated vs authentic captions) affect cross-lingual transfer. Key finding: authentic data can outperform machine translation, and multilingual training beats bilingual approaches. #NLP

01.09.2025 15:38 👍 2 🔁 0 💬 0 📌 0

So proud of my PhD student @andrei-a-manea.bsky.social for his first first-author publication! 🎉 He presented this work last week at TSD. Investigating the Effect of Parallel Data in the Cross-Lingual Transfer for Vision-Language Encoders arxiv.org/pdf/2504.21681

01.09.2025 15:38 👍 6 🔁 0 💬 1 📌 0

For evaluation researchers: Simple string-overlap metrics (BLEU, chrF) work surprisingly well for factual QA. 🤔 When answers are mostly named entities, exact matches matter more than we thought.

LLM-as-judge 🦙🧑‍⚖️ correlates best with human judgment, though.

25.08.2025 08:06 👍 1 🔁 0 💬 1 📌 0
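The string-overlap idea can be sketched as a stripped-down, chrF-style character n-gram F-score (a simplification for illustration; the real chrF uses a recall-weighted F-beta and more careful preprocessing). With mostly named-entity answers, a missing diacritic barely hurts, while an unrelated entity scores near zero.

```python
from collections import Counter

def char_ngrams(text, n):
    """Multiset of character n-grams, whitespace removed."""
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf_simple(hypothesis, reference, max_n=3):
    """Average character n-gram F1 for n = 1..max_n (chrF-like)."""
    scores = []
    for n in range(1, max_n + 1):
        h, r = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not h or not r:
            continue
        overlap = sum((h & r).values())  # clipped n-gram matches
        prec, rec = overlap / sum(h.values()), overlap / sum(r.values())
        scores.append(2 * prec * rec / (prec + rec) if overlap else 0.0)
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical QA answers: a near-match with a missing diacritic
# still scores high, a wrong entity scores low.
close = chrf_simple("Karlstejn", "Karlštejn")
wrong = chrf_simple("Brno", "Karlštejn")
```

An exact match scores 1.0, which is why such metrics work well when the gold answer is a short named entity.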

The results are... humbling 😅
Even the best models:
- >40% accuracy on textual questions
- <30% on visual questions
- often perform better in English than in the local language (!!)

Visual QA with regional images is especially challenging.

25.08.2025 08:06 👍 0 🔁 0 💬 1 📌 0

The problem: Most QA benchmarks focus on globally known facts. But real users ask about local geography, culture, and history.

We collected questions from native speakers in Czechia 🇨🇿, Slovakia 🇸🇰, and Ukraine 🇺🇦 about facts locals know but outsiders don't.

25.08.2025 08:06 👍 0 🔁 0 💬 1 📌 0
ufal/cus-qa · Datasets at Hugging Face

🧵 We're releasing CUS-QA - a new benchmark for testing LLMs on regional knowledge!
Find out what your model knows about Czechia 🇨🇿, Slovakia 🇸🇰, and Ukraine 🇺🇦!
👉 Textual and visual questions, answers, and human judgment on model outputs!
huggingface.co/datasets/ufa...
www.arxiv.org/abs/2507.22752

25.08.2025 08:06 👍 16 🔁 3 💬 1 📌 3

Stay tuned, we will release the dataset soon...

01.08.2025 16:49 👍 2 🔁 0 💬 0 📌 0

We need to have poster fights at the end of every conference.

29.07.2025 19:01 👍 3 🔁 1 💬 0 📌 0

Just presented MAGBIG, a new dataset and evaluation methodology for gender bias in multilingual text-to-image generation. Grammatical gender matters when studying these biases across languages!
Thanks to Felix Friedrich, @kathaem.bsky.social and all co-authors - it was fun to work on this together!

28.07.2025 13:14 👍 2 🔁 0 💬 0 📌 0

This week I am at #ACL2025NLP in Vienna 🎡🇦🇹. Find me 🕵️ or message 💌 me if you want to chat about multilinguality or tokenization. Stop 🛑 by our poster on gender bias in text-to-image generation on Monday aclanthology.org/2025.acl-lon...

27.07.2025 07:24 👍 7 🔁 0 💬 0 📌 0
TokShop 2025 Registering interest in all things tokenization at TokShop @ ICML 2025 (July 18) Consider joining the Google group for future updates! https://groups.google.com/g/tokshop

TokShop @ #ICML2025 got way more submissions than expected! 📈 We could really use a few more reviewers to help out. If you have the capacity to review a #tokenization paper by Saturday, please fill out this form: forms.gle/32A6sQHQrMSb... 🙏

02.06.2025 16:40 👍 0 🔁 4 💬 0 📌 2
ICML 2025 Workshop TokShop Welcome to the OpenReview homepage for ICML 2025 Workshop TokShop

📣 Call for Paper Alert: TokShop @ ICML 2025
TokShop explores tokenization across all data modalities. Topics include: subword NLP techniques, multimodal approaches, multilingual challenges, post-training modification, alternative representations, and statistical perspectives.

14.05.2025 13:31 👍 18 🔁 12 💬 1 📌 2
Tokenization Workshop @ ICML 2025

Got a tokenization paper that just didn't make the cut for ICML? Submit it to the Tokenization Workshop TokShop at #ICML2025 -- we'd love to see it there!
tokenization-workshop.github.io

04.05.2025 19:27 👍 7 🔁 6 💬 0 📌 0
Beyond Literal Token Overlap: Token Alignability for Multilinguality Katharina Hämmerl, Tomasz Limisiewicz, Jindřich Libovický, Alexander Fraser. Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics:...

If you're at the virtual NAACL day on May 6 (5 pm Central European Time), don't miss @kathaem.bsky.social presenting our work on the importance of semantic token overlap in multilingual language models. aclanthology.org/2025.naacl-s...

30.04.2025 12:50 👍 1 🔁 0 💬 0 📌 0

Attending #NAACL2025 virtually. Since 2022, I've been training a classifier on papers I read to tackle the arXiv madness. Ran it on the NAACL proceedings for my personalized watch list. 🤓📺 However, it's far from perfect: Multilingual cultural awareness is great, but where is tokenization? 🤷

30.04.2025 12:50 👍 2 🔁 0 💬 2 📌 0

We're organizing the ✨Tokenization Workshop✨ TokShop❗ Join us at @icmlconf.bsky.social in July in Vancouver 🇨🇦. Follow @tokshop.bsky.social for updates! Submit your paper by May 30.

15.04.2025 17:37 👍 4 🔁 0 💬 0 📌 0