
Dayeon (Zoey) Ki

@dayeonki

CS PhD @umdclip Multilingual / Culture #NLProc, MT https://dayeonki.github.io/

169 Followers · 220 Following · 24 Posts · Joined 05.12.2024

Latest posts by Dayeon (Zoey) Ki @dayeonki

Multiple LLM Agents Debate for Equitable Cultural Alignment Large Language Models (LLMs) need to adapt their predictions to diverse cultural contexts to benefit diverse communities across the world. While previous efforts have focused on single-LLM, single-tur...

8/ πŸ’Œ Huge thanks to @marinecarpuat.bsky.social, Rachel, and @zhoutianyi.bsky.social for their guidance β€” and special shoutout to the amazing UMD CLIP team!

Check out our paper and code below πŸš€
πŸ“„ Paper: arxiv.org/abs/2505.24671
πŸ€–Β Dataset: github.com/dayeonki/cul...

12.06.2025 23:33 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

7/ 🌟 What’s next for Multi-Agent Debate?

Some exciting future directions:
1️⃣ Assigning specific roles to represent diverse cultural perspectives
2️⃣ Discovering optimal strategies for multi-LLM collaboration
3️⃣ Designing better adjudication methods to resolve disagreements fairly 🀝

12.06.2025 23:33 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

6/ But do these gains hold across cultures? πŸ—Ύ

πŸ«‚ We measure cultural parity across diverse groups β€” and find that Multi-Agent Debate not only boosts average accuracy but also leads to more equitable cultural alignment 🌍

12.06.2025 23:33 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

5/ How do model decisions evolve through debate?

We track three phases of LLM behavior:
πŸ’— Initial decision correctness
πŸ’š Final decision correctness
πŸ’™ Judge’s decision correctness

✨ Multi-Agent Debate is most valuable when models initially disagree!
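Not from the thread, but the three phases above could be tabulated per example with a small helper like this (field names and correctness flags are made up for illustration):

```python
# A small sketch of averaging correctness across the three phases;
# the record format here is hypothetical, not the paper's actual schema.

def phase_accuracy(records: list[dict]) -> dict:
    """Average correctness for initial, final, and judge decisions."""
    n = len(records)
    return {
        phase: sum(r[phase] for r in records) / n
        for phase in ("initial", "final", "judge")
    }

records = [
    {"initial": 0, "final": 1, "judge": 1},  # models disagreed; debate helped
    {"initial": 1, "final": 1, "judge": 1},  # already correct at the start
    {"initial": 0, "final": 0, "judge": 1},  # judge recovered the answer
]
print(phase_accuracy(records))
```

Comparing the three columns makes the headline finding visible: the gap between initial and final accuracy is where debate pays off.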

12.06.2025 23:33 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

4/ πŸ”₯Β Distinct LLMs are complementary!

We find that:
🀯 Multi-Agent Debate lets smaller LLMs (7B) match the performance of much larger ones (27B)
πŸ† Best combo? Gemma-2 9B + EXAONE-3 7B πŸ’ͺ

12.06.2025 23:33 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

3/ Before bringing in two #LLMs, we first πŸ“ˆ maximize single-LLM performance through:

1️⃣ Cultural Contextualization: adding relevant rules-of-thumb for the target culture
2️⃣ Self-Reflection: the model evaluates and improves its own outputs

These serve as strong baselines before we introduce collaboration 🀝
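As a rough sketch (illustrative only, not the paper's implementation), the two baselines boil down to prompt construction around a single model; `ask` below is a hypothetical placeholder for an LLM call:

```python
# Sketch of the two single-LLM baselines; `ask` is a hypothetical
# stand-in for any LLM API and returns canned strings for illustration.

def ask(prompt: str) -> str:
    # Placeholder LLM call.
    if "Rule-of-thumb" in prompt:
        return "Answer grounded in the cultural rule."
    if "Critique" in prompt:
        return "Revised answer after self-review."
    return "Initial answer."

def cultural_contextualization(question: str, rule_of_thumb: str) -> str:
    """Baseline 1: prepend a relevant rule-of-thumb for the target culture."""
    return ask(f"Rule-of-thumb: {rule_of_thumb}\nQuestion: {question}")

def self_reflection(question: str) -> str:
    """Baseline 2: the model critiques and revises its own first answer."""
    first = ask(question)
    return ask(f"Critique and improve your answer '{first}' to: {question}")

print(cultural_contextualization("Is tipping expected?", "Tipping is uncommon in Japan."))
print(self_reflection("Is tipping expected?"))
```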

12.06.2025 23:33 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

2/ πŸ€”Β Why involve multiple #LLMs?

Different LLMs bring complementary perspectives and reasoning paths, thanks to variations in:
πŸ’½ Training data
🧠 Alignment processes
🌐 Language and cultural coverage

We explore one common form of collaboration: debate.

12.06.2025 23:33 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

1/ Are two #LLMs better than one for equitable cultural alignment? 🌍

We introduce a Multi-Agent Debate framework β€” where two LLM agents debate the cultural adaptability of a given scenario.

#ACL2025 πŸ§΅πŸ‘‡
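The overall loop can be sketched in a few lines (a toy version, not the paper's actual implementation; `query_llm` is a hypothetical stand-in for any LLM API):

```python
# Toy two-agent debate with a judge; query_llm returns canned strings here,
# but a real version would call each model's API.

def query_llm(model: str, prompt: str) -> str:
    canned = {
        "agent_a": "Option A, since the norm differs across regions.",
        "agent_b": "Option B, based on local survey evidence.",
        "judge": "Option A",
    }
    return canned[model]

def debate(scenario: str, options: list[str], rounds: int = 2) -> str:
    """Two agents exchange arguments over a cultural scenario; a judge decides."""
    transcript = []
    for _ in range(rounds):
        for agent in ("agent_a", "agent_b"):
            prompt = (
                f"Scenario: {scenario}\nOptions: {options}\n"
                f"Debate so far: {transcript}\nState and defend your choice."
            )
            transcript.append((agent, query_llm(agent, prompt)))
    verdict_prompt = f"Scenario: {scenario}\nTranscript: {transcript}\nPick one option."
    return query_llm("judge", verdict_prompt)

print(debate("Tipping at a restaurant in Japan", ["Option A", "Option B"]))
```

The key design choice is that the judge only sees the transcript, so disagreement between the two agents is surfaced rather than averaged away.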

12.06.2025 23:33 πŸ‘ 7 πŸ” 0 πŸ’¬ 1 πŸ“Œ 1

Trying to collect all the MT people here. I probably missed many. Ping me!

bsky.app/starter-pack...

02.12.2024 08:39 πŸ‘ 24 πŸ” 8 πŸ’¬ 9 πŸ“Œ 0
AskQE: Question Answering as Automatic Evaluation for Machine Translation How can a monolingual English speaker determine whether an automatic translation in French is good enough to be shared? Existing MT error detection and quality estimation (QE) techniques do not addres...

8/ ❤️ Huge thanks to @marinecarpuat.bsky.social, Kevin Duh, and the amazing UMD CLIP team for all the feedback and inspiration throughout this work!

We’d love for you to check it out πŸš€
πŸ“„ Paper: arxiv.org/abs/2504.11582
πŸ€–Β Dataset: github.com/dayeonki/askqe

21.05.2025 17:48 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

7/ Can AskQE handle naturally occurring translation errors too? πŸƒ

Yes! It shows:
πŸ’β€β™€οΈ Stronger correlation with human judgments
βœ… Better decision-making accuracy than standard QE metrics

21.05.2025 17:48 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

6/ πŸ€– What kinds of questions does AskQE generate?

Most commonly:
πŸ“ Extent β€” How many COVID-19 cases were reported today? (24.6%)
πŸ’‘ Concept β€” What is another name for paracetamol? (23.6%)

21.05.2025 17:48 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

5/ πŸ”₯ We test AskQE on ContraTICO and find:

📉 It effectively distinguishes minor from critical translation errors
πŸ‘­ It aligns closely with established quality estimation (QE) metrics

21.05.2025 17:48 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

4/ We introduce ContraTICO, a dataset of 8 contrastive MT error types in the COVID-19 domain 😷🦠

⚠️ Minor errors: spelling, word order, synonym, intensifier, expansion (no impact)
πŸ“› Critical errors: expansion (impact), omission, alteration
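For intuition only (the dataset's actual perturbation rules may differ), here is one way a critical "omission" error could be synthesized:

```python
# Illustrative omission perturbation in the spirit of ContraTICO:
# drop one token so the sentence silently loses information.

def omit_token(sentence: str, index: int) -> str:
    """Drop the token at `index` to simulate an omission error."""
    tokens = sentence.split()
    return " ".join(tokens[:index] + tokens[index + 1:])

good = "Patients must take two doses daily"
bad = omit_token(good, 3)  # drops "two", silently changing the dosage
print(bad)
```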

21.05.2025 17:48 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

3/ AskQE has two main components:

❓ Question Generation (QG): conditioned on the source + its entailed facts
❕ Question Answering (QA): based on the source and backtranslated MT

If the answers don’t match... there's likely an error ⚠️
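A toy sketch of that mismatch check (not the actual pipeline; `generate_questions` and `answer` are hypothetical stand-ins for LLM-based QG and QA):

```python
import re

# Toy AskQE-style check: generate questions from the source, answer them
# against the source and the backtranslated MT, and flag disagreements.

def generate_questions(source: str) -> list[str]:
    # Placeholder QG: a real version would prompt an LLM with the source
    # sentence and its entailed facts.
    return ["How many cases were reported?"]

def answer(question: str, context: str) -> str:
    # Toy QA: return the first number in the context as the answer.
    match = re.search(r"\d+", context)
    return match.group() if match else ""

def askqe_flags(source: str, backtranslation: str) -> list[str]:
    """Questions whose answers disagree between source and backtranslated MT."""
    return [
        q for q in generate_questions(source)
        if answer(q, source) != answer(q, backtranslation)
    ]

# An alteration error changes the answer, so the question gets flagged:
print(askqe_flags("120 cases were reported today.", "20 cases were reported today."))
```

The flagged questions double as the "actionable feedback": the monolingual user sees exactly which fact the translation got wrong.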

21.05.2025 17:48 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

2/ But why question answering? πŸ€”

1️⃣ Provides functional explanations of MT quality
2️⃣ Users can weigh the evidence based on their own judgment
3️⃣ Aligns well with real-world cross-lingual communication strategies 🌐

21.05.2025 17:48 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

1/ How can a monolingual English speaker πŸ‡ΊπŸ‡Έ decide if an automatic French translation πŸ‡«πŸ‡· is good enough to be shared?

Introducing ❓AskQE❓, an #LLM-based Question Generation + Answering framework that detects critical MT errors and provides actionable feedback πŸ—£οΈ

#ACL2025

21.05.2025 17:48 πŸ‘ 1 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0

How does the public conceptualize AI? Rather than self-reported measures, we use metaphors to understand the nuance and complexity of people’s mental models. In our #FAccT2025 paper, we analyzed 12,000 metaphors collected over 12 months to track shifts in public perceptions.

02.05.2025 01:19 πŸ‘ 49 πŸ” 14 πŸ’¬ 3 πŸ“Œ 1

Multilinguality is happening at #NAACL2025

@crystinaz.bsky.social
@oxxoskeets.bsky.social
@dayeonki.bsky.social @onadegibert.bsky.social

30.04.2025 23:18 πŸ‘ 14 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
"It was 80% me, 20% AI": Seeking Authenticity in Co-Writing with Large Language Models Given the rising proliferation and diversity of AI writing assistance tools, especially those powered by large language models (LLMs), both writers and readers may have concerns about the impact of th...

Starting my journey on Bluesky with a topic that I care deeply about: AI tools can support creators in various ways, but disclosing AI use may risk devaluing creative work.

Check out our abstract here: angelhwang.github.io/doc/ic2s2_AI...
Inspired by our past work: arxiv.org/abs/2411.13032

18.04.2025 21:38 πŸ‘ 27 πŸ” 5 πŸ’¬ 1 πŸ“Œ 1
Automatic Input Rewriting Improves Translation with Large Language Models Can we improve machine translation (MT) with LLMs by rewriting their inputs automatically? Users commonly rely on the intuition that well-written text is easier to translate when using off-the-shelf M...

8/ 🫢 Huge thanks to my advisor @marinecarpuat.bsky.social and the amazing UMD CLIP folks for all the insightful discussions!

Please check out our paper accepted to NAACL 2025 πŸš€
πŸ“„ Paper: arxiv.org/abs/2502.16682
πŸ€–Β Code: github.com/dayeonki/rew...

17.04.2025 01:32 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

7/ Taken together, we show that simpler texts are more translatable β€” and more broadly, #LLM-assisted input rewriting is a promising direction for improving translations! πŸ’₯

As LLM-based writing assistants grow, we encourage future work on interactive, rewriting-based approaches to MT 🫑

17.04.2025 01:32 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

6/ πŸ§‘β€βš–οΈ Do humans actually prefer translations of simplified inputs?

Yes! They rated these to be:
πŸ“ More contextually appropriate
πŸ‘οΈ Easier to read
πŸ€— More comprehensible
compared to translations of original inputs!

17.04.2025 01:32 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

5/ What does input rewriting actually change? 🧐

Here are 3 key findings:
1️⃣ Better translatability trades off against meaning preservation
2️⃣ Simplification boosts both input & output readability πŸ“–
3️⃣ Input rewriting > Output post-editing 🀯

17.04.2025 01:32 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

4/ πŸ€”Β Can we have more selective strategies?

Yes! By selecting rewrites based on translatability scores at inference time, we outperform all other methods πŸ”₯
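That selection step can be sketched in a few lines (the scoring function below is a toy proxy I made up, not a real QE metric):

```python
# Minimal sketch of translatability-aware selection: score every candidate
# rewrite (plus the original) and keep the highest-scoring one.

def translatability(text: str) -> float:
    # Toy proxy: fewer words and commas ~ simpler ~ easier to translate.
    return 1.0 / (1 + len(text.split()) + text.count(","))

def select_rewrite(original: str, rewrites: list[str]) -> str:
    """Keep the highest-scoring candidate, falling back to the original."""
    return max([original] + rewrites, key=translatability)

best = select_rewrite(
    "The committee, having deliberated at length, reached a verdict.",
    ["The committee reached a verdict.",
     "After lengthy deliberation, the committee decided."],
)
print(best)
```

Including the original among the candidates means the selector can never do worse than not rewriting at all.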

17.04.2025 01:32 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

3/ πŸ” Which rewriting strategy works best?

Simpler texts are easier to translate!
But... simplification isn't always a win for MT quality 😞

17.04.2025 01:32 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

2/ How should inputs be rewritten for machine translation? ✍️

We explore 21 methods with different levels of MT-awareness πŸ‘‡
πŸ“Β MT-Agnostic: no knoweldge of the task
🌐 Task-Aware: aware of the end task (MT)
πŸ…Β Translatability-Aware: guided by quality estimation scores

17.04.2025 01:32 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

🚨 New Paper 🚨

1/ We often assume that well-written text is easier to translate ✏️

But can #LLMs automatically rewrite inputs to improve machine translation? 🌍

Here’s what we found 🧡

17.04.2025 01:32 πŸ‘ 8 πŸ” 4 πŸ’¬ 1 πŸ“Œ 0
Tokenization Workshop @ ICML 2025

🚨 NEW WORKSHOP ALERT 🚨

We're thrilled to announce the first-ever Tokenization Workshop (TokShop) at #ICML2025 @icmlconf.bsky.social! πŸŽ‰

Submissions are open for work on tokenization across all areas of machine learning.

πŸ“… Submission deadline: May 30, 2025
πŸ”— tokenization-workshop.github.io

15.04.2025 17:23 πŸ‘ 23 πŸ” 7 πŸ’¬ 1 πŸ“Œ 4

Thrilled our global data ecosystem audit was accepted to #ICLR2025!

Empirically, it shows:

1️⃣ Soaring synthetic text data: ~10M tokens (pre-2018) to 100B+ (2024).

2️⃣ YouTube is now 70%+ of speech/video data but could block third-party collection.

3️⃣ <0.2% of data from Africa/South America.

1/

14.04.2025 15:28 πŸ‘ 12 πŸ” 4 πŸ’¬ 1 πŸ“Œ 1