Multiple LLM Agents Debate for Equitable Cultural Alignment
Large Language Models (LLMs) need to adapt their predictions to diverse cultural contexts to benefit diverse communities across the world. While previous efforts have focused on single-LLM, single-turn...
8/ Huge thanks to @marinecarpuat.bsky.social, Rachel, and @zhoutianyi.bsky.social for their guidance, and a special shoutout to the amazing UMD CLIP team!
Check out our paper and code below:
Paper: arxiv.org/abs/2505.24671
Dataset: github.com/dayeonki/cul...
12.06.2025 23:33
7/ What's next for Multi-Agent Debate?
Some exciting future directions:
1️⃣ Assigning specific roles to represent diverse cultural perspectives
2️⃣ Discovering optimal strategies for multi-LLM collaboration
3️⃣ Designing better adjudication methods to resolve disagreements fairly
6/ But do these gains hold across cultures?
We measure cultural parity across diverse groups, and find that Multi-Agent Debate not only boosts average accuracy but also leads to more equitable cultural alignment.
5/ How do model decisions evolve through debate?
We track three phases of LLM behavior:
- Initial decision correctness
- Final decision correctness
- Judge's decision correctness
✨ Multi-Agent Debate is most valuable when models initially disagree!
4/ Distinct LLMs are complementary!
We find that:
- Multi-Agent Debate lets smaller LLMs (7B) match the performance of much larger ones (27B)
- Best combo? Gemma-2 9B + EXAONE-3 7B
3/ Before bringing in two #LLMs, we first maximize single-LLM performance through:
1️⃣ Cultural Contextualization: adding relevant rules-of-thumb for the target culture
2️⃣ Self-Reflection: having the model evaluate and improve its own outputs
These serve as strong baselines before we introduce collaboration.
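The two baselines above might be sketched as follows; `llm` is a hypothetical callable (prompt in, text out), and the prompts are illustrative, not the paper's actual templates:

```python
def contextualized_answer(llm, question, rules_of_thumb):
    """Baseline 1, Cultural Contextualization: prepend target-culture
    rules-of-thumb to the prompt before asking the model."""
    rules = "\n".join(f"- {r}" for r in rules_of_thumb)
    return llm(f"Rules of thumb for this culture:\n{rules}\n\nQuestion: {question}")

def self_reflect(llm, question, draft):
    """Baseline 2, Self-Reflection: the model critiques its own draft
    answer, then revises it in light of the critique."""
    critique = llm(f"Critique this answer to '{question}': {draft}")
    return llm(f"Revise '{draft}' for '{question}' given this critique: {critique}")
```

Self-Reflection composes two calls to the same model, so it needs no second LLM; that is what makes these fair single-LLM baselines.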
2/ Why involve multiple #LLMs?
Different LLMs bring complementary perspectives and reasoning paths, thanks to variations in:
- Training data
- Alignment processes
- Language and cultural coverage
We explore one common form of collaboration: debate.
1/ Are two #LLMs better than one for equitable cultural alignment?
We introduce a Multi-Agent Debate framework, where two LLM agents debate the cultural adaptability of a given scenario.
#ACL2025 🧵
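A minimal sketch of such a debate loop, assuming `agent_a`, `agent_b`, and `judge` are hypothetical callables (prompt in, answer out) rather than the paper's exact interface:

```python
def multi_agent_debate(agent_a, agent_b, judge, scenario, rounds=2):
    """Two LLM agents debate a cultural-adaptability question; if they
    still disagree after `rounds` exchanges, a judge model adjudicates."""
    ans_a = agent_a(f"Is this culturally appropriate? {scenario}")
    ans_b = agent_b(f"Is this culturally appropriate? {scenario}")
    for _ in range(rounds):
        if ans_a == ans_b:  # consensus reached: stop early
            return ans_a
        # each agent sees its peer's position and may revise its answer
        ans_a = agent_a(f"Your peer argued '{ans_b}'. Reconsider: {scenario}")
        ans_b = agent_b(f"Your peer argued '{ans_a}'. Reconsider: {scenario}")
    if ans_a == ans_b:
        return ans_a
    return judge(f"Pick the better-supported answer: '{ans_a}' or '{ans_b}'")
```

The judge is only invoked when debate fails to converge, which is also where (per post 5/) debate adds the most value.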
Trying to collect all the MT people here. I probably missed many. Ping me!
bsky.app/starter-pack...
02.12.2024 08:39
7/ Can AskQE handle naturally occurring translation errors too?
Yes! It shows:
- Stronger correlation with human judgments
- Better decision-making accuracy than standard QE metrics
21.05.2025 17:48
6/ What kinds of questions does AskQE generate?
Most commonly:
- Extent: How many COVID-19 cases were reported today? (24.6%)
- Concept: What is another name for paracetamol? (23.6%)
5/ We test AskQE on ContraTICO and find:
- It effectively distinguishes minor from critical translation errors
- It aligns closely with established quality estimation (QE) metrics
4/ We introduce ContraTICO, a dataset of 8 contrastive MT error types in the COVID-19 domain
- Minor errors: spelling, word order, synonym, intensifier, expansion (no impact)
- Critical errors: expansion (impact), omission, alteration
3/ AskQE has two main components:
- Question Generation (QG): conditioned on the source and its entailed facts
- Question Answering (QA): based on the source and the backtranslated MT
If the answers don't match... there's likely an error ⚠️
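As a rough sketch of that mismatch check (here `gen_questions` and `answer` stand in for the LLM-backed QG and QA components; the names are illustrative, not the released code):

```python
def askqe(source, backtranslated_mt, gen_questions, answer):
    """Generate questions from the source, answer each against both the
    source and the backtranslated MT, and flag questions whose answers
    disagree: a mismatch signals a likely translation error."""
    flagged = []
    for question in gen_questions(source):
        a_src = answer(question, source)
        a_mt = answer(question, backtranslated_mt)
        if a_src.strip().lower() != a_mt.strip().lower():
            flagged.append((question, a_src, a_mt))
    return flagged
```

Returning the disagreeing question-answer pairs, rather than a bare score, is what gives a monolingual user actionable feedback.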
2/ But why question answering?
1️⃣ Provides functional explanations of MT quality
2️⃣ Users can weigh the evidence based on their own judgment
3️⃣ Aligns well with real-world cross-lingual communication strategies
1/ How can a monolingual English speaker 🇺🇸 decide if an automatic French translation 🇫🇷 is good enough to be shared?
Introducing AskQE, an #LLM-based Question Generation + Answering framework that detects critical MT errors and provides actionable feedback.
#ACL2025
How does the public conceptualize AI? Rather than self-reported measures, we use metaphors to understand the nuance and complexity of people's mental models. In our #FAccT2025 paper, we analyzed 12,000 metaphors collected over 12 months to track shifts in public perceptions.
02.05.2025 01:19
Multilinguality is happening at #NAACL2025
@crystinaz.bsky.social
@oxxoskeets.bsky.social
@dayeonki.bsky.social @onadegibert.bsky.social
30.04.2025 23:18
7/ Taken together, we show that simpler texts are more translatable, and, more broadly, that #LLM-assisted input rewriting is a promising direction for improving translations!
As LLM-based writing assistants grow, we encourage future work on interactive, rewriting-based approaches to MT.
17.04.2025 01:32
6/ Do humans actually prefer translations of simplified inputs?
Yes! They rated these to be:
- More contextually appropriate
- Easier to read
- More comprehensible
compared to translations of original inputs!
5/ What does input rewriting actually change?
Here are 3 key findings:
1️⃣ Gains in translatability trade off against meaning preservation
2️⃣ Simplification boosts both input & output readability
3️⃣ Input rewriting > Output post-editing
4/ Can we have more selective strategies?
Yes! By selecting rewrites based on translatability scores at inference time, we outperform all other methods.
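A minimal sketch of this selection step, assuming hypothetical `rewriters` (each text → rewrite), a `translate` function, and a reference-free `qe_score(source, translation)` metric:

```python
def select_rewrite(source, rewriters, translate, qe_score):
    """Translatability-aware selection: translate the original input and
    each candidate rewrite, then keep whichever source version yields the
    highest quality-estimation score."""
    candidates = [source] + [rewrite(source) for rewrite in rewriters]
    return max(candidates, key=lambda c: qe_score(c, translate(c)))
```

Keeping the original `source` among the candidates means the selector can never do worse than not rewriting at all, at least as judged by the QE metric.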
3/ Which rewriting strategy works best?
Simpler texts are easier to translate!
But... simplification isn't always a win for MT quality.
2/ How should inputs be rewritten for machine translation?
We explore 21 methods with different levels of MT-awareness:
- MT-Agnostic: no knowledge of the task
- Task-Aware: aware of the end task (MT)
- Translatability-Aware: guided by quality estimation scores
🚨 New Paper 🚨
1/ We often assume that well-written text is easier to translate.
But can #LLMs automatically rewrite inputs to improve machine translation?
Here's what we found 🧵
Tokenization Workshop @ ICML 2025
🚨 NEW WORKSHOP ALERT 🚨
We're thrilled to announce the first-ever Tokenization Workshop (TokShop) at #ICML2025 @icmlconf.bsky.social!
Submissions are open for work on tokenization across all areas of machine learning.
Submission deadline: May 30, 2025
tokenization-workshop.github.io
15.04.2025 17:23
Thrilled our global data ecosystem audit was accepted to #ICLR2025!
Empirically, it shows:
1️⃣ Soaring synthetic text data: ~10M tokens (pre-2018) to 100B+ (2024).
2️⃣ YouTube is now 70%+ of speech/video data but could block third-party collection.
3️⃣ <0.2% of data from Africa/South America.
1/
14.04.2025 15:28