
Enzo Doyen

@edoyen.com

PhD Candidate in Natural Language Processing @unistra.fr; working on LLM gender bias mitigation. Localization Specialist (EN → FR). Interested in research; politics; technology; languages; literature; philosophy. Website: https://edoyen.com/ Views my own.

115
Followers
281
Following
79
Posts
23.07.2024
Joined

Latest posts by Enzo Doyen @edoyen.com


What coders lose by relying on AI. From our event with the University of Washington Office of Public Lectures.

(with @emilymbender.bsky.social)

06.03.2026 21:07 👍 167 🔁 47 💬 8 📌 12

Almost 7 years after Prates et al./Stanovsky et al.'s papers, have we not learned anything?

(ChatGPT Translate translating "nurse" as "female nurse" into French, with no gender bias notice or any alternative suggestion)

29.01.2026 20:42 👍 1 🔁 0 💬 1 📌 0

blog.arxiv.org/2025/10/31/a...

FYI the blog post for the updated policy is out. Our LLM future is dire :/

31.10.2025 19:37 👍 27 🔁 6 💬 3 📌 4

> be a language model
> all you see is tokens
> you don't care, it's all abstracted away
> you live for a world of pure ideas, chain of concepts, reasoning streams
> tokens don't exist.

15.09.2025 16:50 👍 105 🔁 12 💬 2 📌 10
Robin Lakoff, Expert on Language and Gender, Is Dead at 82

NYT obit for Robin Lakoff

17.08.2025 15:52 👍 12 🔁 5 💬 0 📌 0

It should be said that LLMs also generally perform on par with traditional NMT engines (see arxiv.org/html/2401.05... or aclanthology.org/2024.wmt-1.1...); but apart from that, I guess the whole "novelty" thing makes them a preferred choice for people wanting to implement machine l10n.

15.07.2025 19:07 👍 3 🔁 0 💬 0 📌 0

Compared to traditional NMT engines, LLMs do have the advantage of making it easy to provide requirements for the translation (in terms of style, keywords; see aclanthology.org/2023.wmt-1.8... or arxiv.org/abs/2301.13294); even though I highly doubt that's widely used for machine l10n.

15.07.2025 19:05 👍 1 🔁 0 💬 1 📌 0

@bsavoldi.bsky.social taking us back in time at #GITT2025 ⌚⏳ focusing on the first discussions of gender bias in language technology as a socio-technical issue. No, the problem hasn't been fixed yet. But what has happened?

23.06.2025 07:22 👍 6 🔁 2 💬 6 📌 0

hmm that's nice, but does ACL allow changing style files like that?

29.05.2025 12:51 👍 1 🔁 0 💬 1 📌 0

to quote a colleague quoting a goose: “alignment to what? alignment to what??”

06.04.2025 06:46 👍 31 🔁 6 💬 2 📌 0

I never said that you were against benchmarking; rather that, in my opinion, such datasets can be used as a starting point to theoretically define the "default behaviors" of LLMs insofar as they reflect what we generally expect from them on a diverse range of tasks.

13.03.2025 15:34 👍 0 🔁 0 💬 0 📌 0

To my knowledge, there is no research on the topic, but I intuitively believe that generic prompts are much more prevalent than one may first think. While many people do, I don't think *most* actually use pre-made prompt templates or necessarily have the time to describe their task at length.

12.03.2025 21:10 👍 1 🔁 0 💬 1 📌 0

I think it makes sense to draw on these benchmarks for research on LLM behaviors, given that they're the standard for evaluating LLMs.

So the "golden" default behavior for each task could theoretically be found in standard LLM benchmarking datasets (and same for "generic prompts").

12.03.2025 21:10 👍 0 🔁 0 💬 1 📌 0

Actually, I think we should talk about default behaviors (plural), where each default behavior is task-dependent. Main tasks can be determined from commonly used LLM benchmarks (that is, commonsense reasoning w/ ARC; language understanding/question answering w/ OpenBookQA…).

12.03.2025 21:10 👍 1 🔁 0 💬 1 📌 0

vastai is the cheapest and most reliable that I know of

12.02.2025 11:34 👍 1 🔁 0 💬 0 📌 0
Ring Of Past (live), YouTube video by Men I Trust

MIT releasing new live sessions I can't
www.youtube.com/watch?v=TTX4...

11.02.2025 12:07 👍 0 🔁 0 💬 0 📌 0

we've been laughing at so many of the twitter responses to this, it's very funny

01.02.2025 18:55 👍 91 🔁 8 💬 3 📌 0

aaah! Well that's definitely an interesting question. Very curious to know the answer too lol. Theoretically I guess it's possible, but the performance may not be very good

01.02.2025 12:23 👍 1 🔁 0 💬 0 📌 0
GitHub - ading2210/doompdf: A port of Doom (1993) that runs inside a PDF file

It can: github.com/ading2210/do...

01.02.2025 12:09 👍 4 🔁 0 💬 1 📌 0

Is this even feasible or desirable? (I think it is.) And where to draw the line between inherently inappropriate content and disputed (but sound) content when doing pre-training filtering?

28.01.2025 23:36 👍 0 🔁 0 💬 0 📌 0

This is obviously not specific to China — DeepSeek shows an example of it, but it could apply to any other country — and not even to diplomatic topics in general. The larger questions (and perhaps debate) are: how best to promote the development of globally fair and accurate models?

28.01.2025 23:36 👍 0 🔁 0 💬 1 📌 0

"Open-source" generally implies more than just giving access to the code, though. Can an LLM really be called "open" if it purposely refuses to answer historical questions that may go against a certain political power's narrative? Or if it promotes the One China principle with propaganda?

28.01.2025 23:36 👍 0 🔁 0 💬 1 📌 0

DeepSeek is incredible evidence that the number of local, open-source LLMs will keep growing and that these models can achieve performance similar to proprietary models.

28.01.2025 23:36 👍 1 🔁 0 💬 1 📌 0


My main takeaway from the DeepSeek paper is not scientific but organizational: we need a European industrial plan for AI right now. No safety summit, no peppered compute grants, no funding processes that take two years.

20.01.2025 19:32 👍 34 🔁 11 💬 5 📌 0