
Will Held

@williamheld.com

Modeling Linguistic Variation to expand ownership of NLP tools. Views my own, but affiliations that might influence them: ML PhD Student under Prof. Diyi Yang, 2x RS Intern, 🦙 Pretraining Alum, NYU Abu Dhabi, Burqueño, he/him

2,150
Followers
452
Following
104
Posts
06.11.2024
Joined

Latest posts by Will Held @williamheld.com

OpenAI addresses this with a backend classifier "to detect if the GPT‑4o output is using a voice that’s different from our approved list".

But that isn't possible for open-source models, so it would be great to at least partially mitigate the risk by baking a similar gating mechanism into the model itself.
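OpenAI hasn't published the classifier itself, but the gist of such a gate can be sketched with speaker embeddings: embed the generated audio, compare against the approved reference voices, and block anything too far from all of them. Everything below (the embedding dimension, the threshold, the toy vectors, the function names) is hypothetical; this is a minimal sketch of the idea, not their implementation.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_approved_voice(output_emb, approved_embs, threshold=0.8):
    # Gate: pass only if the generated audio's speaker embedding is
    # close to at least one approved reference voice.
    return max(cosine(output_emb, ref) for ref in approved_embs) >= threshold

# Toy 4-dim "speaker embeddings" (real systems use far higher dimensions).
approved = [np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0, 0.0])]
close_to_approved = np.array([0.9, 0.1, 0.0, 0.0])
cloned_user_voice = np.array([0.0, 0.0, 1.0, 0.0])

print(is_approved_voice(close_to_approved, approved))  # True
print(is_approved_voice(cloned_user_voice, approved))  # False
```

In a real system, the embeddings would come from a speaker-verification model and the threshold would be tuned on held-out approved-voice vs. cloned-voice outputs.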

29.10.2025 15:13 👍 0 🔁 0 💬 0 📌 0

While many systems exist that are explicitly designed for voice cloning, some systems can do it unintentionally as a side effect of in-context learning (ICL).

For example, from the original GPT-4o system card:
"""
During testing, we also observed rare instances where the model would unintentionally generate an output emulating the user’s voice
"""

29.10.2025 15:12 👍 0 🔁 0 💬 1 📌 0

Super interested in the degree to which this interaction can be fine-tuned into models in a non-reversible fashion!

Voice cloning is unfortunately a capability which inherently shows up in pretrained audio models. It would be great to be able to largely limit the capability at the level of model weights!

29.10.2025 15:01 👍 1 🔁 0 💬 1 📌 0
Speech and Language Processing

Now that school is starting for lots of folks, it's time for a new release of Speech and Language Processing! Jim and I added all sorts of material for the August 2025 release! With slides to match! Check it out here: web.stanford.edu/~jurafsky/sl...

24.08.2025 19:28 👍 150 🔁 59 💬 3 📌 4
Post image

"GPT-5 shows scaling laws are coming to an end"

11.08.2025 17:46 👍 5 🔁 0 💬 0 📌 0

We’ve discovered a literal miracle with almost unlimited potential and it’s being scrapped for *no reason whatsoever*. This isn’t even nihilism, it’s outright worship of death and human suffering.

05.08.2025 23:09 👍 10384 🔁 3314 💬 49 📌 157
Attention Is Off By One: Let’s fix these pesky Transformer outliers using Softmax One and QuietAttention.

Really great pointer from Hao Zhang on the other site in relation to GPT OSS use of attention sinks.

If I were to guess, the attention sink is what allows them to omit QK-Norm which has become otherwise standard.

www.evanmiller.org/attention-is...
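For intuition, here's a minimal numpy sketch of the softmax-one idea from the linked post; a learned attention sink gives heads a similar escape valve by providing a dedicated place to dump probability mass.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def softmax1(x):
    # "Softmax one": add an implicit zero logit to the denominator, so
    # weights can sum to < 1 and a head can effectively attend to nothing.
    m = max(x.max(), 0.0)          # subtract max for numerical stability
    e = np.exp(x - m)
    return e / (np.exp(-m) + e.sum())

scores = np.array([-4.0, -3.5, -5.0])  # a head with nothing useful to attend to
print(softmax(scores).sum())   # 1.0 -- forced to allocate all its attention
print(softmax1(scores).sum())  # ~0.05 -- can effectively abstain
```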

06.08.2025 12:48 👍 1 🔁 0 💬 0 📌 0
Alt Text:

Conference schedule for July 28th (Monday) and July 29th (Tuesday), listing talk titles, locations, times, and authors:

July 28th, Monday:

1. Attacking Vision-Language Computer Agents via Pop-ups
Location: Hall 4/5, Time: 11:00–12:30
Authors: Yanzhe Zhang, Tao Yu, Diyi Yang


2. SPHERE: An Evaluation Card for Human-AI Systems
Location: Hall 4/5, Time: 18:00–19:30
Authors: Dora Zhao*, Qianou Ma*, Xinran Zhao, Chenglei Si, Chenyang Yang, Ryan Louie, Ehud Reiter, Diyi Yang*, Tongshuang Wu*
(asterisk denotes equal contribution)



July 29th, Tuesday:

1. SynthesizeMe! Inducing Persona-Guided Prompts for Personalized Reward Models in LLMs
Location: Hall 4/5, Time: 10:30–12:00
Authors: Michael J Ryan, Omar Shaikh, Aditri Bhagirath, Daniel Frees, William Barr Held, Diyi Yang


2. Distilling an End-to-End Voice Assistant Without Instruction Training Data
Location: Room 1.61, Time: 14:12 (Second Talk)
Authors: William Barr Held, Yanzhe Zhang, Weiyan Shi, Minzhi Li, Michael J Ryan, Diyi Yang


3. Mind the Gap: Static and Interactive Evaluations of Large Audio Models
Location: Room 1.61 (implied), follows previous talk
Authors: Minzhi Li*, William Barr Held*, Michael J Ryan, Kunat Pipatanakul, Potsawee Manakul, Hao Zhu, Diyi Yang
(asterisk denotes equal contribution)


4. EgoNormia: Benchmarking Physical Social Norm Understanding
Location: Hall 4/5, Time: 16:00–17:30
Authors: MohammadHossein Rezaei*, Yicheng Fu*, Phil Cuvin*, Caleb Ziems, Yanzhe Zhang, Hao Zhu, Diyi Yang
(asterisk denotes equal contribution)


The SALT Lab is at #ACL2025 with our genius leader @diyiyang.bsky.social.

Come see work from
@yanzhe.bsky.social,
@dorazhao.bsky.social @oshaikh.bsky.social,
@michaelryan207.bsky.social, and myself at any of the talks and posters below!

28.07.2025 07:45 👍 3 🔁 0 💬 0 📌 0

Paper: aclanthology.org/2025.acl-lon...

28.07.2025 04:25 👍 0 🔁 0 💬 0 📌 0

I'm in Vienna for #ACL2025!

My work is all presented tomorrow, but today you'll find me at the poster session from 11–12:30, evangelizing my labmate Yanzhe Zhang's work on his behalf.

If you're interested in the risks traditional pop-up attacks present for AI agents, come chat!

28.07.2025 04:24 👍 4 🔁 0 💬 1 📌 0
Post image

It seems (at a minimum) like they post-trained on the virulently racist content from this thread. Musk framed this as a request for training data... and the top post is eugenics. It seems unlikely to be a coincidence that the post uses the same phrasing as the prompt they later removed...

10.07.2025 05:20 👍 2 🔁 0 💬 0 📌 0

Btw, all of this is very nice for something that was a quick 15-line addition to Levanter.

github.com/stanford-crf...

03.07.2025 15:14 👍 0 🔁 0 💬 0 📌 0
Adding an Optimizer for Speedrun - Marin Documentation: Documentation for the Marin project

Have an optimizer you want to prove works better than AdamC/Muon/etc?

Submit a speedrun to Marin! marin.readthedocs.io/en/latest/tu...

For PRs with promising results, we're lucky to be able to help test at scale on compute generously provided by the TPU Research Cloud!

03.07.2025 15:14 👍 0 🔁 0 💬 1 📌 0
Post image Post image

In our setting most similar to the original work (130M model), we don't see AdamC's benefits, but:

- We use a smaller WD (0.01), identified from sweeps, vs. the 0.05 used in the paper.
- We only train to Chinchilla optimal (2B tokens), whereas the original paper trained to 200B.
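As a back-of-envelope check, the common ~20 tokens-per-parameter Chinchilla heuristic (the exact multiplier depends on how parameters are counted) lands in the same ballpark as the 2B-token budget used at 130M:

```python
# Rough Chinchilla-optimal token counts using the ~20 tokens/param heuristic
# (Hoffmann et al., 2022); exact numbers vary with parameter-counting choices.
TOKENS_PER_PARAM = 20

for params in [130e6, 300e6, 500e6, 1.4e9]:
    print(f"{params / 1e6:>6.0f}M params -> ~{params * TOKENS_PER_PARAM / 1e9:.1f}B tokens")
```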

03.07.2025 15:14 👍 0 🔁 0 💬 1 📌 0
Post image Post image Post image Post image

We see the same pattern at 300M and 500M!

Remember, everything else in these experiments is held constant by Levanter & Marin (data order, model init, etc.)

Experiment files here: github.com/marin-commun...

03.07.2025 15:14 👍 0 🔁 0 💬 1 📌 0
Post image Post image

As a side note, Kaiyue Wen found that weight decay also causes a slower loss decrease at the start of training: wandb.ai/marin-commun...

Similar to the end of training, this is likely because LR warmup also impacts the LR/WD ratio.

AdamC seems to mitigate this too.

03.07.2025 15:14 👍 0 🔁 0 💬 1 📌 0
Post image Post image

TL;DR: At 3/4 of our scales, we find the AdamC results reproduce out of the box!

When compared to AdamW with all other factors held constant, AdamC mitigates the gradient-norm increase at the end of training and leads to an overall lower loss (-0.04)!

03.07.2025 15:14 👍 0 🔁 0 💬 1 📌 0

A while ago I mentioned that for the marin.community project, this gradient increase led to problematic loss ascent, which we patched with Z-loss.

I was curious: does AdamC just work?

So over the weekend, I ran 4 experiments (130M to 1.4B params), all at ~compute-optimal token counts... 🧵
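For anyone curious what the change amounts to, here's a toy numpy step showing my reading of the AdamC correction: weight decay rescaled by lr/lr_max so the effective decay follows the LR schedule. Treat that rescaling as my paraphrase of the paper, not a drop-in for the Levanter implementation.

```python
import numpy as np

def adam_step(w, g, state, lr, lr_max, beta1=0.9, beta2=0.95,
              eps=1e-8, wd=0.01, corrected=True):
    # One Adam step with decoupled weight decay.
    # corrected=False -> plain AdamW:  w -= lr * (adam_update + wd * w)
    # corrected=True  -> AdamC-style:  decay rescaled by (lr / lr_max), so
    # the effective decay shrinks with the LR schedule instead of staying fixed.
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * g
    state["v"] = beta2 * state["v"] + (1 - beta2) * g * g
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    update = m_hat / (np.sqrt(v_hat) + eps)
    decay = wd * (lr / lr_max) if corrected else wd
    return w - lr * (update + decay * w)

# Toy usage: a single step late in a cosine decay (lr far below lr_max).
w = np.ones(4)
state = {"t": 0, "m": np.zeros(4), "v": np.zeros(4)}
w = adam_step(w, g=np.full(4, 0.1), state=state, lr=3e-4, lr_max=3e-3)
```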

03.07.2025 15:14 👍 4 🔁 1 💬 1 📌 0
Unmute by Kyutai: Make LLMs listen and speak.

kyutai.org/next/unmute has built in turn-detection on the ASR and full I/O streaming for the TTS. Solves the latency issues that I think are 90% of why people use end-to-end speech models in the first place!

From the details, you can tell @kyutai-labs.bsky.social is focused on real-world utility.

03.07.2025 15:05 👍 1 🔁 0 💬 0 📌 0

Flattered and shocked that our paper received the #facct2025 Best Paper Award.

21.06.2025 01:16 👍 11 🔁 3 💬 1 📌 0

As far as I can tell, the models aren't good enough right now that they can replace VFX at any high quality commercial scale.

They are exactly good enough to generate fake viral videos for ad revenue on TikTok/Instagram & spread misinformation. Is there any serious argument for their safe release??

17.06.2025 01:32 👍 0 🔁 0 💬 0 📌 0

I don't really see an argument for releasing such models with photorealistic generation capabilities.

What valid & frequent business use case is there for photorealistic video & voice generation like Veo 3 offers?

17.06.2025 01:25 👍 1 🔁 0 💬 1 📌 0

I've only seen Veo 3 (or any other video generation model) used to produce viral videos. The fake videos seem to successfully trick the majority of commenters and have no visible watermark or disclosure of AI use.

17.06.2025 01:24 👍 1 🔁 0 💬 1 📌 0

What would you say if you saw it in another country? A senator from a coequal branch of government dragged away by security for trying to ask a question of a Cabinet official.

12.06.2025 18:33 👍 485 🔁 143 💬 26 📌 6
Post image

🚨 70 million US workers are about to face their biggest workplace transformation due to AI agents. But nobody’s asking them what they want.

While AI R&D races to automate everything, we took a different approach: auditing what workers want vs. what AI can deliver across the US workforce. 🧵

12.06.2025 16:33 👍 22 🔁 7 💬 1 📌 0

Really cool to see theory connect to practice! We observed this phenomenon when trying to do deeper WSD cooldowns of our 8B model in the marin.community project!

We Z-Lossed our way through the pain, but cool to see some stronger theory: marin.readthedocs.io/en/latest/re...
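For readers unfamiliar with the trick: z-loss (popularized by PaLM) adds a small auxiliary penalty on the log of the softmax normalizer so logits can't drift upward unchecked. A minimal numpy sketch of the idea, not the exact Marin implementation:

```python
import numpy as np

def z_loss(logits, coeff=1e-4):
    # Penalize log^2 of the softmax normalizer Z = sum(exp(logits)),
    # nudging log Z toward 0 so logit magnitudes stay stable in training.
    log_z = np.log(np.sum(np.exp(logits), axis=-1))
    return coeff * np.mean(log_z ** 2)

calm = np.array([[0.1, -0.2, 0.05]])
blown_up = np.array([[30.0, -2.0, 1.0]])
print(z_loss(calm))      # tiny penalty
print(z_loss(blown_up))  # much larger penalty
```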

06.06.2025 01:27 👍 10 🔁 1 💬 0 📌 0

Now, I wouldn't do research on LLMs if I thought that was true in the long term!

But I think it's reasonable for skeptics to question whether advances in inference efficiency, hardware efficiency, and even core energy infrastructure will happen soon enough for current companies to capitalize.

05.06.2025 02:26 👍 0 🔁 0 💬 0 📌 0
The Subprime AI Crisis: None of what I write in this newsletter is about sowing doubt or "hating," but a sober evaluation of where we are today and where we may end up on the current path. I believe that the artificial intel...

The underlying assumption being that they can (a la Uber/Lyft) eventually increase prices once the core customers are fundamentally reliant on AI.

The real question then is "what is demand once you start charging the true unit costs?". Personally, I found this article sobering but well reasoned.

05.06.2025 02:12 👍 1 🔁 0 💬 1 📌 0

Without knowing all the model details or having transparent financials, it's hard to say, but I would naively suspect most AI companies are in the red both on a cost-per-query basis (for API services) and on a cost-per-user basis (for subscription services).

05.06.2025 02:09 👍 0 🔁 0 💬 1 📌 0

I haven't seen people mocking the revenue forecasts, but I agree with your take w.r.t. demand. The bigger question is whether demand is even the constraint.

Unlike standard software or even manufacturing businesses, I'm not sure the economies of scale look great if you factor in cost per query.

05.06.2025 02:05 👍 2 🔁 0 💬 1 📌 0