Call for Submissions: flann.cs.yale.edu/cfp.html
Registration: flann.cs.yale.edu/registration...
Contact: flann@cs.yale.edu
We also have limited financial support available, on a need basis, for graduate students who would otherwise be unable to attend. 🙂
(Support is only for students with an accepted abstract; please see the website and register before the abstract submission deadline.)
The FLaNN Workshop submission deadline has been extended to Feb 19!
Invited talks + posters (non-archival): expressivity, computation, and learning in neural nets/LLMs. Previously published work is welcome. Graduate students are encouraged to submit!
📍 Yale University
🗓️ May 11-13, 2026
We welcome posters on the formal expressivity, computational properties, and learning behavior of neural nets (incl. LLMs). Graduate students are especially encouraged to submit!
Contact: flann@cs.yale.edu
Image description: an advertisement for the Formal Languages and Neural Networks workshop. It shows the dates, a call for papers, the website and email address, and a list of speakers with their names, headshots, and institutional affiliations (Pablo Barceló, David Chiang, Will Merrill, Naomi Saphra, and Gail Weiss).
📣 FLaNN 2026 at Yale 🍮
Invited talks+posters (non-archival): expressivity, computation, and learning in neural nets/LLMs
Speakers: Pablo Barceló, David Chiang, Will Merrill, Naomi Saphra, Gail Weiss
Abstracts due Feb 12, 2026
Details: flann.cs.yale.edu
Deadline in just under two weeks!
Thank you, on behalf of the organizing committee: Robert Frank, Lena Strobl, Dana Angluin, Timos Antonopoulos, Arman Cohan, Tom McCoy, Ruzica Piskac, and Andy Yang
Location: Yale University, New Haven, Connecticut, USA
Workshop date: May 11-13, 2026
Abstract submissions due: February 12, 2026
Website: flann.cs.yale.edu
Contact: flann@cs.yale.edu
More information to come!
Announcing the first Workshop on Formal Languages and Neural Networks (FLaNN)!
We invite the submission of abstracts for posters that discuss the formal expressivity, computational properties, and learning behavior of neural network models, including large language models (LLMs).
Read the cookbook: arxiv.org/abs/2510.00368
Join us for weekly seminars on formal language theory, ML, NLP, and more: flannseminars.github.io
Thanks to all the chefs: @ccwatson.bsky.social, @antonxue.bsky.social, @satwik77.bsky.social, @ll4r3n4.bsky.social, @lambdaviking.bsky.social, Emile Dos Santos Ferreira, @anejsvete.bsky.social, @dchiang.bsky.social
There is no better way to understand what transformers can do than to get your hands dirty and construct them, weight by weight. The Transformer Cookbook provides a guide for anyone aiming to understand the expressive power of transformers at this formal level.
We present The Transformer Cookbook: a collection of recipes for programming algorithms directly into transformers!
Hungry for an induction head? Craving a Dyck language recognizer? We show you step-by-step how to cook up transformers for these algorithms and many more!
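To give a taste of the weight-by-weight style, here is a minimal sketch of our own (an illustration, not a recipe from the paper): an attention head whose query and key weights are all zero scores every position equally, so under a causal mask the softmax attends uniformly over the prefix and the head outputs a running average of the values, which is already enough to track the fraction of 1-tokens seen so far.

```python
import numpy as np

# Minimal illustrative sketch (ours, not from the paper): zero query/key
# weights give constant scores, so causal softmax attention is uniform
# over the prefix and each output is the mean of the values seen so far.

def uniform_causal_attention(values):
    T = values.shape[0]
    scores = np.zeros((T, T))                                  # zero Q/K => zero scores
    scores[~np.tril(np.ones((T, T), dtype=bool))] = -np.inf    # causal mask
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                         # softmax: uniform on prefix
    return w @ values                                          # identity value projection

tokens = np.array([1.0, 0.0, 1.0, 1.0, 0.0])[:, None]          # embed tokens 0/1 directly
print(uniform_causal_attention(tokens).ravel())                # [1.  0.5  0.667 0.75 0.6]
```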
New paper and two not-so-new papers on arXiv about transformer expressivity: (1) With @pentagonalize and Dana Angluin, "Simulating Hard Attention Using Soft Attention" arxiv.org/abs/2412.09925
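To illustrate the flavor of the result (our sketch of a standard observation, not necessarily the construction used in the paper): scaling attention scores by a large constant drives the softmax toward the argmax, so soft attention can approximate a hard, argmax-style attention head arbitrarily well.

```python
import numpy as np

# Illustration (ours, not necessarily the paper's construction): as the
# score scale grows, softmax weights concentrate on the argmax, so soft
# attention behaves like hard attention in the limit.
scores = np.array([1.0, 3.0, 2.0])
for scale in (1, 10, 100):
    z = scale * scores
    w = np.exp(z - z.max())          # subtract max for numerical stability
    w /= w.sum()
    print(scale, np.round(w, 4))     # weights converge to [0. 1. 0.]
```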