
@pentagonalize

15 Followers
21 Following
13 Posts
Joined 21.11.2024

Latest posts by @pentagonalize

FLaNN Workshop 2026

Call for Submissions: flann.cs.yale.edu/cfp.html
Registration: flann.cs.yale.edu/registration...
Contact: flann@cs.yale.edu

12.02.2026 20:05 👍 0 🔁 0 💬 0 📌 0

We also have limited financial support available on a need basis for graduate students who are unable to attend otherwise. 🙂

(Only for students with an accepted abstract; please see the website and register before the abstract submission deadline.)

12.02.2026 20:05 👍 0 🔁 0 💬 1 📌 0

The FLaNN Workshop submission deadline has been extended to Feb 19!

Invited talks + posters (non-archival): expressivity, computation, and learning in neural nets/LLMs. Previous work welcome. Graduate students encouraged to submit!

📍 Yale University
🗓️ May 11-13, 2026

12.02.2026 20:05 👍 0 🔁 0 💬 1 📌 0

We welcome posters on the formal expressivity, computational properties, and learning behavior of neural nets (incl. LLMs). Graduate students are especially encouraged to submit!

Contact: flann@cs.yale.edu

04.02.2026 15:24 👍 0 🔁 0 💬 0 📌 0
An advertisement for the Formal Languages and Neural Networks workshop. It has the date, a call for papers, the website+email, and a list of speakers with their names, headshots, and institutional affiliations (Pablo Barceló, David Chiang, Will Merrill, Naomi Saphra, and Gail Weiss)

📣 FLaNN 2026 at Yale 🍮

Invited talks+posters (non-archival): expressivity, computation, and learning in neural nets/LLMs

Speakers: Pablo Barceló, David Chiang, Will Merrill, Naomi Saphra, Gail Weiss

Abstracts due Feb 12, 2026
Details: flann.cs.yale.edu

04.02.2026 15:24 👍 3 🔁 2 💬 2 📌 1

Deadline in just under two weeks!

31.01.2026 00:14 👍 1 🔁 1 💬 0 📌 0

Thank you on behalf of the organizing committee: Robert Frank, Lena Strobl, Dana Angluin, Timos Antonopoulos, Arman Cohan, Tom McCoy, Ruzica Piskac, Andy Yang

19.12.2025 02:58 👍 0 🔁 0 💬 0 📌 0
FLaNN Workshop 2026

Location: Yale University, New Haven, Connecticut, USA
Workshop date: May 11-13, 2026
Abstract submissions due: February 12, 2026
Website: flann.cs.yale.edu
Contact: flann@cs.yale.edu

More information to come!

19.12.2025 02:58 👍 0 🔁 0 💬 1 📌 0

Announcing the first Workshop on Formal Languages and Neural Networks (FLaNN)!

We invite the submission of abstracts for posters that discuss the formal expressivity, computational properties, and learning behavior of neural network models, including large language models (LLMs).

19.12.2025 02:58 👍 10 🔁 5 💬 1 📌 2

Read the cookbook: arxiv.org/abs/2510.00368

Join us for weekly seminars on formal language theory, ML, NLP, and more: flannseminars.github.io

03.10.2025 16:24 👍 1 🔁 2 💬 0 📌 0

Thanks to all the chefs: @ccwatson.bsky.social, @antonxue.bsky.social, @satwik77.bsky.social, @ll4r3n4.bsky.social, @lambdaviking.bsky.social, Emile Dos Santos Ferreira, @anejsvete.bsky.social, @dchiang.bsky.social

03.10.2025 16:24 👍 2 🔁 2 💬 1 📌 0

There is no better way to understand what transformers can do than to get your hands dirty and construct them weight by weight. The Transformer Cookbook provides a guide for anyone aiming to understand the expressive power of transformers at this formal level.

03.10.2025 16:24 👍 1 🔁 2 💬 1 📌 0
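To give a flavor of what "weight-by-weight" construction means, here is a minimal NumPy sketch (my own toy example, not a recipe taken from the cookbook): a single attention head whose hand-set query, key, and value matrices make every position attend to its predecessor and copy that token's one-hot embedding.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

vocab, n = 3, 5
tokens = [0, 1, 2, 1, 0]
d = vocab + n  # embedding = token one-hot ++ position one-hot

# Build the input embeddings by hand.
X = np.zeros((n, d))
for i, t in enumerate(tokens):
    X[i, t] = 1.0          # token part
    X[i, vocab + i] = 1.0  # position part

# Queries read out the position one-hot of position i.
W_Q = np.zeros((d, n))
W_Q[vocab:, :] = np.eye(n)

# Keys read out the position one-hot of j, shifted to j + 1,
# so that <Q_i, K_j> = 1 exactly when j = i - 1.
W_K = np.zeros((d, n))
for j in range(n - 1):
    W_K[vocab + j, j + 1] = 1.0

# Values read out the token one-hot.
W_V = np.zeros((d, vocab))
W_V[:vocab, :] = np.eye(vocab)

scale = 50.0  # a large scale makes soft attention effectively hard
A = softmax((X @ W_Q) @ (X @ W_K).T * scale)
out = A @ (X @ W_V)

# For every i >= 1, the head outputs (approximately) token i-1's one-hot.
for i in range(1, n):
    assert np.argmax(out[i]) == tokens[i - 1]
```

Position 0 has no predecessor, so its attention is uniform there; the cookbook-style constructions handle such boundary cases with extra machinery (e.g. a beginning-of-sequence token).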
The Transformer Cookbook We present the transformer cookbook: a collection of techniques for directly encoding algorithms into a transformer's parameters. This work addresses the steep learning curve of such endeavors, a prob...

We present The Transformer Cookbook: a collection of recipes for programming algorithms directly into transformers!

Hungry for an induction head? Craving a Dyck language recognizer? We show you step-by-step how to cook up transformers for these algorithms and many more!

03.10.2025 16:24 👍 5 🔁 5 💬 1 📌 0
Simulating Hard Attention Using Soft Attention We study conditions under which transformers using soft attention can simulate hard attention, that is, effectively focus all attention on a subset of positions. First, we examine several variants of ...

New paper and two not-so-new papers on arXiv about transformer expressivity: (1) With @pentagonalize and Dana Angluin, "Simulating Hard Attention Using Soft Attention" arxiv.org/abs/2412.09925

23.12.2024 22:55 👍 3 🔁 1 💬 2 📌 0