Link: arxiv.org/abs/2510.04834
Joint work with the super team: Lev Reyzin, Nati Srebro, Gal Vardi
@idanattias
Postdoc researcher at IDEAL Institute in Chicago, hosted by UIC and TTIC. My research interests are in machine learning theory, data-driven sequential decision-making, and theoretical computer science. https://www.idanattias.com/
A key takeaway is that what truly matters is the complexity measure (or description length, or equivalently the "prior") induced by the model, rather than the concept class itself!
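The description-length view can be made quantitative via the classical Occam bound (a standard textbook fact, not a result of this paper): for hypotheses given a prefix-free encoding d, with probability at least 1 - δ over m i.i.d. examples, every hypothesis h consistent with the sample satisfies

```latex
\mathrm{err}(h) \;\le\; \frac{|d(h)|\ln 2 + \ln(1/\delta)}{m},
```

where |d(h)| is the description length of h in bits. So the sample complexity scales with the description length under the chosen encoding, not with the size of the concept class per se.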
We prove hardness in the PAC model and in the membership query setting, under distribution-free learning as well as under the uniform distribution.
Note that DFAs are efficiently learnable with membership queries, whereas we prove that REs remain hard in the same model.
The important point is that when we say DFAs or REs are easy or hard to learn, we mean that it is easy or hard to learn languages with *succinct* DFAs or REs. Even though every DFA has an equivalent RE and vice versa, the conversion may require an exponential blowup in size.
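One classic direction of this blowup can be checked by brute force. A minimal sketch (the language L_n and the helper names below are illustrative, not from the paper): L_n = { w ∈ {0,1}* : the n-th symbol from the end is 1 } has a regular expression of size O(n), e.g. (0|1)*1(0|1)^{n-1}, but its minimal DFA needs 2^n states, since the automaton must remember the last n symbols. Counting Myhill–Nerode classes empirically on short strings confirms the 2^n growth:

```python
from itertools import product

def in_lang(w, n):
    # L_n: the n-th symbol from the end of w is "1".
    return len(w) >= n and w[-n] == "1"

def num_residuals(n, max_len=8):
    # Count Myhill-Nerode equivalence classes (= minimal-DFA states),
    # approximated by residual fingerprints on all strings of length <= max_len.
    # For small n, distinguishing suffixes are short, so this count is exact.
    suffixes = ["".join(p) for k in range(max_len + 1)
                for p in product("01", repeat=k)]
    fingerprints = set()
    for k in range(max_len + 1):
        for p in product("01", repeat=k):
            w = "".join(p)
            fingerprints.add(tuple(in_lang(w + s, n) for s in suffixes))
    return len(fingerprints)

for n in (1, 2, 3):
    print(n, num_residuals(n))  # prints 1 2, then 2 4, then 3 8
```

So a size-O(n) RE forces a 2^n-state minimal DFA; the reverse direction (DFAs whose smallest equivalent RE is exponentially long) also holds, which is why hardness for one representation does not transfer to the other.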
What is the computational complexity of learning regular expressions (REs)? At first glance, one might assume this question has long been settled. Yet surprisingly, it does not follow from any known results on learning DFAs or NFAs...
Thanks Sagnik!
Really nice lecture notes by Alkis Kalavasis: Stability in Machine Learning: Generalization, Privacy & Replicability.
alkisk.github.io
when my family asks me about the impact of my research
New paper: Simulating Time With Square-Root Space
people.csail.mit.edu/rrw/time-vs-...
It's still hard for me to believe it myself, but I seem to have shown that TIME[t] is contained in SPACE[sqrt{t log t}].
To appear in STOC. Comments are very welcome!
nice initiative
With @adamsmith.xyz and @thejonullman.bsky.social, we have compiled a set of profiles of 29 people in the "foundations of responsible computing" community ("mathematical research in computation and society writ large") who are on the faculty job market.
Link: drive.google.com/file/d/1Hyvg... 1/3
My book is (at last) out, just in time for Christmas!
A blog post to celebrate and present it: francisbach.com/my-book-is-o...
Looks like a cool result with interesting technical ideas
Number one job: keep them alive, the rest is a bonus
I'm excited about a new paper that gives tractable generalizations of Aumann's Agreement Theorem, with an eye towards human/model collaboration in machine learning. We can implement algorithms that can "converse" with people and quickly come to agreement on downstream actions. 🧵
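For intuition, here is a toy Aumann-style agreement dialogue in the classic Geanakoplos–Polemarchakis style (my own sketch; not the paper's algorithm, and all names and the example partitions below are made up). Two agents share a uniform prior over 8 states, each observes only her partition cell, and they alternate announcing their conditional expectation of f; each announcement lets the listener discard states inconsistent with it, and the announced values converge:

```python
states = range(8)
f = {s: float(s) for s in states}      # quantity both agents want to estimate
part_a = {s: s // 4 for s in states}   # Alice's coarse partition: {0..3},{4..7}
part_b = {s: s // 2 for s in states}   # Bob's finer partition: {0,1},{2,3},...

def cell(part, s):
    return [t for t in states if part[t] == part[s]]

def E(part, s):
    # Conditional expectation of f given the cell containing state s.
    c = cell(part, s)
    return sum(f[t] for t in c) / len(c)

def refine(listener, speaker):
    # After hearing the speaker's announcement, the listener keeps, within
    # each of her cells, only states where the speaker would have said that.
    return {s: (listener[s], round(E(speaker, s), 9)) for s in states}

def dialogue(true_state, max_rounds=20):
    a, b = dict(part_a), dict(part_b)
    history = []
    for _ in range(max_rounds):
        va = E(a, true_state); b = refine(b, a)   # Alice speaks, Bob updates
        vb = E(b, true_state); a = refine(a, b)   # Bob speaks, Alice updates
        history.append((va, vb))
        if va == vb:
            break
    return history

print(dialogue(1))   # -> [(1.5, 0.5), (0.5, 0.5)]: they agree after 2 rounds
```

Since the state space is finite, the partitions can only refine finitely many times, so the announced posteriors must eventually coincide; the paper, as I read the abstract, is about making this kind of convergence computationally and statistically efficient.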
How about the "set of all CS researcher sets that don't contain themselves"
So many starter packs... someone needs to create the "set of all CS researcher sets that don't contain themselves"
I made a starter pack for learning theory people, to gather some people around the topic. There are too many names on here that I don't know, so I only added a few I do know. If you believe you should be on this list, let me know. I will add people with accurate profile descriptions.
go.bsky.app/21nFz12
Can you please add me?
Thanks, I don't feel grumpy enough