Rejecting another Elsevier review request. Hoping my attempt at a dispassionate tone keeps my contempt for Elsevier from leaking through.
We're hiring postdocs/research scientists! Your interests can be anywhere on the spectrum from pure theory to empirically testing predictions relevant to AI safety.
Our theoretical work relies on dynamical systems and tools from statistical physics.
New preprint from the lab! Ábel Ságodi developed a theory of approximating dynamical systems that goes beyond finite time. #theoreticalNeuroscience
Follow @neurabel.bsky.social
Universal Approximation Theorems for Dynamical Systems with Infinite-Time Horizon Guarantees. arxiv.org/abs/2602.08640
Applications for 2026 entry to the Gatsby Bridging Programme (7-week maths summer school) will open on 19 Jan and close on 16 Feb. Designed for students who wish to pursue a postgrad research degree in theoretical neuroscience or foundational machine learning but whose degree programme lacks a strong maths focus. Applications from students in underrepresented groups in STEM strongly encouraged. A small number of bursaries available. Register for the information webinar on 23 Jan.
📢 Applications open on 19 Jan for the 7-week #Mathematics #SummerSchool in London. You will develop the maths skills and intuition necessary to enter the #TheoreticalNeuroscience / #MachineLearning field.
Find out more & register for the information webinar 👉 www.ucl.ac.uk/life-science...
Back in 2014, I co-organized a #COSYNE workshop on scalable modeling. scalablemodels.wordpress.com #timeflies
🚨🧵 Very excited about this work showing that people with no hand function following a spinal cord injury can control the activity of motor units from those muscles to perform 1D, 2D and 3D tasks, play video games, or navigate a virtual wheelchair
By a wonderful team co-mentored w Dario Farina
My department, Duke Neurobiology, is searching for a new chair. Ad below. Come work with me, @jmgrohneuro.bsky.social @ennatsew.bsky.social @jorggrandl.bsky.social @jnklab.bsky.social @sbilbo.bsky.social @neurocircuits.bsky.social and many other amazing folks! @dukemedschool.bsky.social
We have reached a situation where (1) the time/resources spent by people applying for grant X often outweigh (2) the time/resources awarded.
For these grants, society loses net time/resources.
www.nature.com/articles/d41...
How can I accelerate breakdown of caffeine in my body? I will need to increase CYP1A2 (P450) activity (without smoking). Vigorous exercise over 30 days was shown to increase it by up to 70%? pubmed.ncbi.nlm.nih....
Learning a lot while preparing for a lecture on RNNs for neuroscience.
according to TripIt, I traveled 240 Mm (yes, that's megameters) across 10 countries in 2025. Oh my. I'm definitely going to travel much much less this year.
according to last.fm, my favourite artist of 2025 was #LauraThorn. Scrobbled 493/5728 times (just one song: La poupée monte le son). 0.01% of fans worldwide for the song. Of 2672 unique tracks I listened to. Also #1 on Beatrice Rana's Goldberg Variations album.
A major personal goal for 2025 was extensive networking. I met so many interesting people around the world, which helped enable these meetings and future collaborations.
Not everything worked out. I submitted six major grant applications in 2025; five were rejected, despite substantial time and resources invested (still waiting to hear back on the last one). All 3 of our NeurIPS submissions were rejected.
In Oct, I co-organized Neurocybernetics at Scale, a three-day conference with ~300 participants, aimed at rethinking how neuroscience can scale in the modern era and how we might better integrate across levels, methods, and communities:
👉 neurocybernetics.cc/neurocyberne...
In April, I co-organized Beyond Clarity, a small, closed interdisciplinary meeting on how the combinatorial yet discrete limits of language create gaps in meaning across fields, and how to overcome them (with Sool Park):
👉 beyond-clarity.github.io
Highlights of 2025
Ayesha Vermani defended her PhD thesis this year. She helped jump start a new direction in integrative neuroscience:
Vermani et al. (2025), Meta-dynamical state space models for integrative neural data analysis. ICLR
👉 openreview.net/forum?id=SRp...
👉 youtu.be/SiXxPmkpYF8
I admire all who have donated and will donate to OpenReview. Thank you.
Today, the NeurIPS Foundation is proud to announce a $500,000 donation to OpenReview, supporting the infrastructure that makes modern ML research possible.
blog.neurips.cc/2025/12/15/s...
🗣️ English is the working language.
Curious about our culture, values, and scientific environment?
👉 Learn more: www.fchampalimaud.org/about-cr
INDP includes an initial year of advanced coursework 📚 + three lab rotations 🔬, followed by PhD research. We welcome talented, motivated applicants from neuroscience, as well as physics, mathematics, statistics, computer science, electrical/biomedical engineering ⚙️, and related quantitative fields.
Fully-funded International Neuroscience Doctoral Programme 🧠 Champalimaud Foundation, Lisbon, Portugal 🇵🇹
Deadline: Jan 31, 2026
fchampalimaud.org/champalimaud...
Research program spans systems/computational/theoretical/clinical/sensory/motor neuroscience, neuroethology, intelligence, and more!!
You can have labelled lines and copies of microcircuits, too. But, I'm just acknowledging some evolutionary pressure to use neuron-centric codes. (in fact I'm fully a mixed selectivity kinda neuroscientist.)
One advantage of monosemantic, sharply-tuned, grandmother-cell, axis-aligned, neuron-centric representation as opposed to polysemantic, mixed-selective, oblique population code is that it can benefit from evolution. Genes are good at operating at the cell level. #neuroscience
Theoretical Insights on Training Instability in Deep Learning TUTORIAL
uuujf.github.io/inst...
a gradient-flow-like regime is slow and can overfit, while a large (but not too large) step size can transiently go far, converge faster, and find better solutions #optimization #NeurIPS2025
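A toy sketch of the rate effect, my own illustration rather than anything from the tutorial: on a 1D quadratic, a tiny step size (the gradient-flow-like regime) crawls, while a large-but-still-stable step converges in far fewer iterations. The interesting transient-instability story is nonconvex and richer than this convex example can show.

```python
import numpy as np

def gd_steps(lr, x0=10.0, tol=1e-6, max_iter=10_000):
    """Iterations of gradient descent on f(x) = x^2 / 2 until |x| < tol.

    The gradient of f is simply x, so each step is x <- (1 - lr) * x;
    any lr in (0, 2) converges, oscillating once lr > 1.
    """
    x = x0
    for t in range(max_iter):
        if abs(x) < tol:
            return t
        x -= lr * x  # grad f(x) = x
    return max_iter

steps_small = gd_steps(lr=0.01)  # gradient-flow-like: contraction factor 0.99
steps_large = gd_steps(lr=1.9)   # large but stable: factor |1 - 1.9| = 0.9
```

With these numbers the large step needs roughly an order of magnitude fewer iterations; picking `lr` above 2 would diverge, which is the cartoon version of "large but not too large".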
score/flow-matching diffusion models only start memorizing when trained for long enough
Bonnaire, T., Urfin, R., Biroli, G., & Mézard, M. (2025). Why Diffusion Models Don't Memorize: The Role of Implicit Dynamical Regularization in Training.
analysis of coupled dynamical system to study learning #cybernetics #learningdynamics
Ger, Y., & Barak, O. (2025). Learning dynamics of RNNs in closed-loop environments. arXiv preprint (cs.LG). http://arxiv.org/abs...
related:
Tricks to make it even faster.
Zoltowski, D. M., Wu, S., Gonzalez, X., Kozachkov, L., & Linderman, S. (2025). Parallelizing MCMC Across the Sequence Length. The Thirty-Ninth Annual Conference on Neural Information Processing Systems.
Some of my favorites from #NeurIPS2025
more negative max Lyapunov exponent ⇒ faster convergence of parallelized RNN evaluation
Gonzalez, X., Kozachkov, L., Zoltowski, D. M., Clarkson, K. L., & Linderman, S. Predictability Enables Parallelization of Nonlinear State Space Models.
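A minimal sketch of why contraction helps, using plain Picard iteration as a stand-in (my own toy, not the authors' Newton-based method): guess the whole trajectory of x_{t+1} = f(x_t), update every time step in parallel, and sweep until the trajectory stops changing. More contractive dynamics (more negative Lyapunov exponent) kill the error faster, so fewer sweeps are needed.

```python
import numpy as np

def parallel_rnn_eval(f, x0, T, tol=1e-6, max_sweeps=10_000):
    """Evaluate x_{t+1} = f(x_t) for t = 0..T-1 by fixed-point sweeps.

    Each sweep updates all T time steps at once from the previous guess,
    so a single sweep is embarrassingly parallel across the sequence.
    Returns the trajectory and the number of sweeps until convergence.
    """
    X = np.zeros(T + 1)
    X[0] = x0
    for k in range(max_sweeps):
        X_new = np.empty_like(X)
        X_new[0] = x0
        X_new[1:] = f(X[:-1])  # every time step updated in parallel
        if np.max(np.abs(X_new - X)) < tol:
            return X_new, k + 1
        X = X_new
    return X, max_sweeps

# Stronger contraction (|f'| = 0.5 vs 0.9) => fewer sweeps for the same tolerance.
_, sweeps_fast = parallel_rnn_eval(lambda x: 0.5 * x, x0=1.0, T=1000)
_, sweeps_slow = parallel_rnn_eval(lambda x: 0.9 * x, x0=1.0, T=1000)
```

Both runs converge in far fewer sweeps than the 1000 sequential steps a naive rollout would take, and the more contractive system wins by a wide margin; that gap is the cartoon version of the predictability-enables-parallelization claim.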
This was a fantastic poster presentation!