Amazing! Congrats Micha :)
NSD-synthetic, the out-of-distribution companion dataset of NSD consisting of 7T fMRI responses to 284 artificial images, is now published.
#NeuroAI #CompNeuro #neuroscience #AI
doi.org/10.1038/s414...
Architecture of train station Liège-Guillemins by Calatrava
Returning home inspired after a great visit to KU Leuven, where I presented our work on CorText in the Brain & Cognition group. Thank you for the invitation and great discussions! @hansopdebeeck.bsky.social @costantinoai.bsky.social
(pictured: the magnificent architecture of the Liège station)
Model weights in Borges’ Library of Babel: seemingly meaningless series of characters, unless you know.
Our latest paper, “Visual language models show widespread visual deficits on neuropsychological tests”, is now out in Nature Machine Intelligence: www.nature.com/articles/s42...
Non-paywalled version:
arxiv.org/abs/2504.10786
Tweet thread below from first author @genetang.bsky.social...
1/7 Can infants recognise the world around them? 👶🧠 As part of the FOUNDCOG project, we scanned 134 awake infants using fMRI. Published today in Nature Neuroscience, our research reveals 2-month-old infants already possess complex visual representations in VVC that align with DNNs.
Interested in the latest advances in neuroscience (neural dynamics and internal models) and how they can be leveraged to build smarter, adaptive AI?
➡️ My first real solo piece 🖤🫶 @natneuro.nature.com
rdcu.be/eWVmA
When and why do modular representations emerge in neural networks?
@stefanofusi.bsky.social and I posted a preprint answering this question last year, and now it has been extensively revised, refocused, and generalized. Read more here: doi.org/10.1101/2024... (1/7)
🚨new work with the dream team @danakarca.bsky.social @loopyluppi.bsky.social @fatemehhadaeghi.bsky.social @stuartoldham.bsky.social @duncanastle.bsky.social
We use game theory to show that the brain is not optimally wired for communication, and that there's more to its story:
www.biorxiv.org/content/10.6...
Brains have many pathways / subnetworks but which principles underlie their formation?
In our #NeurIPS paper led by Jack Cook we identify biologically relevant inductive biases that create pathways in brain-like Mixture-of-Experts models🧵
#neuroskyence #compneuro #neuroAI
arxiv.org/abs/2506.02813
In this piece for @thetransmitter.bsky.social, I argue that ecological neuroscience should leverage generative video and interactive models to simulate the world from animals' perspectives.
The technological building blocks are almost here; we just need to align them for this application.
🧠🤖
Looking forward!
Congrats! ✨
🚨 Out in Patterns!
We asked ourselves whether complex neural dynamics like predictive remapping and allocentric coding can emerge from simple physical principles, in this case energy efficiency. Turns out they can!
More information in the 🧵 below.
I am super excited to see this one out in the wild.
Y’all are reading this paper in the wrong way.
We love to trash dominant hypotheses, but we need to look for evidence against the manifold hypothesis elsewhere:
This elegant work doesn't show neural dynamics are high D, nor that we should stop using PCA
It’s quite the opposite!
(thread)
Congrats Thomas! Great to see this out :)
What happens if you hook up an energy-efficiency-optimising RNN to active vision input?
It learns predictive remapping and path integration into allocentric scene coordinates.
Now out in Patterns: www.cell.com/patterns/ful...
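For the curious, the basic shape of such an objective can be sketched as follows. Everything here is my own illustrative assumption (dimensions, penalty terms, coefficient values, function names), not the paper's actual model: the idea is simply that minimising metabolic cost (activity plus wiring) alongside a task loss can shape which representations emerge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative, not from the paper)
n_in, n_hid = 16, 32

W_in = rng.normal(0, 0.1, (n_hid, n_in))    # input weights
W_rec = rng.normal(0, 0.1, (n_hid, n_hid))  # recurrent weights

def rnn_step(h, x):
    """One vanilla RNN step."""
    return np.tanh(W_rec @ h + W_in @ x)

def energy_loss(h_seq, pred_err, lam_act=1e-3, lam_wt=1e-4):
    """Task loss plus energy penalties on activity (firing cost)
    and on weights (wiring cost)."""
    activity_cost = sum(np.sum(h**2) for h in h_seq)
    wiring_cost = np.sum(W_rec**2) + np.sum(W_in**2)
    return pred_err + lam_act * activity_cost + lam_wt * wiring_cost

# Run a short random input sequence through the RNN
h = np.zeros(n_hid)
h_seq = []
for _ in range(5):
    h = rnn_step(h, rng.normal(size=n_in))
    h_seq.append(h)

total = energy_loss(h_seq, pred_err=1.0)
```

In a real training setup the energy terms would be minimised jointly with the task loss by gradient descent; this snippet only shows how the combined objective is composed.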
🚨New Preprint!
How can we model natural scene representations in visual cortex? A solution lies in active vision: predict the features of the next glimpse! arxiv.org/abs/2511.12715
+ @adriendoerig.bsky.social , @alexanderkroner.bsky.social , @carmenamme.bsky.social , @timkietzmann.bsky.social
🧵 1/14
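The self-supervised objective hinted at above can be sketched in a few lines. All names, dimensions, and the linear predictor here are my own illustrative assumptions, not the preprint's architecture: the point is just that the model is trained to predict the features of the next glimpse from the current glimpse features and the planned eye movement.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions: glimpse feature vector and 2-D saccade vector
d_feat, d_sacc = 64, 2

# Linear next-glimpse predictor: (features_t, saccade_t) -> features_{t+1}
W = rng.normal(0, 0.01, (d_feat, d_feat + d_sacc))

def predict_next_glimpse(feat_t, saccade):
    """Predict the features of the next glimpse from the current
    glimpse features and the planned eye movement."""
    return W @ np.concatenate([feat_t, saccade])

def glimpse_prediction_loss(feat_t, saccade, feat_next):
    """Self-supervised objective: mean squared error between the
    predicted and the actually observed next-glimpse features."""
    err = predict_next_glimpse(feat_t, saccade) - feat_next
    return np.mean(err**2)

loss = glimpse_prediction_loss(
    rng.normal(size=d_feat), rng.normal(size=d_sacc), rng.normal(size=d_feat))
```

No image labels are needed: the next glimpse itself provides the training target, which is what makes the objective attractive as a model of visual learning.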
6. The AI scientist took 45 minutes and $8.25 in LLM tokens to find a new tuning equation that fits the data better, and predicts the population code’s high-dimensional structure – even though we had only tasked it to model single-cell tuning.
New preprint led by @pablooyarzo.bsky.social together with @kohitij.bsky.social, Diego Vidaurre & Radek Cichy.
Using EEG + fMRI, we show that when humans recognize images that feedforward CNNs fail on, the brain recruits cortex-wide recurrent resources.
www.biorxiv.org/content/10.1... (1/n)
Thanks! We’ll put the code and chat interface out soon :)
Congratulations!!
Thanks! 🧠✨
Thanks to all coauthors! 🦾
@anthesdaniel.bsky.social
@adriendoerig.bsky.social
@sushrutthorat.bsky.social
Peter König
@timkietzmann.bsky.social
/fin
We are convinced that these results mark a shift from static neural decoding toward interactive, generative brain-language interfaces.
Preprint: www.arxiv.org/abs/2509.23941
CorText also responds to in-silico microstimulation in line with experimental predictions: for example, when amplifying face-selective voxels on trials where no people were shown to the participant, CorText starts hallucinating them. With inhibition, we can "remove people". 7/n
Following Shirakawa et al. (2025), we test zero-shot neural decoding: When entire semantic categories (e.g., zebras, surfers, airplanes) are withheld during training, the model can still give meaningful descriptions of the visual content. 6/n
What can we do with it? For example, we can have CorText answer questions about a visual scene ("What's in this image?", "How many people are there?") that a person saw while in an fMRI scanner. CorText never sees the actual image, only the brain scan. 5/n
By moving neural data into LLM token space, we gain open-ended, linguistic access to brain scans as experimental probes. At the same time, this has the potential to unlock many additional downstream capabilities (think reasoning, in-context learning, web-search, etc). 4/n
To accomplish this, CorText fuses fMRI data into the latent space of an LLM, turning neural signal into tokens that the model can reason about in response to questions. This sets it apart from existing decoding techniques, which map brain data into static embeddings/output. 3/n
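The fusion step described here can be sketched minimally. Everything in this snippet (voxel count, embedding size, number of "brain tokens", the linear projection, all function names) is my own illustrative assumption, not CorText's actual implementation: the core idea is that an fMRI pattern is projected into a short sequence of soft tokens in the LLM's embedding space and prepended to the embedded question.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sizes: voxel count, LLM embedding dim, number of brain tokens
n_voxels, d_model, n_brain_tokens = 1000, 768, 8

# Projection from fMRI space into LLM token-embedding space
# (random here; in practice it would be trained end-to-end)
proj = rng.normal(0, 0.02, (n_brain_tokens * d_model, n_voxels))

def fmri_to_tokens(voxels):
    """Map an fMRI pattern to a short sequence of soft tokens that
    live in the LLM's embedding space."""
    return (proj @ voxels).reshape(n_brain_tokens, d_model)

def fuse(brain_tokens, text_embeddings):
    """Prepend brain tokens to the embedded question so the LLM can
    attend to neural data like any other context."""
    return np.concatenate([brain_tokens, text_embeddings], axis=0)

voxels = rng.normal(size=n_voxels)        # one fMRI trial
question = rng.normal(size=(5, d_model))  # an embedded question, e.g. 5 tokens
llm_input = fuse(fmri_to_tokens(voxels), question)
```

Because the brain data enters as ordinary context tokens rather than a fixed output embedding, the LLM's downstream abilities (question answering, reasoning) apply to it directly.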