
Victoria Bosch

@initself

neuromantic - ML and cognitive computational neuroscience - PhD student at Kietzmann Lab, Osnabrück University. ⛓️ https://init-self.com

648 Followers · 463 Following · 30 Posts · Joined 05.10.2023

Latest posts by Victoria Bosch @initself

Amazing! Congrats Micha :)

27.02.2026 12:30 👍 1 🔁 0 💬 0 📌 0

NSD-synthetic, the out-of-distribution companion dataset of NSD consisting of 7T fMRI responses to 284 artificial images, is now published.

#NeuroAI #CompNeuro #neuroscience #AI

doi.org/10.1038/s414...

12.02.2026 14:46 👍 24 🔁 14 💬 0 📌 0
Architecture of train station Liège-Guillemins by Calatrava


Returning home inspired after a great visit to KU Leuven, where I presented our work on CorText in the Brain & Cognition group. Thank you for the invitation and great discussions! @hansopdebeeck.bsky.social @costantinoai.bsky.social

(pictured: the magnificent architecture of the Liège station)

11.02.2026 10:38 👍 12 🔁 1 💬 1 📌 0

Model weights in Borges’ Library of Babel: seemingly meaningless series of characters, unless you know.

09.02.2026 06:43 👍 5 🔁 0 💬 0 📌 0
Visual language models show widespread visual deficits on neuropsychological tests - Nature Machine Intelligence Tangtartharakul and Storrs use standardized neuropsychological tests to compare human visual abilities with those of visual language models (VLMs). They report that while VLMs excel in high-level obje...

Our latest paper, “Visual language models show widespread visual deficits on neuropsychological tests”, is now out in Nature Machine Intelligence: www.nature.com/articles/s42...

Non-paywalled version:
arxiv.org/abs/2504.10786

Tweet thread below from first author @genetang.bsky.social...

09.02.2026 02:40 👍 70 🔁 36 💬 1 📌 2

1/7 Can infants recognise the world around them? 👶🧠 As part of the FOUNDCOG project, we scanned 134 awake infants using fMRI. Published today in Nature Neuroscience, our research reveals that 2-month-old infants already possess complex visual representations in VVC that align with DNNs.

02.02.2026 16:00 👍 155 🔁 70 💬 4 📌 8
Leveraging insights from neuroscience to build adaptive artificial intelligence Nature Neuroscience - Adaptive intelligence envisions AI that, like animals, learns online, generalizes and adapts quickly. This Perspective reviews biological foundations, progress in AI and...

Interested in the latest advances in neuroscience (neural dynamics and internal models) and how they can be leveraged to build smarter, adaptive AI?

➡️ My first real solo piece 🖤🫶 @natneuro.nature.com

rdcu.be/eWVmA

31.12.2025 08:00 👍 121 🔁 35 💬 5 📌 1

When and why do modular representations emerge in neural networks?

@stefanofusi.bsky.social and I posted a preprint answering this question last year, and now it has been extensively revised, refocused, and generalized. Read more here: doi.org/10.1101/2024... (1/7)

09.01.2026 19:06 👍 76 🔁 18 💬 1 📌 2

🚨new work with the dream team @danakarca.bsky.social @loopyluppi.bsky.social @fatemehhadaeghi.bsky.social @stuartoldham.bsky.social @duncanastle.bsky.social
We use game theory to show that the brain is not optimally wired for communication, and that there's more to its story:
www.biorxiv.org/content/10.6...

15.12.2025 08:01 👍 60 🔁 26 💬 4 📌 0

Brains have many pathways / subnetworks but which principles underlie their formation?

In our #NeurIPS paper led by Jack Cook, we identify biologically relevant inductive biases that create pathways in brain-like Mixture-of-Experts models🧵

#neuroskyence #compneuro #neuroAI
arxiv.org/abs/2506.02813

21.11.2025 12:01 👍 37 🔁 12 💬 1 📌 0
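The Mixture-of-Experts idea referenced in the post above can be sketched as a softmax gate that routes an input to its top-k experts and combines their outputs. This is a generic MoE toy, not the paper's architecture; all names, shapes, and the top-k choice are illustrative assumptions:

```python
import numpy as np

def moe_forward(x, expert_weights, gate_weights, top_k=1):
    """Minimal Mixture-of-Experts sketch: a softmax gate scores the experts,
    the top-k experts process the input, and their outputs are combined
    weighted by the (renormalised) gate scores."""
    logits = x @ gate_weights                       # one score per expert
    gates = np.exp(logits - logits.max())
    gates /= gates.sum()                            # softmax over experts
    top = np.argsort(gates)[-top_k:]                # indices of chosen experts
    out = sum(gates[i] * (x @ expert_weights[i]) for i in top)
    return out / gates[top].sum()                   # renormalise over top-k

rng = np.random.default_rng(1)
x = rng.standard_normal(4)
experts = rng.standard_normal((3, 4, 2))            # 3 experts, each 4 -> 2
gate = rng.standard_normal((4, 3))
y = moe_forward(x, experts, gate, top_k=2)
print(y.shape)  # (2,)
```

The NeurIPS work asks which inductive biases make such expert pathways emerge in a brain-like way; this toy only shows the routing mechanics.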

In this piece for @thetransmitter.bsky.social, I argue that ecological neuroscience should leverage generative video and interactive models to simulate the world from animals' perspectives.

The technological building blocks are almost here - we just need to align them for this application.

🧠🤖

08.12.2025 15:59 👍 42 🔁 14 💬 0 📌 1

Looking forward!

03.12.2025 13:23 👍 6 🔁 0 💬 0 📌 0

Congrats! ✨

25.11.2025 22:05 👍 2 🔁 0 💬 1 📌 0

🚨 Out in Patterns!

We asked ourselves whether complex neural dynamics like predictive remapping and allocentric coding can emerge from simple physical principles, in this case energy efficiency. Turns out they can!
More information in the 🧵 below.

I am super excited to see this one out in the wild.

20.11.2025 19:47 👍 19 🔁 3 💬 3 📌 0

Y’all are reading this paper in the wrong way.

We love to trash dominant hypotheses, but we need to look for evidence against the manifold hypothesis elsewhere:

This elegant work doesn't show that neural dynamics are high-dimensional, nor that we should stop using PCA

It’s quite the opposite!

(thread)

25.11.2025 16:16 👍 69 🔁 23 💬 3 📌 3

Congrats Thomas! Great to see this out :)

21.11.2025 18:25 👍 2 🔁 0 💬 1 📌 0

What happens if you hook up an energy-efficiency-optimising RNN to active vision input?

It learns predictive remapping and path integration into allocentric scene coordinates.

Now out in Patterns: www.cell.com/patterns/ful...

21.11.2025 08:01 👍 28 🔁 10 💬 1 📌 1
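An energy-efficiency objective of the kind mentioned above is commonly implemented as a task loss plus a penalty on hidden-unit activity. A minimal sketch under that assumption (the published model's exact loss, weighting, and penalty form may differ):

```python
import numpy as np

def energy_efficient_loss(prediction, target, hidden_states, l_energy=0.1):
    """Sketch of an energy-efficiency objective: task error plus an L2
    penalty on hidden activity, pushing the network toward low-energy codes.
    The weighting l_energy is an illustrative hyperparameter."""
    task_loss = np.mean((prediction - target) ** 2)
    energy_cost = np.mean(hidden_states ** 2)       # mean squared activity
    return task_loss + l_energy * energy_cost

pred = np.array([0.9, 0.1])
targ = np.array([1.0, 0.0])
h = np.array([[0.5, -0.5],                          # hidden activity over time
              [0.2, 0.1]])
loss = energy_efficient_loss(pred, targ, h)
print(loss)
```

The post's claim is that optimising only such a cost on active visual input is enough for predictive remapping and allocentric coding to emerge.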
Predicting upcoming visual features during eye movements yields scene representations aligned with human visual cortex Scenes are complex, yet structured collections of parts, including objects and surfaces, that exhibit spatial and semantic relations to one another. An effective visual system therefore needs unified ...

🚨New Preprint!
How can we model natural scene representations in visual cortex? A solution is in active vision: predict the features of the next glimpse! arxiv.org/abs/2511.12715

+ @adriendoerig.bsky.social , @alexanderkroner.bsky.social , @carmenamme.bsky.social , @timkietzmann.bsky.social
🧵 1/14

18.11.2025 12:34 👍 85 🔁 28 💬 3 📌 5
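The "predict the features of the next glimpse" objective above can be sketched as a regression from the current glimpse's features plus the planned eye movement to the next glimpse's features. The linear predictor and all shapes here are illustrative assumptions, not the preprint's model:

```python
import numpy as np

def next_glimpse_loss(features_t, eye_movement, predictor_W, features_t1):
    """Active-vision sketch: from the current glimpse's features and the
    upcoming eye movement, predict the next glimpse's features and score
    the prediction with a mean-squared error."""
    inp = np.concatenate([features_t, eye_movement])
    predicted = inp @ predictor_W
    return np.mean((predicted - features_t1) ** 2)

rng = np.random.default_rng(0)
f_t = rng.standard_normal(8)        # features of the current glimpse
move = rng.standard_normal(2)       # planned saccade (dx, dy)
W = rng.standard_normal((10, 8)) * 0.1
f_t1 = rng.standard_normal(8)       # features of the next glimpse
loss = next_glimpse_loss(f_t, move, W, f_t1)
print(loss >= 0.0)  # True
```

Training a network on this kind of self-supervised objective is what the preprint reports yields scene representations aligned with visual cortex.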

6. The AI scientist took 45 minutes and $8.25 in LLM tokens to find a new tuning equation that fits the data better, and predicts the population code’s high-dimensional structure – even though we had only tasked it to model single-cell tuning.

14.11.2025 18:07 👍 12 🔁 3 💬 1 📌 0

New preprint led by @pablooyarzo.bsky.social together with @kohitij.bsky.social, Diego Vidaurre & Radek Cichy.

Using EEG + fMRI, we show that when humans recognize images that feedforward CNNs fail on, the brain recruits cortex-wide recurrent resources.

www.biorxiv.org/content/10.1... (1/n)

07.11.2025 09:39 👍 26 🔁 5 💬 1 📌 1

Thanks! We’ll put the code and chat interface out soon :)

04.11.2025 16:27 👍 1 🔁 0 💬 0 📌 0

Congratulations!!

04.11.2025 13:52 👍 1 🔁 0 💬 0 📌 0

Thanks! 🧠✨

03.11.2025 17:23 👍 2 🔁 0 💬 1 📌 0

Thanks to all coauthors! 🦾
@anthesdaniel.bsky.social
@adriendoerig.bsky.social
@sushrutthorat.bsky.social
Peter König
@timkietzmann.bsky.social

/fin

03.11.2025 15:17 👍 4 🔁 0 💬 0 📌 0
Brain-language fusion enables interactive neural readout and in-silico experimentation Large language models (LLMs) have revolutionized human-machine interaction, and have been extended by embedding diverse modalities such as images into a shared language space. Yet, neural decoding has...

We are convinced that these results mark a shift from static neural decoding toward interactive, generative brain-language interfaces.

Preprint: www.arxiv.org/abs/2509.23941

03.11.2025 15:17 👍 5 🔁 0 💬 2 📌 0

CorText also responds to in-silico microstimulations in line with experimental predictions: for example, when amplifying face-selective voxels on trials where no people were shown to the participant, CorText starts hallucinating them. With inhibition we can "remove people". 7/n

03.11.2025 15:17 👍 5 🔁 0 💬 1 📌 1
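In-silico microstimulation as described above can be pictured as scaling a chosen voxel subset before the brain data is decoded. A toy sketch where the voxel indices, gains, and function names are purely illustrative:

```python
import numpy as np

def microstimulate(voxels, selective_idx, gain):
    """In-silico microstimulation sketch: scale a chosen voxel subset
    (e.g. face-selective voxels) by `gain` before decoding.
    gain > 1 amplifies the subpopulation; 0 <= gain < 1 inhibits it."""
    stimulated = voxels.copy()                  # leave the original untouched
    stimulated[selective_idx] *= gain
    return stimulated

voxels = np.ones(10)
face_idx = np.array([2, 5, 7])                  # hypothetical face-selective voxels
amplified = microstimulate(voxels, face_idx, gain=3.0)
inhibited = microstimulate(voxels, face_idx, gain=0.0)
print(amplified[face_idx])  # [3. 3. 3.]
print(inhibited[face_idx])  # [0. 0. 0.]
```

Feeding the amplified or inhibited voxel vector into the decoder instead of the original is what lets the model "hallucinate" or "remove" people, per the post.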

Following Shirakawa et al. (2025), we test zero-shot neural decoding: When entire semantic categories (e.g., zebras, surfers, airplanes) are withheld during training, the model can still give meaningful descriptions of the visual content. 6/n

03.11.2025 15:17 👍 2 🔁 0 💬 1 📌 0

What can we do with it? For example, we can have CorText answer questions about a visual scene (“What’s in this image?”, “How many people are there?”) that a person saw while in an fMRI scanner. CorText never sees the actual image, only the brain scan. 5/n

03.11.2025 15:17 👍 2 🔁 1 💬 1 📌 0

By moving neural data into LLM token space, we gain open-ended, linguistic access to brain scans as experimental probes. At the same time, this has the potential to unlock many additional downstream capabilities (think reasoning, in-context learning, web-search, etc). 4/n

03.11.2025 15:17 👍 1 🔁 0 💬 2 📌 0

To accomplish this, CorText fuses fMRI data into the latent space of an LLM, turning neural signals into tokens that the model can reason about in response to questions. This sets it apart from existing decoding techniques, which map brain data into static embeddings/output. 3/n

03.11.2025 15:17 👍 2 🔁 0 💬 1 📌 0
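The fusion step described in this thread, mapping fMRI activity into LLM token space, can be pictured as a learned projector from a voxel vector to a short sequence of pseudo-token embeddings. This is a generic sketch, not CorText's actual code; the dimensions, the linear map, and the number of pseudo-tokens are assumptions:

```python
import numpy as np

def project_brain_to_tokens(voxels, W, n_tokens, d_model):
    """Map an fMRI voxel vector (n_voxels,) through a learned linear map W
    to n_tokens pseudo-token embeddings of size d_model, which can be
    prepended to a prompt's text-token embeddings."""
    return (voxels @ W).reshape(n_tokens, d_model)

rng = np.random.default_rng(0)
n_voxels, n_tokens, d_model = 1000, 8, 64
W = rng.standard_normal((n_voxels, n_tokens * d_model)) * 0.01
voxels = rng.standard_normal(n_voxels)          # one brain scan, flattened

brain_tokens = project_brain_to_tokens(voxels, W, n_tokens, d_model)
print(brain_tokens.shape)  # (8, 64)
```

Because the pseudo-tokens live in the LLM's embedding space, a frozen language model can attend to brain activity like any other context, which is what enables the open-ended question answering described earlier in the thread.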