
Pablo Marcos-Manchón

@jazzmaniatico

ML Engineer trying to do neuroscience

37 Followers · 62 Following · 9 Posts · Joined 20.11.2024

Latest posts by Pablo Marcos-Manchón @jazzmaniatico


The #UBneuro PhD Welcome Day 2026 was a great success! 🎓✨

New doctoral researchers connected with the community, explored responsible research, and visited the Cognitive Neuroscience Unit.

A strong start for the next generation of neuroscientists! 🧠

02.03.2026 11:06 👍 4 🔁 2 💬 0 📌 0

Is optimism socially transmissible?

Our new study shows it propagates via social prediction errors. When we "imagine together," simulation discrepancies drive an update in expectations to align with the group. A dynamic resource shaped by social interaction.

Study link: osf.io/preprints/ps...

19.01.2026 14:56 👍 5 🔁 4 💬 1 📌 0

This year at #CCN25 we showed the importance of OOD evaluation to adjudicate between brain models. Our results demonstrate these trivial but key facts:
- high encoding accuracy ≠ functional convergence
- human brain ≠ NES console ≠ 4-layer CNN
- videogames are cool

w/ @lune-bellec.bsky.social 🙌

13.08.2025 15:51 👍 7 🔁 3 💬 0 📌 1
What do representations tell us about a system? Image of a mouse with a scope showing a vector of activity patterns, and a neural network with a vector of unit activity patterns
Common analyses of neural representations: encoding models (relating activity to task features, drawn as an arrow from a task trace to a neuron and spike train); comparing models via neural predictivity (two neural networks compared by their R² to mouse brain activity); RSA (assessing brain-brain or model-brain correspondence using representational dissimilarity matrices).


In neuroscience, we often try to understand systems by analyzing their representations — using tools like regression or RSA. But are these analyses biased towards discovering a subset of what a system represents? If you're interested in this question, check out our new commentary! Thread:
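The regression analyses the commentary refers to can be illustrated with a minimal encoding-model sketch on synthetic data (all names and numbers here are illustrative, not from the paper): fit task features to activity, then score held-out predictivity, which is the quantity typically used to compare models.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_features, n_units = 200, 10, 50

X = rng.normal(size=(n_trials, n_features))                # task features per trial
W = rng.normal(size=(n_features, n_units))                 # ground-truth weights (synthetic)
Y = X @ W + 0.5 * rng.normal(size=(n_trials, n_units))     # simulated neural activity

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, Y_tr)
r2 = model.score(X_te, Y_te)  # held-out R², the usual "neural predictivity" score
```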

05.08.2025 14:36 👍 170 🔁 53 💬 5 📌 0

🚨 New preprint alert!

Excited to share our latest work on alpha/beta activity, eye movements, and memory.

Across 4 experiments combining scalp EEG/iEEG with eye tracking, we show that alpha/beta activity directly reflects eye movements, and only indirectly relates to memory.

👇 Highlights (1/7):

30.07.2025 19:32 👍 47 🔁 27 💬 1 📌 2
Movie-watching evokes ripple-like activity within events and at event boundaries Nature Communications - The neural processes involved in memory formation for realistic experiences remain poorly understood. Here, the authors found that ripple-like activity in the human...

🧠 Paper out!

We investigated how hippocampal and cortical ripples support memory during movie watching. We found that:

🎬 Hippocampal ripples mark event boundaries
🧩 Cortical ripples predict later recall

Ripples may help transform real-life experiences into lasting memories!

rdcu.be/eui9l

01.07.2025 13:26 👍 154 🔁 67 💬 8 📌 2
GitHub - memory-formation/convergent-transformations: Convergent transformations of visual representation in brains and models. P. Marcos-Manchón and L. Fuentemilla (Under review)

In summary, this shared representational geometry provides a powerful framework to study the brain's functional organization and trace how information is routed through the cortex.

We welcome your questions!

💻 Code: github.com/memory-forma...

(8/8)

25.07.2025 15:22 👍 0 🔁 0 💬 0 📌 0
Split-panel brain and graph plots showing inter-subject representational alignment for social vs non-social scenes. The lateral pathway only emerges during social scene perception.


Diving deeper into the LOTC hub's social vs non-social component:

Alignment across brains along the lateral stream (EVC→LOTC) is present only when viewing social scenes (with people or animals).

This supports its proposed role as a specialized "third visual pathway" for social perception.

⬇️ (7/8)

25.07.2025 15:22 👍 1 🔁 0 💬 1 📌 0
Scatter plots of shared representational components in three brain hubs (KMCCA top 2 dimensions). Early visual cortex shows low-level structure; the ventral hub encodes scene layout; LOTC separates social (human, animal) from non-social stimuli.


So what information does each hub actually encode?

Using KMCCA, we studied the primary dimension that organizes each hub's information:

👁️ EVC: Low-level visual features
🏞️ Ventral Hub: Scene & object structure
👨‍👩‍👧‍👦 LOTC Hub: Social vs. non-social content

⬇️ (6/8)

25.07.2025 15:22 👍 0 🔁 0 💬 1 📌 0
Comparison of brain alignment with deep vision (left) and language models (right). Vision models align broadly across cortex, with early areas matching shallow layers and higher areas matching deeper layers. Language models only align with LOTC. Line plots show RSA scores across model depth for three brain hubs.


Vision DNNs capture this shared geometry, with each brain hub showing a different layer alignment profile:

🧠 Early visual ↔️ Shallow DNN layers
🧠 Ventral hub ↔️ Mixed DNN layers
🧠 LOTC ↔️ Deep DNN layers

Language Models only align with the high-level LOTC hub.

⬇️ (5/8)

25.07.2025 15:22 👍 0 🔁 0 💬 1 📌 0
Whole-brain connectivity graph based on representational similarity across individuals. Nodes represent cortical areas; edges reflect shared representational geometry. Two main subnetworks emerge along ventral and lateral visual pathways.


This shared representational geometry is so consistent across people that we could map a whole-brain connectivity network based on it, revealing interactions between visual, memory and prefrontal areas.

⬇️ (4/8)

25.07.2025 15:22 👍 0 🔁 0 💬 1 📌 0
Three brain regions show high inter-subject representational similarity: early visual cortex, ventral hub, and LOTC. A connectivity graph shows how these hubs are embedded in two distinct streams based on representational geometry.


We identified 3 cortical hubs with highly consistent representations across all individuals:
📍 Early visual cortex (V1–V4)
📍 Ventral hub (scene/object areas ~PPA)
📍 LOTC Hub (hMT+/TPOJ)

These hubs form two pathways:
- Classical ventral stream (EVC → Ventral)
- Lateral stream (EVC → LOTC)

⬇️ (3/8)

25.07.2025 15:22 👍 0 🔁 0 💬 1 📌 0
Diagram showing how brain activity, vision models, and language models are compared using RSA to analyze representational alignment across stimuli, models, and brain regions.


Using Representational Similarity Analysis (RSA) on fMRI data from people viewing diverse scenes, we measure:

- Inter-subject RSA: Are visual representations shared across individuals?
- Brain-Model RSA: Is this shared information low-level (visual) or high-level (semantic)?
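The inter-subject RSA step can be sketched in a few lines (synthetic data; the actual pipeline and parameters are in the paper's repo): build a representational dissimilarity matrix per subject, then rank-correlate the RDMs.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli, n_voxels = 50, 200

# Two simulated "subjects" with partially shared stimulus-driven structure
shared = rng.normal(size=(n_stimuli, n_voxels))
subj1 = shared + 0.5 * rng.normal(size=(n_stimuli, n_voxels))
subj2 = shared + 0.5 * rng.normal(size=(n_stimuli, n_voxels))

# RDM per subject: pairwise dissimilarity between stimulus response patterns
rdm1 = pdist(subj1, metric="correlation")
rdm2 = pdist(subj2, metric="correlation")

# Inter-subject RSA: rank correlation between the two RDMs
rho, _ = spearmanr(rdm1, rdm2)
```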

Methods ⬇️ (2/8)

25.07.2025 15:22 👍 1 🔁 0 💬 1 📌 0
Convergent transformations of visual representation in brains and models A fundamental question in cognitive neuroscience is what shapes visual perception: the external world's structure or the brain's internal architecture. Although some perceptual variability can be trac...

🧠🚨 How does the brain represent what we see? Is visual input transformed to form these representations in similar ways across people and even AI models like DNNs?

We explore these questions using fMRI and large-scale representational alignment analyses.

🔗 arxiv.org/abs/2507.13941

Thread👇 (1/8)

25.07.2025 15:22 👍 14 🔁 7 💬 3 📌 3

Deep learning models and brains share fascinating parallels in their ability to process and instantly integrate new knowledge.

Join us this year at ICON 2025 to discuss how sudden learning emerges across artificial and biological systems! 🧠🤖

23.04.2025 12:28 👍 4 🔁 3 💬 0 📌 0
Anticipating multisensory environments: Evidence for a supra-modal predictive system Our perceptual experience is generally framed in multisensory environments abundant in predictive information. Previous research on statistical learni…

Very happy to announce that our latest work, "Anticipating multisensory environments: Evidence for a supra-modal predictive system", with @alepebel.bsky.social and @fuentemilla.bsky.social, has been published in Cognition!

www.sciencedirect.com/science/arti...

11.10.2024 13:31 👍 8 🔁 5 💬 1 📌 0