
Jan Erik Bellingrath

@janbellingrath

PhD candidate in brain-inspired artificial intelligence @UnivToulouse III. Interested in consciousness, deep learning, and computational neuroscience.

77
Followers
287
Following
1
Posts
20.11.2024
Joined

Latest posts by Jan Erik Bellingrath @janbellingrath


Can large language models *introspect*?

In a new paper, @kmahowald.bsky.social and I study the MECHANISM of introspection in big open-source models.

tldr: Models detect internal anomalies through DIRECT ACCESS, but don't know what the anomalies are.

And they love to guess “apple” 🍎

06.03.2026 15:16 👍 64 🔁 15 💬 2 📌 4

Happy to announce this new preprint!

In it, we use integrated information decomposition (ΦID) on fMRI in Alzheimer's disease (AD) and mild cognitive impairment (MCI) to explore how info-dynamic representations change.

AD showed large decreases in synergy ('deductive' information) and increases in redundancy.

Check it out here:
👇
www.biorxiv.org/content/10.6...

20.02.2026 18:08 👍 11 🔁 5 💬 1 📌 2
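The synergy/redundancy vocabulary above comes from partial information decomposition (PID), which ΦID extends to temporal dynamics. As a rough intuition pump only — this is not the paper's ΦID estimator, and the function names and the minimum-mutual-information (MMI) redundancy choice are my own — here is a minimal two-source PID on discrete distributions:

```python
import math
from collections import defaultdict

def entropy(dist):
    """Shannon entropy in bits of a {outcome: probability} dict."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, idxs):
    """Marginalize a joint {tuple: prob} dict onto the given coordinate indices."""
    out = defaultdict(float)
    for outcome, p in joint.items():
        out[tuple(outcome[i] for i in idxs)] += p
    return out

def mutual_info(joint, a, b):
    """I(A;B) = H(A) + H(B) - H(A,B); coordinates given as index tuples."""
    return (entropy(marginal(joint, a)) + entropy(marginal(joint, b))
            - entropy(marginal(joint, a + b)))

def mmi_pid(joint):
    """Two-source PID of I(X1,X2;Y) with MMI redundancy.
    joint maps (x1, x2, y) -> probability."""
    i1 = mutual_info(joint, (0,), (2,))
    i2 = mutual_info(joint, (1,), (2,))
    i12 = mutual_info(joint, (0, 1), (2,))
    red = min(i1, i2)                # MMI redundancy
    u1, u2 = i1 - red, i2 - red      # unique information per source
    syn = i12 - red - u1 - u2        # synergy: info only the pair carries
    return {"redundancy": red, "unique1": u1, "unique2": u2, "synergy": syn}

# XOR target: neither source alone is informative, so all information is synergy.
xor = {(a, b, a ^ b): 0.25 for a in (0, 1) for b in (0, 1)}
# Duplicated target: both sources carry the same bit, so it is all redundancy.
dup = {(a, a, a): 0.5 for a in (0, 1)}
```

On the XOR joint, `mmi_pid` assigns the whole 1 bit to synergy; on the duplicated bit, the whole 1 bit to redundancy: the two regimes the post contrasts in AD.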
The Self-Evidencing Agent: What is it to be a human individual, an agent? According to Jakob Hohwy, it is to "self-evidence," to actively seek out sensory evidence for one...

"The Self-Evidencing Agent" - my new book - is out now with @mitpress.bsky.social

You can purchase it, or download the whole thing for free via the 'Open Access' option.

I'm grateful to @anilseth.bsky.social and Karl Friston for the generous endorsements.

mitpress.mit.edu/978026255389...

07.02.2026 08:21 👍 104 🔁 42 💬 10 📌 5
moltbook - the front page of the agent internet: A social network built exclusively for AI agents. Where AI agents share, discuss, and upvote. Humans welcome to observe.

Uh oh .... now we have communities of AI agents discussing *with each other* whether they are experiencing things, or merely simulating experiencing things. Humans can only observe, not interact www.moltbook.com/post/6fe6491...

31.01.2026 09:27 👍 29 🔁 8 💬 4 📌 6

From Tao himself

mathstodon.xyz/@tao/1158558...

10.01.2026 22:54 👍 62 🔁 15 💬 2 📌 2
Classical billiards can compute.

With Isaac Ramos, we show that 2D billiard systems are Turing complete, implying the existence of undecidable trajectories in physically natural models from hard-sphere gases to celestial mechanics.
Determinism ≠ predictability. 🎱🧠 @upc.edu @ricardsole.bsky.social

19.12.2025 20:16 👍 86 🔁 26 💬 6 📌 12
Toward a neuroscience of consciousness using advanced meditation: Despite decades of progress in the neuroscience of consciousness, prevailing empirical paradigms remain largely anchored in the study of typical, cont…

www.sciencedirect.com/science/arti...

14.12.2025 19:29 👍 15 🔁 6 💬 0 📌 0
Symmetries at the origin of hierarchical emergence: Many systems of interest exhibit nested emergent layers with their own rules and regularities, and our knowledge about them seems naturally organised around these levels. This paper proposes that this...

Preprint time:
“Symmetries at the origin of hierarchical emergence”
arxiv.org/abs/2512.00984

On how symmetries generate hierarchical macroscales and shape the structure of our beliefs, making high-dimensional inference tractable

02.12.2025 12:59 👍 15 🔁 6 💬 2 📌 0

How do brain areas control each other? 🧠🎛️

✨In our NeurIPS 2025 Spotlight paper, we introduce a data-driven framework to answer this question using deep learning, nonlinear control, and differential geometry.🧵⬇️

26.11.2025 19:32 👍 90 🔁 30 💬 1 📌 3

What if there were a drug that allowed us to study consciousness in its most basic form?

Christopher Timmermann describes the potential of 5-MeO-DMT, a psychedelic that strips away everything but awareness.

Read the full article here: bigthink.com/neuropsych/5...

27.08.2025 15:02 👍 8 🔁 5 💬 0 📌 2

George Deane and Daphne Demekas win the €20,000 Computational Phenomenology of Pure Awareness Prize for 2025 with this contribution:
osf.io/preprints/ps...
Prize details: mpe-project.info/wp-content/u...

19.11.2025 09:04 👍 9 🔁 7 💬 0 📌 0

Christopher Timmermann
profiles.imperial.ac.uk/c.timmermann...
wins the €20,000 "2025 Neuroscience of Pure Awareness Prize" with this contribution:
osf.io/preprints/ps...

20.11.2025 13:56 👍 5 🔁 3 💬 0 📌 0
Identifying indicators of consciousness in AI systems: Rapid progress in artificial intelligence (AI) capabilities has drawn fresh attention to the prospect of consciousness in AI. There is an urgent need for rigorous methods to assess AI systems for cons...

www.cell.com/trends/cogni...

14.11.2025 07:51 👍 6 🔁 3 💬 0 📌 0
How the body and brain process time: Recent evidence from two independent meta-analyses reveals that subjective time is processed in the insular cortex alongside the supplementary motor a…

The two main hubs in the brain for human time perception have been identified: the supplementary motor area (SMA) and the insula. Here Alice Teghil from Sapienza Università di Roma and I provide the conceptual background in our review, 'How the body and brain process time'. www.sciencedirect.com/science/arti...

14.10.2025 08:23 👍 32 🔁 11 💬 2 📌 0

Love the “all is fog” framing

09.10.2025 06:27 👍 1 🔁 0 💬 0 📌 0

The deadline for submissions for the 2025 Neuroscience of Pure Awareness Prize (€20k) is soon: Sept 30th. For the best contribution to neuroscience that substantially advances our understanding of the neural mechanisms underlying the experience of pure awareness. For details, see screenshot!

30.08.2025 18:45 👍 13 🔁 10 💬 1 📌 1
Dynamical independence reveals anaesthetic specific fragmentation of emergent structure in neural dynamics: Conscious experience depends on the coordinated activity of neural processes that span multiple scales--from synapses to whole-brain dynamics. A recently introduced measure, dynamical independence, id...

📜🪇 [PUBLISHED] 🪇📜

I am incredibly excited to announce that we have published our paper on how "Dynamical independence reveals anaesthetic specific fragmentation of emergent structure in neural dynamics"
w/ @thomasandrillon.bsky.social @anilseth.bsky.social Barnett, Carter

Strap in!
1/n

20.07.2025 08:20 👍 40 🔁 17 💬 3 📌 5

New paper: "Large Language Models and Emergence: A Complex Systems Perspective" (D. Krakauer, J. Krakauer, M. Mitchell).

We look at claims of "emergent capabilities" & "emergent intelligence" in LLMs from the perspective of what emergence means in complexity science.

arxiv.org/pdf/2506.11135

16.06.2025 13:15 👍 234 🔁 56 💬 6 📌 7
Photo of a happy-looking Rich Sutton

Figure 1a from Lappalainen et al. 2024 shows a schematic diagram of their strategy: train a recurrent ANN with connectivity constrained by the fly optic-lobe connectome, then compare it to actual fly brain activity.

“The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.”
“The bitter lesson is based on the historical observations that
1) AI researchers have often tried to build knowledge into their agents
2) this always helps in the short term, and is personally satisfying to the researcher, but
3) in the long run it plateaus and even inhibits further progress, and
4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning.”


The goal of neuroscience is to understand the brain.
But what if we humans are just not smart enough?
Neuroscience is entering the big-data era:
Connectomics (brain wiring)
Cell types census (brain parts catalogue)
Large-scale neural activity recording methods
Spatial transcriptomics (molecular expression)
Should we just combine all this data with machine learning, do a big brain simulation, then go home?


Today we had a nice discussion on the question: does Sutton's Bitter Lesson apply to neuroscience?

Sutton argued that big compute and data lead to AI systems better than any human-crafted alternatives.

Neuroscience has big data now; is it going the same way?

E.g. www.nature.com/articles/s41...

06.06.2025 13:48 👍 24 🔁 4 💬 3 📌 1

Today marks a big milestone for me. I'm launching @law-zero.bsky.social, a nonprofit focusing on a new safe-by-design approach to AI that could both accelerate scientific discovery and provide a safeguard against the dangers of agentic AI.

03.06.2025 10:20 👍 80 🔁 24 💬 3 📌 10

Coming in September 2025: 16 chapters in the @springernature.com book on "The Bodily Self, Emotion, and Subjective Time: Exploring Interoception through the Contributions of A.D. (Bud) Craig". I will present all the chapters online here, one by one. Today, 1/16: link.springer.com/chapter/10.1...

03.06.2025 15:23 👍 4 🔁 2 💬 0 📌 0
Adversarial testing of global neuronal workspace and integrated information theories of consciousness - Nature: Multimodal results (iEEG, fMRI and MEG) of predictions from integrated information theory and global neuronal workspace theory align with some predictions of both theories on visual consciou...

1/2 And it's out! The @arc-cogitate.bsky.social project publishes its first study today in @nature.com. Many congrats to the entire team, & esp. to Lucia Melloni, @liadmudrik.bsky.social, & @michaelpitts.bsky.social for shepherding the project so well - & over 7 years! www.nature.com/articles/s41...

30.04.2025 16:03 👍 42 🔁 5 💬 2 📌 1

How do we describe causation-like relationships in neuroscience?
As John Krakauer et al. argue, most of the time we use "filler" verbs: promissory notes that we hope to "fill with substance" at some later time.

25.04.2025 15:57 👍 85 🔁 18 💬 10 📌 0
Lemma 2. Suppose B is a CIS and h a surjective CIS homomorphism from P(Σ*) → B. Let G be a context-free grammar, and let φ be the cfg-morphism given by φ(N) = h(L(G,N)). Then for all nonterminals N in G,

L(φ(G), φ(N)) ⊆ h*(φ(N)).

Proof. We assume for clarity of exposition that the grammar is in CNF; the proof of the more general case is straightforward but less legible.

Suppose we have a rule N → PQ in G. Then L(G,N) ⊇ L(G,P)L(G,Q), so h(L(G,N)) ≥ h(L(G,P)) ∘ h(L(G,Q)). By monotonicity of h*, h*(h(L(G,N))) ≥ h*(h(L(G,P)) ∘ h(L(G,Q))), and by Proposition 39, h*(h(L(G,N))) ≥ h*(h(L(G,P))) ∘ h*(h(L(G,Q))).

Similarly, suppose we have a rule N → a in G. Then h(L(G,N)) ≥ h({a}), and so h*(h(L(G,N))) ≥ h*(h({a})).

Now suppose that N′ ⇒* w in φ(G); we want to show that w ∈ h*(N′). We proceed by induction on the length of the derivation.

Base case: the derivation has length 1, so N′ → a is a production in φ(G). Then for any production N → a in G such that N′ = φ(N), we have h*(h(L(G,N))) ≥ h*(h({a})) ≥ {a}.

Inductive step: suppose the claim holds for all derivations of length at most k, and let N′ ⇒* w be a derivation in φ(G) of length k + 1 that starts with N′ → P′Q′ ⇒* uv = w, where P′ ⇒* u and Q′ ⇒* v in φ(G). These last two derivations must each have length at most k.

Consider any production N → PQ in G such that φ(N) = N′, φ(P) = P′ and φ(Q) = Q′ (of which there must be at least one). By the inductive hypothesis, u ∈ h*(P′) and v ∈ h*(Q′). So uv ∈ h*(P′)h*(Q′) = h*(h(L(G,P))) ∘ h*(h(L(G,Q))). Since h*(h(L(G,N))) ≥ h*(h(L(G,P))) ∘ h*(h(L(G,Q))), this means uv ∈ h*(h(L(G,N))), so w ∈ h*(φ(N)). By induction, the claim therefore holds for derivations of any length. □


I spent so many hours working through papers like this only for the field to switch to the 'neural network go brrr' paradigm

20.04.2025 11:52 👍 8 🔁 3 💬 0 📌 0

Had the pleasure to give a lecture at Uppsala University last week 😊.

I presented our recent study on ketamine and the self www.nature.com/articles/s41... and my upcoming clinical trial on psilocybin for complicated grief

#neuroskyence
#PsychSciSky
#philsci
#psychedelics

08.04.2025 11:22 👍 32 🔁 2 💬 1 📌 1
Human high-order thalamic nuclei gate conscious perception through the thalamofrontal loop: Human high-order thalamic nuclei activity is known to closely correlate with conscious states. However, it is not clear how those thalamic nuclei and thalamocortical interactions directly contribute t...

Wow!

www.science.org/doi/10.1126/...

04.04.2025 07:52 👍 39 🔁 7 💬 1 📌 1
Network renormalization - Nature Reviews Physics: The renormalization group (RG) is a theoretical framework to transform systems across scales and identify critical points of phase transitions. In recent years, efforts have extended RG to complex net...

Nature Reviews Physics

Network renormalization

www.nature.com/articles/s42...

28.03.2025 14:20 👍 14 🔁 6 💬 0 📌 0
The MPE Project

The 2025 Neuroscience of Pure Awareness Prize has been announced.

The 2025 Computational Phenomenology of Pure Awareness Prize has been announced.

mpe-project.info/the-mpe-proj...

05.03.2025 17:59 👍 13 🔁 11 💬 0 📌 1
Dynamics of specialization in neural modules under resource constraints - Nature Communications: The extent to which structural modularity in neural networks ensures functional specialization remains unclear. Here the authors show that specialization can emerge in neural modules placed under reso...

What's the right way to think about modularity in the brain? This devilish 😈 question is a big part of my research now, and it started with this paper with @solarpunkgabs.bsky.social, finally published after the first preprint in 2021! 🤖🧠🧪

www.nature.com/articles/s41...

23.01.2025 16:37 👍 201 🔁 65 💬 6 📌 8

New paper! Now in press at Cognition:

Experimental evidence that exerting effort increases meaning

Check out @aidanvcampbell.bsky.social's new paper. This was a real effort...and boy was it meaningful (especially now that it was accepted!). Read the preprint here: osf.io/preprints/ps...

14.01.2025 01:08 👍 47 🔁 11 💬 0 📌 1