As always, message me for a link or, better yet, contact @joebarnby.com
In this talk, starting from a well-known implementation of I-POMDPs for a multi-round trust task, Andreas will speak about the implementation of I-POMDPs, results obtained through practical studies, and observations on challenges and further approaches to making behaviour in experiments tractable.
However, implementing them in a way that allows for acceptable computation times while having enough parameters to describe the nuances of human behaviour is difficult and an ongoing challenge in research.
These processes provide interpretable parametrizations of agent interactions, with factors like fairness-mindedness/inequality aversion, irritability, or risk aversion, whilst also having established procedures for learning, models of other actors/theory of mind, and planning.
To gain insights into choice making in human social tasks, factors underlying (un)cooperative actions, and the interpretation of others' actions and signals in social settings, Interactive Partially Observable Markov Decision Processes are an indispensable, yet intricate, model-building tool.
Very happy to announce next week's SoCR lab guest, Andreas Hula, who will be talking about the uses and misuses of I-POMDPs in modelling human social decision-making 🧵
This is happening tomorrow!
Very much looking forward to this talk and interesting chats!
Apparently Ramsey invented Polymarket… (in Resnik, _Choices: An Introduction to Decision Theory_)
However, social behavior depends not only on who is involved, but on the possible interactions among individuals within a given situation! If, like me, you are curious to hear more, consider reaching out to me on here, or to @joebarnby.com via email or LinkedIn.
A dominant framework in social neuroscience is agent-centric representation: information about beliefs, abilities, or attitudes is tagged to individuals such as oneself or an interaction partner.
Navigating social environments is a fundamental computational challenge for the brain.
In this talk, Marco Wittmann examines how social information is represented to support flexible decision-making.
Talk Poster, Prof. Marco Wittmann, 19/02/2026 @ 9am GMT
It's a pleasure for me to announce our next Social Computation and Representation Lab invited speaker @mkwittmann.bsky.social for a talk on dimensionality reduction and basis functions in social cognition
I wonder if they took this idea from Persian cuisine
This weekend, I'm going on a sponsored walk to help raise money for my friend's younger brother's cancer treatment. If able, I'd really appreciate if you could share or sponsor me here: sites.google.com/view/alex-wa...
Lucky students!! Looks promising
NEPTUNE project logo with affiliations & funders
New job alert! We're hiring a 3-year postdoc for the NEPTUNE project to study the causal mechanisms of paranoia and social learning.
Work with us on experimental psychopharmacology (THC), social cognition, and psychosis 🧑‍🔬
Apply here: lnkd.in/gQqnNvjR (my.corehr.com/pls/kclrecru...)
Please RT :)
Curious why we sometimes see minds where there are none? 🧠 Join us to uncover the foundations of attributed agency.
Fully funded PhD for next year as part of DRIVE-Health, working with me, Adam Hampshire & @stefansarkadi.bsky.social
Deadline: 12/1/26
Reach out for an informal chat!
I'm a bit of a scientist myself (I did well in high school maths)
over here we're slowly all getting replaced by new Tim Williamsons
Logical connectives, or more generally, relation markers?
A couple of things worth adding IMO although the impact is a live debate
- This ToMNet paper arxiv.org/abs/1802.07740
- Work on Multiagent RL arxiv.org/abs/1911.10635
Reminder that this is happening tomorrow! Looking forward to seeing some of you there :)
You'll find the fascinating paper here: https://www.nature.com/articles/s41562-025-02269-4
As always, feel free to reach out to me on here or @joebarnby.com via email for a link! We are looking forward to seeing many of you next Thursday :)
I am pleased to announce that @davidschultner.bsky.social will be our next guest as part of the SoCR Lab's Invited Talk Series. Dr. Schultner will present recent work that offers evidence for a parsimonious and mechanistic explanation of human social learning strategies via reward learning!
New from us, led by the fantastic @elisavetpappa.bsky.social:
Who Are People with Psychosis Delusional about? A Study of Social Agents in the Phenomenology of Delusions karger.com/psp/article/...
Proudly published with @andreaeyleen.bsky.social:
A metatheory of classical and modern connectionism. doi.org/10.1037/rev0...
We touch on what has been up with connectionism as a framework for computational modelling (and, it seems these days with AI and LLMs, for everything), pre-2010 vs post.
1/n
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users: in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI's ChatGPT and Apple's Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.
Protecting the Ecosystem of Human Knowledge: Five Principles
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...
We unpick the tech industry's marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
It's even a bit bizarre (not to say nonsensical) to read consciousness into the Turing Test, given that Turing explicitly rejects a counter-argument from consciousness as not being measurable in the 1950 paper.
Congrats Kenny!