
Sepehr Razavi

@srazavi

Doctoral student @ox.ac.uk and Member of Social Computation and Representation Lab - https://www.socrlab.net/people

221
Followers
438
Following
96
Posts
22.09.2023
Joined

Latest posts by Sepehr Razavi @srazavi

As always, message me for a link or, better yet, contact @joebarnby.com

06.03.2026 11:34 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

In this talk, Andreas will speak about the implementation of I-POMDPs, results obtained in practical studies, and observations on challenges and further approaches to making behaviour in experiments tractable, starting from a well-known implementation of I-POMDPs for a multi-round trust task.

06.03.2026 11:33 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

However, implementing them in a way that allows for acceptable computation times while having enough parameters to describe the nuances of human behaviour is difficult and an ongoing challenge in research.

06.03.2026 11:32 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

These processes provide interpretable parametrizations of agent interactions, with factors like fairness-mindedness/inequality aversion, irritability, or risk aversion, whilst also having established procedures for learning, models of other actors/theory of mind, and planning.
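The inequality-aversion parametrization mentioned here is commonly formalized via the Fehr-Schmidt (1999) utility function, which I-POMDP-style models of the trust task build on. A minimal sketch (the function name and parameter values are illustrative, not taken from any particular implementation):

```python
def fehr_schmidt_utility(own, other, alpha=1.0, beta=0.5):
    """Fehr-Schmidt inequality-aversion utility for a two-player payoff pair.

    alpha penalises disadvantageous inequality (envy: the other earns more),
    beta penalises advantageous inequality (guilt: I earn more).
    Values for alpha and beta here are illustrative, not fitted to data.
    """
    envy = max(other - own, 0.0)
    guilt = max(own - other, 0.0)
    return own - alpha * envy - beta * guilt
```

With an equal split the utility is just the material payoff; unequal splits are discounted in either direction, which is what makes the parameters interpretable as individual differences in fairness-mindedness.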

06.03.2026 11:31 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

To gain insights into choice-making in human social tasks, the factors underlying (un)cooperative actions, and the interpretation of others' actions and signals in social settings, Interactive Partially Observable Markov Decision Processes are an indispensable – yet intricate – model-building tool.

06.03.2026 11:31 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Very happy to announce next week's SoCR lab guest, Andreas Hula, who will be talking about the uses and misuses of I-POMDPs in modelling human social decision-making 🧡

06.03.2026 11:29 πŸ‘ 8 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

This is happening tomorrow!

18.02.2026 07:08 πŸ‘ 4 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Very much looking forward to this talk and interesting chats!

16.02.2026 17:18 πŸ‘ 5 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Apparently Ramsey invented Polymarket… (in Resnik, _Choices: An Introduction to Decision Theory_)

13.02.2026 12:31 πŸ‘ 1 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

However, social behavior depends not only on who is involved, but on the possible interactions among individuals within a given situation! If, like me, you are curious to hear more, consider reaching out to me on here or @joebarnby.com on email or LinkedIn.

09.02.2026 11:48 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

A dominant framework in social neuroscience is agent-centric representation: information about beliefs, abilities, or attitudes is tagged to individuals such as oneself or an interaction partner.

09.02.2026 11:47 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Navigating social environments is a fundamental computational challenge for the brain.

In this talk, Marco Wittmann examines how social information is represented to support flexible decision-making.

09.02.2026 11:46 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Talk Poster, Prof. Marco Wittmann, 19/02/2026 @ 9am GMT

It's a pleasure for me to announce our next Social Computation and Representation Lab invited speaker @mkwittmann.bsky.social for a talk on dimensionality reduction and basis functions in social cognition

09.02.2026 11:41 πŸ‘ 6 πŸ” 1 πŸ’¬ 1 πŸ“Œ 2

I wonder if they took this idea from Persian cuisine

01.02.2026 13:12 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
On 17th January 2026, we will be going on a sponsored 10km walk to raise money to help fund Alex's cancer treatment. To sponsor us, please donate here: https://www.justgiving.com/page/alex-warwick-20...

This weekend, I'm going on a sponsored walk to help raise money for my friend's younger brother's cancer treatment. If able, I'd really appreciate if you could share or sponsor me here: sites.google.com/view/alex-wa...

12.01.2026 10:53 πŸ‘ 3 πŸ” 2 πŸ’¬ 0 πŸ“Œ 1

Lucky students!! Looks promising

22.12.2025 16:24 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
NEPTUNE project logo with affiliations & funders

New job alert πŸ’« We’re hiring a 3-year postdoc for the NEPTUNE project to study the causal mechanisms of paranoia and social learning.

Work with us on experimental psychopharmacology (THC), social cognition, and psychosis πŸ§‘β€πŸ”¬

Apply here: lnkd.in/gQqnNvjR (my.corehr.com/pls/kclrecru...)

Please RT :)

18.12.2025 08:27 πŸ‘ 12 πŸ” 14 πŸ’¬ 1 πŸ“Œ 0

Curious why we sometimes see minds where there are none? 🧠 Join us to uncover the foundations of attributed agency.

Fully funded PhD for next year as part of DRIVE-Health, working with me, Adam Hampshire & @stefansarkadi.bsky.social

Deadline: 12/1/26
Reach out for an informal chat!

02.12.2025 00:47 πŸ‘ 9 πŸ” 10 πŸ’¬ 0 πŸ“Œ 0

I’m a bit of a scientist myself (I did well in high school maths)

07.12.2025 09:57 πŸ‘ 4 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

over here we’re slowly all getting replaced by new Tim Williamsons

04.12.2025 22:14 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Logical connectives or, more generally, relation markers?

28.11.2025 15:06 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Machine Theory of Mind Theory of mind (ToM; Premack & Woodruff, 1978) broadly refers to humans' ability to represent the mental states of others, including their desires, beliefs, and intentions. We propose to train a machi...

A couple of things worth adding IMO, although the impact is a live debate:
- This ToMNet paper arxiv.org/abs/1802.07740
- Work on Multiagent RL arxiv.org/abs/1911.10635

18.11.2025 19:19 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Reminder that this is happening tomorrow! Looking forward to seeing some of you there :)

12.11.2025 09:12 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Feature-based reward learning shapes human social learning strategies - Nature Human Behaviour This research advances a mechanistic reward learning account of social learning strategies. Through experiments and simulations, it shows how individuals learn to learn from others, dynamically shapin...

You'll find the fascinating paper here πŸ‘‰ https://www.nature.com/articles/s41562-025-02269-4

As always, feel free to reach out to me on here or @joebarnby.com via email for a link! We are looking forward to seeing many of you next Thursday :)

05.11.2025 15:44 πŸ‘ 4 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

I am pleased to announce that @davidschultner.bsky.social will be our next guest as part of the SoCR Lab's Invited Talk Series. Dr. Schultner will present recent work that offers evidence for a parsimonious and mechanistic explanation of human social learning strategies via reward learning!

05.11.2025 15:42 πŸ‘ 8 πŸ” 2 πŸ’¬ 1 πŸ“Œ 1
Who Are People with Psychosis Delusional about? A Study of Social Agents in the Phenomenology of Delusions Abstract. Introduction: Delusions frequently involve strong beliefs about, or interactions with, illusory social agents. Although such agents have been systematically described in hallucinations, few ...

New from us, led by the fantastic @elisavetpappa.bsky.social:

Who Are People with Psychosis Delusional about? A Study of Social Agents in the Phenomenology of Delusions karger.com/psp/article/...

05.11.2025 08:56 πŸ‘ 17 πŸ” 9 πŸ’¬ 0 πŸ“Œ 0

Proudly published with @andreaeyleen.bsky.social:

A metatheory of classical and modern connectionism. doi.org/10.1037/rev0...

We touch on what has been up with connectionism as a framework for computational modelling β€” & for everything it seems these days with AI and LLMs β€” pre-2010 vs post.

1/n

17.10.2025 12:53 πŸ‘ 81 πŸ” 26 πŸ’¬ 6 πŸ“Œ 18
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users β€” in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set-theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles

Finally! 🀩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n

06.09.2025 08:13 πŸ‘ 3787 πŸ” 1897 πŸ’¬ 110 πŸ“Œ 390

It’s even a bit bizarre (not to say nonsensical) to read consciousness into the Turing Test, given that Turing explicitly rejects a counter-argument from consciousness as not being measurable in the 1950 paper.

17.10.2025 19:32 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Congrats Kenny!

13.10.2025 19:19 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0