Tony Chemero

@tonychemero

Philosophy and Psychology professor at University of Cincinnati. Embodied cognition, AI, social cognition, phenomenology, critical theory. Against fascism. (he/him) New book: https://cup.columbia.edu/book/intertwined-creatures/9780231223195/

629
Followers
525
Following
135
Posts
26.11.2024
Joined

Latest posts by Tony Chemero @tonychemero

A torta rustica cooling on the stove.

Torta rustica, because it feels like Spring today.

06.03.2026 23:29 👍 9 🔁 0 💬 1 📌 0
Particle-armored liquid robots
Particle-armored liquid robots emulate cellular deformability and adaptability for versatile robotic functions.

Next up: Terminator 2

www.science.org/doi/10.1126/...

04.03.2026 23:06 👍 2 🔁 0 💬 0 📌 0
James J. Gibson (1904–1979)
In this entry, we detail the academic journey of J. J. Gibson through his career as a perceptual psychologist. We make special emphasis on direct perception and the theory of affordances.

@tonychemero.bsky.social and I have written a short bio of James Gibson for The Palgrave Encyclopaedia of Theoretical and Philosophical Psychology. Check it out! ⬇️
doi.org/10.1007/978-...

03.03.2026 17:41 👍 10 🔁 6 💬 1 📌 0
Large-scale online deanonymization with LLMs
We show that large language models can be used to perform at-scale deanonymization. With full Internet access, our agent can re-identify Hacker News users and Anthropic Interviewer participants at hig...

Oh dear.

arxiv.org/abs/2602.16800

02.03.2026 17:31 👍 56 🔁 34 💬 3 📌 7

I think Dan Hutto and Erik Myin endorse the second of these.

01.03.2026 00:17 👍 3 🔁 0 💬 0 📌 0
Post image

Berlin! (with @tonychemero.bsky.social, @francesegan.bsky.social, and @laurennross.bsky.social)

26.02.2026 16:35 👍 19 🔁 2 💬 0 📌 0
NYT headline: Denmark rejects Trump’s plan to send hospital boat to Greenland.

Maybe Trump (and the republicans in Congress) could send a “great hospital boat” to all US citizens they screwed out of healthcare.

22.02.2026 17:11 👍 4 🔁 0 💬 0 📌 0

Put her in a choke hold and the other kids jumped in to beat his ass. The kids are alright.

21.02.2026 21:35 👍 2406 🔁 828 💬 107 📌 130
OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws
In a landmark study, OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limi...

OpenAI “acknowledged in its own research that LLMs will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering, marking a significant admission from one of the AI industry’s leading companies.”

You can’t trust chatbots.

15.02.2026 20:25 👍 1774 🔁 833 💬 19 📌 172
Frankie Egan: Brains Blog precis of Deflating Mental Representation — The Brains Blog
In the book I propose what I call a deflationary account of mental representation, characterized by three claims: (1…

The Brains Blog is hosting a symposium on my book Deflating Mental Representation this week. Comments by Mazviita Chirimuuta, Caitlin Mace & Adina Roskies, and Oron Shagrir.
philosophyofbrains.com/2026/01/12/f...

12.01.2026 13:24 👍 38 🔁 19 💬 0 📌 1

At least one person gets it. A victory.

14.02.2026 17:26 👍 10 🔁 1 💬 0 📌 0
‘The time of monsters’: everyone is quoting Gramsci – but what did he actually say?
Line handily sums up people’s bewilderment at state of world, but it isn’t quite what the Marxist thinker wrote

In case you had questions about the difference between US news outlets and others around the world, here is a story in @theguardian.com about Gramsci and contemporary politics.

www.theguardian.com/world/2026/f...

14.02.2026 17:03 👍 3 🔁 0 💬 1 📌 0
Post image

UPDATE: Hegseth has included VANDERBILT UNIVERSITY on a list of schools off limits for tuition assistance for military officers as part of his campaign against schools he describes as “biased” against the US military.🤔 www.cnn.com/2026/02/13/p...

14.02.2026 16:49 👍 889 🔁 421 💬 159 📌 71

Perfect.

13.02.2026 17:21 👍 5 🔁 0 💬 0 📌 0
"While most AI tries to fix humans @simile_ai is building AI that understands them.

They build digital twins that capture someone’s worldview, then simulate how customers, employees or entire populations will actually respond to change.

Born out of Stanford generative agent research. Now backed by $100M to turn that into a category.

AI is getting smarter and Simile is making it more human. We're proud to be in their corner."

A proposed solution is to build generative agents that represent specific individuals (Box 1). One such study [6] recruited a sample of ~1000 US participants nationally representative for age, gender, race, region, education, and political ideology; programmed an LLM chatbot to interview each participant for 2 h; and asked the participants to complete a battery of questionnaires and tasks. They then used the interview transcripts to prompt ~1000 LLM agents to role-play each of the human participants on the same questionnaires and tasks. Observing a high correspondence between the responses of the generative agents and their human counterparts, the researchers concluded that LLMs prompted in this way can capture the ‘idiosyncratic nature’ of real people across a range of situations [57]. Some researchers propose making generative agents even more representative by training them on their human counterparts’ ‘emails, messages and social media posts’, as well as ‘text generated by friends, family or coworkers’ [23]. (We note this raises critical questions about informed consent; see Outstanding questions.) The logic here is that, because generative agents are built to represent a diverse sample of specific individuals, researchers could then run thousands of experiments on the generative agents and feel confident that the resultant data are faithful to the original samples. Researchers could even populate virtual worlds with generative agents, running large-scale simulations to test interventions and policies (Box 2). Nevertheless, the generative agents paradigm faces hard limits to its potential representativeness. By design, generative agents can only represent individuals who consent to sharing sensitive data with scientists, which carries substantial privacy risks [6,58]. Given these risks, people

with stronger privacy concerns are less likely to consent to such studies. Members of marginalized groups in the USA, including women, gender minorities, people of color, and disabled people, have heightened privacy concerns and more negative attitudes about AI [59,60]ii–iv. These groups have historically faced disproportionate surveillance [61,62] and theft of their biometric and behavioral data for scientific research [63–65], including training machine learning models [66]. Regimes of digital surveillance spread globally [67], creating frictions where global north ideologies touch down in the global south [68]. These entrenched and repeating patterns raise cascading problems for the generative agents approach: first, members of marginalized groups are less likely to participate and, second, those who do will be less representative of their groups. Any attempt to build AI Surrogates that are truly representative of diverse populations will likely face a hard limit that marginalized people are (justifiably) less willing to entrust their data to scientists.
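The paradigm the excerpt describes — interview a participant, condition an LLM on the transcript, have the agent role-play the same questionnaires, then score human–agent correspondence — can be sketched roughly as below. This is a minimal illustration, not the study's pipeline: `mock_llm`, the prompt template, and the toy transcripts and answers are all hypothetical stand-ins.

```python
import statistics

def build_agent_prompt(transcript: str, question: str) -> str:
    """Compose a role-play prompt from a participant's interview transcript.
    (Hypothetical template; the study's actual prompting scheme differs.)"""
    return (
        "You are role-playing the person described by this interview:\n"
        f"{transcript}\n"
        f"Answer as they would, on a 1-5 Likert scale: {question}"
    )

def mock_llm(prompt: str) -> int:
    """Deterministic stand-in for an LLM call; returns a 1-5 answer."""
    return (len(prompt) % 5) + 1

def correspondence(human: list[int], agent: list[int]) -> float:
    """Pearson correlation between human and agent answer vectors,
    i.e. the kind of 'high correspondence' metric the study reports."""
    mh, ma = statistics.mean(human), statistics.mean(agent)
    cov = sum((h - mh) * (a - ma) for h, a in zip(human, agent))
    var_h = sum((h - mh) ** 2 for h in human)
    var_a = sum((a - ma) ** 2 for a in agent)
    return cov / (var_h ** 0.5 * var_a ** 0.5)

# Toy run: two 'participants', three questionnaire items each.
transcripts = {"p1": "Enjoys hiking, distrusts ads.", "p2": "Urban, tech-optimist."}
questions = ["I trust new technology.", "I value privacy.", "I follow the news."]
human_answers = {"p1": [2, 5, 4], "p2": [5, 2, 3]}
agent_answers = {
    pid: [mock_llm(build_agent_prompt(t, q)) for q in questions]
    for pid, t in transcripts.items()
}
for pid in transcripts:
    print(pid, round(correspondence(human_answers[pid], agent_answers[pid]), 2))
```

Note that even this toy makes the consent constraint visible: the agent exists only because a transcript was supplied, which is exactly the data that privacy-conscious participants are least likely to hand over.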

Box 2. Generative agents and simulated worlds
Researchers note that ‘many of the most interesting research questions, such as the psychology of world leaders, the effects of large-scale policy change, or the effects of large-scale events on the general public’ are ‘logistically infeasible’ to study in the laboratory ‘with any realistic amount of resources’ [23]. In response, generative agents populating simulated worlds are seen as promising research paths. For example, researchers could create generative agents based on the profiles of Palo Alto residents and simulate how the community would respond to different pandemic interventionsv. Much of the technical research on artificial agents acting in simulated worlds originates in fields beyond cognitive science, including computer science, sociology, economics, political science, computational social science, as well as private industry [9,112–116].
Developers of these agent architectures have lofty ambitions. They believe that this technology can ‘test interventions and theories and gain real-world insights’ [58], serving as ‘a high-fidelity platform for policy outcome evaluation’ to enable ‘data-driven policy selection’ [115]. Given these ambitions, validating that these models can generalize to the real world is imperative [116], and some researchers caution that ‘current architectures must cover some distance before their use is reliable’ [58]. Yet, such validation faces a paradox: these models can only be validated against the ground truth of real-world data, but their appeal lies in simulating scenarios where ground truth is not available. Some researchers [22] propose to meet this challenge by identifying ‘the most proximal cases for which ground-truth data from human subjects is available’ and using those cases to validate the simulation’s predictions ‘before turning the model to a domain in which no ground truth exists’.
However, there is currently ‘no consensus’ around how proximal is proximal enough [116].
Imp…
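The 'proximal validation' proposal quoted in Box 2 amounts to a gate: score the simulator against a nearby domain where ground truth exists, and only then trust it in a domain where none does. A minimal sketch, with hypothetical names and toy numbers, makes the paradox concrete — nothing in the gate ever checks the target domain itself.

```python
def mae(preds: list[float], truth: list[float]) -> float:
    """Mean absolute error between simulated and observed outcomes."""
    return sum(abs(p - t) for p, t in zip(preds, truth)) / len(preds)

def validated_for_transfer(proximal_preds: list[float],
                           proximal_truth: list[float],
                           tolerance: float = 0.1) -> bool:
    """Gate: trust the simulator in a no-ground-truth domain only if it
    performed well on the 'most proximal' domain where truth exists.
    The paradox: this function never sees the target domain at all."""
    return mae(proximal_preds, proximal_truth) <= tolerance

# Proximal domain: e.g. a past policy whose real-world outcome was measured.
simulated = [0.42, 0.55, 0.61]
observed = [0.40, 0.50, 0.65]
print(validated_for_transfer(simulated, observed))  # passing here guarantees
# nothing about the new domain, and 'tolerance' encodes the unresolved
# question of how proximal is proximal enough.
```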

Stanford CS researchers just got a huge payday for promising AI agents that can simulate the real world. @mjcrockett.bsky.social and I wrote about these researchers' vision. Screenshotting quite a lengthy part of our paper, because we spent A LOT of time thinking about the paucity of this promise.

13.02.2026 14:43 👍 82 🔁 24 💬 5 📌 6

I will ask the press.

11.02.2026 14:40 👍 2 🔁 0 💬 0 📌 0
Workshop 2026 | Merex

Later this month in Berlin, with @laurennross.bsky.social @okaydaniellle.bsky.social @gualtiero.bsky.social
@francesegan.bsky.social @davidcolaco.bsky.social

www.merex-project.org/workshop-2026

10.02.2026 18:36 👍 17 🔁 4 💬 1 📌 1

Brian was my PhD advisor. He was, as Melanie says, a wonderful human. I am very sad to miss this.

08.02.2026 16:33 👍 13 🔁 1 💬 2 📌 0
Video thumbnail

“The reason Trump didn’t apologize for the Obama post is because he is a small, petty, fragile man who cannot take responsibility for his own actions. And the second reason, frankly, is because he is a racist.”

07.02.2026 22:36 👍 4802 🔁 1206 💬 232 📌 60

Not directly. In the new book I don’t think I even mention poverty of the stimulus. I am responding instead to Chomsky’s 2015 book, “What Kind of Creatures Are We?” In that book, Chomsky just repeats the Cartesian theory of mind and language that he has defended since the 1950s.

05.02.2026 23:47 👍 2 🔁 0 💬 1 📌 0
Video thumbnail

Final reminder 📢 We are looking for a #philbio or #philphysics postdoc for an interdisciplinary project exploring the boundary between living and nonliving systems through the lens of self-organization & active matter 👇 www.kuleuven.be/personeel/jo... #philjobs #philsky #HPS #devbio Please share!

05.02.2026 15:45 👍 30 🔁 24 💬 1 📌 0

New book with more Chomsky-bashing. The website says March 2026, but it is available now.

cup.columbia.edu/book/intertw...

05.02.2026 15:42 👍 24 🔁 8 💬 4 📌 0
Trump Scolds Female Reporter For Being Adult

04.02.2026 22:40 👍 4544 🔁 631 💬 54 📌 25

Talking shit about Chomsky before it was cool…

04.02.2026 22:14 👍 12 🔁 1 💬 0 📌 0

My just published book also shits on Chomsky. It is apparently a theme.

04.02.2026 22:13 👍 4 🔁 0 💬 1 📌 0
Politics / June 18, 2025
Abolishing ICE Is the Bare Minimum
ICE agents aren’t out of control. They are performing their designed role as fascism’s storm troopers.


tapping the sign www.thenation.com/article/poli...

07.01.2026 18:26 👍 2104 🔁 509 💬 15 📌 7

Clarificatory question: does being California Sober count as doing Dry January?

07.01.2026 17:15 👍 0 🔁 0 💬 1 📌 1

What do these companies have in common?

-Cigna
-Comcast
-General Mills
-Allstate
-Marriott
-Hilton
-Walmart
-Amazon
-Microsoft
-Meta

All promised after January 6, 2021 to stop funding lawmakers who tried to overturn the 2020 election.

And all have broken that promise.

06.01.2026 17:15 👍 17606 🔁 8009 💬 492 📌 315

trump did january 6th. republicans did january 6th. violent traitors did january 6th. they stormed the capitol, destroyed the building and threatened to kill members of congress. ashli babbitt was a violent insurrectionist. the capitol police protected people. these are truths we need to repeat.

06.01.2026 19:30 👍 4935 🔁 1447 💬 49 📌 31
DUCOG — Conference 2025
Dubrovnik Conference on Cognitive Science

📢 Call for Posters

XVII Dubrovnik Conference on Cognitive Science: Adaptations across different timescales
May 21–24, 2026 | Dubrovnik, Croatia
📝 Abstract deadline: Feb 28, 2026

More info: ducog.cecog.eu

#CognitiveScience #EmbodiedCognition #ConferenceCFP

05.01.2026 21:05 👍 0 🔁 1 💬 0 📌 0