A torta rustica cooling on the stove.
Torta rustica, because it feels like Spring today.
@tonychemero
Philosophy and Psychology professor at University of Cincinnati. Embodied cognition, AI, social cognition, phenomenology, critical theory. Against fascism. (he/him) New book: https://cup.columbia.edu/book/intertwined-creatures/9780231223195/
@tonychemero.bsky.social and I have written a short biography of James Gibson for The Palgrave Encyclopaedia of Theoretical and Philosophical Psychology. Check it out! ⬇️
doi.org/10.1007/978-...
I think Dan Hutto and Erik Myin endorse the second of these.
Berlin! (with @tonychemero.bsky.social, @francesegan.bsky.social, and @laurennross.bsky.social)
NYT headline: Denmark rejects Trump’s plan to send hospital boat to Greenland.
Maybe Trump (and the republicans in Congress) could send a “great hospital boat” to all US citizens they screwed out of healthcare.
Put her in a choke hold and the other kids jumped in to beat his ass. The kids are alright.
OpenAI “acknowledged in its own research that LLMs will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering, marking a significant admission from one of the AI industry’s leading companies.”
You can’t trust chatbots.
The Brains Blog is hosting a symposium on my book Deflating Mental Representation this week. Comments by Mazviita Chirimuuta, Caitlin Mace & Adina Roskies, and Oron Shagrir.
philosophyofbrains.com/2026/01/12/f...
At least one person gets it. A victory.
In case you had questions about the difference between US news outlets and others around the world, here is a story in @theguardian.com about Gramsci and contemporary politics.
www.theguardian.com/world/2026/f...
UPDATE: Hegseth has included VANDERBILT UNIVERSITY on a list of schools off limits for tuition assistance for military officers as part of his campaign against schools he describes as “biased” against the US military.🤔 www.cnn.com/2026/02/13/p...
Perfect.
"While most AI tries to fix humans @simile_ai is building AI that understands them. They build digital twins that capture someone’s worldview, then simulate how customers, employees or entire populations will actually respond to change. Born out of Stanford generative agent research. Now backed by $100M to turn that into a category. AI is getting smarter and Simile is making it more human. We're proud to be in their corner."
A proposed solution is to build generative agents that represent specific individuals (Box 1). One such study [6] recruited a sample of ~1000 US participants nationally representative for age, gender, race, region, education, and political ideology; programmed an LLM chatbot to interview each participant for 2 h; and asked the participants to complete a battery of questionnaires and tasks. They then used the interview transcripts to prompt ~1000 LLM agents to role-play each of the human participants on the same questionnaires and tasks. Observing a high correspondence between the responses of the generative agents and their human counterparts, the researchers concluded that LLMs prompted in this way can capture the ‘idiosyncratic nature’ of real people across a range of situations [57]. Some researchers propose making generative agents even more representative by training them on their human counterparts’ ‘emails, messages and social media posts’, as well as ‘text generated by friends, family or coworkers’ [23]. (We note this raises critical questions about informed consent; see Outstanding questions.) The logic here is that, because generative agents are built to represent a diverse sample of specific individuals, researchers could then run thousands of experiments on the generative agents and feel confident that the resultant data are faithful to the original samples. Researchers could even populate virtual worlds with generative agents, running large-scale simulations to test interventions and policies (Box 2). Nevertheless, the generative agents paradigm faces hard limits to its potential representativeness. By design, generative agents can only represent individuals who consent to sharing sensitive data with scientists, which carries substantial privacy risks [6,58]. Given these risks, people
with stronger privacy concerns are less likely to consent to such studies. Members of marginalized groups in the USA, including women, gender minorities, people of color, and disabled people, have heightened privacy concerns and more negative attitudes about AI [59,60]ii–iv. These groups have historically faced disproportionate surveillance [61,62] and theft of their biometric and behavioral data for scientific research [63–65], including training machine learning models [66]. Regimes of digital surveillance spread globally [67], creating frictions where global north ideologies touch down in the global south [68]. These entrenched and repeating patterns raise cascading problems for the generative agents approach: first, members of marginalized groups are less likely to participate and, second, those who do will be less representative of their groups. Any attempt to build AI Surrogates that are truly representative of diverse populations will likely face a hard limit that marginalized people are (justifiably) less willing to entrust their data to scientists.
Box 2. Generative agents and simulated worlds Researchers note that ‘many of the most interesting research questions, such as the psychology of world leaders, the effects of large-scale policy change, or the effects of large-scale events on the general public’ are ‘logistically infeasible’ to study in the laboratory ‘with any realistic amount of resources’ [23]. In response, generative agents populating simulated worlds are seen as promising research paths. For example, researchers could create generative agents based on the profiles of Palo Alto residents and simulate how the community would respond to different pandemic interventionsv. Much of the technical research on artificial agents acting in simulated worlds originates in fields beyond cognitive science, including computer science, sociology, economics, political science, computational social science, as well as private industry [9,112–116]. Developers of these agent architectures have lofty ambitions. They believe that this technology can ‘test interventions and theories and gain real-world insights’ [58], serving as ‘a high-fidelity platform for policy outcome evaluation’ to enable ‘data-driven policy selection’ [115]. Given these ambitions, validating that these models can generalize to the real world is imperative [116], and some researchers caution that ‘current architectures must cover some distance before their use is reliable’ [58]. Yet, such validation faces a paradox: these models can only be validated against the ground truth of real-world data, but their appeal lies in simulating scenarios where ground truth is not available. Some researchers [22] propose to meet this challenge by identifying ‘the most proximal cases for which ground-truth data from human subjects is available’ and using those cases to validate the simulation’s predictions ‘before turning the model to a domain in which no ground truth exists’. However, there is currently ‘no consensus’ around how proximal is proximal enough [116].
Stanford CS researchers just got a huge payday for promising AI agents that can simulate the real world. @mjcrockett.bsky.social and I wrote about these researchers’ vision. Screenshotting quite a lengthy part of our paper, because we spent A LOT of time thinking about the paucity of this promise
I will ask the press.
Later this month in Berlin, with @laurennross.bsky.social @okaydaniellle.bsky.social @gualtiero.bsky.social
@francesegan.bsky.social @davidcolaco.bsky.social
www.merex-project.org/workshop-2026
Brian was my PhD advisor. He was, as Melanie says, a wonderful human. I am very sad to miss this.
“The reason Trump didn’t apologize for the Obama post is because he is a small, petty, fragile man who cannot take responsibility for his own actions. And the second reason, frankly, is because he is a racist.”
Not directly. In the new book I don’t think I even mention poverty of the stimulus. I am responding instead to Chomsky’s 2015 book, “What Kind of Creatures Are We?” In that book, Chomsky just repeats the Cartesian theory of mind and language that he has defended since the 1950s.
Final reminder 📢 We are looking for a #philbio or #philphysics postdoc for an interdisciplinary project exploring the boundary between living and nonliving systems through the lens of self-organization & active matter 👇 www.kuleuven.be/personeel/jo... #philjobs #philsky #HPS #devbio Please share!
New book with more Chomsky-bashing. The website says March 2026, but it is available now.
cup.columbia.edu/book/intertw...
Trump Scolds Female Reporter For Being Adult
Talking shit about Chomsky before it was cool…
My just published book also shits on Chomsky. It is apparently a theme.
Politics / June 18, 2025
Abolishing ICE Is the Bare Minimum
ICE agents aren’t out of control. They are performing their designed role as fascism’s storm troopers.
tapping the sign www.thenation.com/article/poli...
Clarificatory question: does being California Sober count as doing Dry January?
What do these companies have in common?
-Cigna
-Comcast
-General Mills
-Allstate
-Marriott
-Hilton
-Walmart
-Amazon
-Microsoft
-Meta
All promised after January 6, 2021 to stop funding lawmakers who tried to overturn the 2020 election.
And all have broken that promise.
trump did january 6th. republicans did january 6th. violent traitors did january 6th. they stormed the capitol, destroyed the building and threatened to kill members of congress. ashli babbitt was a violent insurrectionist. the capitol police protected people. these are truths we need to repeat.
📢 Call for Posters
XVII Dubrovnik Conference on Cognitive Science: Adaptations across different timescales
May 21–24, 2026 | Dubrovnik, Croatia
📝 Abstract deadline: Feb 28, 2026
More info: ducog.cecog.eu
#CognitiveScience #EmbodiedCognition #ConferenceCFP