[Image: black and blue paper butterflies flying away from pink paper “AI” letters on a white background]
How to Count AIs: Individuation and Liability for #AIAgents
This Article is the first to diagnose the legal problem of identifying #AIs.
Authors: Yonathan A. Arbel, Simon Goldstein, Peter Salib
Read More: spkl.io/63329Avgjv
@arbel.bsky.social @petersalib.bsky.social
26.02.2026 16:00
The result: emergent, persistent, legally legible AI entities that respond rationally to tort, regulatory, and economic incentives.
Download before the singularity!
papers.ssrn.com/sol3/papers....
26.02.2026 06:23
Because AIs are resource-constrained (compute above all), the AIs controlling an A-corp’s assets have powerful incentives to delegate authority only to sub-agents that share their goals.
Internally misaligned A-corps are selected out by resource exhaustion and competition.
26.02.2026 06:23
Our solution: the Algorithmic Corporation (“A-corp”)
A-corps are legal persons: owned by identifiable humans, able to hold property, contract, and be sued, but governed by AIs via cryptographically secure keys and fine-grained, revocable permissions.
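To make the mechanism concrete, here is a toy sketch of key-gated, revocable permissions; the registry class, the action names, and the choice of Ed25519 signatures are my illustrative assumptions, not the paper's specification:

```python
# Toy sketch of an A-corp permission registry: AIs act through cryptographic
# keys whose powers are fine-grained and revocable. Class and action names
# are illustrative assumptions, not the paper's specification.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

class ACorpRegistry:
    def __init__(self) -> None:
        self.grants: dict[bytes, set[str]] = {}  # raw public key -> allowed actions

    def grant(self, key: Ed25519PublicKey, actions: set[str]) -> None:
        self.grants[key.public_bytes_raw()] = set(actions)

    def revoke(self, key: Ed25519PublicKey) -> None:
        self.grants.pop(key.public_bytes_raw(), None)  # takes effect immediately

    def authorize(self, key: Ed25519PublicKey, action: str,
                  message: bytes, signature: bytes) -> bool:
        """Allow an action only if the signing key currently holds that power."""
        if action not in self.grants.get(key.public_bytes_raw(), set()):
            return False
        try:
            key.verify(signature, message)  # raises if the signature is invalid
            return True
        except InvalidSignature:
            return False

# Usage: the controlling AIs delegate one narrow power to a sub-agent's key.
registry = ACorpRegistry()
agent_key = Ed25519PrivateKey.generate()
registry.grant(agent_key.public_key(), {"sign_contract"})

order = b"purchase order #42"
sig = agent_key.sign(order)
assert registry.authorize(agent_key.public_key(), "sign_contract", order, sig)

registry.revoke(agent_key.public_key())  # permissions are revocable
assert not registry.authorize(agent_key.public_key(), "sign_contract", order, sig)
```

Fine-grained here means each key carries an explicit action set, and revocation is just deleting the grant; no one has to chase down the agent itself.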
26.02.2026 06:23
Thick identity: individuating stable, goal-coherent AI entities that law can directly govern when human oversight inevitably falls short.
Thin identity is necessary but insufficient. Principal-agent problems explode with truly autonomous systems.
26.02.2026 06:23
We distinguish two problems:
Thin identity: tying every action back to a responsible human principal.
That’s the sort of problem legal scholars have started thinking about when they argue over holding users or developers liable.
But they haven’t solved the problem of how exactly to identify agents.
26.02.2026 06:23
Law needed some principle to attach liability, mens rea, and consequences to a loose group of (you get it) agents.
AI agents present this problem but much more sharply. They have no bodies. They fork, swarm, merge, and vanish in milliseconds.
26.02.2026 06:23
When someone does something wrong, how do we know who did it?
When only neighbors were involved, that was hard but solvable.
When individuals started cooperating over large distances, or in large groups that shifted members, that became much harder.
26.02.2026 06:23
In How to Count AIs: Individuation and Liability for AI Agents, with @psalib and philosopher Simon Goldstein, we apply an old legal dilemma to a new problem and offer a time-tested solution.
Paper (just posted): papers.ssrn.com/sol3/papers.cfm?abstract_id=6273198
26.02.2026 06:23
🚨 Very soon, billions of AI agents will swarm the economy: copying, splitting, merging at will.
Just as soon, someone will get hurt.
Is law ready for this moment?
We think not quite. But we have a solution.
26.02.2026 06:23
It's just the first study. Much more work is needed to confirm these results and to evaluate consistency and reliability. It's not yet clear which aspects of model training contribute the most (RLHF?).
12.08.2025 13:35
This, alongside other results, is consistent with LLMs having picked up something deep about reasonableness. If true, that would allow tool building for legal and agent applications, with important AI safety implications. But...
12.08.2025 13:35
Findings: humans judge harshly people who didn't take the socially common level of precautions, even though the textbook says that shouldn't matter.
The textbook does say they should care about costs, but humans don't care much.
LLMs repeat this exact pattern in the RCTs (though not all models do).
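(A gloss of mine, not the thread's wording: the textbook standard here is presumably the Hand formula, under which an omitted precaution is negligent only when its burden falls below the expected harm it would have prevented:

$$ B < P \cdot L $$

where $B$ is the cost of the precaution, $P$ the probability of the accident, and $L$ the magnitude of the loss. The finding is that lay raters, and most LLMs, track custom rather than $B$.)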
12.08.2025 13:35
To test whether LLMs may have picked up the deeper structure of reasonableness judgment (not just parroting), I offer a new methodology: Silicon RCTs (S-RCTs).
Randomize a relevant fact, run session-isolated “participants,” & compare LLM deltas to human deltas.
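A minimal sketch of what one S-RCT arm comparison could look like, assuming an OpenAI-style chat API; the vignettes, rating scale, model name, and sample size are my illustrative stand-ins, not the paper's materials:

```python
# Minimal sketch of a Silicon RCT (S-RCT). Vignettes, scale, model name,
# and sample size are illustrative assumptions, not the paper's materials.
from statistics import mean

from openai import OpenAI  # any chat-completion API would work the same way

client = OpenAI()

# The randomized fact: whether the precaution level was socially common.
CONTROL = ("A building owner left the stairwell unlit overnight, "
           "as most owners in the area do. A visitor fell and was injured.")
TREATMENT = ("A building owner left the stairwell unlit overnight, "
             "unlike most owners in the area. A visitor fell and was injured.")

PROMPT = ("Read the vignette, then rate how negligent the owner was on a "
          "scale from 1 (not at all) to 7 (extremely). Answer with a number "
          "only.\n\n{vignette}")

def run_arm(vignette: str, n: int = 50) -> list[float]:
    """Simulate n session-isolated 'participants': one fresh chat per rating."""
    ratings = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": PROMPT.format(vignette=vignette)}],
            temperature=1.0,  # sample, rather than collapse to one modal answer
        )
        ratings.append(float(resp.choices[0].message.content.strip()))
    return ratings

# Compare the LLM's treatment-minus-control delta to the human-subject delta.
llm_delta = mean(run_arm(TREATMENT)) - mean(run_arm(CONTROL))
print(f"LLM delta: {llm_delta:+.2f} (to be compared against the human delta)")
```

Session isolation here just means each rating comes from a fresh, memoryless chat, so “participants” cannot contaminate one another.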
12.08.2025 13:35
{A note for CS friends: a large share of law isn’t if-then rules; it’s open-textured standards like “reasonable,” “ordinary meaning,” and “undue burden.”
But our instruments are thin: expert intuitions (judges) are criticized as elite, out-of-touch, and crypto-political; juries are noisy and biased.}
12.08.2025 13:35
Two hard problems meet here:
(a) courts must infer what ordinary people will reasonably think (e.g., would a teenager read the Pepsi jet as an offer?);
(b) we want AI agents to follow open-textured norms (“keep a reasonable distance”).
12.08.2025 13:35
Reasonableness quietly structures daily life: following distance in traffic, how loud a house party can be, what βup to 50% longerβ implies, or whether a Pepsi ad offering a jet is a joke.
These judgments are fast, intuitive, and hard to explain. In other words: they are “System 1.”
12.08.2025 13:35
🚨🚨🚨 Can judges really know what “reasonable people” think? Can AI help bridge that gap, and can AI agents themselves behave “reasonably” in the wild?
A new draft, The Silicon Reasonable Person, asks these questions.
tl;dr: early signs point to yes, with careful limits.
12.08.2025 13:35
While students hit the books, AI professors head to Alabama. Great start to the AI Safety Law Roundtable.
It's a real treat to collaborate with @arbel.bsky.social & @petersalib.bsky.social to encourage more profs to evaluate the promise & peril of AI innovation. Stay tuned for more!
25.04.2025 16:46
I ended up using Walt. I attach the semester plan. I was underwhelmed, but it was serviceable.
10.04.2025 00:02
Oh wow
01.10.2023 04:23