
Yonathan Arbel

@arbel

Law Prof, Contracts, AI, Defamation

1,287
Followers
25
Following
20
Posts
01.10.2023
Joined

Latest posts by Yonathan Arbel @arbel

Black and blue butterflies made out of paper structured as if they are flying away from pink AI letters made out of paper. It is on a white background.

How to Count AIs: Individuation and Liability for #AIAgents

This Article is the first to diagnose the legal problem of identifying #AIs.

Authors: Yonathan A. Arbel, Simon Goldstein, Peter Salib

Read More: spkl.io/63329Avgjv
@arbel.bsky.social @petersalib.bsky.social

26.02.2026 16:00 πŸ‘ 0 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

The result: emergent, persistent, legally legible AI entities that respond rationally to tort, regulatory, and economic incentives.
Download before the singularity!

papers.ssrn.com/sol3/papers....

26.02.2026 06:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Because AIs are resource-constrained (compute above all), the AIs controlling an A-corp’s assets have powerful incentives to delegate authority only to sub-agents that share their goals.
Internally misaligned A-corps are selected out by resource exhaustion and competition.

26.02.2026 06:23 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Our solution: the Algorithmic Corporation (β€œA-corp”) 🎩

A-corps are legal persons: owned by identifiable humans, able to hold property, contract, and be sued, but governed by AIs via cryptographically secure keys and fine-grained, revocable permissions.

26.02.2026 06:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Thick identity: individuating stable, goal-coherent AI entities that law can directly govern when human oversight inevitably falls short.

Thin identity is necessary but insufficient. Principal–agent problems explode with truly autonomous systems.

26.02.2026 06:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

We distinguish two problems:
Thin identity: tying every action back to a responsible human principal.

That’s the sort of problem legal scholars have been thinking about when they argue over holding users or developers liable.

But they haven’t solved the problem of how, exactly, to identify agents.

26.02.2026 06:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Law needed some principle to attach liability, mens rea, and consequences to a loose group of β€” you get it β€” agents.

AI agents present this problem but much more sharply. They have no bodies. They fork, swarm, merge, and vanish in milliseconds.

26.02.2026 06:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

When someone does something wrong, how do we know who did it?

When only neighbors were involved, that was hard but solvable.

When individuals started cooperating over large distances, or in large groups that shifted members, that became much harder.

26.02.2026 06:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

In How to Count AIs: Individuation and Liability for AI Agents, with @psalib
and philosopher Simon Goldstein, we apply an old legal dilemma to a new problem and offer a time-tested solution.

Paper (just posted): papers.ssrn.com/sol3/papers.cfm?abstract_id=6273198

26.02.2026 06:23 πŸ‘ 0 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

🚨Very soon, billions of AI agents will swarm the economyβ€”copying, splitting, merging at will.

Just as soon, someone will get hurt.

Is law ready for this moment?

We think not quite. But we have a solution.

26.02.2026 06:23 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

It's just the first study. Much more work is needed to confirm these results and to evaluate consistency and reliability. It's not clear which aspects of model training contribute the most (RLHF?)

12.08.2025 13:35 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

This, alongside other results, is consistent with LLMs having picked up something deep about reasonableness. If true, that would allow tool building for legal and agent applications, with important AI safety implications. But...

12.08.2025 13:35 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

Findings: humans judge harshly people who didn't take the socially common level of precautions, even though the textbook says that shouldn't matter.

The textbook does say they should care about costs, but humans don't care much.

LLMs repeat this exact pattern in RCTs (though not all models do).

12.08.2025 13:35 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

To test whether LLMs may have picked up the deeper structure of reasonableness judgment (not just parroting), I offer a new methodology: Silicon RCTs (S-RCTs).

Randomize a relevant fact, run session-isolated β€œparticipants,” & compare LLM deltas to human deltas.

12.08.2025 13:35 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
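The S-RCT recipe above (randomize a fact, run session-isolated "participants," compare deltas) can be sketched as follows. The `query_participant` function is a hypothetical stub standing in for a real LLM API call; a real study would open a fresh session per call.

```python
import random
import statistics

def query_participant(vignette: str, seed: int) -> float:
    """Stand-in for one session-isolated LLM call (hypothetical API).
    In a real S-RCT each 'participant' is a fresh session with no shared
    context; here a deterministic stub returns a 1-7 blame rating."""
    rng = random.Random(seed)
    base = 4.5 if "skipped the common precautions" in vignette else 2.5
    return base + rng.gauss(0, 0.3)   # small noise, like survey respondents

def silicon_rct(control_vignette: str, treatment_vignette: str, n: int = 50) -> float:
    """Randomize one fact across two arms and return the LLM delta
    (treatment mean minus control mean), to compare against the human delta."""
    control = [query_participant(control_vignette, seed=i) for i in range(n)]
    treated = [query_participant(treatment_vignette, seed=10_000 + i) for i in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

base = "A driver caused an accident on a residential street."
llm_delta = silicon_rct(
    base + " She took the common precautions.",
    base + " She skipped the common precautions.",
)
# With this stub, llm_delta comes out clearly positive: skipping common
# precautions raises blame, mirroring the human pattern in the thread.
```

The design mirrors a human RCT: the only thing that varies between arms is the randomized fact, so the delta isolates how much that fact moves the model's judgment.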

{A note for CS friends: a large share of law isn’t if-then rules; it’s open-textured standards like “reasonable,” “ordinary meaning,” and “undue burden.”
But our instruments are thin. Expert intuitions (judges) are criticized as elite, out-of-touch, and crypto-political; juries are noisy and biased.}

12.08.2025 13:35 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Two hard problems meet here:

(a) courts must infer what ordinary people will reasonably think (e.g., would a teenager read the Pepsi jet as an offer?);

(b) we want AI agents to follow open-textured norms (β€œkeep a reasonable distance”)

12.08.2025 13:35 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Reasonableness quietly structures daily life: following distance in traffic, how loud a house party can be, what β€œup to 50% longer” implies, or whether a Pepsi ad offering a jet is a joke.

These judgments are fast, intuitive, and hard to explain. In other words: they are "System 1" judgments.

12.08.2025 13:35 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
The Silicon Reasonable Person: Can AI Predict How People Judge Reasonableness?

In everyday life, people make countless judgments of reasonableness—judgments that determine what speed to drive on a busy street, what an advertisement like…

Read (papers.ssrn.com/sol3/papers...., arxiv.org/abs/2508.02766)

or listen (auto generated)

12.08.2025 13:35 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

🚨🚨🚨Can judges really know what β€œreasonable people” think? Can AI help bridge that gapβ€”and can AI agents themselves behave β€œreasonably” in the wild?

A new draft, The Silicon Reasonable Person, asks these questions.

tl;dr: early signs point to yes, with careful limits.

12.08.2025 13:35 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image Post image Post image

While students hit the books, AI professors head to Alabama. Great start to the AI Safety Law Roundtable.

It's a real treat to collaborate with @arbel.bsky.social & @petersalib.bsky.social to encourage more profs to evaluate the promise & peril of AI innovation. Stay tuned for more!

25.04.2025 16:46 πŸ‘ 1 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Post image

I ended up using Walt. I attach the semester plan. I was underwhelmed, but it was serviceable.

10.04.2025 00:02 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

Oh wow

01.10.2023 04:23 πŸ‘ 9 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0