Super excited to finally share a project I've been working on for quite some time: a new paper on the Singularity Hypothesis! We argue that there are more good arguments for it, and fewer good arguments against it, than many philosophers assume.
philpapers.org/archive/KIRR...
16.07.2025 15:38
Philosophers and AI folks: I'm writing a paper on the singularity hypothesis, and I'm looking for some recent (i.e. since late 2024) expressions of skepticism about it from philosophers or ML folks that I can quote. The more well known the person, the better! Any ideas?
03.06.2025 10:59
Excited to share a new review paper I wrote with William D'Alessandro surveying the range of exciting philosophical and technical work currently being done on AI safety! Forthcoming in Philosophy Compass.
philpapers.org/archive/DALA...
30.04.2025 14:42
AI safety: a climb to Armageddon? - Philosophical Studies
This paper presents an argument that certain AI safety measures, rather than mitigating existential risk, may instead exacerbate it. Under certain key assumptions - the inevitability of AI failure, th...
Third, in "AI safety: A climb to Armageddon?" Herman Cappelen, Josh Dever, and John Hawthorne ask a question that gets far too little attention in AI safety: Could the work we're doing simply be ensuring that safety failures will be worse when they occur?
link.springer.com/article/10.1...
07.03.2025 05:26
Those without institutional access can download Sven's paper here:
cd.kg/wp-content/u...
07.03.2025 05:25
Excited to share *three* important new papers from the special issue on AI safety!
07.03.2025 05:24
AI wellbeing - Asian Journal of Philosophy
Under what conditions would an artificially intelligent system have wellbeing? Despite its clear bearing on the ethics of human interactions with artificial systems, this question has received little ...
It's finally out! Click to find out whether YOUR AI assistant is a moral patient!
In all seriousness, though, this is an important project and I hope it helps advance discussion of the possible moral properties of artificial systems.
link.springer.com/article/10.1...
01.02.2025 22:03
Those without institutional access can find the paper here: www.cd.kg/wp-content/u...
22.01.2025 17:37
We argue that the best way to think about AI safety has it include *both* work on catastrophic risks and work that's traditionally been situated within AI ethics.
This matters because disciplinary boundaries affect who's treated as an expert and who gets to help set policy.
13.01.2025 19:26
By now you've probably heard about AI safety, but have you ever wondered what AI safety actually *is*, or how it's related to AI ethics?
Well, you're in luck! Jacqueline Harding and I have a new paper answering these questions.
philpapers.org/archive/HARW...
13.01.2025 19:26
Our goal in the paper is to provide a readable introduction to the main issues in this area, together with references to relevant literature and some of our own takes on the state of the debate. We hope the paper will serve as a go-to reference on AI risk arguments for the next couple of years.
24.01.2024 20:12
Hello, world!
11.10.2023 20:42