bsky.app/profile/segy...
this is about the government using AI for surveillance
a somewhat rushed post about Anthropic telling the Secretary of War no, Anthropic's history with the government, etc.
That LLMs understand natural language as well as they do should dramatically change our understanding of the problem of 'making AI do what we want it to do'.
www.verysane.ai/p/alignment-...
it is probably not a good idea to put GPUs in space
www.verysane.ai/p/should-we-...
My take on Searle's Chinese Room, which is terminally engineer brained in that my entire argument is "okay, how would you actually build it, though"
www.verysane.ai/p/building-t...
An attempt to analyze recent legal filings about AI enabling user suicides, work out what caused them (OpenAI being irresponsible, mostly), and figure out how to deal with this and any similar problems in the future.
www.verysane.ai/p/ai-and-sui...
Notably, the person who checked the math agrees that you cannot just draw a smooth curve through this data, and that the original assumption that you can is a bad one. I had noted this from the write-up, and that's part of why I did not check the math myself.
It turns out someone else had already checked the math. I did not go through it myself because it seemed too painful, but if that's your jam, you can read about everything wrong with it too. forum.effectivealtruism.org/posts/KgejNn...
There is some confusion about whether or not we understand LLMs. The answer is yes and no, but mostly no. It's a complicated enough question that it seemed like it needed an article.
www.verysane.ai/p/do-we-unde...
"AI 2027" argues that AI will reach roughly human level in roughly 2027. This just happens to be right about when OpenAI would expect to start running out of money.
I argue that this is no coincidence, and that its predictions are all wrong.
www.verysane.ai/p/agi-probab...
This had an error in the post previously, fixed now.
What Makes AI "Generative"? It's Not What You Think
I can just tell you actually: it's how large the output space is, or equivalently, how many possible outputs there are. Bigger output spaces are just fundamentally different.
www.verysane.ai/p/what-makes...
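The output-space point can be made concrete with a toy back-of-the-envelope calculation (the specific numbers below are illustrative assumptions, not from the post): a classifier picks one of a handful of labels, while a generative model picks one sequence out of an astronomically large set.

```python
# Toy sketch of the "output space size" distinction (illustrative numbers).

# A 10-class image classifier has exactly 10 possible outputs.
classifier_outputs = 10

# A generative language model with a 50,000-token vocabulary producing a
# 100-token reply can emit vocab_size ** reply_length distinct outputs.
vocab_size = 50_000
reply_length = 100
generative_outputs = vocab_size ** reply_length

print(classifier_outputs)            # 10
print(len(str(generative_outputs)))  # a 470-digit number
```

Even at these modest (assumed) sizes, the generative output space has more elements than there are atoms in the observable universe, which is the sense in which "bigger output spaces are just fundamentally different."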
Some Thoughts On The Platonic Representation Hypothesis,
or,
How We're Rediscovering Platonism By Doing Statistics
or,
How AI Training Is Actually Chaining Up Some Guy In A Cave,
or,
Bait for Philosophy Majors
www.verysane.ai/p/some-thoug...
The most well-known statistic about AI water use is a lie. This makes it frustrating to talk about AI and the environment, and this is a long deep dive on that specific point.
www.verysane.ai/p/the-bigges...
Pushed previously as a notes file, this is a short list of ideas which have set, or are setting, the agenda for AI, presented entirely through quotes from the relevant people.
www.verysane.ai/p/ai-history...
I couldn't find a high-level overview of what AI is and how we got where we are, so I wrote one. www.verysane.ai/p/what-is-ai