Grady Simon

@grady

Human capabilities research. Demon hunter. AGI feeler. Working to advance humanity's ability to understand and supervise AI @OpenAI.

76 Followers · 296 Following · 24 Posts · Joined 12.04.2023
Latest posts by Grady Simon @grady

Try it for a month!

05.02.2025 15:15 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

You can just stew things

04.02.2025 03:00 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Real media theorists know

01.01.2025 02:42 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Global *town* square? More like global village square

01.01.2025 02:42 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Spent my break doing some intensive R&D (resting and digesting)

31.12.2024 21:38 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Like a fox, I'm always on the hunt for new data to explain with my grand unified theory of everything.

01.12.2024 18:38 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

A smart model would also condition the generation of subsequent fields in a JSON object on prior fields, so it seems like it would be happy to condition on the CoT.

FWIW, doing CoT in this fashion is common and an officially documented pattern in OpenAI's case.

25.11.2024 17:00 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Link preview: OpenAI Platform — Explore developer resources, tutorials, API docs, and dynamic examples to get the most out of OpenAI's platform.

OpenAI's implementation of structured outputs guarantees the model generates fields in the same order as they appear in the schema. platform.openai.com/docs/guides/...
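The CoT-first pattern described above can be sketched concretely. This is a toy illustration, not code from the thread: the schema name and field names (`chain_of_thought`, `final_answer`) are made up for the example. The idea is that because structured outputs generate fields in schema order, listing the reasoning field first means the answer field is generated after, and therefore conditioned on, the chain of thought.

```python
# Minimal sketch of a CoT-first structured-output schema.
# Field names are illustrative; the ordering is the point.
cot_first_schema = {
    "name": "math_reply",
    "schema": {
        "type": "object",
        "properties": {
            # Listed first, so it is generated first: free-form reasoning.
            "chain_of_thought": {"type": "string"},
            # Listed second: generated after, and conditioned on, the CoT.
            "final_answer": {"type": "string"},
        },
        "required": ["chain_of_thought", "final_answer"],
        "additionalProperties": False,
    },
}

# Python dicts preserve insertion order, so the serialized schema keeps
# chain_of_thought ahead of final_answer.
print(list(cot_first_schema["schema"]["properties"]))
```

In OpenAI's API this object would be passed as the `json_schema` part of the `response_format` parameter; swapping the two property entries would flip the generation order and lose the conditioning benefit.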

Are there other ways it may confuse the model?

25.11.2024 16:51 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

re AI training data, I'm obviously biased, but IMO it's good to be able to slip your thoughts into the mind of god.

25.11.2024 14:49 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Snowden, creepy ads, and the phrase "my data" got everyone to think of creative output as something that should be protected from people who wanted to steal it.

That's true for some stuff like DMs and trade secrets, but in general, it's actually good to influence others.

25.11.2024 14:46 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

The thing I like most about Bluesky is that all the content is public, including to crawlers. Creating in the public square, in a way that *couldn't* exclude anyone from seeing and doing what they wanted with it, was the most beautiful thing about how the Web used to work.

25.11.2024 14:37 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Sensemaking? How about you go get a real job and start DOLLARmaking?

25.11.2024 04:22 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Centsmaking through sensemaking

25.11.2024 04:21 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image
23.11.2024 17:37 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I always thought it was that a surprising number of high level features can be represented linearly. You can compute a kiki -> bouba vector that works ~everywhere in the space

23.11.2024 06:18 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
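The "kiki → bouba vector" idea above can be sketched with toy numbers. Everything here is invented for illustration: the 2-D "embeddings" are made up, and real feature directions are usually computed over model activations with far higher dimensionality. The construction is just a difference of means between the two concept clusters, which yields one vector that steers any point in the space.

```python
# Toy sketch of a linear feature direction ("kiki -> bouba vector"):
# average each cluster, subtract, and add the difference to any point.
def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

kiki = [[1.0, 3.0], [2.0, 4.0]]    # made-up embeddings of spiky concepts
bouba = [[5.0, 1.0], [6.0, 2.0]]   # made-up embeddings of round concepts

# Difference of means: points from kiki-land toward bouba-land.
direction = [b - k for b, k in zip(mean(bouba), mean(kiki))]

# Steer an arbitrary point along the kiki -> bouba direction.
point = [0.0, 0.0]
steered = [p + d for p, d in zip(point, direction)]
print(direction, steered)
```

The claim in the post is that one such vector works roughly everywhere in the space, i.e. the same `direction` shifts the feature regardless of which `point` you start from.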

The real Marginal Revolution is that now I say "on the margin" all the time

20.11.2024 06:04 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

The juicy parts of the explanations for complex phenomena do generally seem to be much less complex than the data associated with the phenomena though, so we probably have a long way to go before we cap out, but I don't see why such phenomena couldn't exist.

20.11.2024 05:50 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

It doesn't seem to me like we do.

So the argument that we can create any explanation any other being could create does seem to require that there be a bound on how complex the juicy part of any explanation could be, but I don't see why we should expect any such bound.

20.11.2024 05:50 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

If we show data describing a complex phenomenon to GPT-6, and it can make predictive statements about it, share compact descriptions of its theory with other instances of GPT-6 and they can then do the same, etc, but no human with any amount of hand holding can do this, do we really understand?

20.11.2024 05:50 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

But what if the content of the explanation itself, the juicy part, the part we think a quantum physicist has re quantum physics (even if they can't compute the output of a quantum circuit in their head), is so large or complex that it exceeds the limits of the human mind?

20.11.2024 05:50 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

He says that if the explanation requires more compute or memory than our brains have, we can build computers to help.

If the explanation is simple, but deriving it requires processing a lot of data or proving a theorem with lots of tedious but explanation-irrelevant branches, that makes sense.

20.11.2024 05:50 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Reading The Beginning of Infinity. Deutsch argues that humans' ability to create explanations is universal in reach: any explanation any being could create, we can create. I'm confused about this.

20.11.2024 05:50 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Can’t an AI do the alignment for you? The AI would be intelligent enough, but you’re there because you can be held accountable. Hopefully we don’t build AIs that are afraid of personal consequences, but you certainly are.

13.04.2023 01:10 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Eventually, all jobs will be either artisanship or alignment. Either you do it because it’s intrinsically valuable to have a human do it, or you’re there to make sure the machines do the thing you want them to do rather than something else.

13.04.2023 01:04 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0