Try it for a month!
You can just stew things
Real media theorists know
Global *town* square? More like global village square
Spent my break doing some intensive R&D (resting and digesting)
Like a fox, I'm always on the hunt for new data to explain with my grand unified theory of everything.
A smart model would also condition the generation of subsequent fields in a JSON object on prior fields, so it seems like it would be happy to condition on the CoT.
FWIW, doing CoT in this fashion is common and an officially documented pattern in OpenAI's case.
OpenAI's implementation of structured outputs guarantees the model generates fields in the same order as they appear in the schema. platform.openai.com/docs/guides/...
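A minimal sketch of the idea, assuming a plain JSON-Schema-style dict (not OpenAI's exact request format): list the reasoning field before the answer field, so a model that generates fields in schema order emits its chain of thought first and the answer tokens get conditioned on it.

```python
import json

# Hypothetical schema for illustration: "reasoning" is declared before
# "answer", so a model that fills fields in schema order writes its CoT
# first, and the answer is generated conditioned on that reasoning.
schema = {
    "type": "object",
    "properties": {
        "reasoning": {"type": "string"},  # chain of thought, emitted first
        "answer": {"type": "string"},     # emitted after, conditioned on reasoning
    },
    "required": ["reasoning", "answer"],
    "additionalProperties": False,
}

# Python dicts preserve insertion order, and json.dumps keeps it too,
# so the serialized schema presents "reasoning" ahead of "answer".
field_order = list(schema["properties"])
print(field_order)            # ['reasoning', 'answer']
print(json.dumps(schema)[:60])
```

The field names here are an assumption; the point is only the ordering trick.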
Are there other ways it may confuse the model?
re AI training data, I'm obviously biased, but IMO it's good to be able to slip your thoughts into the mind of god.
Snowden, creepy ads, and the phrase "my data" got everyone to think of creative output as something that should be protected from people who wanted to steal it.
That's true for some stuff like DMs and trade secrets, but in general, it's actually good to influence others.
The thing I like most about Bluesky is that all the content is public, including to crawlers. Creating in the public square, in a way that *couldn't* exclude anyone from seeing and doing what they wanted with it, was the most beautiful thing about how the Web used to work.
Sensemaking? How about you go get a real job and start DOLLARmaking?
Centsmaking through sensemaking
I always thought it was that a surprising number of high level features can be represented linearly. You can compute a kiki -> bouba vector that works ~everywhere in the space
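A toy sketch of that kind of linear feature vector, using the classic difference-of-means construction (the random arrays below are stand-ins for real model activations, which this example does not have):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder activations: in a real model these would be residual-stream
# vectors collected on "kiki"-like vs "bouba"-like inputs.
dim = 16
kiki_acts = rng.normal(size=(8, dim))
bouba_acts = kiki_acts + 1.0  # toy setup: a clean, constant linear offset

# Difference of means: a single direction that moves points from the
# "kiki" region toward the "bouba" region.
kiki_to_bouba = bouba_acts.mean(axis=0) - kiki_acts.mean(axis=0)

# In this toy setup, adding the vector to any "kiki" activation lands
# exactly on the corresponding "bouba" activation.
steered = kiki_acts[0] + kiki_to_bouba
print(np.allclose(steered, bouba_acts[0]))  # True
```

Real activations are noisier than this, so the same direction only works approximately, but the surprising empirical finding is how far one vector generalizes across the space.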
The real Marginal Revolution is that now I say "on the margin" all the time
The juicy parts of explanations for complex phenomena do generally seem to be much less complex than the data associated with the phenomena, though, so we probably have a long way to go before we cap out. But I don't see why such phenomena couldn't exist.
It doesn't seem to me like we do.
So then the argument that we can create any explanation any other being could create does seem to require that there be a bound on how complex the juicy part of any explanation could be, but I don't see why we should expect there to be any such bound.
If we show data describing a complex phenomenon to GPT-6, and it can make predictive statements about it, share compact descriptions of its theory with other instances of GPT-6 and they can then do the same, etc, but no human with any amount of hand holding can do this, do we really understand?
But what if the content of the explanation itself, the juicy part, the part we think a quantum physicist has re quantum physics (even if they can't compute the output of a quantum circuit in their head), is so large or complex that it exceeds the limits of the human mind?
He says that if the explanation requires more compute or memory than our brains have, we can build computers to help.
If the explanation is simple, but deriving it requires processing a lot of data or proving a theorem with lots of tedious but explanation-irrelevant branches, that makes sense.
Reading The Beginning of Infinity. Deutsch argues that humans' ability to create explanations is universal in reach: any explanation any being could create, we can create. I'm confused about this.
Can't an AI do the alignment for you? The AI would be intelligent enough, but you're there because you can be held accountable. Hopefully we don't build AIs that are afraid of personal consequences, but you certainly are.
Eventually, all jobs will be either artisanship or alignment. Either you do it because it's intrinsically valuable to have a human do it, or you're there to make sure the machines do the thing you want them to do rather than something else.