It would be fun for a benchmark to focus on problems that are more "visual" - truths that are easy for humans to "see" but hard for them to prove formally
Isn't natural language still awfully close to a formal / symbolic domain? Human mathematical intuition seems grounded in spatiotemporal relationships, not natural language.
It could be called "turbulence"
The Bluesky Python SDK is so cool!
Length of chain of thought does indeed correlate with difficulty - see attached
I'm genuinely confused by these statements. Chain of thought length absolutely does correlate with difficulty - generally the LLM will stop thinking when it has reached a reasonable answer. Likewise in human reasoning!
The number of tokens doesn't necessarily stay the same, does it? LLMs can execute algorithms and output the stored values at intermediate steps as tokens, so the number of tokens / amount of computation scales up with the difficulty of the problem (size of the input, in the case of factorization)
But isn't it just a constant amount of compute per token? Producing more tokens involves using more time and space. Chain of thought, etc.
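The point in this exchange can be sketched in code: if each emitted token costs a constant amount of compute, then writing out intermediate steps lets total compute scale with problem size. A minimal toy sketch (not a real LLM - the "tokens" here are just logged steps of a trial-division factorization, echoing the factorization example above):

```python
# Toy illustration: constant compute per "token", total compute scales
# with input size when intermediate steps are written out as tokens.
# Hypothetical sketch only - not any actual model's mechanism.

def factor_with_trace(n):
    """Trial-division factorization that emits one 'token' per step."""
    tokens = []  # each entry stands for one constant-cost reasoning step
    d, remaining = 2, n
    while d * d <= remaining:
        while remaining % d == 0:
            tokens.append(f"divide by {d}")
            remaining //= d
        tokens.append(f"try {d}")
        d += 1
    if remaining > 1:
        tokens.append(f"factor {remaining}")
    return tokens

# Harder inputs produce longer "chains of thought":
easy = factor_with_trace(12)
hard = factor_with_trace(10007)  # a prime, so every trial divisor is checked
print(len(easy), len(hard))
```

Under this picture, the per-step cost stays flat while the number of steps (tokens) grows with the difficulty of the instance, which is consistent with both replies above.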
By contrast, good explanatory scientific theories generalize to a broader set of "perturbations" than just the types of experiments that went into constructing the theory. Watson and Crick's model of DNA was not just a way to predict x-ray diffraction patterns.
Totally right, you said something different. You're much more in favor of this type of model learned from perturbation data.
My concern is that you end up with a causal model, yes - but the perturbations are drawn from a very constrained distribution. The ML model can more or less memorize them.
Also notable that this type of work doesn't use any of the conditional independence assumptions that are common in the causal modeling community @alxndrmlk.bsky.social
@kordinglab.bsky.social argued in a recent talk that you can't learn a model from canned data that will let you simulate perturbation experiments.
bsky.app/profile/kemp...
But this type of model seems darn close.
Cool work out of @arcinstitute.org. My question is: do models like this let us perform novel in-silico experiments the way first-principles models do, or are they just a clever way of extrapolating existing experimental data from one context to another?
Two sets of connecting fly neurons with fine, wispy arbors.
Cleaning up disk space, I found this image I made for someone not long after the release of the #HHMIJanelia #Drosophila hemibrain #connectome in 2020. It shows EPG neurons in pink providing inputs to PFL1 neurons in transparent grey. I'm not sure if the image was ever used.
Philip did mention an MW talk from Zurek, I think
Do we know if the number of steps they can perform is related to how many steps they saw in their training data? Can RL fine-tuning increase the number of steps?
Does anyone know what species this is? Would love to know more about what structures play the role of nervous system and muscles
Who knew that Chargaff was into this stuff
Against reductionism: "Our understanding of the world is built up of innumerable layers. Each is worth exploring, as long as we do not forget that it is one of many. Knowing all there is to know about one layer (...) would not teach us much about the rest". Erwin Chargaff
Things that aren't chock-full of information-bearing molecules
Because the kind of theories we want involve phenomena that span 3-4 orders of magnitude in space (synapses vs. brains) and 6-7 orders of magnitude in time (action potentials vs. skill acquisition)?
There's a good definition of computational universality (Church-Turing) - why couldn't there be one of general intelligence?
If constructor theory told us something amazing *was* constructible, it might help motivate us to build it.
Conversely we could avoid wasting our time on things not even constructible in principle.
Quiet posters feed. You're welcome.
To all the international students, post-docs, scientists, and other academics I've been friends with over the years - we support you, and we want you here
What do you mean by "information about"?
No. Burning a library destroys something. Not physical information (that's left in the heat and ash) but knowledge about the world. Whatever the fire is destroying, the brain can create "de novo". It's not conserved.
Physics is also information-preserving.
So there's been no "new" information since the Big Bang.
But there must be some other sense in which new things do come into existence.
New information, no.
But new ideas, new knowledge, yes.
Einstein didn't acquire relativity from observations, he invented it.
@annakaharris.bsky.social @philipgoff.bsky.social All our *discourse* about C is 3rd-person observable - neurons firing, vocal cords moving, etc. We expect a boring old physical story one day. Won't that story undercut panpsychism?
@seanmcarroll.bsky.social did you ever get a satisfying answer?