It's possible big discoveries and important steps lie more in the unpredictable combinations than the predictable ones. Collective behavior is one example (computer graphics techniques + animal behavior questions).
It's more the difficulty of altering dynamics without degrading functionality. Collective behavior is so fickle and fragile it's quite strange science works at all, in some ways.
For example, how do noisy and finite but diverse individual readings of the literature differ from everyone getting the same summaries?
We wrote about the general idea here. From what we know about collective behavior, it's highly unlikely that altering it for short-term profits is going to produce desirable outcomes at scale.
www.pnas.org/doi/10.1073/...
I think the termite analogy gets us to nearly the opposite conclusion. Animal collective behavior often relies on highly selected rules for interaction that emergently produce functionality. Altering that with robot termites that don't follow those rules would almost certainly collapse functioning.
A social media post from Donald J. Trump on Truth Social, posted 9 minutes ago. The text reads: "Iran tried to interfere in 2020, 2024 elections to stop Trump, and now faces renewed war with United States:" followed by a link to justthenews.com. Below is a link preview showing the article headline "Iran tried to interfere in 2020, 2024 elections to stop Trump, and now faces possible war with U.S." accompanied by a photo from what appears to be an Iranian protest or rally, showing demonstrators holding anti-Trump signs and posters with Persian text.
I am going to need every single person who said that misinformation was a moral panic to feel enough shame and regret that they leave public intellectual life.
Related point: if you are in misinformation research and have been launching study after study to see what Uncle Fred thinks after seeing misinformation, PLEASE listen to what @katestarbird.bsky.social and I have been saying for years: effects on elite and official action are what matter most.
I suspect it is! There's been some work on ai companies and research, but I think largely focused on ai research rather than its consequences.
I do think the field will turn up messy results unless we start getting COIs straightened out.
I'm with @devezer.bsky.social, you *absolutely* cannot test for p-hacking.
These tests universally assume that p-hacking, and nothing else under the sun, explains deviations from idealized distributions, and then claim to have found it.
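A toy simulation (entirely invented numbers, not any specific published detector) makes the circularity concrete: an honest literature with heterogeneous true effects already deviates from the idealized uniform p distribution, so the deviation by itself can't be pinned on p-hacking. A minimal sketch:

```python
import math
import random

def p_value(z):
    """Two-sided p-value for a z statistic via the standard normal CDF."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def honest_literature(n_studies=20_000, n_per_study=30, seed=1):
    """Simulate studies with NO p-hacking: each study honestly reports
    one test, but true effects are heterogeneous (half are null)."""
    rng = random.Random(seed)
    ps = []
    for _ in range(n_studies):
        effect = rng.uniform(0.1, 0.8) if rng.random() < 0.5 else 0.0
        xs = [rng.gauss(effect, 1.0) for _ in range(n_per_study)]
        z = (sum(xs) / n_per_study) * math.sqrt(n_per_study)
        ps.append(p_value(z))
    return ps

ps = honest_literature()
frac_sig = sum(p < 0.05 for p in ps) / len(ps)
# frac_sig lands far above the 5% an idealized uniform null predicts,
# yet nobody hacked anything: heterogeneity alone produced the deviation.
```

The same point holds for subtler distributional diagnostics: anything besides the assumed ideal (mixed effects, selective publication, varying power) moves the distribution too.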
Fantastic thread.
For example, if someone wants to tell me about space travel, I should treat that information differently if they are a friend I trust who has a PhD in astrophysics, versus some random person on the internet, versus a psychotic billionaire trying to sell a space travel company.
9/
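That intuition can be written as a one-line Bayes update; the function name and all the probabilities below are invented for illustration:

```python
def belief_after_report(prior, p_assert_if_true, p_assert_if_false):
    """Bayes' rule in odds form: how much a claim should move you
    depends on how the source behaves, not just on what they say."""
    odds = (prior / (1.0 - prior)) * (p_assert_if_true / p_assert_if_false)
    return odds / (1.0 + odds)

prior = 0.10  # invented prior that the claim is true

# A trusted expert rarely asserts things that are false.
expert = belief_after_report(prior, 0.90, 0.05)
# A random stranger is only weakly diagnostic.
stranger = belief_after_report(prior, 0.60, 0.40)
# A motivated seller asserts the claim whether or not it's true,
# so the report carries almost no information.
seller = belief_after_report(prior, 0.95, 0.90)
```

The seller's report barely budges the prior precisely because they would say it either way.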
If the sign-up bonus is good, I'd even give it a depression survey and plot the results over time.
Stupidest job ever, but like… I'd take it.
For $400k a year, I'd ask Claude how it's doing each day and send Dario Amodei a thumbs-up emoji on Slack.
AI makes continuous reproducibility and robustness testing trivial. What happens to science under new levels of scrutiny and stress-testing by default?
Some thoughts on how this could play out, informed by watching open science unfold over the last decade.
Good piece. Carl and I reached similar conclusions about ai in peer review. It's good to see the hype tamed with what it is we actually do when we're doing science.
www.nature.com/articles/d41...
My experience with trying to get LLMs to write statistical models is that they will happily and silently bury an implausible assumption about the data generating process in an unobjectionable, conventional structure.
Of course we shouldn't condition our model choices on the inferences we draw, but a model's form is the questions it asks, based on what we know, assume, value, etc.
I truly don't see how that can be automated; it's where our knowledge and data actually meet.
It's as if, when we choose the terms in a model or its form, we're guessing at a right answer that exists somewhere out there, and the biggest threat is that we might choose the model based on our subjectivity.
So we want to hedge across options, or ask a machine to do it.
More seriously, a lot of the LLMs-in-analysis discourse seems to be stuck in the same sort of thinking that motivated averaging full specification curves of every possible model, or many-analysts studies.
Less seriously, I'm gonna be big mad when the EC2 instance checking my reproducibility OOMs and rejects my submission.
Many are appropriately outraged by Altmanβs comments here implying that raising a human child is akin to βtrainingβ an AI model.
This is part of a broader pattern where AI industry leaders use language that collapses the boundary between human and machine.
🧵/
"There are deeper strata of truthβ¦there is such a thing as poetic, ecstatic truth. It is mysterious and elusive, and can be reached only through fabrication and imagination and stylization." - Werner Herzog
Frankly, if you can't figure out how to use a proper environment for your code, it raises deeper questions about the analysis.
Definitely inspired by it. I don't understand conceptually adding something non-reproducible to the problem of reproducibility when it's a small infrastructure investment to ensure code compiles, the way we do with LaTeX.
Most code Iβve struggled to reproduce could be addressed with a little template repository, even just using package management.
Containers make it all easier, but it's weird how rarely someone makes it clear which version of (say) R they used.
I've lost count of the times I've downloaded someone's R code and spent three hours figuring out which versions of which packages don't break things, while replacing hardcoded paths everywhere.
I think the benefits sort of pay for themselves in terms of avoiding dependency hell when writing code. There is a learning curve, but it's not particularly steep relative to everything else in science. For example, even using pip freeze > requirements.txt and relative paths would go a long way.
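On the Python side, even a tiny script beats nothing. This sketch (the function name is made up, and `importlib.metadata` requires Python 3.8+) pins the interpreter and package versions an analysis relies on:

```python
import sys
from importlib import metadata

def freeze_subset(packages):
    """Write pinned versions for just the packages an analysis imports:
    a readable, manual stand-in for full `pip freeze` output."""
    v = sys.version_info
    lines = [f"# python {v.major}.{v.minor}.{v.micro}"]
    for name in packages:
        try:
            lines.append(f"{name}=={metadata.version(name)}")
        except metadata.PackageNotFoundError:
            lines.append(f"# {name}: not installed")
    return "\n".join(lines)

# Pin whatever the analysis actually uses, then commit the file:
# open("requirements.txt", "w").write(freeze_subset(["numpy", "pandas"]))
```

Committing even this next to the code removes most of the guesswork described above.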
My old boss and current boss happened to meet halfway across the world today, and I'm truly hoping they took the piss out of me.
"Have the agentic robot do it"
Vs.
"Use nix, GitHub, docker, or even just decent dependency management"
People excited about computational reproducibility with LLMs are gonna lose their minds when the LLM tells them that it's largely a solved problem with dozens of suitable tools and it just requires knowing basic software development.
There's also plenty of complexity inherent to physics (the beast of social systems), but like lots of complex systems, you get emergent forms of simplicity amenable to models. I'm not sure we've done a good job integrating theory and empirics because of the fallout over homo economicus.