"An International Journal for Epistemology, Methodology and Philosophy of Science"
I see...
@jounihelske
Academy Research Fellow at INVEST (University of Turku), PI of CAUSALTIME project. Bayesian statistics, longitudinal causal inference, hidden Markov and state space models in general. Computational statistics in social sciences and various other things. ♾️
What did I just (try to) read? I can't figure out what the message of the paper is. If this were a more recent paper, I would have suspected it was written by an LLM.
🇫🇮 2/3 of government services are in the United States – Finland is a digital hostage! 🚨
#DigitaalinenItsenäisyys #Suomi #Tietoturva #USA #Pilvipalvelut
1/
Awful R code:
> X <<- data.frame(1:5, 4)
> `<-`(y, 10 + assign(X |> substitute() %>% deparse(), X[, X[[T]][1L]]))
> y
[1] 11 12 13 14 15
The above doesn't actually work, as <- doesn't support the pipe :(
But this beauty does work fine!
X |> `<-`(X[, X[[T]][1L]])
I just hate tibbles so much.
(don't @ me)
Blocked and reported.
I have no doubt LLMs can outperform the average "data analyst" as described in this piece, but that's a very low bar indeed. Maybe this will help promote the importance of the "statistician in the loop", something we should already have but refuse to pay for.
I bet someone uses just <<- as they are not a fan of environments...
<- vs =
And no, I don't want to hear about crazy people using ->.
I only hate how changing the printing of tibbles is hidden behind pillar options, so maybe I hate pillar and not tibble?
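For context, a minimal sketch of what the complaint refers to: tibble printing is controlled by options owned by the pillar package, not by tibble itself (option names as documented in pillar; exact values here are just illustrative):

```r
# These options live in the pillar package, but they are what you set
# when you want tibbles to print differently:
options(
  pillar.sigfig = 7,       # show more significant digits
  pillar.print_max = 100,  # print up to 100 rows before truncating
  pillar.width = Inf       # don't truncate columns to the console width
)
getOption("pillar.sigfig")  # 7
```

So the knobs exist, they are just documented under a different package than the one whose printing they change.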
New blog post introducing Causion - a web app for causal inference teaching and learning: pedermisager.org/blog/causion....
four stick figures on sleds, one is sliding face first, face down, marked "skeleton", diagonally opposite to it is one sliding feet first, face up, marked "luge". The other two combinations are drawn and labeled with question marks
given the existence of skeleton and luge, i postulate the existence of two other, yet to be discovered, winter olympic sports
📢 ⚠️ Decision-making without data? The government's cuts to Statistics Finland threaten our society.
#Tilastokeskus #Demokratia #TutkittuTieto
1/
Yeah, I haven't read the paper yet, but it feels like a hard group to generalize from: timing of birth is maybe even less random, there is selection on who starts IVF and who succeeds, effects of the economic/mental/physical/social costs of the whole process, gender roles regarding work & family might differ, ...
On the publication bias discourse, I regret that metascience has become a source of decontextualized, low-res, bean-counting-focused "science is in crisis" narratives. It is largely uncurious about science, desperately lacking in theory & measurement. I'll quote a few takes I liked & add my thoughts🧵
(I don't understand the meaning of that emoji)
I agree that if you're already locked into and well established in your topic, one more paper probably won’t make a difference unless it is something really groundbreaking. Even though quantity still seems to be valued over quality in many decisions...
Orpo's government is cutting 6 million euros from Statistics Finland. Among others, the following statistics are under threat:
-New orders in manufacturing
-Labour dispute statistics
-Population projection
-Election statistics
-Index of wage and salary earnings
-Trend indicator of output
At the same time, Orpo's government is cutting taxes on high earners by 1.5 billion euros.
Countries differ of course, but e.g. in Finland most researchers are not on a tenure track, and getting external funding is expected even of those in permanent positions, so publications matter a lot. Also for universities, as their funding is partly tied to publications affiliated with them...
Statistical significance or the lack of it shouldn't imply a good or bad study? But in practice it's of course easier to doubt a null result when expecting to find an effect. So if you see a published null, it's a big deal if the authors would have wanted to find significant results but couldn't?...
But if you believe your study is properly executed, then whether or not you get significant results shouldn't matter?
e.g., P(correct analysis)=0.01, P(significant results |correct)=0.95, P(significant |incorrect)=0.5, then P(correct |non-significant)=0.001.
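The arithmetic in the post can be checked with a short R sketch (the probabilities are the ones stated in the post; this is just Bayes' theorem spelled out):

```r
# Probabilities from the post:
p_correct       <- 0.01  # P(correct analysis)
p_sig_correct   <- 0.95  # P(significant | correct)
p_sig_incorrect <- 0.50  # P(significant | incorrect)

# P(correct | non-significant) via Bayes' theorem
p_nonsig_correct   <- 1 - p_sig_correct    # 0.05
p_nonsig_incorrect <- 1 - p_sig_incorrect  # 0.50
p_correct_nonsig <- p_correct * p_nonsig_correct /
  (p_correct * p_nonsig_correct + (1 - p_correct) * p_nonsig_incorrect)
round(p_correct_nonsig, 3)  # 0.001
```

That is, under these (admittedly stylized) numbers, a non-significant result leaves only about a 0.1% posterior probability that the analysis was done correctly.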
Of course there's no reason why good research => significant results, but as that is expected in practice, seeing null results doesn't look convincing? 2/2
There are more ways to do bad than good analysis. If you would assume random analysis choices and that good research leads to significant results more often than bad research, then seeing null results makes it more likely that analysis was done poorly than correctly. 1/2
political malpractice that this chart hasn't been seared into the brains of every single american through advertising and social media
Weird argument: what is greatness, and why would existence increase it? Also, the argument seems to rely on the assumption that someone/something can imagine the greatest possible thing, which I, at least, find an implausible assumption. Unless the thing imagining this is the greatest...
Sir Ian McKellen performing a monologue from Shakespeare's Sir Thomas More on the Stephen Colbert show. Never have I heard this monologue performed with such a keen sense of prescience. Nor have I ever been in this exact historical moment. TY, Sir Ian, for reaching us once again.
#Pinks #ProudBlue
Or, x takes only the values 0 and 100, and the error term is standard normal. Now the marginal distribution of y is clearly not Gaussian as it is bimodal, but that doesn't matter either.
You can find the citizens' initiative at www.kansalaisaloite.fi/fi/aloite/16...
You can find the website at digitaalinenitsenaisyys.fi
10/10
"Our job is to find the correct choice of specifications, not to see how many changes a result is robust to. A result can be completely correct, and yet not be robust to even small changes; conversely, a result can be robust to many different changes, and yet be wrong."
By @captgouda24.bsky.social