Congrats, it's useful (if obscure) obXKCD: xkcd.com/1053/ (OK, maybe & vs && isn't as exciting as diet Coke and mentos ...)
Can you ask Claude to redo this on a scale where we don't get negative paylines? (e.g. something like a logistic fit with logit(percentile) as the predictor?)
Long-awaited: lme4 2.0-1 now handles some simple structured covariance matrices: diagonal ['true' - not just semantic splitting of `(x+y|f)` into `(x|f)` + `(y|f)`], compound symmetric, AR1 (no GPs/factor-analytic/reduced-rank/etc. yet ...)
it's on my "peeves" list too: bbolker.github.io/bbmisc/peeve...
Cover of the book *Introduction à la statistique bayésienne avec R*
New book!
*Introduction à la statistique bayésienne avec le logiciel R* (Introduction to Bayesian statistics with R)
Discover Bayesian statistics step by step, and how to put it into practice in R: Bayes' theorem, MCMC, priors, regression, GLMs/GLMMs, and model comparison/validation.
Published by Éditions Quae
Photo credit: Yann Raulet
What's your Erdős-Bacon-Sabbath number? (Is it finite?)
Maybe not "statistical concept", but this is fun for MCMC algorithms chi-feng.github.io/mcmc-demo/ap... (references at chi-feng.github.io/mcmc-demo/)
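(For anyone who wants a non-interactive taste of what those demos animate: a minimal random-walk Metropolis sampler in R, targeting a standard normal. A sketch in the spirit of the demos, not code from them.)

```r
# Random-walk Metropolis: propose a local move, accept with probability
# min(1, target(prop)/target(current)); otherwise stay put.
set.seed(1)
target_logdens <- function(x) dnorm(x, log = TRUE)  # log of the target density
n <- 5000
chain <- numeric(n)
chain[1] <- 0
for (i in 2:n) {
  prop <- chain[i - 1] + rnorm(1, sd = 1)           # symmetric proposal
  log_alpha <- target_logdens(prop) - target_logdens(chain[i - 1])
  chain[i] <- if (log(runif(1)) < log_alpha) prop else chain[i - 1]
}
mean(chain); sd(chain)  # should land near 0 and 1
```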
A quick plug for juliapackages.com/p/mixedmodels (by Doug Bates and phillipalday.com -- similar capabilities to lme4 but **much** faster)
For those who haven't yet seen this classic: stats.stackexchange.com/q/185507/2126 (someone whose manager was insisting that they do this ...)
The Bayesian results imply much higher risk of early collapse than maximum likelihood methods. This difference is due to large probabilities of early collapse for certain parameter values that are plausible in light of the data. Because of simplifying assumptions, these results are not directly applicable to assessment. Nevertheless they imply that maximum likelihood and similar methods based upon point parameter estimates will grossly underestimate the risk of early collapse.
This is decision-theoretic rather than stats only, but: Ludwig, Donald. "Uncertainty and the Assessment of Extinction Probabilities." Ecological Applications 6, no. 4 (1996): 1067-76. doi.org/10.2307/2269....
My thoughts on this aren't fully cooked, but: this presupposes that non-null results are more interesting than null results (which is mostly true given the way most scientists set up hypotheses but doesn't have to be). Is there a bias-interestingness tradeoff dial we could adjust?
complete separation?
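(For anyone following along: "complete separation" is when a predictor splits the 0s and 1s perfectly, so the logistic-regression MLE doesn't exist. A toy R illustration with made-up data:)

```r
# Toy data: every y = 1 has x > 0 and every y = 0 has x < 0,
# so x separates the outcomes perfectly.
d <- data.frame(x = c(-2, -1, -0.5, 0.5, 1, 2),
                y = c( 0,  0,    0,   1, 1, 1))
# glm() warns ("fitted probabilities numerically 0 or 1 occurred");
# the slope estimate diverges toward infinity and its SE blows up.
fit <- suppressWarnings(glm(y ~ x, data = d, family = binomial))
coef(summary(fit))
```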
It's good you asked, since I got to go down a rabbit hole/didn't know about him before.
I think your AI overlords are bullshitting/hallucinating. See the other part of thread on en.wikipedia.org/wiki/Ernest_... (who sounds like a true badass BTW ...) [off-topic: is there something I can read to update my mental model of BSky threading, which I don't get at all?]
Sorry, missed alt-text on the second screenshot: "As Just remarked in the symposium this morning, he is interested more in the back than in the bristles on the back and more in eyes than in eye color"
"Embryologist E. E. Just complained that genetics and selection could explain why populations of flies had more or fewer bristles on their backs, but it couldn't explain how a fly constructed its back in the first place (Harrison 1937: 372; Gilbert et al 1996: 361)"
The quote is via Amundson R (2005) The Changing Role of the Embryo in Evolutionary Thought: Roots of Evo-Devo.
Cambridge University Press, Cambridge (Google Books screenshot); Harrison, Ross G. 1937. "Embryology and Its Relations." Science 85 (2207): 369-74 (JSTOR screenshot)
Think this is what you want? philarchive.org/archive/LIAT... quotes: "I am interested more in the fly's back than the bristles on its back, and more in its eye than its eye color" (E.E. Just). Paper has more details, plus see: en.wikipedia.org/wiki/Ernest_...
I get the idea ... I thought the people I tagged might have an idea about the source (both fly people, although I think more on the evo of morphology side than strictly evo-devo ... [my characterization, not theirs])
@idworkin.bsky.social ? @thelonglab.bsky.social ?
the "newbies" check (pkgs w/o prior releases on CRAN) involves a particularly fussy human-administered set of checks (e.g. do all functions have explicitly documented return values?). Also things like spell-check false positives (which can be fixed via dirk.eddelbuettel.com/blog/2017/08... )
don't forget the 🇨🇦 fentanyl czar! www.canada.ca/en/privy-cou... (not really Canada's idea ...)
... in the bottom of a locked file cabinet in a disused lavatory with a sign on the door saying "Beware of Leopard" ...
results of `apropos("R.?[Vv]ersion", ignore.case=FALSE)`: c("getRversion", "R.version", "R.Version", "R.version.string")
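(For completeness, how those differ in practice: `getRversion()` is usually the one you want for comparisons, since it returns a version object rather than a string. All four names above are base R.)

```r
getRversion()      # a package_version object
R.version.string   # a plain character string, good for printing
# Version objects compare component-wise ("4.10.0" > "4.9.0"),
# which naive string comparison would get wrong:
getRversion() >= "3.0.0"
```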
argh.
While we're here, the other tool I'd love (if the universe would magically grant it to me) would be a lockfile/sessionInfo "diff" utility ... (see github.com/rstudio/renv... )
Fun that the papal bulls were alphabetized by author, *all under "P"* (e.g. "Pope Benedict XVI" not "Benedict XVI"). My admittedly ancient Chicago Manual of Style (13th ed., 1982) says only (rule 18.76) "Monarchs & popes should be listed according to their 'official', not personal, names" ...
I agree that it doesn't (that I know of) exist. It would be tempting to write it oneself, or possibly to vibe-code it: using `renv` lockfile format as a target seems sensible (since then you get the "now install all this stuff" functionality for free from `renv`)
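Something like this minimal sketch, maybe. It assumes renv's lockfile layout (JSON with a `Packages` table keyed by package name, each entry carrying a `Version`) and leans on `jsonlite`; the function name and file paths are placeholders.

```r
# Hypothetical lockfile "diff": report packages added, removed, or
# version-changed between two renv.lock files.
lockfile_diff <- function(old_path, new_path) {
  versions <- function(path) {
    pkgs <- jsonlite::fromJSON(path)$Packages
    vapply(pkgs, function(p) p$Version, character(1))
  }
  old <- versions(old_path)
  new <- versions(new_path)
  all_pkgs <- sort(union(names(old), names(new)))
  out <- data.frame(
    package = all_pkgs,
    old     = unname(old[all_pkgs]),  # NA = package added
    new     = unname(new[all_pkgs]),  # NA = package removed
    stringsAsFactors = FALSE
  )
  # keep only added, removed, or version-changed packages
  out[is.na(out$old) | is.na(out$new) | out$old != out$new, ]
}
```

From there, `renv::restore()` against the target lockfile handles the actual installation, per the "for free" point above.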
niche (pronounced "neesh" 🇨🇦) question: can anyone tell me the dates when NSERC Discovery Grant results were announced (by university research offices to applicants) in 2024 and 2025? I want to update my predictive model gist.github.com/bbolker/ce64... for this year ...
be careful, we're sliding toward the leading-comma debate ...
Circle 4: Over-Vectorizing

"We skirted past Plutus, the fierce wolf with a swollen face, down into the fourth Circle. Here we found the lustful.

It is a good thing to want to vectorize when there is no effective way to do so. It is a bad thing to attempt it anyway.

A common reflex is to use a function in the apply family. This is not vectorization, it is loop-hiding. The apply function has a for loop in its definition. The lapply function buries the loop, but execution times tend to be roughly equal to an explicit for loop. (Confusion over this is understandable, as there is a significant difference in execution speed with at least some versions of S+.)

Table 4.1 summarizes the uses of the apply family of functions. Base your decision of using an apply function on Uwe's Maxim (page 20). The issue is of human time rather than silicon chip time. Human time can be wasted by taking longer to write the code, and (often much more importantly) by taking more time to understand subsequently what it does."
www.burns-stat.com/pages/Tutor/...
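A quick demo of the loop-hiding point, for anyone who hasn't internalized it: `vapply()` and an explicit `for` loop do the same element-at-a-time work, while the genuinely vectorized version pushes the loop down into C.

```r
x <- rnorm(1e5)

# explicit loop
r_loop <- numeric(length(x))
for (i in seq_along(x)) r_loop[i] <- x[i]^2

# apply-family version: tidier, but still a loop under the hood
r_apply <- vapply(x, function(xi) xi^2, numeric(1))

# true vectorization: one call, loop runs in C
r_vec <- x^2

stopifnot(all.equal(r_loop, r_apply), all.equal(r_apply, r_vec))
```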
I guess in a pinch I could download the zenodo repo and re-build the paper, but that seems like a hassle (and maybe there's a reason the preprint disappeared?)