Maybe a sub-question: is art created by an LLM inherently different from the same art created by a human, and if so, how?
Deterministic arrows representing a function of covariates at the unit level (however you got that function) are fine. But the function itself, as a random object, is not at the same level as the units. If you want to include it in a DAG, you need a DAG where samples are the units and you have an n of 1
I think DAGs represent population-level distributions and am having a hard time seeing how it's coherent to include sample statistics at all… (maybe the letter addressed this, couldn't access the full article on my phone)
I know you intentionally didn't name them, but any chance you'd be willing to? Throwing them new readers like me might outweigh the mild public shaming?
protestors have painted what appears to be a tunnel on the side of the mountain, causing ICE agents to run into the mountain at full speed.
disgusting.
My internal monologue
A more borderline one is track/swim meets. You can break the world record in a qualifying heat and then get a middling time/score in the final heat and get no medal, or do the reverse and get gold
More generally, in tennis you want to spread out your total points won so that you win the most games and sets. You can easily win more points and lose a tennis match if they're not properly grouped
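A toy numeric sketch of the grouping effect (the scores below are invented purely for illustration): one player wins more total points but loses the match, because their points are bunched into a single lopsided set.

```python
# Invented set-by-set scores: (games won by A, games won by B) and
# (points won by A, points won by B) within each set.
sets = [
    {"games": (6, 0), "points": (24, 0)},   # A wins set 1 easily
    {"games": (4, 6), "points": (16, 24)},  # B narrowly wins set 2
    {"games": (4, 6), "points": (16, 24)},  # B narrowly wins set 3
]

points_a = sum(s["points"][0] for s in sets)
points_b = sum(s["points"][1] for s in sets)
sets_a = sum(s["games"][0] > s["games"][1] for s in sets)
sets_b = sum(s["games"][1] > s["games"][0] for s in sets)

print(points_a, points_b)  # 56 48 -> A wins more total points
print(sets_a, sets_b)      # 1 2  -> but B wins the match
```

The same arithmetic is behind Simpson's-paradox-style reversals: the totals and the grouped comparisons can disagree.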
You should read Masters of Atlantis: en.wikipedia.org/wiki/Masters...
Waiting for the regression discontinuity
Thank you for the invitation and for the great discussion!
We (Audrey Renson and @pausalz.bsky.social) just updated this paper on interference in time-varying DiD settings from a while ago: arxiv.org/abs/2405.11781. Still see papers coming out regularly about problems that I'm pretty sure this solves...
Is there any movement to get them to stop showing players' postseason stats on tv? I wanna know how good they are, not how lucky they were the past few games
And I now see how this doesn't depend on intervention. E.g. conditioning on being on Earth induces associations between variables related to falling, explained mathematically by Newton. So I think I fully understand now, thanks for bearing with the stream of consciousness if you made it this far!
(Didn't Google Boyle's law, hope I got it right)
I guess it doesn't need to be as magical as fate. You could imagine presetting a machine to maintain gas pressure at some level however it needs to. Then temperature and volume become counterfactually related. And the ideal gas law PV = nRT (so T ∝ V at fixed P) is a mathematical non-mechanistic explanation like you discuss in the paper
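The counterfactual link can be checked numerically from the ideal gas law, PV = nRT (a minimal sketch; the machine holding pressure fixed is the hypothetical part):

```python
# Ideal gas law: P * V = n * R * T. If a machine holds P (and n) fixed,
# then T = P * V / (n * R), so temperature and volume move in lockstep.
R = 8.314      # gas constant, J/(mol K)
n = 1.0        # mol
P = 101325.0   # Pa, held fixed by the hypothetical pressure-maintaining machine

def temperature(volume_m3: float) -> float:
    """Temperature implied by the gas law when P and n are held fixed."""
    return P * volume_m3 / (n * R)

t1 = temperature(0.0224)  # roughly the standard molar volume
t2 = temperature(0.0448)  # doubling V exactly doubles T at fixed P
print(t1, t2)
```

Intervening on V now determines T (and vice versa), even though nothing mechanistic connects them directly: the machine's regulation of P creates the dependence.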
Ok, but now I'm back to my original understanding that the distinguishing feature of pre-selection is that it occurs via intervention and induces association via 'fate' as in your fairy godmother example
Ok, but C is in the past of E and D, the exposure and outcome of interest? I think I'm being dense, so I won't ask you to explain again!
In M-bias, you're conditioning on C, which is something pre-baseline like an eligibility criterion for a study. But it doesn't arise from actually intervening to set C. Is that what makes predecessor bias different?
Ha, I don't think anything's wrong, but I think M-bias from conditioning on pretreatment variables is one common case of what you're describing
Yeah I was asking about your pinned post paper, read it with great interest
Does predecessor bias have to come from a 'fated collider' that seems to maybe be a uniquely quantum thing? Or do you use the term to refer to any setting where conditioning on a pre-intervention variable induces an association? We call the latter case 'M-bias' journals.lww.com/epidem/abstr....
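For readers new to M-bias, a minimal simulation (variable names and numbers are my own, not from the linked paper) showing that conditioning on a pretreatment collider C induces an exposure-outcome association even though X has no effect on Y:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# M-structure: U1 -> C and U1 -> X; U2 -> C and U2 -> Y.
# X has no causal effect on Y, and no common cause links them directly.
u1 = rng.standard_normal(n)
u2 = rng.standard_normal(n)
c = u1 + u2 + rng.standard_normal(n)   # pretreatment collider
x = u1 + rng.standard_normal(n)        # exposure
y = u2 + rng.standard_normal(n)        # outcome

corr_all = np.corrcoef(x, y)[0, 1]           # ~0: marginally independent
sel = c > 1                                   # condition on C, e.g. an eligibility cut
corr_sel = np.corrcoef(x[sel], y[sel])[0, 1]  # clearly negative: collider bias

print(round(corr_all, 3), round(corr_sel, 3))
```

Selecting on C makes U1 and U2 negatively associated within the selected sample, which propagates to X and Y; no intervention on C is needed.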
Causal people are usually interested in causal effects and would want to adjust away what we would call that spurious association. So we would just call it confounding adjustment. Maybe more descriptive people like demographers have a word for adjusting for confounding when you don't want to
Forgot to say, 'we' is me and my phd advisor David Madigan, who took a break from being a provost to do research again. Nice reunion!
Next, we ask: 'when will the key assumptions be satisfied?' We argue that under a causal pie model the answer is 'basically never'! But we think violations should be small in practice, and we provide a sensitivity analysis.
Then we develop Neyman orthogonal estimators for when S is only independent of Y(1) given Y(0) and covariates.
First, we simply point out that you don't really need multiple studies. You just need one study and a baseline covariate S that you can use to make your own 'substudies' that satisfy their assumptions. This gives you more control over whether the assumptions are approximately satisfied.
I think this paper we just put out is kind of interesting (arxiv.org/abs/2509.20506)! Wu and Mao (arxiv.org/abs/2504.20470) cleverly showed that if you have multiple studies and 'study' is associated with Y(0) but not Y(1) given Y(0), you can identify the joint distribution of Y(0) and Y(1).
Dare I say that this post is ironically on the verge of starting a lively debate?
Ha, yeah, that was "unconstructive".
Guess you can usually tell who wants feedback and who's celebrating from the wording of their post. Definitely shouldn't rain on people's parade when they're clearly celebrating