It’s evidence for the claim that language is primarily a tool, and as such is subject to metacognitive optimization, in this case for efficiency. My native Polish doesn’t depend on word order, so one can change the meaning of a sentence on the fly, which is helpful in oral discussions.
30.10.2025 17:04
👍 1
🔁 0
💬 0
📌 0
Title slide: Trust the process? (Causal) Mediation analysis
Decision flowchart "So you want to conduct a mediation analysis"
This Thursday I will give a talk on (causal) mediation analysis -- happy to have finally worked out what I want to tell people.
And happy to pilot a new mode of slide sharing: just putting them on my website (juliarohrer.com/resources/)
05.05.2025 10:41
👍 147
🔁 33
💬 13
📌 1
Causal inference is great for defending and criticizing methodological ideas. Like metascientific findings, it often shows that actual science is bad and good science is hard. Purely statistical work that offloads causal conclusions to the reader is easier. The state of actual science is part of the adoption problem.
11.04.2025 11:35
👍 1
🔁 0
💬 1
📌 0
You mean that knowing part of the mechanisms constituting the object of inquiry is not sufficient. Agreed. But it is necessary, although I have a wider concept of mechanistic knowledge, probably closer to your concept of some „theory” (not the theory!). Like knowing the symptoms, not falling for selection bias, etc.
14.03.2025 19:51
👍 0
🔁 0
💬 0
📌 0
Any example of causality without mechanistic knowledge?
14.03.2025 09:16
👍 0
🔁 0
💬 1
📌 0
But how can you have better evidence without mechanisms? I mean, how do you know that some empirical evidence is in fact better without even minimal mechanistic knowledge?
14.03.2025 09:14
👍 1
🔁 0
💬 2
📌 0
This is practically but not theoretically correct. If one had fully specified mechanisms in the form of a structural causal model, one would predict correctly even in the case of a fat-handed intervention. The problem is that we have such knowledge only for constructed systems (see causal chambers).
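A minimal sketch of this point, assuming a hypothetical linear SCM (the graph Z → X → Y with Z → Y, and all coefficients, are illustrative): a fat-handed intervention that disturbs Z alongside X still yields a correct prediction, provided the full model is known.

```python
import random

# Hypothetical linear SCM: Z -> X -> Y and Z -> Y (structure and
# coefficients are illustrative, not from any real system).
def simulate(do_x=None, do_z=None, n=20_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = do_z if do_z is not None else rng.gauss(0, 1)
        x = do_x if do_x is not None else 2 * z + rng.gauss(0, 1)
        y = 3 * x + z + rng.gauss(0, 1)
        total += y
    return total / n

# A "clean" intervention sets only X; a fat-handed one also disturbs Z.
clean = simulate(do_x=1.0)                  # model predicts E[Y] = 3
fat_handed = simulate(do_x=1.0, do_z=0.5)   # model predicts E[Y] = 3.5
```

With the fully specified SCM, both expectations are computable in closed form, so the fat-handedness is no obstacle; the obstacle is that outside constructed systems we never have the full model.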
14.03.2025 09:12
👍 1
🔁 0
💬 0
📌 0
@hayoungsong.bsky.social congrats! What about negative „aha” moments, like spotting contradictions? Moreover, if we combined both negative and positive aha in one study, I would expect that the stronger the contradiction, the stronger the positive insight that resolves it. Would be lovely to see it!
14.03.2025 09:01
👍 0
🔁 0
💬 0
📌 0
This is another piece of partial evidence for a more general claim: insights are signals of a significant metalevel change in how a problem is represented (here, re-representing elements of narratives).
14.03.2025 08:57
👍 0
🔁 0
💬 1
📌 0
Three AI-powered steps to faster, smarter peer review
Tired of spending countless hours on peer reviews? An AI-assisted workflow could help.
Let's talk about this Nature piece in more detail.
I've rarely read something so anti-scientific anywhere short of the National Review.
www.nature.com/articles/d41...
06.03.2025 05:34
👍 1638
🔁 691
💬 62
📌 143
AI for science could be more impactful than chatbots. It is already helping win Nobel prizes and accelerating drug development and materials discovery.
Today we published an essay about it: why it matters, how it’s happening and its implications. Here is a summary from an econ / social sci lens.
26.11.2024 10:39
👍 79
🔁 30
💬 2
📌 7
How hard is cognitive science?
YouTube video by Iris van Rooij
How hard is cognitive science?
🎬📽🍿 Video: m.youtube.com/watch?v=2bdK...
📖 Paper version: psyarxiv.com/k79nv/
Summary in #PaperThread below 🧵 1/n
16.02.2025 17:30
👍 103
🔁 23
💬 6
📌 4
More evidence that peer review penalizes academic risk-taking, from a new paper by Pierre Azoulay and Wesley H. Greenblatt: "Does Peer Review Penalize Scientific Risk Taking? Evidence from NIH Grant Renewals." www.nber.org/papers/w33495
17.02.2025 08:34
👍 16
🔁 3
💬 0
📌 1
Minimum Viable Experiment to Replicate
Berna Devezer and Erkan O. Buzbas
Department of Business, University of Idaho
Department of Mathematics and Statistical Science, University of Idaho
Abstract: In theory, replication experiments purport to independently validate claims from previous research or provide some diagnostic evidence about their truth value. In practice, this value of replication experiments is often taken for granted. Our research shows that in replication experiments, practice often does not live up to theory. Most replication experiments involve confounding factors and their results are not uniquely determined by the treatment of interest, hence are uninterpretable. These results can be driven by the true data generating mechanism, limitations of the original experimental design, discrepancies between the original and the replication experiment, distinct limitations of the replication experiment, or combinations of any of these factors. Here we introduce the notion of minimum viable experiment to replicate, which defines experimental conditions that always yield interpretable replication results and is replication-ready. We believe that most reported experiments are not replication-ready and, before striving to replicate a given result, we need theoretical precision in or systematic exploration of the experimental space to discover empirical regularities.
A revised version of "Minimum Viable Experiment to Replicate" is up, where we expound why standard expectations from replications are unrealistic and why experiments that may deliver on those expectations are rare, if not nonexistent.
#metasci #sts #philsci
philsci-archive.pitt.edu/24720/7/Mini...
11.02.2025 16:53
👍 109
🔁 33
💬 6
📌 2
True, the link between causal models and their target systems remains fuzzy in causality research. Not sure we can grasp this framework from stochastic thermodynamics without first mastering the framework's foundations. Getting into causality is tough.
06.02.2025 09:36
👍 0
🔁 0
💬 1
📌 0
How should we measure the quality of experimental research? With talk of a looming “replicability crisis”, this question has gained additional significance. Yet, common measures of research quality based on reliability and validity do not always track core epistemic virtues. To remedy this issue, we draw on information theory and propose a measure of research quality based on mutual information. Mutual information measures how much information an experimental method carries about the world. We show that this measure tracks epistemic virtues that reliability and validity do not. We conclude by discussing implications of this information-theoretic measure of research quality and address some limitations of this approach.
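A minimal sketch of the idea, not the paper's formalism: treat a method as a noisy channel p(outcome | world state) and score it by the mutual information I(W; O) between world states and observed outcomes. All distributions below are made-up illustrations.

```python
from math import log2

def mutual_information(p_w, channel):
    """I(W; O) in bits. p_w: prior over world states;
    channel: p(outcome | world state) as nested dicts."""
    # Marginal over outcomes: p(o) = sum_w p(w) p(o|w)
    p_o = {}
    for w, pw in p_w.items():
        for o, po_w in channel[w].items():
            p_o[o] = p_o.get(o, 0.0) + pw * po_w
    # I(W;O) = sum_{w,o} p(w) p(o|w) log2( p(o|w) / p(o) )
    mi = 0.0
    for w, pw in p_w.items():
        for o, po_w in channel[w].items():
            if po_w > 0:
                mi += pw * po_w * log2(po_w / p_o[o])
    return mi

uniform = {"H0": 0.5, "H1": 0.5}
# A noisy method barely separates the hypotheses; a sharp one does.
noisy = {"H0": {"sig": 0.3, "ns": 0.7}, "H1": {"sig": 0.6, "ns": 0.4}}
sharp = {"H0": {"sig": 0.05, "ns": 0.95}, "H1": {"sig": 0.9, "ns": 0.1}}
```

On this toy setup the sharper method carries more information about which hypothesis holds, which is the sense in which mutual information can track a method's epistemic quality.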
"Reliability and validity are important properties for research methods to have. Yet, we also show that reliability and validity fall short of other epistemic virtues that are crucial to the quality of research methods" (Ventura, 2025).
doi.org/10.1007/s112...
#Methodology #MetaSci #PhilSci
06.02.2025 07:04
👍 37
🔁 5
💬 2
📌 0
Thrilled to share our #ICLR2025 work on Meta-Causal States! 🌟 Causal graphs evolve with dynamic systems & agent actions. We show how to cluster causal models by qualitative behavior, revealing hidden dynamics & emergent relationships 🚀 #Causality #ML
https://arxiv.org/abs/2410.13054
24.01.2025 19:34
👍 12
🔁 6
💬 0
📌 0
Do you mean that if Y is caused by $X, then the difference between changing $ and changing X is meaningful only in dynamic systems? Changing a parameter changes the causal relationship at the level of the whole phenomenon; changing X changes its state at the unit level. They are not the same, even with the same numerical effect.
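A toy illustration of the distinction, assuming the simple made-up law Y = a·X: intervening on the parameter a and intervening on the state X can produce the same number for one unit, yet they are different interventions, because the parameter change rewrites the mapping for every unit.

```python
# Illustrative law: Y = a * X (a is a parameter of the causal
# relationship, X is a unit-level state).
def y(a, x):
    return a * x

# Same numerical effect for a single unit:
assert y(a=2, x=1) == y(a=1, x=2) == 2

# But across a population of units the two interventions diverge:
xs = [0.5, 1.0, 3.0]
after_param = [y(2, x) for x in xs]   # law changed: every unit doubled
after_state = [y(1, 2) for _ in xs]   # each unit forced into state X = 2
assert after_param != after_state
```

The parameter intervention acts at the phenomenon level (all units), the state intervention at the unit level, which is exactly why equal numerical effects do not make them the same.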
23.01.2025 18:04
👍 0
🔁 0
💬 0
📌 0
It is acceptable, but functional opacity has to be countered by external evaluation of the outcomes. For example, competent judges could grade a random 10% of the outcomes.
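A minimal sketch of such an audit, with the 10% rate and names purely illustrative: draw a uniform random sample of outcomes to hand to the judges.

```python
import random

def audit_sample(outcomes, rate=0.10, seed=42):
    """Pick a random fraction of outcomes for external grading."""
    rng = random.Random(seed)
    k = max(1, round(rate * len(outcomes)))
    return rng.sample(outcomes, k)

outcomes = [f"case-{i}" for i in range(100)]
to_grade = audit_sample(outcomes)  # 10 cases go to the judges
```

The fixed seed is only for reproducibility of the sketch; a real audit would of course randomize each round.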
27.12.2024 10:14
👍 1
🔁 0
💬 0
📌 0
Agree. I was not saying that it’s wrongly selling itself as something more.
06.12.2024 12:59
👍 2
🔁 0
💬 0
📌 0
Where does this quote come from?
05.12.2024 09:16
👍 0
🔁 0
💬 1
📌 0
This is because the causal framework sells itself as something more than statistics, while its basic argument is that it can provide experiment-like results without experiments. Nevertheless, there is a lot of work dealing with both experimental and observational data.
05.12.2024 09:14
👍 1
🔁 0
💬 1
📌 1
Now I get it. You mean the HB paradigm as a way of doing behavioral economics. I was thinking about the HB paradigm as a program in cognitive science. You probably know Anderson's rational analysis regarding HB?
03.12.2024 14:41
👍 1
🔁 0
💬 1
📌 0
But maybe I don’t understand what you mean by the HB program, and why the impossibility of a finite set is so devastating to such a program.
03.12.2024 10:43
👍 0
🔁 0
💬 1
📌 0
You say that HB will collapse under its own weight because the set of heuristics and biases is not finite. However, notice that your argument could also support the claim that HB changes on the go: it’s not finite because it’s dynamic, and the space of possible heuristics and biases is changing too.
03.12.2024 10:41
👍 0
🔁 0
💬 1
📌 0