
@renebekkers

49 Followers · 62 Following · 60 Posts · Joined 20.02.2025

Latest posts by @renebekkers


When you collect data online, are the results from humans or AI? In a project led by Booth PhD student Grace Zhang, we estimate the prevalence of AI agents on commonly used survey platforms:
osf.io/preprints/ps...
🧵

07.03.2026 20:22 👍 63 🔁 33 💬 3 📌 3

Could something along these lines be a solution? renebekkers.wordpress.com/2025/11/25/a...

07.03.2026 17:51 👍 1 🔁 0 💬 1 📌 0

Perhaps things are just different in industries selling products and services that could actually hurt people

06.03.2026 20:00 👍 1 🔁 0 💬 1 📌 0
Preview
Claude Code 27: Research and Publishing Are Now Two Different Things
Some Claude Code fan fiction about the economics of publishing with AI agents set in the very near future

This post really put together the pieces in a way that floored me. Everything is about to change and we have to confront that reality causalinf.substack.com/p/claude-cod...

03.03.2026 19:28 👍 29 🔁 10 💬 1 📌 3
My effort to reproduce this paper began as part of the Institute for Replication’s ongoing project to systematically examine the reproducibility and robustness of papers in Nature Human Behaviour; my participation in this endeavour was approved by the Ethical Review Board of Vrije Universiteit Amsterdam’s School of Business and Economics. Inspecting the paper’s first two figures revealed a mathematical impossibility. There are nine EU countries that experienced zero terror attacks during the study’s time frame. However, the paper reports that the inverse hyperbolic sines of these countries’ per capita attack rates are positive and increase or decrease over time. This is impossible; the inverse hyperbolic sine of zero is zero. The main outcome variable displayed in the paper’s second figure is hard-coded in the replication data as ‘DVSin’. Figure 1’s top row of plots shows that DVSin is negatively correlated with both terrorist attack rates (r = −0.107, two-sided P = 0.024) and their inverse hyperbolic sine (r = −0.108, two-sided P = 0.022). These plots also show that in the 305/420 country-year observations after 2006 that experienced zero terror attacks (72.6%), DVSin takes on 292 different positive values. This implies that the paper’s main outcome variable cannot possibly be constructed as described in the paper.
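The impossibility is easy to verify mechanically. A minimal Python sketch of the check described above — the two sample rows are invented for illustration; only the column name `DVSin` comes from the replication data:

```python
import math

# The paper transforms per-capita attack rates with the inverse hyperbolic
# sine: asinh(x) = ln(x + sqrt(x^2 + 1)), so asinh(0) must be exactly 0.
assert math.asinh(0.0) == 0.0

# In country-years with zero attacks, an outcome built as asinh(rate)
# cannot be positive. These rows are hypothetical, not the actual data.
rows = [
    {"attacks": 0, "DVSin": 0.42},  # impossible if DVSin = asinh(rate)
    {"attacks": 3, "DVSin": 0.05},
]
impossible = [r for r in rows if r["attacks"] == 0 and r["DVSin"] != 0.0]
print(f"{len(impossible)} zero-attack row(s) with nonzero DVSin")
```

The report's finding that 292 distinct positive DVSin values occur in zero-attack observations is exactly this check failing at scale.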


"the paper’s main outcome variable cannot possibly be constructed as described in the paper."

Retraction of 2023 paper that did not use the reported variables. The replication report is astonishing www.nature.com/articles/s41...

Pre-publication peer review remains undefeated in laundering bullshit

26.02.2026 07:47 👍 51 🔁 14 💬 3 📌 4
Post image

It’s finally out! Together with @embopress.org and
@reviewcommons.org, we conducted a structured side-by-side comparison of human peer review and our AI scientific review (see thread 👇👇👇🔥).

26.02.2026 14:34 👍 77 🔁 38 💬 2 📌 4

Interesting predictions on what will happen "when “science as checklist” becomes policy"

24.02.2026 10:34 👍 0 🔁 0 💬 0 📌 0
Post image

1/ Sorry for double-posting from X. Sharing a new working paper for the Year of the Horse 🐎:

"An AI-assisted workflow that scales reproducibility in empirical research" (bit.ly/repro-ai) w/ Leo Yang Yang

18.02.2026 19:21 👍 76 🔁 26 💬 4 📌 6

These colleagues write:
"For instance, NRC reported on a study showing that migration researchers with diverging ideological views reached opposite conclusions based on the same dataset"
This graph from the study shows that that is a completely incorrect interpretation.

14.02.2026 11:13 👍 101 🔁 49 💬 8 📌 6
Preview
What is it in formal education that creates civic engagement? Why is it that higher educated individuals are more likely to engage in blood donation, charitable giving and volunteer work? You might think it has something to do with what you get from going to …

Very happy to finally see this paper published in @actasociologica.bsky.social

Paper: doi.org/10.1177/0001...
A blog explaining the findings and methods of the paper is at renebekkers.wordpress.com/2025/12/07/w...

15.02.2026 21:32 👍 1 🔁 1 💬 0 📌 0

To our surprise, genetic variants of those who do better on intelligence tests were hardly correlated with giving time, money, and blood. So it is not that people who attain a higher level of education are giving more because they are born with genetic variants for being smart.

15.02.2026 21:32 👍 1 🔁 0 💬 1 📌 0

We confirmed that WLS participants and their siblings with more genetic variants that are associated with educational attainment in fact give more time, money and blood.

15.02.2026 21:32 👍 1 🔁 0 💬 1 📌 0

We found that respondents with a higher genetic propensity to spend a higher number of years in education – measured by a polygenic score for educational attainment – were more likely to engage in formal prosocial behaviors such as blood donation, charitable giving and volunteer work 57 years later.

15.02.2026 21:32 👍 1 🔁 0 💬 1 📌 0

In a new paper with Eva-Maria Merz and Ting Li, we analyzed data from 5,967 respondents in the Wisconsin Longitudinal Study (WLS) to examine which characteristics of individuals and families create the association between educational attainment and engagement in prosocial behavior.

15.02.2026 21:32 👍 1 🔁 2 💬 1 📌 0

Wild how economists and political scientists worry so much about unbiased tests **in their papers** and yet basically ignore how their journals filter on significance. Given our noisy tests, the latter creates huge bias away from zero.

10.09.2025 15:00 👍 22 🔁 5 💬 1 📌 1

New paper, on a worrying trend in meta-science: the practice of anonymising datasets on, e.g., published articles. We argue that this is at odds with norms established in research synthesis, explore arguments for anonymisation, provide counterpoints, and demonstrate implications and epistemic costs.

13.02.2026 16:50 👍 98 🔁 52 💬 6 📌 7

So sorry to have missed it! But #PSE8 was very good

14.02.2026 21:19 👍 0 🔁 0 💬 0 📌 0

My first paper is out in #SociologicalScience!
With Jörg Stolz and Ruud Luijkx, we found robust evidence of ideological #bias in #secularization research: researchers' own religiosity is correlated with their probability of finding evidence of religious decline in their publications.
Read more: 👇

11.02.2026 10:22 👍 31 🔁 11 💬 1 📌 2

Really cool talk

12.02.2026 13:45 👍 5 🔁 2 💬 0 📌 0
It must be very hard to publish null results
Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.
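The order-of-magnitude claim in the abstract can be illustrated with back-of-the-envelope odds arithmetic. In the toy filter below, the true share of null results among completed studies is an assumed value, not the paper's calibrated parameter; only the <2% published share comes from the abstract:

```python
# Toy publication filter. Assumption: half of all completed analyses yield
# only null results (true_null_share is invented for illustration); the
# <2% published share is the figure reported in the abstract.
true_null_share = 0.50
observed_null_share = 0.02

# Relative odds that a significant result enters the record versus a null:
odds_published = (1 - observed_null_share) / observed_null_share
odds_produced = (1 - true_null_share) / true_null_share
selection_ratio = odds_published / odds_produced
print(f"significant results ~{selection_ratio:.0f}x more likely to be published")
```

Under the assumed 50/50 split this gives roughly 49×, i.e. more than one order of magnitude — consistent with the paper's "one to two orders of magnitude" range, which rests on a calibrated model rather than this toy arithmetic.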


I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.

11.02.2026 17:00 👍 642 🔁 223 💬 30 📌 51
Post image

🚨📄 New paper (conditionally accepted at @thejop.bsky.social):

We test whether social desirability bias actually distorts answers in online surveys.

Short version:
It mostly doesn’t.

w. @timallinger.bsky.social @kristianvsf.bsky.social @morganlcj.bsky.social

URL: osf.io/preprints/os...

12.02.2026 13:06 👍 177 🔁 58 💬 4 📌 7
Love Data Week graphic showing counts of journal articles, preprints, relationships, and matching article to preprints.


Where's the Data? → Where did this preprint end up?
Dominika Tkaczyk built a matching strategy to discover #preprint→article relationships—dataset now public with 1,060,573 relationships.

🔗 Dataset: https://doi.org/10.13003/ac2ienay
🔗 Blog: https://doi.org/10.64000/dpcc9-k4564

#LoveData26

11.02.2026 02:28 👍 12 🔁 7 💬 1 📌 0
Preview
Fragmentation of a longitudinal population-scale social network: Decreasing structural social cohesion in the Netherlands
Population-level dynamics of social cohesion and its underlying mechanisms remain difficult to study. In this paper, we propose a network approach to measure the evolution of social cohesion at the po...

Fascinating new working paper by @bokanyie.bsky.social and colleagues showing a gradual decrease in social closure in the Netherlands over 10 years, based on population networks arxiv.org/abs/2602.002...

09.02.2026 12:31 👍 3 🔁 2 💬 0 📌 0
Preview
On the reliability and reproducibility of qualitative research
With my collaborators, I am increasingly performing qualitative research. I find qualitative research projects a useful way to improve my un...

New blog post, inspired by the excellent recent qualitative paper by Makel and colleagues: On the reliability and reproducibility of qualitative research.

I reflect on how I will incorporate realist ontologies in my own qualitative research.

daniellakens.blogspot.com/2026/02/on-r...

08.02.2026 07:46 👍 20 🔁 16 💬 0 📌 0
Preview
A Global Publishing Credit Club
The global productivity increase in science is reducing the willingness of researchers to perform peer review. To bring the production of manuscripts and reviews in line with each other, I propose …

@jnfrltackett.bsky.social @lakens.bsky.social yes that is a solution - more thoughts here renebekkers.wordpress.com/2025/11/25/a...

07.02.2026 18:54 👍 1 🔁 0 💬 1 📌 0
Preregistration in Practice | Paul Meehl Graduate School
February 19, 2026

You still have time to sign up for the upcoming workshop of PMGS.
@denolmo.bsky.social will guide you through evaluating and writing high-quality preregistrations.
See more and sign up here:
paulmeehlschool.github.io/workshops/pr...

06.02.2026 14:00 👍 2 🔁 4 💬 0 📌 0

Thanks for helping to improve the journal!

04.02.2026 12:02 👍 1 🔁 0 💬 0 📌 0

Update: the authors have fixed the errors in Table 1 and the link to the preregistration. renebekkers.wordpress.com/2026/01/31/s...

04.02.2026 07:11 👍 0 🔁 0 💬 0 📌 0
Promised Data Unavailable? – I’m Sorry, Ma’am, There’s Nothing We Can Do — Meta-Research Center
This blogpost has been written by Michèle Nuijten. Michèle is an assistant professor of our research group who investigates reproducibility and replicability in psychology. Also, she is the developer ...

I wrote a blog for the Meta-Research Center expressing my infinite frustration about not getting data. What else is new, you might think? Well, I added an extra layer of annoyance directed at the journals who do NOTHING to enforce promised data sharing.

metaresearch.nl/blog/2026/2/...

03.02.2026 15:03 👍 60 🔁 36 💬 7 📌 4

Someone should create an algorithm that correctly classifies papers generated by AI agents - here are some training data

03.02.2026 07:14 👍 1 🔁 0 💬 0 📌 0