While no method is perfect - we can aim to provide less biased per-protocol estimates and, in the process, aim to estimate the effect we are really interested in: how well does an intervention work in the people who take it?
Of course there are caveats - important ones being that (1) the factors associated with both non-adherence and the outcome of interest are appropriately measured and modeled, and (2) at least some people who remain adherent look like the people we censored (the non-adherent), so they can "stand in" properly.
Those people who remain adherent then stand in for the people who became non-adherent. This "inverse probability of censoring weighting" (IPCW) preserves our baseline randomization and also accounts for the fact that we are censoring people who might differ from the people remaining in the study.
We identify baseline and time-varying factors (e.g. age, eGFR, etc.) that might be predictive of both censoring (non-adherence) and the outcome of interest. Then we can use these factors to identify adherent people who look like the people who are non-adherent and "up-weight" them.
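A minimal numerical sketch of that up-weighting, with made-up covariates and coefficients (nothing here comes from the article - it's just to make the idea concrete):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical baseline covariates that predict both non-adherence
# and the outcome
age = rng.normal(60, 10, n)
egfr = rng.normal(70, 15, n)

# Simulate covariate-dependent censoring (non-adherence): here, older
# people and people with lower eGFR are more likely to stop treatment
logit = -2.0 + 0.03 * (age - 60) - 0.02 * (egfr - 70)
p_censored = 1 / (1 + np.exp(-logit))
censored = rng.binomial(1, p_censored)

# In a real analysis P(uncensored | covariates) is estimated, e.g.
# with pooled logistic regression over follow-up intervals; we use
# the known simulation probability for brevity
p_uncensored = 1 - p_censored

# IPCW: censored people contribute weight 0; adherent people who
# resemble the censored (high p_censored) get larger weights so
# they "stand in" for them
weights = np.where(censored == 0, 1.0 / p_uncensored, 0.0)

# The weighted pseudo-population roughly recovers the original n
print(weights.sum())
```

In practice the weights are time-varying (refit at each follow-up interval), usually stabilized, and checked for extreme values before use.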
Epidemiology methods to the rescue! Modern per-protocol analyses recognize this bias and account for it using predictors of non-adherence. How do they do this? We analyze as in the ITT, but as soon as someone deviates from their assigned arm, we censor them. But next is the important part...
What if we could keep people in their randomized arm (preserving randomization at baseline) but account for the fact that non-adherence does not occur at random? Then we can estimate the effect had everyone been randomized and then remained adherent to their assigned treatment arm.
This restriction to those who adhere creates selection bias, specifically because factors associated with adherence might also be associated with the outcome. Also, we can't know ahead of time who will and will not adhere... so what do we do? The authors don't offer us a solution.
In a per-protocol analysis, the authors state we would only include participants who adhere to the assigned intervention by excluding non-adherent participants or those with protocol deviations. But to decide this we have to look into the future (after baseline) to define our analytical groups.
Unless people are stopping treatment completely at random, those who no longer take treatment/start the comparator are going to differ from those who stay on treatment. We are comparing two different groups and have all the same issues related to confounding as we would in an observational study.
If we are interested only in assessing those who remained on treatment, we might censor individuals who deviate from some definition of adherence, e.g. censor people who stop treatment (i.e. switch to the 'placebo arm') or start the comparator treatment. But this comes at a cost.
In an as-treated analysis, we analyze participants based on the treatment they actually received. In an ideal/perfect RCT setting, the as-treated result would be the same as the ITT. But when things aren't perfect, we "break" randomization and define the treatment groups based on treatment received.
In a two arm trial, if non-adherence is non-differential i.e. completely at random in both arms, then the ITT is commonly said to be "biased towards the null". Why? The more people are non-adherent, the more the two groups will be similar to each other & the less the difference in their outcomes...
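A toy back-of-the-envelope version of that dilution (all numbers invented): say the true risks are 0.10 on treatment and 0.20 on control, and a fraction of each arm non-adheres completely at random, taking on the other arm's risk:

```python
def itt_risk_difference(nonadherence, risk_treat=0.10, risk_ctrl=0.20):
    """ITT risk difference when a random fraction of each arm crosses over."""
    # Each arm's observed risk is a mixture of the two true risks
    arm_treat = (1 - nonadherence) * risk_treat + nonadherence * risk_ctrl
    arm_ctrl = (1 - nonadherence) * risk_ctrl + nonadherence * risk_treat
    return arm_treat - arm_ctrl

print(itt_risk_difference(0.0))  # full effect of -0.1
print(itt_risk_difference(0.3))  # diluted toward the null: about -0.04
```

At 50% random crossover the two arms are identical mixtures and the ITT difference is exactly zero - the groups have become similar, so their outcomes have too.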
The ITT assesses the effect of assigning treatment regardless of whether someone actually received the intervention. While we preserve randomization/eliminate confounding at baseline, it does not account for potential non-adherence or differential loss to follow-up.
tl/dr modern and robust methods in epidemiology can account for the selection bias resulting from per-protocol analysis. Leveraging these methods can provide valuable insights into whether a treatment works in those who take it.
Great article on estimands in clinical trials in nephrology as part of the Designing Clinical Trials series in JASN: bit.ly/4p9N0GZ. The authors discuss the differences between intention-to-treat, modified intention to treat, as-treated and per-protocol effects with one important caveat...
I hope you're wearing your hardhat because we are building bridges this #MEstimatorMonday (34/52)
More specifically, we are going to bridge different trials together to address a single question
pubmed.ncbi.nlm.nih.gov/38110289/
Doubly robust estimator be like
Self-help books often cite population studies as evidence that you can change your life with one simple trick. X will improve your health. Y will make you more successful. But there's a crucial catch...
1/
Exciting work from the UNC Kidney Center this week: pubmed.ncbi.nlm.nih.gov/40020049/.
Posting because we don't have an official account here (yet), but some excellent folks in our lab discovered important steps in the mechanism by which hydrANCAzine causes...well.. ANCA! #NephSky #MedSky π§΅
For 2025, I am going to do something a bit different. Every Monday is now #MEstimatorMonday
Each Monday, I'll talk about different M-estimators or some of their properties. This 1/52, which will just be some table setting
For me, 2024 was the year of Synthesis Estimators. Synthesis is work that came to fruition from my interest in merging ideas from "causal inference" and "mathematical modeling" throughout my PhD. Here are some highlights to close out the year
A point estimate? You mean a 0% confidence interval?
Come be my colleague!
UNC Epidemiology just opened search for tenured/tenure track RPPE faculty at the assistant or associate professor level. Posting here: unc.peopleadmin.com/postings/291...
I'm not involved in the search, but am happy to chat about general UNC things - feel free to reach out.