Emulating a Target Trial

Lessons Learned

Author

Ryan Batten

Published

February 15, 2024

What is a Target Trial? Why Emulate One?

A target trial is the trial that we’d like to conduct under ideal circumstances. Emulating a target trial consists of trying to mimic this trial as closely as possible given the constraints (e.g., ethics, data availability). For example, the target trial may randomize patients, but we can’t randomize patients in an observational study. For a brief overview of emulating a target trial, I’d recommend reading Hernán and Robins (2016) or Fu (2023).

Target trial emulation has drastically improved the quality of studies using observational data in recent years. It can prevent many “self-inflicted” biases, such as immortal time bias (Hernán et al. 2016). Although the concept may seem simple at first (or so I thought), in practice some challenges can arise. This post focuses on my personal experience with emulating target trials and what I’ve learned.

Lessons Learned

I’ve been involved with a few projects where the goal is to emulate a target trial. Part of my PhD project, currently underway, is to emulate a target trial using electronic health records. There are some lessons that I’ve learned along the way. To keep it somewhat organized, I’ll group them into categories:

  • Inclusion/Exclusion Criteria

  • Data Sources

  • Missing Data

  • Index Dates

  • Causal Estimands

Inclusion/Exclusion Criteria

Inclusion/exclusion (I/E) criteria, sometimes referred to as eligibility criteria, are required for a few reasons, including selecting the patients needed to answer the research question and protecting internal validity. One difference between applying I/E criteria in a randomized controlled trial (RCT) and in real-world data (RWD) is the flexibility.

Applying All I/E Criteria is Unlikely

For the target trial, all I/E criteria would be applied. Unfortunately, this is unlikely to be the case when emulating a target trial. What can further complicate things is that different data sources may capture different variables, which allows different criteria to be applied. This can make it tricky to decide what to do.

An approach that I find useful is the one outlined in Gatto et al. (2022). It helps rank which criteria are most important to apply, allowing for a better decision. It also helps identify the minimum (i.e., non-negotiable) criteria required to answer the research question.

How Can We Alter Criteria?

With RWD, the data has already been collected, which is a limitation. For example, a trial criterion might require a lab test measured within 30 days prior to starting treatment, but in the real-world data the lab test might not have been measured within those 30 days.

This doesn’t mean that we can’t apply the criteria; it just requires some modification, which often needs input from an expert (e.g., a clinician). For example, the ideal criterion might be to take a lab value the day before randomization, but this isn’t available in the RWD. So what’s a reasonable alternative? 14 days? A week? A month?

The goal is to modify the criteria to be as similar as possible to the ideal criteria while not sacrificing the validity of the study.
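To make this concrete, here is a minimal sketch in Python (with hypothetical column names, not from any real study) of how a modified lookback window for a lab criterion could be applied, and how different windows change the eligible cohort:

```python
import pandas as pd

# Hypothetical patient-level data: one row per patient, with the treatment
# start date and the date of the most recent relevant lab test.
patients = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "treatment_start": pd.to_datetime(
        ["2020-01-10", "2020-02-01", "2020-03-15", "2020-04-20"]
    ),
    "last_lab_date": pd.to_datetime(
        ["2020-01-05", "2019-12-20", "2020-03-10", "2020-02-01"]
    ),
})

def apply_lab_window(df: pd.DataFrame, lookback_days: int) -> pd.DataFrame:
    """Keep patients whose lab test falls within `lookback_days` before treatment start."""
    days_before = (df["treatment_start"] - df["last_lab_date"]).dt.days
    return df[(days_before >= 0) & (days_before <= lookback_days)]

# Compare how many patients remain eligible under different windows; the
# counts can feed into the discussion with clinical experts.
for window in (7, 14, 30):
    print(window, "days:", len(apply_lab_window(patients, window)), "patients")
```

The exact window is still a clinical judgment call; the point of the sketch is only that the impact of each candidate window on the cohort can be checked directly.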

Data Sources

A large part of emulating a target trial is deciding which data source to use. There are several factors to consider, including data quality, the rate of missingness, the variables captured, and more. The most important question is: can this data answer the research question? There is a wide variety of great data sources, including claims data, electronic health records, registry data, and wearables (e.g., fitness trackers).

There are limitations/tradeoffs for each of them. For example, claims data will have information about costs, but electronic health records and registry data won’t. However, both of these will typically have more detail than claims (e.g., lab tests, physician notes). An article that helped me when considering multiple data sources is Gatto et al. (2022).

Missing Data

If you’re working with real-world data, there will 100% be missing data. Having a plan for how to address this helps. My overall takeaway for missing data is to start with “how did this data become missing?”

Need a Plan

Working with real-world data, you will need a plan for missing data, no matter how high quality the data is. Personally, I tend to separate missingness into two main categories: covariates and outcomes. The reason is that the methods used to address each can differ.

Part of what can make this challenging (not just for emulating a target trial but for missing data in general, including in trials) is that you don’t see the data a priori. However, the rate of missingness, along with the assumed missing data mechanism, can often be guessed reasonably well.
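As a small, purely illustrative sketch (Python, simulated data and made-up variable names), one way to start the plan is to tabulate the missingness rate for each variable and keep covariates and outcomes separate, since they may be handled differently:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 500

# Hypothetical dataset with missing values in both covariates and an outcome.
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "baseline_lab": np.where(rng.random(n) < 0.25, np.nan, rng.normal(5, 1, n)),
    "smoking_status": np.where(rng.random(n) < 0.10, np.nan, rng.integers(0, 2, n)),
    "outcome": np.where(rng.random(n) < 0.05, np.nan, rng.integers(0, 2, n)),
})

covariates = ["age", "baseline_lab", "smoking_status"]
outcomes = ["outcome"]

# Missingness rate per variable, split by role; this feeds into the plan
# (e.g., multiple imputation for covariates vs. another strategy for outcomes).
missing_rates = df.isna().mean()
print("Covariates:\n", missing_rates[covariates], sep="")
print("Outcomes:\n", missing_rates[outcomes], sep="")
```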

Missing Data Mechanism for Each Variable

The missing data mechanism can be different for each variable. This was something that I hadn’t thought about before working on a project where the team specified a different missing data mechanism (MCAR, MAR, MNAR) for each variable. To me, this is a fantastic approach because a “one-size-fits-all” mentality is dangerous when applied across all variables. Not every variable is missing for the same reason.
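For intuition, here is a small simulated sketch in Python (all variable names and mechanisms are made up) showing how a different mechanism can be imposed on each variable; simulating in this way is also a useful check of how an analysis behaves under each assumption:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 1000

# Fully observed "true" values for a few hypothetical variables.
age = rng.normal(60, 10, n)
income = rng.normal(50, 15, n)
pain_score = rng.normal(5, 2, n)
weight = rng.normal(80, 15, n)

observed = pd.DataFrame(
    {"age": age, "income": income, "pain_score": pain_score, "weight": weight}
)

# MCAR: income is missing completely at random (unrelated to anything).
observed.loc[rng.random(n) < 0.15, "income"] = np.nan

# MAR: pain_score missingness depends on an *observed* variable
# (older patients are less likely to have it recorded).
p_mar = 1 / (1 + np.exp(-(age - 70) / 5))
observed.loc[rng.random(n) < p_mar, "pain_score"] = np.nan

# MNAR: weight missingness depends on its *own* (unobserved) value
# (heavier patients are less likely to have a recorded weight).
p_mnar = 1 / (1 + np.exp(-(weight - 100) / 10))
observed.loc[rng.random(n) < p_mnar, "weight"] = np.nan

# Missingness rate per variable, each generated by its own mechanism.
print(observed.isna().mean())
```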

Index Dates

Choosing an index date, the time point from which we start measuring follow-up, can seem fairly straightforward. Based on the trial we are emulating, if the goal is to estimate the per-protocol effect, then we can select the index date as 1 day after receiving treatment, or whatever aligns with our target trial… but what if there are multiple options?

Too many options!

Depending on the disease area, there may be multiple candidate index dates. Prior to coming across this issue, I had naively assumed there would only be one. This becomes problematic. Do we pick the first index date? The last? Randomly close our eyes and pick one while hoping for the best?

Like much of science, the answer is “it depends”. Part of the reason is that it depends on the outcome. If it’s a time-to-event outcome, choosing the first or last index date could impact the results. My approach now when I come across this problem is to simulate the scenario, similar to Hatswell et al. (2022).
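As a toy illustration of that kind of simulation (Python, fully made-up numbers, and much simpler than the designs compared in Hatswell et al. (2022)), we can give each patient a first and last candidate index date and a true event time, then see how the choice of rule shifts the apparent follow-up time:

```python
import numpy as np

rng = np.random.default_rng(123)
n = 2000

# Each patient has a first and last candidate index date (in days from a
# common origin, e.g., diagnosis), plus a true event time from that origin.
first_index = rng.uniform(0, 60, n)
last_index = first_index + rng.uniform(0, 90, n)
event_time = 200 + rng.exponential(200, n)

# Follow-up time depends on which index-date rule we pick.
followup_first = event_time - first_index
followup_last = event_time - last_index

print("Median follow-up, first index date:", round(float(np.median(followup_first)), 1), "days")
print("Median follow-up, last index date: ", round(float(np.median(followup_last)), 1), "days")
```

A real simulation would also build in treatment assignment and censoring so that the candidate rules can be compared against a known truth.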

Causal Estimands

Causal estimands differ from the estimands referred to in a clinical trial setting (the ICH E9 (R1) addendum estimands). Learning about these causal estimands helped tremendously when trying to determine what type of method to use for controlling for confounding and for guiding the research question. These estimands are: the average treatment effect (ATE), the average treatment effect in the treated (ATT), the average treatment effect in the untreated (ATU), and the average treatment effect in the overlap (ATO).

If you are not familiar with causal estimands, like I wasn’t, I highly recommend Greifer and Stuart (2023).
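To make the distinction concrete, here is a minimal sketch (Python, simulated data) of how a propensity score maps onto the weights that target each estimand; in practice I would lean on a dedicated package rather than hand-rolled weights, so treat this purely as illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated confounder, true propensity score, and treatment assignment.
x = rng.normal(0, 1, n)
ps = 1 / (1 + np.exp(-0.5 * x))        # propensity score (assumed known here)
treated = rng.random(n) < ps

# Weights targeting each estimand (see Greifer and Stuart 2023):
w_ate = np.where(treated, 1 / ps, 1 / (1 - ps))      # ATE: the whole population
w_att = np.where(treated, 1.0, ps / (1 - ps))        # ATT: the treated
w_atu = np.where(treated, (1 - ps) / ps, 1.0)        # ATU: the untreated
w_ato = np.where(treated, 1 - ps, ps)                # ATO: the overlap population

for name, w in [("ATE", w_ate), ("ATT", w_att), ("ATU", w_atu), ("ATO", w_ato)]:
    print(name, "mean weight:", round(float(w.mean()), 2))
```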

Simulating Situations!

One of the biggest lessons for me was learning to simulate data. This can help when there isn’t a consensus on a method or a solution to the problem you’re currently facing. I would recommend learning how to simulate data so you can answer questions that arise when working on emulating a target trial.
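The general recipe is simple: simulate data where the truth is known by construction, apply the candidate approaches, and see which one recovers the truth. Here is a minimal, hypothetical example in Python (a confounded treatment where a naive comparison fails but adjustment recovers the true effect):

```python
import numpy as np

rng = np.random.default_rng(2024)
n = 10_000
true_effect = 1.0

# Simulate a confounded treatment: the true effect is known by construction.
x = rng.normal(0, 1, n)                           # confounder
treated = rng.random(n) < 1 / (1 + np.exp(-x))    # treatment depends on x
y = true_effect * treated + 2.0 * x + rng.normal(0, 1, n)

# Approach 1: naive difference in means (ignores confounding).
naive = y[treated].mean() - y[~treated].mean()

# Approach 2: adjust for x with ordinary least squares.
X = np.column_stack([np.ones(n), treated, x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print("True effect:      ", true_effect)
print("Naive estimate:   ", round(float(naive), 2))
print("Adjusted estimate:", round(float(beta[1]), 2))
```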

References

Fu, Edouard L. 2023. “Target Trial Emulation to Improve Causal Inference from Observational Data: What, Why, and How?” Journal of the American Society of Nephrology 34 (8): 1305–14.
Gatto, Nicolle M, Ulka B Campbell, Emily Rubinstein, Ashley Jaksa, Pattra Mattox, Jingping Mo, and Robert F Reynolds. 2022. “The Structured Process to Identify Fit-for-Purpose Data: A Data Feasibility Assessment Framework.” Clinical Pharmacology & Therapeutics 111 (1): 122–34.
Greifer, N, and EA Stuart. 2023. “Choosing the Causal Estimand for Propensity Score Analysis of Observational Studies.” arXiv Preprint arXiv:2106.10577.
Hatswell, Anthony J, Kevin Deighton, Julia Thornton Snider, M Alan Brookhart, Imi Faghmous, and Anik R Patel. 2022. “Approaches to Selecting ‘Time Zero’ in External Control Arms with Multiple Potential Entry Points: A Simulation Study of 8 Approaches.” Medical Decision Making 42 (7): 893–905.
Hernán, Miguel A, and James M Robins. 2016. “Using Big Data to Emulate a Target Trial When a Randomized Trial Is Not Available.” American Journal of Epidemiology 183 (8): 758–64.
Hernán, Miguel A, Brian C Sauer, Sonia Hernández-Dı́az, Robert Platt, and Ian Shrier. 2016. “Specifying a Target Trial Prevents Immortal Time Bias and Other Self-Inflicted Injuries in Observational Analyses.” Journal of Clinical Epidemiology 79: 70–75.