Plenary Speakers


President's Invited Speaker


Anastasios Tsiatis, North Carolina State University, USA


Treatment Discontinuation and Dynamic Treatment Regimes: What is the question?

Monday 10th, 9.30 h. (estimated)

Room: TBA

Anastasios Tsiatis


Dr. Anastasios Tsiatis is the Gertrude M. Cox Distinguished Professor of Statistics in the Department of Statistics at North Carolina State University (USA). His research addresses a variety of problems in biostatistics, including statistical methods for the design and analysis of clinical trials, censored survival analysis, group sequential methods, inference on quality-adjusted lifetime, surrogate markers, semiparametric methods with missing and censored data, causal inference, and dynamic treatment regimes.




Abstract: Motivated by two studies conducted by the Duke Clinical Research Institute in which interest focused on issues surrounding treatment discontinuation, we discuss how thinking causally in terms of potential outcomes and dynamic treatment regimes helped us both formulate the questions of interest and develop methods for addressing them.


The SYNERGY trial was a randomized, open-label, multi-center clinical trial designed to compare two anti-coagulant drugs on the basis of time-to-event endpoints. The intent-to-treat analysis showed no significant difference between the two arms, which was surprising because one of the treatments had been shown to be superior in other similar studies. It turned out, however, that a substantial proportion of patients did not complete their treatment assignment but discontinued study drug prematurely, and that the rate of treatment discontinuation was roughly twice as high on one arm as on the other. This raised the concern that the absence of a difference may have been due to the excess treatment discontinuation in one of the treatment arms. Therefore, as an adjunct to the usual intent-to-treat analysis, we were asked to consider an analysis of the 'true' treatment effect; i.e., the difference in survival distributions if all subjects had continued their treatment assignment per protocol and had not discontinued prematurely. As is typical, the protocol dictated circumstances, such as the occurrence of a serious adverse event, under which it was mandatory for a subject to discontinue his/her assigned treatment. In addition, as in the execution of many trials, some subjects did not complete their assigned treatment regimens but discontinued study drug prematurely for other, 'optional' reasons not dictated by the protocol; e.g., switching to the other study treatment or stopping treatment altogether at their or their provider's discretion. Approaches commonly used in practice to adjust for treatment discontinuation are ad hoc and hence not generally valid. We use SYNERGY as a motivating case study to propose generally applicable methods for estimation and testing of this 'true' treatment effect by placing the problem in the context of causal inference on dynamic treatment regimes. Analysis of data from SYNERGY demonstrates the utility of the methods.
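The abstract does not spell out the estimator, but one standard device for targeting this kind of "as per protocol" effect is inverse probability of censoring weighting: optional discontinuation is treated as artificial censoring, and subjects who remain on protocol are reweighted by the inverse of their probability of doing so. The following is a purely illustrative sketch on simulated data with a single binary covariate and a mean outcome rather than a survival endpoint; all names and numbers are invented, and the actual SYNERGY methodology handles time-to-event outcomes and time-varying covariates.

```python
# Illustrative sketch (NOT the speakers' method): inverse probability
# weighting to estimate the mean outcome had everyone stayed on protocol,
# when the chance of discontinuing depends on a measured covariate X.
import random

random.seed(2)
n = 100_000

def p_complete(x):              # P(stay on treatment | X = x), known here by simulation
    return 0.9 if x == 0 else 0.5

est_num = est_den = 0.0         # weighted (IPCW) accumulator
naive_num = naive_den = 0.0     # unweighted completers-only accumulator
for _ in range(n):
    x = random.randint(0, 1)                     # binary baseline covariate
    completed = random.random() < p_complete(x)  # discontinuation depends on X only
    if not completed:
        continue                                 # outcome under full treatment unobserved
    y = 3.0 + 2.0 * x + random.gauss(0.0, 1.0)   # mean if all completed = 4.0
    w = 1.0 / p_complete(x)                      # inverse probability of staying on protocol
    est_num += w * y
    est_den += w
    naive_num += y
    naive_den += 1

naive = naive_num / naive_den   # biased: completers over-represent x = 0
ipcw = est_num / est_den        # close to 4.0, the mean had all completed
print(round(naive, 2), round(ipcw, 2))
```

The key (simulated-in) assumption is that discontinuation depends only on the measured covariate, so reweighting completers recovers the counterfactual mean while the naive completers-only average does not.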


In the clinical trial 'ESPRIT' of patients with coronary heart disease who were scheduled to undergo percutaneous coronary intervention (PCI), patients randomized to receive Integrilin therapy had significantly better outcomes than patients randomized to placebo. The protocol recommended that Integrilin be given as a continuous infusion for 18--24 hours. There was debate among clinicians about the optimal infusion duration within this 18--24-hour range, and we were asked to study this question statistically. Two issues complicated the analysis: (i) the choice of treatment duration was left to the discretion of the physician, and (ii) the infusion had to be terminated (censored) if the patient experienced serious complications during the infusion period. To formalize the question, "What is the optimal infusion duration?" in terms of a statistical model, we developed a framework in which the problem was cast using ideas developed for adaptive treatment strategies in causal inference. The problem is defined through parameters of the distribution of (unobserved) potential outcomes, and we show how, under some reasonable assumptions, these parameters can be estimated. The methods are illustrated using data from the ESPRIT trial.
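To make the "compare candidate durations" idea concrete, one simple (and not necessarily the estimator used in the actual ESPRIT analysis) approach is to estimate the mean potential outcome under each candidate regime "infuse for d hours" by inverse probability weighting over the physicians' observed duration choices, then pick the duration with the best estimated value. The sketch below uses simulated data, a known choice model, and ignores the censoring-by-complication issue for brevity; every number and function is invented for illustration.

```python
# Illustrative sketch: IPW value estimates for candidate infusion
# durations, when physicians' duration choices depend on a covariate X.
import random

random.seed(3)
n = 200_000
durations = [18, 20, 22, 24]

def choice_probs(x):
    # simulated physician preference: sicker patients (x = 1) get longer infusions
    return [0.4, 0.3, 0.2, 0.1] if x == 0 else [0.1, 0.2, 0.3, 0.4]

def outcome_mean(d, x):
    # simulated truth: the regime value peaks at d = 22 for both covariate levels
    return -(d - 22) ** 2 / 10.0 + 1.0 * x

num = {d: 0.0 for d in durations}
den = {d: 0.0 for d in durations}
for _ in range(n):
    x = random.randint(0, 1)
    probs = choice_probs(x)
    d = random.choices(durations, weights=probs)[0]   # physician's choice
    y = outcome_mean(d, x) + random.gauss(0.0, 1.0)
    w = 1.0 / probs[durations.index(d)]               # inverse probability of that choice
    num[d] += w * y
    den[d] += w

values = {d: num[d] / den[d] for d in durations}      # estimated E[Y(d)] per regime
best = max(values, key=values.get)
print(best)   # recovers d = 22 as the optimal duration in this simulation
```

As in the previous sketch, validity rests on the duration choice depending only on the measured covariate; the real analysis must also handle durations censored by complications.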


Keynote Speaker


Francesca Dominici, Harvard University, USA


Model Uncertainty and Covariate Selection in Causal Inference

Wednesday 12th, 11.00 h. (estimated)

Room: TBA

Francesca Dominici


Dr. Francesca Dominici is Professor of Biostatistics, Senior Associate Dean for Research, and Associate Dean of Information Technology at the Harvard T.H. Chan School of Public Health (USA). Her research focuses on the development of statistical methods for the analysis of large and complex data. She leads several interdisciplinary groups of scientists with the ultimate goal of addressing important questions in environmental health science, climate change, comparative effectiveness research, and health policy. 




Abstract: Researchers are being challenged with decisions on how to control for a high-dimensional set of potential confounders, both in the context of a single binary treatment (e.g., a drug) and in the context of a multivariate exposure vector with continuous agents and their interactions (e.g., exposure to mixtures). Typically, for a binary treatment, a propensity score model is used to adjust for confounding, while the uncertainty surrounding the procedure used to arrive at this propensity score model is often ignored. Failure to include even one important confounder will result in bias. We discuss how to overcome the issues of confounder selection and model uncertainty in causal inference. Specifically, we introduce the model averaged double robust (MA-DR) estimator, which accounts for model uncertainty in both the propensity score and outcome models through the use of model averaging. We also consider estimating the effect of a multivariate exposure that includes several continuous agents and their interactions when the true confounding variables are an unknown subset of a potentially large (relative to the sample size) set of measured variables. We develop a new approach rooted in the ideas of Bayesian model averaging to prioritize confounders among a high-dimensional set of measured covariates, introducing a data-driven, informative prior that assigns to likely confounders a higher probability of being included in a regression model for effect estimation. We illustrate the performance of these estimators with applications to comparative effectiveness research and environmental problems.
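The doubly robust estimator underlying approaches like MA-DR combines a propensity score model and an outcome model so that the effect estimate is consistent if either model is correct. Below is a minimal sketch of the classic augmented inverse probability weighting (AIPW) form of that idea on simulated data, with the true nuisance functions plugged in for illustration; the MA-DR estimator itself additionally averages over candidate models, which this sketch does not attempt.

```python
# Illustrative sketch of the AIPW doubly robust estimator of the
# average treatment effect:
#   mean of  m1(X) - m0(X) + A (Y - m1(X)) / e(X) - (1 - A)(Y - m0(X)) / (1 - e(X))
# where e is the propensity score and m_a the outcome regression.
import random

random.seed(1)
n = 50_000

def propensity(x):          # P(A = 1 | X = x); x in {0, 1} -> 0.25 or 0.75
    return 0.25 + 0.5 * x

def outcome_mean(a, x):     # E[Y | A = a, X = x]; true treatment effect = 2.0
    return 1.0 + 2.0 * a + 1.5 * x

total = 0.0
for _ in range(n):
    x = random.randint(0, 1)                       # binary confounder
    a = 1 if random.random() < propensity(x) else 0
    y = outcome_mean(a, x) + random.gauss(0.0, 1.0)
    e = propensity(x)
    m1, m0 = outcome_mean(1, x), outcome_mean(0, x)
    total += (m1 - m0
              + a * (y - m1) / e
              - (1 - a) * (y - m0) / (1 - e))

ate_hat = total / n
print(round(ate_hat, 2))   # close to the true effect of 2.0
```

In practice both nuisance functions are estimated from data, and the "double robustness" means a misspecified propensity model can be rescued by a correct outcome model, and vice versa.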






 Technical Secretariat: Orzán Congres  


 Phone: +34 981 900 700




© 2016 SERGLO - All rights reserved