Individual Participant Data Meta-Analysis

Contents
Table of Contents
List of Tables
List of Illustrations
WILEY SERIES IN STATISTICS IN PRACTICE
Individual Participant Data Meta‐Analysis. A Handbook for Healthcare Research
Acknowledgements
1 Individual Participant Data Meta‐Analysis for Healthcare Research
1.1 Introduction
1.2 What Is IPD and How Does It Differ from Aggregate Data?
1.3 IPD Meta‐Analysis: A New Era for Evidence Synthesis
1.4 Scope of This Book and Intended Audience
Box 1.1 Example of individual participant data (IPD) and how it differs from aggregate data
2 Rationale for Embarking on an IPD Meta‐Analysis Project
Summary Points
2.1 Introduction
2.2 How Does the Research Process Differ for IPD and Aggregate Data Meta‐Analysis Projects?
2.2.1 The Research Aims
2.2.2 The Process and Methods
2.3 What Are the Potential Advantages of an IPD Meta‐Analysis Project?
2.4 What Are the Potential Challenges of an IPD Meta‐Analysis Project?
2.5 Empirical Evidence of Differences Between Results of IPD and Aggregate Data Meta‐Analysis Projects
2.6 Guidance for Deciding When IPD Meta‐Analysis Projects Are Needed to Evaluate Treatment Effects from Randomised Trials
2.6.1 Are IPD Needed to Tackle the Research Question?
2.6.2 Are IPD Needed to Improve the Completeness and Uniformity of Outcomes and Participant‐level Covariates?
2.6.3 Are IPD Needed to Improve the Information Size?
2.6.4 Are IPD Needed to Improve the Quality of Analysis?
2.7 Concluding Remarks
3 Planning and Initiating an IPD Meta‐Analysis Project
Summary Points
3.1 Introduction
3.2 Organisational Approach
3.2.1 Collaborative IPD Meta‐Analysis Project
3.2.2 IPD Meta‐Analysis Projects Using Data Repositories or Data‐sharing Platforms
Box 3.1 Examples of data‐sharing platforms and data repositories
3.3 Developing a Project Scope
3.4 Assessing Feasibility and ‘In Principle’ Support and Collaboration
Box 3.2 Extract of the scope developed for the Evaluating Progestogens for Preventing Preterm birth International Collaborative (EPPPIC) IPD meta‐analysis
Main Aims
Population
Intervention
Comparators
Outcomes
Study Design
3.5 Establishing a Team with the Right Skills
3.6 Advisory and Governance Functions
3.7 Estimating How Long the Project Will Take
3.8 Estimating the Resources Required
Box 3.3 Typical costs incurred in an IPD meta‐analysis project
Staff Costs
General Costs
Fees
Advisory Group Meetings
Patient and Public Involvement Costs
Collaborative Group Meeting
Dissemination Costs
Box 3.4 Examples of research funder support for sharing clinical trial data
Cancer Research UK
European Research Council
National Institutes of Health
Medical Research Council
The Patient‐Centered Outcomes Research Institute
Wellcome
3.9 Obtaining Funding
3.10 Obtaining Ethical Approval
3.11 Data‐sharing Agreement
3.12 Additional Planning for Prospective Meta‐Analysis Projects
3.13 Concluding Remarks
4 Running an IPD Meta‐Analysis Project: From Developing the Protocol to Preparing Data for Meta‐Analysis
Summary Points
4.1 Introduction
4.2 Preparing to Collect IPD
4.2.1 Defining the Objectives and Eligibility Criteria
4.2.2 Developing the Protocol for an IPD Meta‐Analysis Project
Box 4.1 Key sections to include in an IPD meta‐analysis protocol
4.2.3 Identifying and Screening Potentially Eligible Trials
Box 4.2 Summary of the sources searched for an IPD meta‐analysis of randomised trials examining recombinant human bone morphogenetic protein‐2 for spinal fusion
4.2.4 Deciding Which Information Is Needed to Summarise Trial Characteristics
4.2.5 Deciding How Much IPD Are Needed
4.2.6 Deciding Which Variables Are Needed in the IPD
Box 4.3 Example of typical data obtained for trials to be included in an IPD meta‐analysis project
4.2.7 Developing a Data Dictionary for the IPD
4.3 Initiating and Maintaining Collaboration
4.4 Obtaining IPD
4.4.1 Ensuring That IPD Are De‐identified
4.4.2 Providing Data Transfer Guidance
4.4.3 Transferring Trial IPD Securely
4.4.4 Storing Trial IPD Securely
4.4.5 Making Best Use of IPD from Repositories
4.5 Checking and Harmonising Incoming IPD
4.5.1 The Process and Principles
4.5.2 Initial Checking of IPD for Each Trial
4.5.3 Harmonising IPD across Trials
4.5.4 Checking the Validity, Range and Consistency of Variables
4.6 Checking the IPD to Inform Risk of Bias Assessments
4.6.1 The Randomisation Process
4.6.2 Deviations from the Intended Interventions
4.6.3 Missing Outcome Data
4.6.4 Measurement of the Outcome
4.7 Assessing and Presenting the Overall Quality of a Trial
4.8 Verification of Finalised Trial IPD
4.9 Merging IPD Ready for Meta‐Analysis
4.10 Concluding Remarks
Part I References
5 The Two‐stage Approach to IPD Meta‐Analysis
Summary Points
5.1 Introduction
5.2 First Stage of a Two‐stage IPD Meta‐Analysis
5.2.1 General Format of Regression Models to Use in the First Stage
5.2.2 Estimation of Regression Models Applied in the First Stage
5.2.3 Regression for Different Outcome Types
5.2.3.1 Continuous Outcomes
Box 5.1 Applied example of the first stage of a two‐stage IPD meta‐analysis of randomised trials with a continuous outcome. Case study based on the IPD meta‐analysis project of Rogozinska et al.16
5.2.3.2 Binary Outcomes
Box 5.2 Applied example of the first stage of a two‐stage IPD meta‐analysis of randomised trials with a binary outcome. Case study based on the IPD meta‐analysis project of Rogozinska et al.16
5.2.3.3 Ordinal and Multinomial Outcomes
5.2.3.4 Count and Incidence Rate Outcomes
5.2.3.5 Time‐to‐Event Outcomes
5.2.4 Adjustment for Prognostic Factors
5.2.5 Dealing with Other Trial Designs and Missing Data
Box 5.3 Applied example of the first stage of a two‐stage IPD meta‐analysis of randomised trials with a time‐to‐event outcome. Case study based on an extension of IPD meta‐analysis project of Wang et al.46 as described by Crowther et al.44
5.3 Second Stage of a Two‐stage IPD Meta‐Analysis
5.3.1 Meta‐Analysis Assuming a Common Treatment Effect
5.3.2 Meta‐Analysis Assuming Random Treatment Effects
Box 5.4 Applied example of the second stage of a two‐stage IPD meta‐analysis for a time‐to‐event outcome, assuming a common treatment effect. Case study based on an extension of the IPD meta‐analysis project of Wang et al.,46 as described by Crowther et al.44
5.3.3 Forest Plots and Percentage Trial Weights
5.3.4 Heterogeneity Measures and Statistics
Box 5.5 Applied example of the second stage of a two‐stage IPD meta‐analysis of randomised trials with a continuous outcome, assuming a random treatment effect. Case study based on the IPD meta‐analysis project of Rogozinska et al.16
5.3.5 Alternative Weighting Schemes
5.3.6 Frequentist Estimation of the Between‐Trial Variance of Treatment Effect
5.3.7 Deriving Confidence Intervals for the Summary Treatment Effect
5.3.8 Bayesian Estimation Approaches
5.3.8.1 An Introduction to Bayes’ Theorem and Bayesian Inference
5.3.8.2 Using a Bayesian Meta‐Analysis Model in the Second Stage
5.3.8.3 Applied Example
5.3.9 Interpretation of Summary Effects from Meta‐Analysis
5.3.10 Prediction Interval for the Treatment Effect in a New Trial
5.4 Meta‐regression and Subgroup Analyses
Box 5.6 Applied example of the second stage of a two‐stage IPD meta‐analysis for a binary outcome, including a meta‐regression. Case study based on the IPD meta‐analysis project of Rogozinska et al.16
5.5 The ipdmetan Software Package
Box 5.7 Example of using Fisher’s ipdmetan package within Stata.119
5.6 Combining IPD with Aggregate Data from non‐IPD Trials
5.7 Concluding Remarks
Note
6 The One‐stage Approach to IPD Meta‐Analysis
Summary Points
6.1 Introduction
6.2 One‐stage IPD Meta‐Analysis Models Using Generalised Linear Mixed Models
6.2.1 Basic Statistical Framework of One‐stage Models Using GLMMs
6.2.1.1 Continuous Outcomes
6.2.1.2 Binary Outcomes
Box 6.1 Example of a one‐stage IPD meta‐analysis for a continuous outcome (systolic blood pressure, SBP). Case study based on an extension of the IPD meta‐analysis project of Wang et al.,46 as introduced in Box 5.3
Box 6.2 Example of a one‐stage IPD meta‐analysis for a binary outcome. Case study is adapted from Simmonds and Higgins.166
6.2.1.3 Ordinal and Multinomial Outcomes
6.2.1.4 Count and Incidence Rate Outcomes
6.2.2 Specifying Parameters as Either Common, Stratified, or Random
Box 6.3 Example of a one‐stage IPD meta‐analysis for a count outcome, adapting the case study presented within Niël‐Weise et al.169 and Stijnen et al.14
6.2.3 Accounting for Clustering of Participants within Trials
6.2.3.1 Examples
6.2.4 Choice of Stratified Intercept or Random Intercepts
6.2.4.1 Findings from Simulation Studies
6.2.4.2 Our Preference for Using a Stratified Intercept
6.2.4.3 Allowing for Correlation between Random Effects on Intercept and Treatment Effect
6.2.5 Stratified or Common Residual Variances
6.2.6 Adjustment for Prognostic Factors
6.2.7 Inclusion of Trial‐level Covariates
6.2.8 Estimation of One‐stage IPD Meta‐Analysis Models Using GLMMs
6.2.8.1 Software for Fitting One‐stage Models
6.2.8.2 ML Estimation and Downward Bias in Between‐trial Variance Estimates
6.2.8.3 Trial‐specific Centering of Variables to Improve ML Estimation of One‐stage Models with a Stratified Intercept
6.2.8.4 REML Estimation
6.2.8.5 Deriving Confidence Intervals for Parameters Post‐estimation
6.2.8.6 Prediction Intervals
6.2.8.7 Derivation of Percentage Trial Weights
6.2.8.8 Bayesian Estimation for One‐stage Models
6.2.9 A Summary of Recommendations
6.3 One‐stage Models for Time‐to‐event Outcomes
6.3.1 Cox Proportional Hazard Framework
Box 6.4 Recommendations for one-stage IPD meta-analysis models using GLMMs
6.3.1.1 Stratifying Using Proportional Baseline Hazards and Frailty Models
6.3.1.2 Stratifying Baseline Hazards without Assuming Proportionality
6.3.1.3 Comparison of Approaches
6.3.1.4 Estimation Methods
6.3.1.5 Example
6.3.2 Fully Parametric Approaches
6.3.3 Extension to Time‐varying Hazard Ratios and Joint Models
6.4 One‐stage Models Combining Different Sources of Evidence
6.4.1 Combining IPD Trials with Partially Reconstructed IPD from Non‐IPD Trials
Box 6.5 Example of pseudo‐IPD reconstructed from reported aggregate data, to enable a one‐stage IPD meta‐analysis for a binary outcome such as model (6.3), albeit without adjustment for prognostic factors
6.4.2 Combining IPD and Aggregate Data Using Hierarchical Related Regression
6.4.3 Combining IPD from Parallel‐group, Cluster and Cross‐over Trials
6.5 Reporting of One‐stage Models in Protocols and Publications
6.6 Concluding Remarks
7 Using IPD Meta‐Analysis to Examine Interactions between Treatment Effect and Participant‐level Covariates
Summary Points
7.1 Introduction
Box 7.1 Example of an IPD meta‐analysis that identified a treatment‐covariate interaction in breast cancer.268
7.2 Meta‐regression and Its Limitations
7.2.1 Meta‐regression of Aggregated Participant‐level Covariates
7.2.2 Low Power and Aggregation Bias
7.2.3 Empirical Evidence of the Difference Between Using Across‐trial and Within‐trial Information to Estimate Treatment‐covariate Interactions
7.3 Two‐stage IPD Meta‐Analysis to Estimate Treatment‐covariate Interactions
7.3.1 The Two‐stage Approach
7.3.2 Applied Example: Is the Effect of Anti‐hypertensive Treatment Different for Males and Females?
7.3.3 Do Not Quantify Interactions by Comparing Meta‐Analysis Results for Subgroups
7.4 The One‐stage Approach
7.4.1 Merging Within‐trial and Across‐trial Information
7.4.2 Separating Within‐trial and Across‐trial Information
7.4.2.1 Approach (i) for a One‐stage Survival Model: Center the Covariate and Include the Covariate Mean
7.4.2.2 Approach (ii) for a One‐stage Survival Model: Stratify All Nuisance Parameters by Trial
7.4.2.3 Approaches (i) and (ii) for Continuous and Binary Outcomes
7.4.2.4 Comparison of Approaches (i) and (ii)
7.4.3 Applied Examples
7.4.3.1 Is Age an Effect Modifier for Epilepsy Treatment?
7.4.3.2 Is the Effect of an Early Support Hospital Discharge Modified by Having a Carer Present?
7.4.4 Coding of the Treatment Covariate and Adjustment for Other Covariates
7.4.4.1 Example
7.4.5 Estimating the Aggregation Bias Directly
7.4.6 Reporting Summary Treatment Effects for Subgroups after Adjusting for Aggregation Bias
7.5 Combining IPD and non‐IPD Trials
7.5.1 Can We Recover Interaction Estimates from non‐IPD Trials?
7.5.2 How to Incorporate Interaction Estimates from Non‐IPD Trials in an IPD Meta‐Analysis
7.6 Handling of Continuous Covariates
7.6.1 Do Not Categorise Continuous Covariates
7.6.2 Interactions May Be Non‐linear
7.6.2.1 Rationale and an Example
7.6.2.2 Two‐stage Multivariate IPD Meta‐Analysis for Summarising Non‐linear Interactions
Box 7.2 Brief introduction to modelling non‐linear relationships for a continuous covariate using restricted cubic splines
Box 7.3 Brief introduction to allowing for non‐linear relationships using fractional polynomials
Box 7.4 Overview of the first stage of a two‐stage multivariate IPD meta‐analysis to summarise a non‐linear treatment‐covariate interaction using a restricted cubic spline
Box 7.5 Overview of the second stage of a two‐stage multivariate IPD meta‐analysis to summarise a non‐linear treatment‐covariate interaction using a restricted cubic spline
7.6.2.3 One‐stage IPD Meta‐Analysis for Summarising Non‐linear Interactions
7.7 Handling of Categorical or Ordinal Covariates
7.8 Misconceptions and Cautions
7.8.1 Genuine Treatment‐covariate Interactions Are Rare
7.8.2 Interactions May Depend on the Scale of Analysis
7.8.3 Measurement Error May Impact Treatment‐covariate Interactions
7.8.4 Even without Treatment‐covariate Interactions, the Treatment Effect on Absolute Risk May Differ across Participants
7.8.5 Between‐trial Heterogeneity in Treatment Effect Should Not Be Used to Guide Whether Treatment‐covariate Interactions Exist at the Participant Level
7.9 Is My Identified Treatment‐covariate Interaction Genuine?
Box 7.6 Criteria for examining the credibility of a treatment‐covariate interaction (termed ‘subgroup effect’), as proposed by Sun et al.261
Design
Analysis
Context
7.10 Reporting of Analyses of Treatment‐covariate Interactions
7.11 Can We Predict a New Patient’s Treatment Effect?
7.11.1 Linking Predictions to Clinical Decision Making
7.12 Concluding Remarks
8 One‐stage versus Two‐stage Approach to IPD Meta‐Analysis: Differences and Recommendations
Summary Points
8.1 Introduction
8.2 One‐stage and Two‐stage Approaches Usually Give Similar Results
8.2.1 Evidence to Support Similarity of One‐stage and Two‐stage IPD Meta‐Analysis Results
8.2.2 Examples
8.2.3 Some Claims in Favour of the One‐stage Approach Are Misleading
8.3 Ten Key Reasons Why One‐stage and Two‐stage Approaches May Give Different Results
8.3.1 Reason I: Exact One‐stage Likelihood When Most Trials Are Small
Box 8.1 Case study of an IPD meta‐analysis with a notable difference between one‐stage and two‐stage IPD meta‐analysis results, which most likely arises due to a change in the estimation method, and not the use of one‐stage or two‐stage per se
8.3.2 Reason II: How Clustering of Participants Within Trials Is Modelled
8.3.3 Reason III: Coding of the Treatment Variable in One‐stage Models Fitted with ML Estimation
8.3.4 Reason IV: Different Estimation Methods for τ2
8.3.5 Reason V: Specification of Prognostic Factor and Adjustment Terms
8.3.6 Reason VI: Specification of the Residual Variances
8.3.7 Reason VII: Choice of Common Effect or Random Effects for the Parameter of Interest
8.3.8 Reason VIII: Derivation of Confidence Intervals
8.3.9 Reason IX: Accounting for Correlation Amongst Multiple Outcomes or Time‐points
8.3.10 Reason X: Aggregation Bias for Treatment‐covariate Interactions
8.3.11 Other Potential Causes
8.4 Recommendations and Guidance
Box 8.2 Two general recommendations about choosing between one‐stage and two‐stage IPD meta‐analyses
8.5 Concluding Remarks
Note
Part II References
9 Examining the Potential for Bias in IPD Meta‐Analysis Results
Summary Points
9.1 Introduction
9.2 Publication and Reporting Biases of Trials
9.2.1 Impact on IPD Meta‐Analysis Results
9.2.2 Examining Small‐study Effects Using Funnel Plots
9.2.3 Small‐study Effects May Arise Due to the Factors Causing Heterogeneity
9.3 Biased Availability of the IPD from Trials
9.3.1 Examining the Impact of Availability Bias
9.3.2 Example: IPD Meta‐Analysis Examining High‐dose Chemotherapy for the Treatment of Non‐Hodgkin Lymphoma
9.4 Trial Quality (risk of bias)
9.5 Other Potential Biases Affecting IPD Meta‐Analysis Results
9.5.1 Trial Selection Bias
9.5.2 Selective Outcome Availability
9.5.3 Use of Inappropriate Methods by the IPD Meta‐Analysis Research Team
9.6 Concluding Remarks
10 Reporting and Dissemination of IPD Meta‐Analyses
Summary Points
10.1 Introduction
10.2 Reporting IPD Meta‐Analysis Projects in Academic Reports
10.2.1 PRISMA‐IPD Title and Abstract Sections (Table 10.1)
Title (PRISMA‐IPD 1)
Structured abstract (PRISMA‐IPD 2)
10.2.2 PRISMA‐IPD Introduction Section (Table 10.1)
Background
Rationale (PRISMA‐IPD 3)
Aims and objectives (PRISMA‐IPD 4)
10.2.3 PRISMA‐IPD Methods Section (Table 10.2)
Protocols and registration (PRISMA‐IPD 5)
Eligibility criteria (PRISMA‐IPD 6)
Identifying trials and trial selection processes (PRISMA‐IPD 7, 8, 9)
Data collection (PRISMA‐IPD 10, 11)
IPD integrity (PRISMA‐IPD A1)
Risk of bias assessment in individual trials (PRISMA‐IPD 12)
Specification of outcomes and effect measures (PRISMA‐IPD 13)
Meta‐analysis methods (PRISMA‐IPD 14, A2, 16)
Risk of bias associated with the overall body of evidence (PRISMA‐IPD 15)
10.2.4 PRISMA‐IPD Results Section (Table 10.3)
Trials identified and data obtained (PRISMA‐IPD 18)
Trial characteristics (PRISMA‐IPD 18)
IPD integrity and risk of bias within trials (PRISMA‐IPD 18, 19)
Results of syntheses (PRISMA‐IPD 20, 21, 23)
Risk of bias across trials and other analyses (PRISMA‐IPD 22)
10.2.5 PRISMA‐IPD Discussion and Funding Sections (Table 10.3)
Discussion (PRISMA‐IPD 24, 25, 26, A4)
Funding (PRISMA‐IPD 27)
10.3 Additional Means of Disseminating Findings
10.3.1 Key Audiences
10.3.1.1 The IPD Collaborative Group
10.3.1.2 Patient and Public Audiences
10.3.1.3 Guideline Developers
10.3.2 Communication Channels
10.3.2.1 Evidence Summaries and Policy Briefings
10.3.2.2 Press Releases
Box 10.1 Example of a press release issued for an IPD meta‐analysis project69
York Researchers Question the Effectiveness and Safety of Bone Growth Product
Notes for Editors
10.3.2.3 Social Media
10.4 Concluding Remarks
11 A Tool for the Critical Appraisal of IPD Meta‐Analysis Projects (CheckMAP)
Summary Points
11.1 Introduction
11.2 The CheckMAP Tool
11.3 Was the IPD Meta‐Analysis Project Done within a Systematic Review Framework?
Box 11.1 Checklist of signalling questions to consider when appraising an IPD meta‐analysis project evaluating the effects of treatments (CheckMAP tool), adapted from Tierney et al.79
11.4 Were the IPD Meta‐Analysis Project Methods Pre‐specified in a Publicly Available Protocol?
11.5 Did the IPD Meta‐Analysis Project Have a Clear Research Question Qualified by Explicit Eligibility Criteria?
11.6 Did the IPD Meta‐Analysis Project Have a Systematic and Comprehensive Search Strategy?
11.7 Was the Approach to Data Collection Consistent and Thorough?
11.8 Were IPD Obtained from Most Eligible Trials and Their Participants?
11.9 Was the Validity of the IPD Checked for Each Trial?
11.10 Was the Risk of Bias Assessed for Each Trial and Its Associated IPD?
11.10.1 Was the Randomisation Process Checked Based on IPD?
11.10.2 Were the IPD Checked to Ensure That All (or Most) Randomised Participants Were Included?
11.10.3 Were All Important Outcomes Included in the IPD?
11.10.4 Were the Outcomes Measured/Defined Appropriately?
11.10.5 Was the Quality of Outcome Data Checked?
11.11 Were the Methods of Meta‐Analysis Appropriate?
11.11.1 Were the Analyses Pre‐specified in Detail and the Key Estimands Defined?
11.11.2 Were the Methods of Summarising the Overall Effects of Treatments Appropriate?
11.11.3 Were the Methods of Assessing whether Effects of Treatments Varied by Trial‐level Characteristics Appropriate?
11.11.4 Were the Methods of Assessing whether Effects of Treatments Varied by Participant‐level Characteristics Appropriate?
11.11.5 Was the Robustness of Conclusions Checked Using Relevant Sensitivity or Other Analyses?
11.11.6 Did the IPD Meta‐Analysis Project’s Report Cover the Items Described in PRISMA‐IPD?
11.12 Concluding Remarks
Part III References
12 Power Calculations for Planning an IPD Meta‐Analysis
Summary Points
12.1 Introduction
12.1.1 Rationale for Power Calculations in an IPD Meta‐Analysis
12.1.2 Premise for This Chapter
12.2 Motivating Example: Power of a Planned IPD Meta‐Analysis of Trials of Interventions to Reduce Weight Gain in Pregnant Women
12.2.1 Background
12.2.2 What Is the Power to Detect a Treatment‐BMI Interaction?
12.3 Power of an IPD Meta‐Analysis to Detect a Treatment‐covariate Interaction for a Continuous Outcome
12.3.1 Closed‐form Solutions
12.3.1.1 Application to the i‐WIP Example
12.3.2 Simulation‐based Power Calculations for a Two‐stage IPD Meta‐Analysis
12.3.2.1 Application to the i‐WIP Example
12.3.3 Power Results Naively Assuming the IPD All Come from a Single Trial
12.4 The Contribution of Individual Trials Toward Power
12.4.1 Contribution According to Sample Size
12.4.2 Contribution According to Covariate and Outcome Variability
12.5 The Impact of Model Assumptions on Power
12.5.1 Impact of Allowing for Heterogeneity in the Interaction
12.5.2 Impact of Wrongly Modelling BMI as a Binary Variable
12.5.3 Impact of Adjusting for Additional Covariates
12.6 Extensions
12.6.1 Power Calculations for Binary and Time‐to‐event Outcomes
12.6.2 Simulation Using a One‐stage IPD Meta‐Analysis Approach
12.6.3 Examining the Potential Precision of IPD Meta‐Analysis Results
12.6.4 Estimating the Power of a New Trial Conditional on IPD Meta‐Analysis Results
12.7 Concluding Remarks
13 Multivariate Meta‐Analysis Using IPD
Summary Points
13.1 Introduction
Box 13.1 Motivating example: Multivariate versus univariate meta‐analysis of cohort studies examining the prognostic effect of progesterone receptor status for cancer‐specific survival and progression‐free survival in endometrial cancer.50,51
13.2 General Two‐stage Approach for Multivariate IPD Meta‐Analysis
13.2.1 First‐stage Analyses
13.2.1.1 Obtaining Treatment Effect Estimates and Their Variances for Continuous Outcomes
Option 1: Modelling outcomes separately within each trial
Option 2: Modelling outcomes jointly within each trial
13.2.1.2 Obtaining Within‐trial Correlations Directly or via Bootstrapping for Continuous Outcomes
Example: Application to a randomised trial of anti‐hypertensive treatment
13.2.1.3 Extension to Binary, Time‐to‐event and Mixed Outcomes
13.2.2 Second‐stage Analysis: Multivariate Meta‐Analysis Model
13.2.2.1 Multivariate Model Structure
13.2.2.2 Dealing with Missing Outcomes
13.2.2.3 Frequentist Estimation of the Multivariate Model
13.2.2.4 Bayesian Estimation of the Multivariate Model
13.2.2.5 Joint Inferences and Predictions
13.2.2.6 Alternative Specifications for the Between‐trial Variance Matrix with Missing Outcomes
13.2.2.7 Combining IPD and non‐IPD Trials
13.2.3 Useful Measures to Accompany Multivariate Meta‐Analysis Results
13.2.3.1 Heterogeneity Measures
13.2.3.2 Percentage Trial Weights
13.2.3.3 The Efficiency (E) and Borrowing of Strength (BoS) Statistics
13.2.4 Understanding the Impact of Correlation and Borrowing of Strength
13.2.4.1 Anticipating the Value of BoS When Assuming Common Treatment Effects
13.2.4.2 BoS When Assuming Random Treatment Effects
13.2.4.3 How the Borrowing of Strength Impacts upon the Summary Meta‐Analysis Estimates
13.2.4.4 How the Correlation Impacts upon Joint Inferences across Outcomes
13.2.5 Software
13.3 Application to an IPD Meta‐Analysis of Anti‐hypertensive Trials
13.3.1 Bivariate Meta‐Analysis of SBP and DBP
13.3.1.1 First‐stage Results
13.3.1.2 Second‐stage Results
13.3.1.3 Predictive Inferences
13.3.2 Bivariate Meta‐Analysis of CVD and Stroke
13.3.3 Multivariate Meta‐Analysis of SBP, DBP, CVD and Stroke
13.4 Extension to Multivariate Meta‐regression
13.5 Potential Limitations of Multivariate Meta‐Analysis
13.5.1 The Benefits of a Multivariate Meta‐Analysis for Each Outcome Are Often Small
13.5.2 Model Specification and Estimation Is Non‐trivial
13.5.3 Benefits Arise under Assumptions
13.6 One‐stage Multivariate IPD Meta‐Analysis Applications
13.6.1 Summary Treatment Effects
13.6.1.1 Applied Example
13.6.2 Multiple Treatment‐covariate Interactions
13.6.2.1 Applied Example
13.6.3 Multinomial Outcomes
13.7 Special Applications of Multivariate Meta‐Analysis
13.7.1 Longitudinal Data and Multiple Time‐points
13.7.1.1 Applied Example
13.7.1.2 Extensions
13.7.2 Surrogate Outcomes
13.7.3 Development of Multi‐parameter Models for Dose Response and Prediction
13.7.4 Test Accuracy
13.7.5 Treatment‐covariate Interactions
13.7.5.1 Non‐linear Trends
13.7.5.2 Multiple Treatment‐covariate Interactions
13.8 Concluding Remarks
14 Network Meta‐Analysis Using IPD
Summary Points
14.1 Introduction
14.2 Rationale and Assumptions for Network Meta‐Analysis
14.3 Network Meta‐Analysis Models Assuming Consistency
14.3.1 A Two‐stage Approach
14.3.2 A One‐stage Approach
14.3.3 Summary Results after a Network Meta‐Analysis
14.3.4 Example: Comparison of Eight Thrombolytic Treatments after Acute Myocardial Infarction
14.3.4.1 Two‐stage Approach
14.3.4.2 One‐stage Approach
14.4 Ranking Treatments
14.5 How Do We Examine Inconsistency between Direct and Indirect Evidence?
14.6 Benefits of IPD for Network Meta‐Analysis
14.6.1 Benefit 1: Examining and Plotting Distributions of Covariates across Trials Providing Different Comparisons
14.6.2 Benefit 2: Adjusting for Prognostic Factors to Improve Consistency and Reduce Heterogeneity
14.6.3 Benefit 3: Including Treatment‐covariate Interactions
14.6.4 Benefit 4: Multiple Outcomes
14.7 Combining IPD and Aggregate Data in Network Meta‐Analysis
14.7.1 Multilevel Network Meta‐regression
14.7.2 Example: Treatments to Reduce Plaque Psoriasis
14.8 Further Topics
14.8.1 Accounting for Dose and Class
14.8.2 Inclusion of ‘Real‐world’ Evidence
14.8.3 Cumulative Network Meta‐Analysis
14.8.4 Quality Assessment and Reporting
14.9 Concluding Remarks
Part IV References
15 IPD Meta‐Analysis for Test Accuracy Research
Summary Points
15.1 Introduction
15.1.1 Meta‐Analysis of Test Accuracy Studies
15.1.2 The Need for IPD
Box 15.1 Potential advantages of examining test accuracy using IPD meta‐analysis rather than a conventional meta‐analysis of published aggregate data.18–22
15.1.3 Scope of This Chapter
15.2 Motivating Example: Diagnosis of Fever in Children Using Ear Temperature
15.3 Key Steps Involved in an IPD Meta‐Analysis of Test Accuracy Studies
15.3.1 Defining the Research Objectives
15.3.2 Searching for Studies with Eligible IPD
15.3.3 Extracting Key Study Characteristics and Information
15.3.4 Evaluating Risk of Bias of Eligible Studies
15.3.5 Obtaining, Cleaning and Harmonising IPD
15.3.6 Undertaking IPD Meta‐Analysis to Summarise Test Accuracy at a Particular Threshold
15.3.6.1 Bivariate IPD Meta‐Analysis to Summarise Sensitivity and Specificity
15.3.6.2 Examining and Summarising Heterogeneity
15.3.6.3 Combining IPD and non‐IPD Studies
15.3.6.4 Application to the Fever Example
15.3.6.5 Bivariate Meta‐Analysis of PPV and NPV
15.3.7 Examining Accuracy‐covariate Associations
15.3.7.1 Model Specification Using IPD Studies
15.3.7.2 Combining IPD and Aggregate Data
15.3.7.3 Application to the Fever Example
15.3.8 Performing Sensitivity Analyses and Examining Small‐study Effects
15.3.9 Reporting and Interpreting Results
15.4 IPD Meta‐Analysis of Test Accuracy at Multiple Thresholds
15.4.1 Separate Meta‐Analysis at Each Threshold
15.4.2 Joint Meta‐Analysis of All Thresholds
15.4.2.1 Modelling Using the Multinomial Distribution
15.4.2.2 Modelling the Underlying Distribution of the Continuous Test Values
15.5 IPD Meta‐Analysis for Examining a Test’s Clinical Utility
15.5.1 Net Benefit and Decision Curves
15.5.2 IPD Meta‐Analysis Models for Summarising Clinical Utility of a Test
15.5.3 Application to the Fever Example
15.6 Comparing Tests
15.6.1 Comparative Test Accuracy Meta‐Analysis Models
15.6.2 Applied Example
15.7 Concluding Remarks
Note
16 IPD Meta‐Analysis for Prognostic Factor Research
Summary Points
16.1 Introduction
16.1.1 Problems with Meta‐Analyses Based on Published Aggregate Data
16.1.2 Scope of This Chapter
16.2 Potential Advantages of an IPD Meta‐Analysis
16.2.1 Standardise Inclusion Criteria and Definitions
16.2.2 Standardise Statistical Analyses
16.2.3 Advanced Statistical Modelling
16.3 Key Steps Involved in an IPD Meta‐Analysis of Prognostic Factor Studies
16.3.1 Defining the Research Question
Box 16.1 Six items to help define the question for IPD meta‐analysis projects aiming to examine prognostic factors, abbreviated as PICOTS,112,128 and applied to the project of Trivella et al. to examine microvessel density (MVD) as a prognostic factor in non‐small‐cell lung carcinoma.115
16.3.1.1 Unadjusted or Adjusted Prognostic Factor Effects?
16.3.2 Searching and Selecting Eligible Studies and Datasets
16.3.3 Extracting Key Study Characteristics and Information
16.3.4 Evaluating Risk of Bias of Eligible Studies
16.3.5 Obtaining, Cleaning and Harmonising IPD
16.3.6 Undertaking IPD Meta‐Analysis to Summarise Prognostic Effects
16.3.6.1 A Two‐stage Approach Assuming a Linear Prognostic Trend
16.3.6.2 A Two‐stage Approach with Non‐linear Trends Using Splines or Polynomials
16.3.6.2.1 Using Splines to Model Non‐linear Trends
16.3.6.2.2 Using Polynomials to Model Non‐linear Trends
16.3.6.2.3 Modelling Interactions
16.3.6.3 Incorporating Measurement Error
16.3.6.4 A One‐stage Approach
16.3.6.5 Checking the Proportional Hazards Assumption
16.3.6.6 Dealing with Missing Data and Adjustment Factors
16.3.7 Examining Heterogeneity and Performing Sensitivity Analyses
16.3.8 Examining Small‐study Effects
16.3.9 Reporting and Interpreting Results
16.4 Software
16.5 Concluding Remarks
17 IPD Meta‐Analysis for Clinical Prediction Model Research
Summary Points
17.1 Introduction
17.2 IPD Meta‐Analysis for Prediction Model Research
17.2.1 Types of Prediction Model Research
Box 17.1 Typical format of prediction models developed using IPD from a single study
17.2.2 Why IPD Meta‐Analyses Are Needed
Box 17.2 Examples of IPD meta‐analyses in prediction model research.223–229
17.2.3 Key Steps Involved in an IPD Meta‐Analysis for Prediction Model Research
17.2.3.1 Define the Research Question and PICOTS System
17.2.3.2 Identify Relevant Existing Studies and Datasets
17.2.3.3 Examine Eligibility and Risk of Bias of IPD
Box 17.3 The PICOTS system to help define the research question for IPD meta‐analysis projects aiming to develop, validate or update prediction models, as illustrated using a systematic review and meta‐analysis by Debray et al.129 of the predictive performance of the EuroSCORE model to predict short‐term mortality in patients who underwent coronary artery bypass grafting (CABG)
17.2.3.4 Obtain, Harmonise and Summarise IPD
17.2.3.5 Undertake Meta‐Analysis and Quantify Heterogeneity
17.3 External Validation of an Existing Prediction Model Using IPD Meta‐Analysis
17.3.1 Measures of Predictive Performance in a Single Study
17.3.1.1 Overall Measures of Model Fit
17.3.1.2 Calibration Plots and Measures
17.3.1.3 Discrimination Measures
Box 17.4 Explanation of some key measures for calibration of a prediction model with binary or time‐to‐event outcomes
Observed/Expected number of outcomes (O/E)
Calibration‐in‐the‐large
Calibration slope
17.3.2 Potential for Heterogeneity in a Model’s Predictive Performance
17.3.2.1 Causes of Heterogeneity in Model Performance
17.3.2.2 Disentangling Sources of Heterogeneity
17.3.3 Statistical Methods for IPD Meta‐Analysis of Predictive Performance
17.3.3.1 Two‐stage IPD Meta‐Analysis
17.3.3.2 Example 1: Validation of Prediction Models for Cardiovascular Disease
17.3.3.3 Example 2: Meta‐Analysis of Case‐mix Standardised Estimates of Model Performance
17.3.3.4 Example 3: Examining Predictive Performance of QRISK2 across Multiple Practices
17.3.3.5 One‐stage IPD Meta‐Analysis
17.4 Updating and Tailoring of a Prediction Model Using IPD Meta‐Analysis
17.4.1 Example 1: Updating of the Baseline Hazard in a Prognostic Prediction Model
17.4.2 Example 2: Multivariate IPD Meta‐Analysis to Compare Different Model Updating Strategies
17.5 Comparison of Multiple Existing Prediction Models Using IPD Meta‐Analysis
17.5.1 Example 1: Comparison of QRISK2 and Framingham
17.5.2 Example 2: Comparison of Prediction Models for Pre‐eclampsia
17.5.3 Comparing Models When Predictors Are Unavailable in Some Studies
17.6 Using IPD Meta‐Analysis to Examine the Added Value of a New Predictor to an Existing Prediction Model
17.7 Developing a New Prediction Model Using IPD Meta‐Analysis
17.7.1 Model Development Issues
17.7.1.1 Examining and Handling Between‐study Heterogeneity in Case‐mix Distributions
Box 17.5 General recommendations for IPD meta‐analysis projects aiming to develop a prediction model
17.7.1.2 One‐stage or Two‐stage IPD Meta‐Analysis Models
17.7.1.3 Allowing for Between‐study Heterogeneity and Inclusion of Study‐specific Parameters
17.7.1.4 Studies with Different Designs
17.7.1.5 Predictor Selection Based on Statistical Significance
17.7.1.6 Conditional and Marginal Apparent Performance
17.7.1.7 Sample Size, Overfitting and Penalisation
Box 17.6 The bootstrap procedure for internal validation and optimism‐adjustment of the predictive performance of a prognostic model,158,203 extended to the IPD meta‐analysis setting
17.7.2 Internal‐external Cross‐validation to Examine Transportability
17.7.2.1 Overview of the Method
17.7.2.2 Example: Diagnostic Prediction Model for Deep Vein Thrombosis
17.8 Examining the Utility of a Prediction Model Using IPD Meta‐Analysis
17.8.1 Example: Net Benefit of a Diagnostic Prediction Model for Ovarian Cancer
17.8.1.1 Summary and Predicted Net Benefit of the LR2 Model
17.8.1.2 Comparison to Strategies of Treat All or Treat None
17.8.2 Decision Curves
17.9 Software
17.10 Reporting
17.11 Concluding Remarks
18 Dealing with Missing Data in an IPD Meta‐Analysis
Summary Points
18.1 Introduction
18.2 Motivating Example: IPD Meta‐Analysis Validating Prediction Models for Risk of Pre‐eclampsia in Pregnancy
18.3 Types of Missing Data in an IPD Meta‐Analysis
18.4 Recovering Actual Values of Missing Data within IPD
18.5 Mechanisms and Patterns of Missing Data in an IPD Meta‐Analysis
18.5.1 Mechanisms of Missing Data
18.5.2 Patterns of Missing Data
18.5.3 Example: Risk of Pre‐eclampsia in Pregnancy
18.6 Multiple Imputation to Deal with Missing Data in a Single Study
18.6.1 Joint Modelling
18.6.2 Fully Conditional Specification
18.6.3 How Many Imputations Are Required?
18.6.4 Combining Results Obtained from Each Imputed Dataset
18.7 Ensuring Congeniality of Imputation and Analysis Models
18.8 Dealing with Sporadically Missing Data in an IPD Meta‐Analysis by Applying Multiple Imputation for Each Study Separately
18.8.1 Example: Risk of Pre‐eclampsia in Pregnancy
18.9 Dealing with Systematically Missing Data in an IPD Meta‐Analysis Using a Bivariate Meta‐Analysis of Partially and Fully Adjusted Results
18.10 Dealing with Both Sporadically and Systematically Missing Data in an IPD Meta‐Analysis Using Multilevel Modelling
18.10.1 Motivating Example: Prognostic Factors for Short‐term Mortality in Acute Heart Failure
18.10.2 Multilevel Joint Modelling
18.10.3 Multilevel Fully Conditional Specification
18.11 Comparison of Methods and Recommendations
18.11.1 Multilevel FCS versus Joint Model Approaches
18.11.2 Sensitivity Analyses and Reporting
18.12 Software
18.13 Concluding Remarks
Part V References
Index
WILEY END USER LICENSE AGREEMENT
Excerpt from the Book
Advisory Editor, Marian Scott, University of Glasgow, Scotland, UK
Founding Editor, Vic Barnett, Nottingham Trent University, UK
.....
As well as providing independent advice, an advisory group can play an important role in garnering wider clinical or topic expertise and providing additional methodological oversight. It also offers a meaningful way to engage patients and the public, ensuring that the patient voice is heard. Therefore, an advisory group will often include members from the different specialties and professions relevant to the review topic, representation from patient support groups, individual patients and carers (who will often have a different perspective to support groups), and methodologists. For example, the advisory group for an IPD meta‐analysis project (and linked economic evaluation) examining intensive behavioural interventions based on applied behaviour analysis (ABA) for young children with autism included: representation from the National Autistic Society, research study investigators, parents of children with autism spectrum disorder, adults with autism spectrum disorder, ABA practice specialists, psychiatrists, clinical and educational psychologists, specialists in IPD meta‐analysis, and health economists.78 Having well‐respected international members on the advisory group can be particularly helpful when requesting IPD from trial investigators based in countries other than that of the central research team.
As participant‐level data are being used, commissioners may sometimes suggest that an IPD meta‐analysis project should establish governance structures similar to those for a clinical trial. However, with the exception of prospective projects (Section 3.12), IPD meta‐analysis projects use participant‐level data that have already been collected in existing trials and do not recruit any new participants; therefore, there is usually no requirement for a steering committee or a data monitoring and ethics committee (as there would be for a clinical trial).
.....