Randomized Controlled Trials in Health Services Research

Morris Weinberger, PhD
Senior Career Scientist, HSR&D Service
Investigator, Center for Health Services Research, Durham VAMC

VA Cyber-Seminar, January 12, 2009
Overview of Today’s Seminar
• Overview of randomized controlled trials
• Minimizing threats to internal validity
• Targets of interventions
• Analytical issues
• Practical issues and advice
Overview of Randomized
Controlled Trials
• Bias
– Systematic (non-random) error
– Bane of research, regardless of study design
– Can invalidate study results
– Can occur in any phase of research
Overview of Randomized
Controlled Trials
• Most powerful research design to
establish causality, including the
effectiveness of interventions
• Establishes causality by controlling for
confounding factors
• Well-designed experiments minimize bias
• Not suitable for all research questions
Overview of Randomized
Controlled Trials
• Subjects randomized to treatment groups
• Follow subjects prospectively
• Compare subjects across treatment groups
on relevant outcomes
Required CONSORT Figure
[CONSORT participant flow diagram]
Overview of Today’s Seminar
• Overview of randomized trials
• Minimizing threats to internal validity
Validity
• Internal validity: Can the observed
differences between groups be
attributed to the intervention?
– Randomization
• External validity: Do the observed
differences generalize beyond the study
sample to patients/subjects in general?
– Random sampling
Threats to Internal Validity
• Biased assignment of patients to groups
• Biased outcome assessment
• Non-compliance with treatment protocol
• Dropouts
• Co-intervention
• Contamination
Minimizing Threats to Internal
Validity
• Randomization
• Blinding (Masking)
• Intervention design
• Study protocol
Randomization
• Random assignment of subjects to study groups:
– Produces study groups comparable with respect to
measured and unmeasured characteristics
– Removes investigator bias in assigning patients to
groups
– Increases validity of statistical tests
• If allocation of subjects to groups is predictable,
bias may result (e.g., foreknowledge of the next
assignment can influence the decision to enroll a subject)
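One common way to keep allocation unpredictable while balancing group sizes is permuted-block randomization. The Python sketch below is illustrative only and is not from the seminar; the two-arm design, block size of 4, and fixed seed are assumptions.

import random

def permuted_block_randomization(n_subjects, block_size=4, seed=2009):
    """Assign subjects to two arms in randomly permuted blocks.

    Within each block, half the slots are intervention and half control,
    so group sizes stay balanced and the next assignment stays unpredictable.
    """
    assert block_size % 2 == 0, "block size must be even for a two-arm trial"
    rng = random.Random(seed)  # in practice the sequence is generated and concealed centrally
    assignments = []
    while len(assignments) < n_subjects:
        block = ["intervention"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_subjects]

print(permuted_block_randomization(10))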
Minimizing Threats to Internal
Validity
• Randomization
• Blinding (Masking)
Minimizing Threats to Internal
Validity
• Randomization
• Blinding (Masking)
• Intervention design
Considerations When Designing Interventions
• What is the intervention?
• Is it likely to be potent?
• Is it ethical?
• Is it practical and feasible in the “real world”?
• Will it be acceptable to patients?
• Is it effective?
What is the Intervention?
Standardization
• Who did what to whom?
• What was the dose?
• How often?
• For how long?
• Administered under what conditions?
• With what dose adjustments?
Minimizing Threats to Internal
Validity
• Randomization
• Blinding (Masking)
• Intervention design
• Study protocol
Study Protocol
• Choice of control group
• Recruitment strategies
• Retention strategies
• Outcome measures
• Evaluating effectiveness of the intervention
– Compliance with treatment protocol
– Co-intervention
– Contamination
Choosing the Control Group
• Nothing
• Usual Care
• Placebo
Recruitment Strategies
• Recruitment sources
– Community vs. patients
– Volunteer bias
• Barriers to recruitment
– Interest in subject
– Distrust of research
– Distrust of contact from unknown persons
– Transportation
– Informed consent
– Long enrollment visits
Retention
• Impact on external validity of the trial
– Who completes the trial?
• Impact on internal validity of the trial
– What if there is differential dropout?
• Impact on number of patients recruited
– How will my sample size estimates be
affected?
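A common rule of thumb is to enroll enough subjects that the target sample size is still met after the anticipated dropout. A minimal sketch; the 20% dropout rate and the target of 200 completers are illustrative assumptions, not figures from the seminar.

import math

def inflate_for_dropout(n_required, dropout_rate):
    """Number to enroll so that n_required subjects are expected to complete the trial."""
    return math.ceil(n_required / (1.0 - dropout_rate))

print(inflate_for_dropout(200, 0.20))  # enroll 250 to end up with about 200 completers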
Design Considerations:
Choosing Measures
• Properties of the measure
– Validity and reliability
– Floor and ceiling effects
– Sensitivity to change
• Pragmatic considerations
– Setting
– Respondent burden
– Appropriateness for subjects
– Cost
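Floor and ceiling effects can be checked directly in pilot or baseline data by looking at how many respondents score at the extremes of the scale. A minimal sketch; the file name, the hrqol_score column, and the 0-100 scale range are hypothetical assumptions.

import pandas as pd

df = pd.read_csv("pilot_data.csv")   # hypothetical pilot data set
score = df["hrqol_score"]            # hypothetical outcome measure

scale_min, scale_max = 0, 100        # assumed possible range of the measure
floor = (score == scale_min).mean()    # share of respondents at the lowest possible score
ceiling = (score == scale_max).mean()  # share at the highest possible score
print(f"floor: {floor:.0%}, ceiling: {ceiling:.0%}")  # large shares suggest poor sensitivity to change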
Evaluating Effectiveness of the
Intervention
• Dose: Was the intervention delivered?
• Contamination: Did the control group receive
components of the intervention?
• Co-intervention: Did the treatment group
receive interventions other than what was
intended?
• Key: Measure what elements of the
intervention were delivered to all study groups
Overview of Today’s Seminar
• Overview of randomized trials
• Minimizing threats to internal validity
• Targets of interventions
Targets of Strategies to
Improve Outcomes
• Patients
• Providers
• System
Patient-Level Strategies
• Randomization is at the patient level
• Simplifies the statistical analysis
Provider-Level Strategies
• Strategies to improve quality of care by
intervening with providers
• If outcomes are at physician level, issues are
generally similar to patient-level interventions
Provider-Level Strategies
• Often, goal is to evaluate the impact of provider
interventions on patient outcomes because
– Expect intervention with provider to affect patient
outcomes (e.g., improving patients’ glycemic control)
– Concern that delivering the intervention to intervention
patients will affect providers’ behavior with control
patients (i.e., contamination)
• The effective sample size, then, is greater than the
number of physicians randomized but smaller than the
number of patients enrolled
Provider-Level Strategies
• Randomizing patients assumes balance on both
measured and unmeasured variables
• Analytically, assumes that patients are
independently assigned to groups
• When unit of randomization is not the unit of
analysis:
– Patients are not independent within physicians
– Physicians are not independent within a setting (e.g.,
team, hospital)
– Complicates sample size estimates (see the design-effect
sketch below)
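When patients are clustered within physicians, the loss of information is often summarized by the design effect, DEFF = 1 + (m - 1) * ICC, where m is the average cluster size and ICC is the intraclass correlation. The Python sketch below is illustrative; the numbers of physicians and patients and the ICC are assumptions.

def effective_sample_size(n_patients, n_physicians, icc):
    """Approximate effective sample size under clustering (assumes equal cluster sizes)."""
    m = n_patients / n_physicians   # average patients per physician
    deff = 1 + (m - 1) * icc        # design effect
    return n_patients / deff

# 20 physicians with 10 patients each and a modest within-physician correlation
print(effective_sample_size(n_patients=200, n_physicians=20, icc=0.05))  # about 138

In this illustration the effective sample size (about 138) falls between the number of physicians randomized (20) and the number of patients enrolled (200), which is the point made on the earlier slide.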
Provider-Level Strategies:
Summary
• Often, interventions to improve outcomes target
providers, but analyze patients
• A reasonable, and perhaps the only plausible, strategy
• Must provide reviewers with clear justification
(i.e., minimizes threats to internal validity that
would otherwise result)
• Involve biostatisticians early, as the analyses
(including sample size calculations) are complex
Design Considerations:
Biomedical vs. HSR Trials
• Patients
– Eligibility: Biomedical = Narrow; HSR = Broad
– Randomization: Biomedical = Patient; HSR = Patient, physician, clinic, or hospital
• Intervention
– Components: Biomedical = Single; HSR = Multiple
– Type: Biomedical = Drug, device, or procedure; HSR = Structure of care
– Uniformity: Biomedical = High; HSR = Low
Design Considerations:
Biomedical vs. HSR Trials
• Intervention
– Control group: Biomedical = Standard; HSR = Usual care
– Assessing compliance: Biomedical = Easy to moderate; HSR = Difficult
• Data collection
– Outcomes: Biomedical = Events, test results; HSR = Patient-centered measures
– Process of care: Biomedical = Easy to moderate; HSR = Difficult
– Cost: Biomedical = Easy to moderate; HSR = Difficult
– Blinding: Biomedical = Possible; HSR = Not possible
Overview of Today’s Seminar
• Overview of randomized trials
• Threats to internal validity
• Targets of interventions
• Analytical issues
Analytical Issues
• Between- versus within-group comparisons
– Primary analysis is between-groups
• Controlling for baseline differences
– Needed when randomization does not achieve
balance on key factors (see the ANCOVA sketch below)
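One common between-group analysis that also adjusts for baseline differences is analysis of covariance: the follow-up outcome regressed on treatment group plus the baseline value. The sketch below uses statsmodels; the data file and the group, baseline, and outcome column names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial_data.csv")  # hypothetical: one row per randomized subject

# group    : arm as randomized ("intervention" or "control")
# baseline : baseline value of the outcome measure
# outcome  : follow-up value of the outcome measure
model = smf.ols("outcome ~ C(group) + baseline", data=df).fit()
print(model.summary())  # the C(group) coefficient estimates the adjusted between-group difference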
Analytical Issues
• Intention to Treat: Subjects analyzed as
part of original group, regardless of
compliance, dropout, or crossover
– Question answered: What is the benefit of the
treatment/intervention as assigned?
• Per protocol: Focus on subjects who were
compliant with protocol
– Question answered: What is benefit for
people who are compliant?
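The distinction can be made concrete with a small pandas sketch; the data file and the assigned, completed_protocol, and outcome columns are hypothetical.

import pandas as pd

df = pd.read_csv("trial_data.csv")  # hypothetical: one row per randomized subject

# Intention to treat: every randomized subject, analyzed in the arm assigned
itt = df.groupby("assigned")["outcome"].mean()

# Per protocol: only subjects who complied with the protocol (boolean column assumed)
pp = df[df["completed_protocol"]].groupby("assigned")["outcome"].mean()

print(itt, pp, sep="\n")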
Analytical Issues
• Sub-group analyses: Focus on subjects
with certain characteristics
– Question answered: Does the treatment
work for certain types of patients?
• Disaggregating complex interventions:
Can we identify the effective component?
• Unit of assignment versus unit of analysis
Overview of Today’s Seminar
• Overview of randomized trials
• Threats to internal validity
• Targets of interventions
• Multi-site trials in health services
research
• Analytical issues
• Practical issues and advice
Practical Issues and Advice
• What help do I need?
• Am I ready to conduct a clinical trial?
• Is the intervention worth evaluating?
• Is the project feasible?
• What outcomes can I reasonably expect
to change?
What Help Do I Need?
• Colleagues with content and
methodological expertise and/or other
specialized skills
• Biostatistician
Biostatisticians Are Your
Friends
• Study design
• Sample size
• Measuring outcomes
• Data management
• Statistical analysis
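One example of the kind of help involved: a two-arm sample-size calculation, sketched here with statsmodels. The effect size, alpha, and power below are illustrative assumptions, not values from the seminar.

from statsmodels.stats.power import TTestIndPower

# Subjects per arm to detect a standardized effect of 0.4 with 80% power at two-sided alpha 0.05
n_per_arm = TTestIndPower().solve_power(effect_size=0.4, alpha=0.05, power=0.80)
print(round(n_per_arm))  # roughly 100 per arm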
Am I Ready to Conduct a
Clinical Trial?
• Equipoise
• Advance previous study
– Good idea, flawed study
– Application to another patient population
– Application to another clinical venue
• Relevance
– Policy makers
– Clinicians
– Health care organizations
Is the Intervention Worth
Evaluating?
• Is the study ethical?
• Is the intervention feasible and practical?
• If effective, will intervention be accepted
and useful?
– Health care organization
– Physicians
– Staff
– Patients
Is the Project Feasible?
Pilot Study as Dress Rehearsal
• Are there enough subjects?
• Do I have a practical strategy to identify, enroll
and retain subjects in the study?
• Do I have time to complete the project?
• Do I have resources to complete the study?
• Can I measure the critical variables?
• Are data collection forms reasonable?
• Do I have buy-in from the organization and
personnel (physicians, nurses, clerks, etc.)?
What Outcomes Can I
Reasonably Expect to Change?
• What should my outcomes be?
– Mortality and/or morbidity
– Clinical parameters
– Health-related quality of life
– Health services utilization/cost
Final Advice
• Specify the hypotheses
• Write early, write often:
– methods
– dummy tables
• Monitor what is happening
– Enrollment
– Retention
– Delivery of the intervention
• Review CONSORT statement on reporting
randomized trials
