METHODS
Collaborative Execution Framework
Kroch et al
Multivariate Analysis
Hospitals contributing data, by year and QUEST cohort:

Year   Charter   Class of 2009   Class of 2010   QUEST total   Non-QUEST   Total
2006   -         -               -               0             366         366
2007   -         -               -               0             373         373
2008   141       -               -               141           321         462
2009   141       29              -               170           324         494
2010   136       28              36              200           392         592
2011   136       28              35              199           424         623

Some hospitals may not have consecutive data across the entire time frame.
www.journalpatientsafety.com
Characteristic                           QUEST     Non-QUEST   Overall
No. observations (hospital quarters)     2851      7749        10,600
Observed mortality rate, average         1.90%     2.03%       2.00%
Expected mortality rate, average         2.62%     2.35%       2.42%
O-E diff mortality rate, average         0.72%     0.32%       0.43%
Beds (0-99)                              14.5%     20.5%       19.2%
Beds (100-199)                           18.7%     21.3%       25.1%
Beds (200-399)                           37.9%     34.6%       30.7%
Beds (400+)                              28.8%     23.7%       25.1%
Teaching (COTH)                          16.6%     12.5%       13.6%
Rural location                           16.2%     26.6%       23.8%
Northeast                                15.6%     12.8%       13.6%
Midwest                                  29.1%     18.8%       21.5%
South                                    42.3%     43.1%       42.9%
West                                     13.0%     25.3%       22.0%
Y2006                                    0.0%      9.3%        6.8%
Y2007                                    0.0%      18.9%       13.8%
Y2008                                    19.8%     16.0%       17.1%
Y2009                                    23.9%     16.0%       18.1%
Y2010                                    28.5%     18.6%       21.3%
Y2011                                    27.8%     21.1%       22.9%
is the least constrained, allowing for cohort effects as well as flexible time effects, by interacting the QUEST flag with both the cohort and the year. A hospital's cohort is determined by the year it joined QUEST: charter membership (starting in 2008), the class that started in 2009, and the class of 2010.
Another variant of the model introduces full hospital effects, which effectively removes the influence of all potential latent effects, thereby isolating the timing effects. This variant is applied to each of the 3 aforementioned model specifications: version b, as distinct from the original version a described earlier. Version b is a type of Heckman specification2 to remove selection bias. In this setting, all hospital effects are represented by dummy variables, which replace all control traits, such as size, teaching status, and location. Finally, version c of the model introduces hospital random effects to account for the correlation of observations over time within a hospital and the correlation of patients within a hospital. Hence, we allow for nonindependence of hospital disturbances over time, which, if treated as fixed, could inflate the significance of the QUEST effect. If the QUEST effect were purely a product of self-selection into the collaborative, the time effects would vanish in favor of sorting on hospital effects, whether fixed or random.
RESULTS
There were 141 charter members in QUEST. Not all QUEST hospitals license their data for inclusion in the Premier research database, and these were excluded. During the 4-year performance period, some charter members of QUEST dropped out of the collaborative. There were 136 QUEST charter members with at least one quarter of data between the third quarter of 2006 and the last quarter of 2011, and 317 hospitals in the Premier database with at least one quarter of data in the same period that did not join QUEST at any time during the baseline or performance period. QUEST charter member and non-QUEST Premier hospital characteristics are shown in Table 3.
The change in mortality during the baseline and performance
period for these 2 hospital cohorts is shown in Figure 1 as a
4-quarter moving average. The average O/E ratio for the baseline
period for QUEST hospitals and non-QUEST hospitals was 0.98
and 1.07, respectively. By the end of the 4-year performance
TABLE 3. Hospital Characteristics of QUEST and Non-QUEST Cohorts

                      No Constraints           Data in Every Quarter
Hospital              QUEST       Non-QUEST    QUEST       Non-QUEST
Characteristic        (n = 136)   (n = 317)    (n = 123)   (n = 135)
                      %           %            %           %
Bed size
  <100                14.0        23.7         15.4        22.2
  100-200             16.9        21.1         17.1        18.5
  200-400             36.8        33.8         35.8        36.3
  400+                32.4        21.5         31.7        23.0
Urban                 83.8        71.9         84.6        71.1
Teaching              31.6        28.4         30.9        25.2
Region
  Midwest             30.9        22.7         31.7        15.6
  Northeast           17.6        11.7         19.5        8.1
  South               36.8        41.3         34.1        39.3
  West                14.7        24.3         14.6        37.0
period, the QUEST hospital cohort average O/E ratio was reduced to 0.65 in the final year, compared with 0.75 for the non-QUEST cohort. There were 40,859 deaths avoided calculated for the QUEST hospital cohort compared with 38,115 deaths avoided calculated for the non-QUEST cohort. Deaths avoided were computed by subtracting observed deaths from expected deaths in each calendar quarter and totaling over the performance period.
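The deaths-avoided calculation described above amounts to a per-quarter subtraction and a sum. A minimal sketch with invented quarterly counts:

```python
# Hypothetical quarterly counts for one cohort (all numbers invented).
quarters = [
    # (expected deaths, observed deaths) per calendar quarter
    (1200, 1150),
    (1180, 1105),
    (1210, 1120),
    (1190, 1080),
]

# Deaths avoided: expected minus observed, summed over the performance period.
deaths_avoided = sum(expected - observed for expected, observed in quarters)
print(deaths_avoided)  # -> 325
```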
To compare results against non-QUEST hospitals with consistent data over the long run, non-QUEST hospitals with data in every quarter of the baseline and performance periods were examined separately. Imposing this constraint had no material effect on the results displayed in Figure 1.
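The 4-quarter moving average used to smooth the quarterly O/E series is straightforward to reproduce. A sketch with invented O/E values (not the study's data):

```python
# Hypothetical quarterly observed/expected mortality ratios (values invented).
oe = [1.07, 1.04, 1.01, 0.98, 0.93, 0.88, 0.84, 0.80]

# 4-quarter moving average: each point averages the current quarter
# with the three preceding quarters.
window = 4
moving_avg = [sum(oe[i - window + 1 : i + 1]) / window
              for i in range(window - 1, len(oe))]
print([round(m, 4) for m in moving_avg])
```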
The QUEST participants understood that expected mortality depends in part on the number of comorbid conditions coded in the discharge abstract. This raises the possibility that differences in O/E ratios between the QUEST cohort and the controls reflect, at least in part, differences in the rate of coding secondary diagnoses. Figure 2 shows that the rate of recording secondary diagnosis codes was indeed higher among QUEST hospitals at the outset, but over time the non-QUEST facilities increased their use of secondary diagnosis codes at a faster pace than the QUEST cohort, and by the third quarter of 2011 the gap had nearly closed.
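A matched-hospital comparison of secondary-diagnosis coding can be run as a paired t test. The sketch below uses invented per-pair averages (in practice scipy.stats.ttest_rel does the same computation and also returns a p value):

```python
import math
from statistics import mean, stdev

# Hypothetical secondary diagnoses per patient for matched hospital pairs
# (QUEST hospital vs its matched non-QUEST control; values invented).
quest =     [9.1, 8.7, 9.5, 8.9, 9.3, 9.0, 8.8, 9.4]
non_quest = [8.2, 8.5, 8.9, 8.1, 8.8, 8.6, 8.4, 8.7]

# Paired t statistic: mean pairwise difference divided by its standard
# error, testing H0 that the mean difference is zero.
diffs = [q - n for q, n in zip(quest, non_quest)]
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))
print(f"t = {t_stat:.2f} on {len(diffs) - 1} df")
```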
Figure 3 depicts the results of a paired t test of matched hospitals, which tests the null hypothesis that the difference in the number of secondary diagnoses per patient between the QUEST cohort and the non-QUEST controls is zero (heavy horizontal line at zero).
Multivariate Analysis
All 3 alternative specifications of the regression model demonstrated statistically significant QUEST effects for risk-adjusted mortality.
Variable          Version a   Version b   Version c

Model 1 (R² = 0.244 [a], 0.638 [b])
  QUEST_Flag      0.18        0.09        0.12
  QUEST_Trend     0.00        0.00        0.00

Model 2 (R² = 0.244 [a], 0.638 [b])
  QUEST_Y2008     0.20        0.10        0.13
  QUEST_Y2009     0.20        0.12        0.14
  QUEST_Y2010     0.09        0.04        0.00
  QUEST_Y2011     0.21        0.05        0.08

Model 3 (R² = 0.244 [a], 0.639 [b])
  Charter_Y2008   0.20        0.10        0.13
  Charter_Y2009   0.23        0.13        0.16
  Charter_Y2010   0.10        0.04        0.00
  Charter_Y2011   0.19        0.02        0.06
  Class09_Y2009   0.05        0.04        0.06
  Class09_Y2010   0.03        0.01        0.02
  Class09_Y2011   0.25        0.13        0.17
  Class10_Y2010   0.11        0.05        0.02
  Class10_Y2011   0.23        0.05        0.09

Dependent variable: risk-adjusted mortality rate. Version a includes hospital control traits, version b replaces them with hospital fixed effects, and version c uses hospital random effects.
Model 1
The QUEST hospitals have a risk-adjusted mortality rate 0.18% lower than non-QUEST hospitals, assuming the same hospital mix. Given an overall mortality rate of approximately 2%, the absolute difference of 0.18 percentage points is a relative difference of approximately 9% in favor of QUEST hospitals. A similar but smaller effect was seen in model 1b (hospital fixed effects) and model 1c (hospital random effects). Nevertheless, we found no statistically significant linear relationship between risk-adjusted mortality and the duration of a hospital's participation in QUEST.
Model 2
The interactive variables between the QUEST flag and year dummies allow the QUEST trend effect to be nonlinear. In each year, QUEST hospitals performed better than non-QUEST hospitals. In model 2a, this effect was statistically significant in each year. However, there was no progressive QUEST effect over time in either the fixed-effects (2b) or random-effects (2c) version.
Model 3
In this model, the QUEST effect is examined by each class.
Charter members performed significantly better than non-QUEST
hospitals in all 4 years, but there is no progressive QUEST effect
over time. Both classes 2009 and 2010 started with no significant
difference from non-QUEST hospitals but ended performing better in 2011 in model 3a, suggesting a strong lag effect for mortality reduction associated with QUEST membership. Interestingly,
controlling for latent effects in models 3b and 3c markedly reduced the size of the coefficient for the class of 2010 in the final year of this study.
DISCUSSION
The study found that hospitals participating in Premier's QUEST Collaborative reduced the O/E mortality ratio as much as 10% more than a matched group of non-QUEST Premier hospitals, a group committed to quality improvement with access to many of the tools QUEST participants had. The matching result was corroborated in the formal multivariate analysis.
We found no evidence that these improvements in O/E mortality could be attributed solely to improved coding and more precise documentation; rather, the expected rate of mortality is increasing across the board for all hospitals as documentation improves. This is likely due to pressure for accurate coding as hospital payments become increasingly tied to risk-adjusted outcomes.
In addition, focused discussions with QUEST members helped to identify factors that contributed to the success of the collaborative. Several themes emerged, corresponding to our model for collaboration. (1) Building will: transparency of all results and data to everyone in the collaborative has been cited as a source of urgency, and participants once complacent in the assumption that they were providing the highest possible care have often been confronted with a different reality. (2) Sharing ideas: because data are collected in a common format, participants and staff are able to identify the islands of top performance in specific domains that serve as models. (3) Collaborative execution: organized sprints and collaboratives provide a structured means to facilitate improvement, much like the management systems of successful enterprises.
In addition, we have corroborating evidence that hospitals took
concrete actions to lower their O/E ratios: O was driven down by a
number of interventions that took place in a matter of months,
such as advances in sepsis treatment and better discharge management that led to greater use of hospice care.
© 2015 Wolters Kluwer Health, Inc. All rights reserved.
CONCLUSIONS
Given the relative improvement of the QUEST collaborative participants compared with their peers, this study has potential policy
implications with regard to transparency, promotion of success
tactics, providing a platform for structured improvements, and
goal setting. This is evident when contrasting QUEST with our previous effort to establish a major improvement collaborative, HQID. Although QUEST shared many of the features of the Centers for Medicare and Medicaid Services/Premier HQID (commitment of senior leadership, collection of a common data set via standard tools, and transparency), it differed from HQID in several important ways, which we feel may have contributed to its success. Whereas HQID used an educational portal to post improvement ideas, QUEST actively sought out islands of excellence and actively promoted success tactics through a variety of mechanisms. In addition, whereas HQID relied for the most part on hospitals to structure their own improvement efforts internally, QUEST provided a structured platform for collaborative execution. Finally, HQID was structured as a tournament with a relative top-performance goal (the highest 25th percentile), ensuring by its nature that 75% of the participants would not be top performers. QUEST used a fixed goal set in advance, a condition in which every participant could achieve the goal, fostering an environment of mutual assistance and collaboration. All of these factors may have played a role in improving mortality outcomes in QUEST; further research is needed to document the direct causal pathway.
REFERENCES
1. Grossbart SB. What's the return? Assessing the effect of pay-for-performance initiatives on the quality of care delivery. Med Care Res Rev. 2006;63:29S-48S.
2. Heckman J. Sample selection bias as a specification error. Econometrica. 1979;47:153-161.
3. Hulscher ME, Schouten LM, Grol RP, et al. Determinants of success of quality improvement collaboratives: what does the literature show? BMJ Qual Saf. 2013;22:19-31.
4. Jha A, Orav EJ, Epstein AM. The effect of financial incentives on hospitals that serve poor patients. Ann Intern Med. 2010:299-306.
5. Jha A, Joynt KE, Orav EJ, et al. The long-term effect of premier pay for performance on patient outcomes. N Engl J Med. 2012;366:1606-1615.
6. Kahn CN III, Ault T, Isenstein H, et al. Snapshot of hospital quality reporting and pay-for-performance under Medicare. Health Aff. 2006;25:148-162.