
Error Detection in Anatomic Pathology

Richard J. Zarbo, MD, DMD; Frederick A. Meier, MDCM; Stephen S. Raab, MD

● Objectives.—To define the magnitude of error occurring in anatomic pathology, to propose a scheme to classify such errors so their influence on clinical outcomes can be evaluated, and to identify quality assurance procedures able to reduce the frequency of errors.

Design.—(a) Peer-reviewed literature search via PubMed for studies from single institutions and multi-institutional College of American Pathologists Q-Probes studies of anatomic pathology error detection and prevention practices; (b) structured evaluation of defects in surgical pathology reports uncovered in the Department of Pathology and Laboratory Medicine of the Henry Ford Health System in 2001–2003, using a newly validated error taxonomy scheme; and (c) comparative review of anatomic pathology quality assurance procedures proposed to reduce error.

Results.—Marked differences in both definitions of error and pathology practice make comparison of error detection and prevention procedures among publications from individual institutions impossible. Q-Probes studies further suggest that observer redundancy reduces diagnostic variation and interpretive error, which ranges from 1.2 to 50 errors per 1000 cases; however, it is unclear which forms of such redundancy are the most efficient in uncovering diagnostic error. The proposed error taxonomy tested has shown a very good interobserver agreement of 91.4% (κ = 0.8780; 95% confidence limit, 0.8416–0.9144), when applied to amended reports, and suggests a distribution of errors among identification, specimen, interpretation, and reporting variables.

Conclusions.—Presently, there are no standardized tools for defining error in anatomic pathology, so it cannot be reliably measured nor can its clinical impact be assessed. The authors propose a standardized error classification that would permit measurement of error frequencies, clinical impact of errors, and the effect of error reduction and prevention efforts. In particular, the value of double-reading, case conferences, and consultations (the traditional triad of error control in anatomic pathology) awaits objective assessment.

(Arch Pathol Lab Med. 2005;129:1237–1245)

Accepted for publication May 26, 2005.
From the Department of Pathology and Laboratory Medicine, Henry Ford Health System, Detroit, Mich (Drs Zarbo and Meier); and Department of Pathology and Laboratory Medicine, University of Pittsburgh Medical Center/Shadyside, Pittsburgh, Pa (Dr Raab).
Presented at the College of American Pathologists Special Topic Symposium, Error in Pathology and Laboratory Medicine: Practical Lessons for the Pathologist, Phoenix, Ariz, September 20, 2004.
The authors have no relevant financial interest in the products or companies described in this article.
Reprints: Richard J. Zarbo, MD, DMD, Department of Pathology, Henry Ford Hospital, 2799 W Grand Blvd, Detroit, MI 48202.

Because of its complex nature, anatomic pathology is prone to error at many steps throughout the testing process. In the complex series of production events on the way to a surgical or cytopathology report, there are few instances of the "autonomation," mechanized error detection, and safety-inducing "forcing functions" common to industrial production.1 Professional and technical human interactions are the usual source of quality control and error detection.

A first prerequisite to reducing the incidence of error in anatomic pathology is a reasonably complete and generally consistent set of definitions of the types of problems encountered, so that their magnitude can be gauged. Until now, no such taxonomy has existed. Second, standardized measurement tools, using these definitions, need to not only measure the rates of errors and provide a standardized assessment of their clinical impact, but also test the effects of error reduction and prevention efforts. Our review of the literature discovered no comparable assessment tools; the absence of these tools makes objective assessment of strategies to prevent surgical and cytopathology error impossible. Some strategies have been used to prevent errors in surgical pathology reports. We list these strategies that have been drawn from 2 sources, a relatively small body of published studies on error from individual institutions and the comparative information gleaned from College of American Pathologists (CAP) Q-Probes studies, but have found no reliable assessment of their worth. Two of us (R.J.Z. and F.A.M.) took a step toward providing the first of the 2 missing prerequisites by applying an error taxonomy to our own practice experience in the Department of Pathology and Laboratory Medicine in the Henry Ford Health System (Detroit, Mich).

The 3 of us are now attempting to validate this system in a consortium of institutions studying pathology error and patient safety through a grant funded by the Agency for Healthcare Research and Quality, with the intent of developing the second prerequisite, standardized error monitors. With such monitors in hand, the clinical impact of errors could then be assessed.

DESIGN

Anatomic Pathology Error Detection Studies
Peer-reviewed literature was searched via PubMed for reports from single institutions to examine the mechanisms of error discovery. In each publication, we looked at criteria for defining error and assessing its "severity," whether the study was prospective or retrospective, the number of cases reviewed to detect the errors, and the composition of the denominator of cases used to define the error rates.

From the CAP Q-Probes studies of multiple institutions, we examined 4 measures of error: the types and rates of errors detected in amended surgical pathology reports were compared with the types of internal review used in participating departments, and the error types and rates detected by second-pathologist review after case sign-out were compared from usual laboratory quality assurance (QA) and surveillance sources.

Error Taxonomy and Validation

We used a recently developed and validated tool, a taxonomy of anatomic pathology error, to evaluate in a standardized way amended report defects in surgical pathology diagnostic information uncovered in the Department of Pathology and Laboratory Medicine of the Henry Ford Health System in 2001–2003. The approach to error investigation and documentation using that tool is now described (Table 1).

Table 1. Approach to Error Investigation and Documentation
● Type of error (see Figure)
● Timing of discovery
● Discoverer
● Report revision
● Mechanism of discovery
● Outcome of error: initial vs late

[Figure. Error types and test-cycle phases.]

We define 4 general types of errors (Figure), with 3 subtypes in the category of defective interpretation. (1) The first subtype is a false-negative diagnosis or undercall of the extent or severity of a lesion. (2) The second is a false-positive diagnosis or an overcall. (3) The third subtype is misclassification. For example, there is neither an undercall nor an overcall when the pathologist incorrectly labels an entity in the proper category of disease (eg, fibrosarcoma rather than malignant fibrous histiocytoma); the alternative designation alters neither the diagnostic primary classification (eg, malignancy) nor secondary diagnostic features (eg, high grade, negative margin) among the characteristics summarized in the report.


The second major category of error is that of defective identification of patient, tissue, or laterality. Such misidentification can take place at any step in the diagnostic process, but typically involves the preanalytic phase of testing. This can involve misidentification of the patient, the origin of the tissue sample itself (eg, stomach vs colon), the anatomic location (eg, ascending vs sigmoid colon), or the laterality of the tissue (eg, right vs left breast).

The third major category of error consists of specimen defects. These include, first of all, lost specimens, but also specimens of inadequate volume or size, as the specimens are submitted in the preanalytic phase, and inadequate gross description or mismeasurement, extraneous tissue, or inadequate sampling occur in the analytic phase. Analytic-phase specimen defects also include specimens whose representativeness is inadequate or less than optimal at the tissue, block, or slide level, because of an action or inaction taken or not taken in the surgical pathology and histology laboratories. Failure to perform pertinent ancillary studies that would have initially revealed a correct diagnosis is also classified among the subtypes of analytic specimen defects.

The fourth major category of error is that of a defective report. This includes reports with erroneous or missing nondiagnostic information (eg, clinician name, date of procedure), dictation or typing errors, and report format or upload errors. The latter defects arise from the use (or misuse) of computer systems. Defective report errors typically occur in the postanalytic phase of anatomic pathology testing, although absent or incorrect information may have arisen in the preanalytic phase without having been detected or addressed by anyone in pathology until (or after) the report's preparation and publication.

This scheme incorporates documentation of the type of change made, usually in an amended report, to revise the error after it is discovered. These amendment options include changes in (1) the primary diagnostic characteristics (eg, change from negative to positive, benign to malignant, or inadequate to adequate); (2) the secondary diagnostic characteristics (eg, tumor grade, stage, margin, or node status); (3) diagnostic reclassification (eg, the fibrosarcoma changed to malignant fibrous histiocytoma in which the primary or secondary diagnostic change does not alter the prognostic impact of the classification); (4) patient or specimen reidentification; (5) report of additional specimen sampling that had resulted in the changed report; and (6) other edits of the reports that do not change primary or secondary diagnostic information, patient or specimen identification, or involve specimen characteristics.

Timing of discovery segregates into those cases detected before sign-out (before the case is finalized) and those detected after sign-out (after a report has been produced). For changes detected before sign-out, we define 4 mechanisms of discovery: the effects of (1) additional information or material; (2) intradepartmental review before sign-out or double-read of the current case; (3) preparation for or presentation at a conference or at review with a clinician; and (4) an external consultation.

For the revisions after sign-out, we list 5 mechanisms: (1) the responsible pathologist's review of a recent case without additional information or material; (2) the responsible pathologist's review of a recent case with additional information or material but without clinician prompting; (3) at preparation or presentation at conference with clinicians (eg, tumor board); (4) clinician-initiated review or reconsideration of a case; and (5) as the result of an external consultation.

The third and last part of the error classification attempts to standardize assessment of outcomes related to anatomic pathology error. Again, temporal considerations force a subdivision of the evaluation into an initial assessment at the time of error discovery and a follow-up assessment 6 months after discovery. The taxonomy of outcome types divides the consequences of results into (a) no impact on care, (b) an impact on care with minimal harm (no morbidity), (c) minor harm (minor morbidity), (d) moderate harm (moderate morbidity), or (e) major harm (major morbidity or death) (Table 2). Minor morbidity is defined as effects and events that can be demonstrated objectively (fever, thrombocytopenia, wound erythema, swelling, etc), but which do not require hospitalization or surgical intervention. Moderate morbidity includes effects and events that require hospitalization or surgical intervention but not major morbidity, defined as loss of an organ or the function of an organ system (eg, arm/limb, eye/sight, ear/hearing, speech, or the uterus of a woman of reproductive age) or loss of life.

Table 2. Error Outcome Severity Assessment*
No impact on care
  No harm: erroneous message not transmitted or received
  Near miss: erroneous message received but ignored or disregarded
Minimal harm (no morbidity)
  Delay in diagnosis only (<6 mo)
  Unnecessary noninvasive further diagnostic efforts (blood, radiograph, computed tomography)
  Delay in therapy only (<6 mo)
  Unnecessary therapy based on diagnostic error without morbidity
Minor harm (minor morbidity)
  Delay in diagnosis only (>6 mo)
  Unnecessary invasive further diagnostic efforts (biopsy, angiogram)
  Delay in therapy with minor morbidity
Moderate harm (moderate morbidity)
  Moderate morbidity due to otherwise unnecessary diagnostic efforts
  Moderate morbidity due to otherwise unnecessary therapeutic efforts
Major harm (major morbidity)
  Dismemberment or loss of an organ or function of an organ system due to unnecessary diagnostic efforts
  Dismemberment or loss of an organ or function of an organ system due to unnecessary therapeutic efforts
  Death
* Minor morbidity indicates effects and events that can be demonstrated objectively and that do not require hospitalization or surgical intervention (eg, fever, thrombocytopenia, wound erythema, swelling); moderate morbidity, effects and events that require hospitalization or surgical intervention, but do not result in dismemberment or loss of life; and major morbidity indicates dismemberment, loss of an organ or the function of an organ system (eg, arm/limb, eye/sight, ear/hearing, speech, or the uterus of a woman of reproductive age).
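The classification just described lends itself to a simple structured record. The sketch below is only an illustration of how the taxonomy's fields could be captured for tallying; it is not part of the published instrument, and the class, field, and enumeration names are our own, drawn from Table 1, the four error categories, and the severity scale of Table 2.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class ErrorType(Enum):
    """Four general error categories defined by the taxonomy."""
    DEFECTIVE_INTERPRETATION = "interpretation"   # false negative, false positive, misclassification
    DEFECTIVE_IDENTIFICATION = "identification"   # patient, tissue, or laterality
    SPECIMEN_DEFECT = "specimen"                  # lost, inadequate, extraneous tissue, sampling
    DEFECTIVE_REPORT = "report"                   # nondiagnostic information, typing, format/upload


class DiscoveryTiming(Enum):
    BEFORE_SIGN_OUT = "before sign-out"   # case not yet finalized
    AFTER_SIGN_OUT = "after sign-out"     # report already produced


class Severity(Enum):
    """Outcome severity levels paraphrased from Table 2."""
    NO_IMPACT = 0        # no harm or near miss
    MINIMAL_HARM = 1     # delay <6 mo or unnecessary noninvasive work-up, no morbidity
    MINOR_HARM = 2       # delay >6 mo or unnecessary invasive work-up, minor morbidity
    MODERATE_HARM = 3    # morbidity requiring hospitalization or surgery
    MAJOR_HARM = 4       # loss of organ/function or death


@dataclass
class ErrorRecord:
    """One documented error, following the fields listed in Table 1."""
    error_type: ErrorType
    timing: DiscoveryTiming
    discoverer: str                 # eg, caregiver, pathologist, clerical staff
    report_revision: str            # eg, primary diagnosis, secondary characteristics
    discovery_mechanism: str        # eg, tumor board, intradepartmental review
    initial_outcome: Severity
    outcome_at_6_months: Optional[Severity] = None   # follow-up assessment


# Hypothetical example: an identification error caught at tumor board after sign-out.
example = ErrorRecord(
    error_type=ErrorType.DEFECTIVE_IDENTIFICATION,
    timing=DiscoveryTiming.AFTER_SIGN_OUT,
    discoverer="pathologist",
    report_revision="patient or specimen reidentification",
    discovery_mechanism="tumor board review",
    initial_outcome=Severity.MINIMAL_HARM,
)
```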
Finally, from the publications reviewed, an inventory of putative error prevention strategies was accumulated.

RESULTS

I. Literature Review (Table 3)

Table 3. Literature Summary of Anatomic Pathology Errors*
Discrepancy Error Rate | No. of Cases Reviewed | Review Method | Source, y
Single Institutions
  0.26% | 5397 SP | Prospective slide double-read | Safrin and Bark,14 1993
  0.2% | Unknown | Questionnaire, 202 pathologists; self-reported errors during 1-y period | Furness and Lauder,15 1997
  0.1% | 5000 SP | Retrospective blinded review | Renshaw et al,17 2003
  9% changed diagnosis, 10% refined diagnosis | ... | Conference case review | McBroom and Ramsay,6 1993
Multiple Institutions
  6.7% aggregate, 5.1% median | 6186 SP and CY | Q-Probes, prospective data collection of up to 100 cases in each laboratory with second pathologist review; 74 laboratories, various QA sources | Raab et al,16 2005

Amended Report Rate | Amended Cases | Review Method | Source, y
Multiple Institutions
  0.12% | 3147, amended from 1 667 547 SP cases | Q-Probes, prospective data collection of up to 50 amended reports in each laboratory; 359 laboratories, various QA sources | Nakhleh and Zarbo,13 1998
  3.4% | 208, amended from 6186 cases | Q-Probes, prospective data collection of up to 100 cases in each laboratory with second pathologist review; 74 laboratories, various QA sources | Raab et al,16 2005
* SP indicates surgical pathology; CY, cytopathology; and QA, quality assurance.

A. Single-Institution Studies. 1. Double Reader: No Specialized Skills. Lind et al2 from the Creighton University pathology laboratory described different detection rates of prospective compared to retrospective review involving pathologists with no specialized skills. Using a 100% prospective, double-read process for 2694 diagnostic biopsies, the investigators found an error rate of 14.1% (380 cases). Based on the denominator of all cases reviewed, 1.2% were major diagnostic errors. An additional 3.9% of cases had diagnostic discrepancies not considered major, 7% had minor errors, 1.9% had clerical errors, and 12.9% had no clinical significance. The overall error detection rate of 13% from retrospective review of 480 random cases was similar to the detection rate of the prospective review, with a 1.7% frequency of major errors and an 11.5% rate of errors with no clinical significance.

2. Double Reader: Specialized Skill Sets. At Henry Ford Hospital, we have documented the effectiveness of 100% prospective review of all breast cases by a panel of pathologists who have some additional expertise in breast pathology.3 The baseline error rate in terms of amended reports in breast pathology from January 2002 to July 2003 was derived under circumstances of routine practice (ie, voluntary intradepartmental consultation at pathologists' individual discretion, presentation of malignancies at weekly breast tumor board, and clinician-initiated reviews). During this period, 78 000 surgical pathology cases were seen in the department and 37 amended reports for revised diagnoses were issued. Of these revised diagnoses, 5 (13.5%) were breast cases. Three were false-negative biopsies and 2 were false-positive diagnoses. From August 2003 through June 2004, the process of breast pathology sign-out was changed to daily 100% prospective review by the panel. During this period, 36 000 surgical pathology cases were accessioned, and 18 amended reports were issued for revised diagnoses, but none were breast related. Although no breast diagnoses were changed, the retrospective tumor board review process detected other errors resulting in 4 amended reports regarding errors in secondary diagnostic characteristics related to the assessment of stage (2 cases), margins (1 case), and side of involvement (1 case).

3. Correlation Review. Although cervical cytohistologic correlations are mandated for American laboratories by the Clinical Laboratory Improvement Amendments of 1988,4 the effect on interpretive error reduction is unproven after 6 years of continuous tracking by the participants in the CAP Q-Tracks program, in which no improvement trends have been documented.5

4. Conference Review. In 1993, McBroom and Ramsay6 collected data on 416 cases reviewed in 8 conferences during a 14-week period and found that 19% of the diagnoses, or 190 per 1000 cases, were changed after histologic review. Amended diagnoses were split almost equally between altered diagnoses (9%) and refined diagnoses (10%). Eighty-eight percent of the changes were attributed to specialist expertise, and 4.8% were caused by additional information provided by clinicians. Patient management was assessed as unaffected in 92.5% of the 416 conference reviews, 3.8% (n = 16) resulted in major management changes, and 2.9% (n = 12) in minor management changes.6
5. Institutional Review. Institutional consultation of pathology material from patients newly referred to practitioners whose practices the reviewing pathologists support is another time-honored strategy for detecting past errors and preventing future ones. It offers a double-read of cases that have been diagnosed elsewhere before clinical second opinion is rendered or therapy is initiated.

The Hershey Medical Center experience in reviewing case material from all organ systems is that this QA activity produced a 9.1% diagnostic discrepancy rate. These diagnostic differences changed therapy or evaluation in 5.8% of reviewed cases.7 In the published Johns Hopkins Hospital experience of more than 6000 cases, 1.4% of diagnoses changed in ways that caused major modifications of therapy or prognosis.8 However, in both the Hershey and Hopkins series, long-term patient follow-up (ie, outcomes) supported the original pathologist's diagnosis in roughly 8% of cases, rather than the institutional review pathologist's second opinion.

Series of organ-specific institutional reviews by specialist pathologists have shown similar rates of changed major diagnoses following institutional review for head and neck (7%),9 prostate (1.3%),10 gynecologic tract (2%),11 and neuropathology (8.8%).12

6. Single-Institution Anatomic Pathology Error Detection: Summary. Because of variation in definitions and detection methods, the range of error rates in anatomic pathology reported from single institutions is wide, from 1 to 90 per 1000 surgical pathology cases published in the peer-reviewed pathology literature (see Table 3). Variation in definitions of errors and detection techniques makes comparing performance among institutions impossible, so one cannot draw exportable conclusions about successful error reduction strategies. In addition to classifications of error disagreeing from one study to another, calculated rates vary as to whether diagnoses were "changed" or "refined" and according to method of detection, that is, whether cases were derived from pathologists' self-reporting, prospective double-reading, retrospective blinded review, or case conference review (Table 3).6,13–17

B. Multiple-Institution Anatomic Pathology Error Detection. The first multi-institutional study of amended anatomic pathology reports, carried out in 1996 by the CAP Q-Probes quality improvement program, retrospectively reviewed more than 1.66 million surgical pathology case accessions in 359 laboratories.18 The Q-Probes study demonstrated 0.15% median and 0.19% mean rates of changed or amended reports, or 1.5 and 1.9 amended cases per 1000 surgical pathology accessions.18 When stratified by slide review practices, no slide review policy resulted in 1.4 amended reports per 1000 cases; active slide review before sign-out reduced that to 1.2 per 1000, whereas active slide review after case sign-out resulted in a higher amended report rate of 1.6 per 1000. These various surveillance methods resulted in error detection rates that were not statistically significantly different. A second 2003 Q-Probes study of anatomic pathology discrepancy rates, reported from 74 laboratories, collected data prospectively, examining 6186 surgical and cytopathology cases that were reviewed by a second pathologist after the sign-out of the pathology report. The median error rate detected in this manner was 5.1%, or 51 per 1000 specimens (Table 3).16

The Q-Probes study of 1996 examined amended reports only. The 2003 Q-Probes error study looked at discrepancy rates turned up by routine QA activities in surgical and cytopathology, including internal reviews, such as cytohistologic correlations, random review, frozen section–permanent section correlation, extradepartmental consult, intradepartmental consult, conferences, and at clinician request.16 Up to 100 surgical pathology and cytopathology cases, 85% of which were surgical pathology cases, were reviewed after case sign-out in each institution.16

From this review, 1 (5%) in 20 reports had a defect identified. This translates to 50 000 defects per million, or performance between the 1 and 2 sigma levels. The vast majority of errors detected at post–sign-out review were interpretations changed within the same category of disease (48%), followed by categorical (benign-malignant) interpretation discrepancies (21%), typographic errors (18.5%), a change in patient or specimen information (9%), and finally by revision of margin status (3.7%).

From the 1996 Q-Probes study, which focused on amended reports, the overall amended report rate was roughly 1.5 per 1000, or roughly 1500 errors per million.13 This rate is more than 30 times lower than the 51 per 1000 rate found in the 2003 study. In the 1996 study, the majority of errors involved emendation of significant information that might affect patient management and prognosis "other than the diagnosis." The second most common change amended diagnostic information itself, and the third most common emendation was of patient identification. The emendation rates in this 1996 study appear to be due largely to "passive" discovery; that is, they were called to pathologists' attention without being looked for.

In contrast, in the 2003 study, when pathologists reviewed 100 cases after sign-out based on a multitude of QA activities, the "discovery" rate was more than 33 times the rate derived from the 1996 Q-Probes study focused on reports that had been amended. The 74 institutions participating in the 2003 study exhibited a wider range of error detection, from no errors discovered to errors discovered in 21% of retrospectively reviewed cases.16 More active, combined glass slide and report review based on multiple QA initiatives reveals a much higher error rate of 50 errors per 1000 cases, or roughly 50 000 errors per million. Unlike the previous study, more comprehensive review most often led to changes in primary diagnosis rather than to changes of nondiagnostic significant information. These 2 well-designed studies illustrate the significant impact of the method of error detection on the rates of errors found.

1. Extradepartmental Consultation. There are few data on the effectiveness of extradepartmental consultation. The Q-Probes experience is that the extradepartmental consultation process confirms referring pathologists' original diagnoses 70% of the time and provides significant additional information in 16% of cases.19 Both national and local "experts" are most often used to resolve difficult skin, hematolymphoid, breast, and gastrointestinal cases.19
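The per-1000 and per-million figures quoted in this section convert mechanically, while the sigma figure depends on the convention chosen. A minimal sketch, assuming a two-sided, unshifted normal convention (no 1.5-sigma shift) and using only Python's standard library, reproduces the rough correspondence stated above (a 5% defect rate is 50 000 defects per million, falling between the 1 and 2 sigma levels); the function names are ours.

```python
from statistics import NormalDist


def defects_per_million(defect_fraction: float) -> float:
    """Convert a defect fraction (eg, 0.05 for 5%) to defects per million."""
    return defect_fraction * 1_000_000


def sigma_level(defect_fraction: float) -> float:
    """Two-sided, unshifted sigma level: the z value such that P(|Z| > z)
    equals the defect fraction."""
    return NormalDist().inv_cdf(1 - defect_fraction / 2)


for label, rate in [("2003 Q-Probes post-sign-out review (5%)", 0.05),
                    ("1996 amended-report rate (1.5/1000)", 0.0015)]:
    print(f"{label}: {defects_per_million(rate):,.0f} per million, "
          f"~{sigma_level(rate):.1f} sigma")
# 2003 Q-Probes post-sign-out review (5%): 50,000 per million, ~2.0 sigma
# 1996 amended-report rate (1.5/1000): 1,500 per million, ~3.2 sigma (under this convention)
```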
2. Clinical Impact of Anatomic Pathology Error. Only the second of the Q-Probes studies examined the clinical consequences of the errors detected. Of 415 discrepant specimens detected in the 2003 Q-Probes study, 379 were evaluated for effect on patient management (Table 4). Pathologist participants described 5.3% of these errors as having had a clinically significant impact on patient management.16 The outcome assessment by the pathologists participating in this study indicated that only about 1 in 20 pathology errors had, in their estimation, "moderate or marked" impact on clinical management, with nearly 95% resulting in "mild" outcome effect, "near misses," or "no harm."16 The validity of this estimate is limited by a number of uncontrolled factors, which include potential pathologist observer bias and lack of a mandated rigorous investigation. In the context of medical outcomes, chart review is an essential element of assessment, which was not carried out in the Q-Probes studies. Interestingly, this 2003 Q-Probes study showed no statistically significant difference in error frequency by organ type.

Table 4. Literature Summary of Anatomic Pathology Errors With Significant Clinical Impact
Significant Error Rate | No. of Denominator Cases Reviewed | Review Method | Source, y
Single Institutions
  0.13% | 5397 | Prospective slide double-read | Safrin and Bark,14 1993
  4.1% | 416 | Conference case review | McBroom and Ramsay,6 1993
  0.08% | 5000 | Retrospective blinded review | Renshaw et al,17 2003
Multiple Institutions
  5.3% | 379 errors detected | Q-Probes, prospective data collection of up to 100 cases in each laboratory with second pathologist review; 74 laboratories, various quality assurance sources | Raab et al,16 2005
  0.32% | 6186 | Q-Probes | Raab et al,16 2005

Others have calculated clinical impact using the number of specimens assessed, rather than the number of errors or discrepant specimens detected, as the denominator. This variation in denominators by itself contributes to the wide range of 0.8 to 41 per 1000 clinically significant anatomic pathology errors derived from reports of single institutions.6,14,16,17 The 2003 Q-Probes study would have had a clinically significant error rate of 0.32% (or 3.2/1000) if the total number of cases reviewed were used for the denominator (Table 4). The confidence in these numbers is further eroded by the lack of standard criteria for outcome assessment.

II. Taxonomy of Error

The wide range of error rates derived from studies of diverse design is not a useful standard against which to make comparisons or with which to judge the effect of interventions on error reduction. One of the authors' main goals in a multi-institutional investigation (funded by the Agency for Healthcare Research and Quality to study pathology errors and opportunities to improve patient safety) is to develop standardized measurement tools for assessing anatomic pathology errors. As part of this study, we developed and are validating for interinstitutional comparison a consistent, standardized, relatively comprehensive taxonomy of error in surgical pathology. This scheme started from errors that prompted surgical pathology report emendation. It then looked to see when in the diagnostic process the events causing errors occurred and what the errors were; that is, did it involve diagnostic interpretation, patient or specimen identification, specimen attributes, or report production? It addressed the type of error, the timing of the discovery, the discoverer, and the type of report revision; widened the scope of the investigation to deal with the mechanism of discovery; and provided a classifying framework for evaluating outcomes of error. The outcome assessment is designed to be performed initially, at the time of error discovery, and subsequently at 6 months.

The Figure lists the types of anatomic pathology errors. The central diagram of the figure illustrates the interrelationships of error predicated on root cause analysis. Given the focus on amended reports, some errors that arise from nondiagnostic information defects in the preanalytic aspect of the test cycle may be evident, if initially uncorrected, in the postanalytic test phase as a defective report requiring emendation. The categories of discoverer could be the caregiver, the technical support or clerical support personnel, the pathologist, and/or an instrument/computer with designed smart logic. Mechanisms of discovery include those examined in the 2003 Q-Probes studies (eg, routine QA practices; preparation and presentation at clinical conferences and tumor boards; and intradepartmental, internal, extradepartmental, and external consultations). Assessing outcomes has further impressed upon us the importance of agreeing on shared indices of severity of outcomes, which are now incorporated into the assessment.

Validation of the Taxonomy

Our initial evaluation of error identification and classification of 430 amended reports derived from 150 000 surgical pathology case accessions using the error taxonomy scheme described herein has shown very good interobserver agreement in error classification of 91.4% (κ = 0.8780; 95% confidence limit, 0.8416–0.9144). In our practice setting, there is a fairly consistent amended report frequency of 2.6 to 3.6 per 1000 during a 3-year study period. More than 1 error has been found in 2% to 10% of these amended reports. Tumor board reviews generated 15% to 18% of the amended reports, and most of these were changes in primary diagnoses based on retrospective clinicopathologic review. The tumor board review process was the most efficient discovery mechanism for interpretive errors, detecting roughly half to three quarters of the interpretive errors documented in amended reports.

Importantly, the most common error type resulting in amended reports was that of wrong identification (27%–38%) related to patient, tissue, laterality, or sample location. The second most common error was a defective report (28%–44%) corrected for erroneous or missing nondiagnostic information, dictation or typing error, or computer format mistake. Diagnostic misinterpretations accounted for a narrower range of 23% to 28% of the amended reports. Defective specimens were the least common cause of error, accounting for 4% to 10% of the amended reports.
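The denominator choice discussed earlier in this section (errors evaluated versus cases reviewed) is simple arithmetic, but it moves the headline figure by more than an order of magnitude. A short sketch using the 2003 Q-Probes numbers quoted above (379 discrepant specimens evaluated, 6186 cases reviewed, 5.3% of evaluated errors judged clinically significant); the function name is hypothetical.

```python
def significant_error_rate(n_significant: int, denominator: int) -> float:
    """Clinically significant error rate, as a percentage of the chosen denominator."""
    return 100.0 * n_significant / denominator


errors_evaluated = 379     # discrepant specimens evaluated for effect on management
cases_reviewed = 6186      # all surgical pathology and cytopathology cases reviewed
significant = round(0.053 * errors_evaluated)   # roughly 20 errors judged clinically significant

print(significant_error_rate(significant, errors_evaluated))  # ~5.3  (% of errors evaluated)
print(significant_error_rate(significant, cases_reviewed))    # ~0.32 (% of cases, ie, 3.2/1000)
```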
III. List of Error Reduction/Prevention Techniques From the Literature

Table 5 provides a list of error reduction and prevention techniques found in the literature.

Table 5. Interpretive Error Reduction Strategies: Types of Observer Redundancy
Double-read: general sign-out with intradepartmental consult
  Voluntary, individual diagnostic thresholds
  Mandated by organ, diagnosis, or percent of cases
  Blinded or public review, by individual or panel
  Selected slides or entire case
Correlation review
Conference/Tumor Board review
Extradepartmental consult
Institutional review (outside cases)

COMMENT

Several review and consultation practices are suggested by the laboratory accrediting agencies to reduce error and excess variation in anatomic pathology interpretation, but they provide no specific methodologies. Such methods would include reviewing previous cytologic and/or histologic material from a patient when examining current material; reconciling and documenting significant disparities in intraoperative consultation compared to final pathologic diagnosis, including intradepartmental consultations in the patient's final report; and documenting and maintaining records of extradepartmental consultations.20 There is no standardization offered of when, how frequently, or how double-reading or second opinion consultations should take place. Furthermore, at this time there are no comprehensive comparisons of the effectiveness of various approaches to double-reading and consultations as a means to reduce error in pathologic interpretation.

In trying to define surgical pathology error, most published studies have focused on diagnostic accuracy. However, our taxonomy finds a wider spectrum of clinically significant errors occurring in surgical pathology with many potential underlying and contributory causes. Examination of the data that the taxonomy uncovers, in root cause analyses, may well reveal that a misdiagnosis (the wrong diagnosis for the patient in question) is less often an indictment of the pathologist's diagnostic acumen than a problem with patient or specimen misidentification. Correcting this sort of error often requires an investigation into tissue identity, at times even resorting to molecular genetic identity testing of embedded tissues. Diagnostic errors may also result from a pathologist making diagnoses on inadequately sampled tissues, either at the gross or microscopic level, when additional material proves diagnostic. Another defective specimen cause of erroneous diagnosis is that of extraneous tissue unappreciated as foreign to that sample. These sample-related errors may reflect on the pathologist's practical judgment, but not on his or her knowledge, skills, and abilities as a histopathologist. Lastly, a clinical misinterpretation may result from a pathologist finalizing a report in which he or she failed to catch in proofreading a significant typographic error that lacks diagnostic fidelity with the intended communication. Another report defect that can end in misdiagnosis is failure of a revised diagnosis report to upload into a hospital's computerized information system. The underlying event contributing to the eventual erroneous report is the primary defect in the unfortunate chain of events that may well be considered as the primary cause in the context of subsequent error-prevention root cause analyses. Analysis of misdiagnoses reveals the complexity of errors that occur in all test phases of the surgical pathology testing process.

Two types of antierror interventions have gained currency, at least in North American pathology departments, namely, double-reading and synoptic reporting. One or more elements of observer redundancy may be built into surgical pathology practice to standardize pathologic interpretation through routine case review and diagnostic consultation (Table 5). The other means of achieving standardization and reducing variation is the use of structured data entry and reporting elements, such as checklists and synoptic output forms. Examples of the latter, especially for resected specimens of malignancies, are rapidly becoming standards of practice across the United States.21,22

The most common approach to reducing interpretive error is obtaining a second pathologist's opinion by double-reading of the glass slides constituting a pathology case. This form of intradepartmental consultation is not usually a standardized procedure and may be variously applied in a voluntary manner based on individual diagnostic comfort thresholds. When double-reading is mandated, the mandate tends to be carried out in a bewildering variety of patterns. The double-reading is commonly applied by percentage of cases, by specific organ system (eg, breast lesions), or in specific diagnostic categories (eg, all newly diagnosed malignancies). The impact of both first- and second-reader expertise, the manner in which the review is conducted, and no doubt other variables influence the effectiveness of such error prevention strategies. The advantage to prospective review appears to be the timing of the error correction before erroneous information is communicated, as errors caught before reporting have no opportunity to harm patients.

Several candidate variables may influence error detection by double-reading. A second reviewer may or may not be blinded to the first reviewer's diagnosis. The second review may be undertaken by an individual or a panel of double-readers. Review may be based on selected slides or the entire complement of slides in a case. Documentation of intradepartmental consultation in the pathologist's final report, although an accreditation requirement, may also vary. Characteristics that trigger double-reviews also have an impact on what the reviews will turn up. Reviews that focus on all new diagnoses of malignancy are directed to minimizing false-positive diagnoses but will find no false-negative diagnoses. Reviews directed to other categories of (benign) diagnoses or that cover all cases in a particular genre can pick up false-negative as well as false-positive diagnoses. Timing of review is another important variable; different mixes of error may be found in cases reviewed before sign-out, in contrast to cases reviewed after sign-out. Most of these review procedures that add an element of redundancy have been adopted in pathology practices on the basis of plausibility (the intuitive likelihood that they will reduce error), but without evidence of relative effectiveness or evaluation of efficiency in terms of time expended and cost incurred per error found.
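The candidate variables just enumerated (blinding, individual versus panel review, slide scope, trigger criteria, and timing) define a small space of double-reading policies. The sketch below merely encodes one such policy as data so that the variant being practiced or studied can be stated explicitly; it is an illustration of the options listed in Table 5 and in the preceding paragraph, not a recommended configuration, and all names and example values are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum


class ReviewTiming(Enum):
    PROSPECTIVE = "before sign-out"
    RETROSPECTIVE = "after sign-out"


class SlideScope(Enum):
    SELECTED_SLIDES = "selected slides"
    ENTIRE_CASE = "entire case"


@dataclass
class DoubleReadPolicy:
    """One explicit combination of the double-reading variables discussed above."""
    blinded: bool                      # is the second reviewer blinded to the first diagnosis?
    panel_review: bool                 # individual second reader or a panel
    slide_scope: SlideScope
    timing: ReviewTiming
    trigger_criteria: list[str] = field(default_factory=list)  # eg, organ, diagnosis, % of cases


# Hypothetical example, loosely modeled on a mandated prospective panel review of
# all breast cases; the blinding and slide-scope values here are assumptions, not
# details reported in the studies cited above.
breast_panel_review = DoubleReadPolicy(
    blinded=False,
    panel_review=True,
    slide_scope=SlideScope.ENTIRE_CASE,
    timing=ReviewTiming.PROSPECTIVE,
    trigger_criteria=["all breast cases"],
)
```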
Some pathology practices are of a size that allows most or all cases of tissue from a specific organ to be directed to subspecialists focused on that organ system for primary review and sign-out. Common areas of partial or full specialization include skin, brain, kidney, liver, and transplant pathology. Full specialization is a cultural system whose adoption greatly influences professional staffing, departmental schedules, and resident education.23 The simplicity of a single reader has advantages of a pathologist likely to have both a close working relationship and enhanced communication with specialist clinicians and a more complete understanding of specific clinical problems and pathologic correlations that material from the organ system may present. Many clinicians view this approach as the gold standard. A single reader with specialized skill sets using standardized terminology and diagnostic criteria has the advantage of the standardization of one, whereby in the words of Elliott Foucar,24 "the individuality (variation) of many pathologists is reduced to the individuality of one pathologist." However, this approach usually comes at the price of staff size and therefore increased direct cost compared to a generalist model. There are no data that demonstrate that specialist sign-out reduces error in pathology.

Correlation reviews present an opportunity to assess diagnostic accuracy by comparing diagnoses between different sampling modalities on the same patient, whether the specimens are synchronously or metachronously obtained. Papanicolaou-stained smears and cervical biopsies are examples of frequent specimens collected at the same time; fine-needle aspiration cytology and resection specimens are frequent comparable samples, with the fine-needle aspiration and resection done at different times. There are no data indicating that these legally mandated cytohistologic comparisons reduce diagnostic errors. Correlations may also be made by integrating information from different testing modalities (eg, biopsy and cytogenetics or molecular genetics). Correlations between morphologic diagnoses and newer molecular analyses have opened a new avenue for error detection; however, the effectiveness of this sort of correlative QA procedure has not yet been documented.

Conference and tumor board reviews constitute another time-honored process in which pathologists review information content of reports and/or review entire glass slide interpretations, often correlated with new clinical and radiographic information. Conferences, especially tumor boards, are often pointed out as QA opportunities that allow pathologists to review accuracy of diagnoses and other report information in a clinical dialog. The effectiveness of this surveillance process depends on the type of cases brought to conference, the quality of the clinical interaction provided, and the pathologist's expertise, as well as completeness of the conference-related review and whether or not the review is done by a pathologist different from the initial interpreter. Our experience applying the error taxonomy to our surgical pathology practice appears to affirm the efficiency of the tumor board review process to uncover interpretive errors documented in amended reports.

Whereas clinicians look to their particular pathology departmental expert as the gold standard, pathologists often view extradepartmental experts of choice as the ultimate measures of diagnostic accuracy. It is widely recognized that expert opinions may also differ based on the diagnostic criteria and threshold of the chosen expert. For that reason, many pathologists consistently use the same consultant in a particular area. This practice serves not only for diagnosis verification, but also enables pathologists to calibrate their observations based on the consultant's diagnostic thresholds. Extradepartmental consultation is limited as an error surveillance tool, given the focus on out-of-the-ordinary cases rather than on a selection of apparently routine cases that might be false-negative diagnoses on review.

Not all review options beyond cytohistologic correlation and conference reviews are available on-site to small pathology practices. The American Society of Clinical Pathologists consensus conference on second opinions in pathology recommended that pathologists select problem-prone cases for second opinion.25 Extradepartmental consultation is more commonly used for this purpose in smaller practice settings than in larger groups, including those with in-house experts. Cases may also be sent out for referee opinions at the request of clinicians or patients. The status of extradepartmental consultation as a formal medical consultation is reflected in its own reimbursement code to be distinguished from internal QA activities that are not billable.

As can be seen from this literature review and initial validation of an error taxonomy for anatomic pathology, the value of double-reading, case conferences, and consultations, which intuitively form the traditional triad of error control in anatomic pathology, still awaits objective assessment.

This work was funded by the Agency for Healthcare Research and Quality (Rockville, Md), grant RO1HS13321.

References
1. Ohno T. Toyota Production System: Beyond Large-Scale Production. New York, NY: Productivity Press; 1988.
2. Lind AC, Bewtra C, Healy JC, Sims KL. Prospective peer review in surgical pathology. Am J Clin Pathol. 1995;104:560–566.
3. Ormsby AH, Raju U, Lee M, Ma C, Zarbo RJ. Pathology panel review of breast lesions: a quality assurance assessment [abstract]. Mod Pathol. 2005;18:324A.
4. Medicare, Medicaid and CLIA programs: regulations implementing the Clinical Laboratory Improvement Amendments of 1988 (CLIA), 57 Federal Register 7002 (1992).
5. Zarbo RJ, Jones BA, Friedberg RC, et al. Q-Tracks: a College of American Pathologists program of continuous laboratory monitoring and longitudinal performance tracking. Arch Pathol Lab Med. 2005;126:1036–1044.
6. McBroom HM, Ramsay AD. The clinicopathologic meeting: a means of auditing diagnostic performance. Am J Surg Pathol. 1993;17:75–80.
7. Abt AB, Abt LG, Olt GJ. The effect of interinstitution anatomic pathology consultation on patient care. Arch Pathol Lab Med. 1995;119:514–517.
8. Kronz JD, Westra WH, Epstein JI. Mandatory second opinion surgical pathology at a large referral hospital. Cancer. 1999;86:2426–2435.
9. Westra WH, Kronz JD, Eisele DW. The impact of second opinion surgical pathology on the practice of head and neck surgery: a decade experience at a large referral hospital. Head Neck. 2002;24:684–693.
10. Epstein JI, Walsh PC, Sanfillipo F. Clinical cost impact of second-opinion pathology: review of prostate biopsies prior to radical prostatectomy. Am J Surg Pathol. 1996;20:851–857.
11. Santoso JT, Coleman RL, Voet RL, Berstein SG, Lifshitz S, Miller D. Pathology slide review in gynecologic oncology. Obstet Gynecol. 1998;91:730–734.
12. Brunner JM, Inouye L, Fuller GN, Langford LA. Diagnostic discrepancies and their clinical impact in a neuropathology referral practice. Cancer. 1997;79:796–803.

13. Nakhleh RE, Zarbo RJ. Amended reports in surgical pathology and implications for diagnostic error detection and avoidance: a College of American Pathologists Q-Probes study of 1 667 547 accessioned cases in 359 laboratories. Arch Pathol Lab Med. 1998;122:303–309.
14. Safrin RE, Bark CJ. Surgical pathology signout: routine review of every case by a second pathologist. Am J Surg Pathol. 1993;17:1190–1192.
15. Furness PN, Lauder I. A questionnaire-based survey of errors in diagnostic histopathology throughout the United Kingdom. J Clin Pathol. 1997;50:457–460.
16. Raab SS, Nakhleh RE, Ruby SG. Patient safety in anatomic pathology: measuring discrepancy frequencies and causes. Arch Pathol Lab Med. 2005;129:459–466.
17. Renshaw AA, Cartagena N, Granter SR, Gould EW. Agreement and error rates using blinded review to evaluate surgical pathology of biopsy material. Am J Clin Pathol. 2003;119:797–800.
18. Nakhleh RE, Zarbo RJ. Surgical pathology specimen identification and accessioning: a College of American Pathologists Q-Probes study of 1 004 115 cases from 417 institutions. Arch Pathol Lab Med. 1996;120:227–233.
19. Azam M, Nakhleh RE. Surgical pathology extradepartmental consultation practices: a College of American Pathologists Q-Probes study of 2746 consultations from 180 laboratories. Arch Pathol Lab Med. 2002;126:405–412.
20. College of American Pathologists Commission on Laboratory Accreditation. Anatomic Pathology Checklist. Northfield, Ill: College of American Pathologists; 2005.
21. Nakhleh RE, Jones B, Zarbo RJ. Mammographically directed breast biopsies: a College of American Pathologists Q-Probes study of clinical physician expectations and of specimen handling and reporting characteristics in 434 institutions. Arch Pathol Lab Med. 1997;121:11–18.
22. Zarbo RJ. Interinstitutional assessment of colorectal carcinoma surgical pathology report adequacy: a College of American Pathologists Q-Probes study of practice patterns from 532 laboratories and 15 940 reports. Arch Pathol Lab Med. 1992;116:1113–1119.
23. Black-Schaffer WS, Young RH, Harris NL. Subspecialization of surgical pathology at the Massachusetts General Hospital. Arch Pathol Lab Med. 1996;106:S33–S42.
24. Foucar E. 'Individuality' in the specialty of surgical pathology: self-expression or just another source of diagnostic error? [editorial]. Am J Surg Pathol. 2000;24:1573–1576.
25. Tomaszewski JE, Bear HD, Connally JA, et al. Consensus conference on second opinions in diagnostic anatomic pathology: who, what, and when. Am J Clin Pathol. 2000;114:329–335.

