
Can medical record reviewers reliably identify errors and adverse events in the ED?

Published: March 03, 2016 · DOI: https://doi.org/10.1016/j.ajem.2016.03.001

      Abstract

      Background

      Chart review has been the mainstay of medical quality assurance practices since its introduction more than a century ago. The validity of chart review, however, has been vitiated by a lack of methodological rigor.

      Objectives

      By measuring the degree of interrater agreement among a 13-member review board of emergency physicians, we sought to validate the reliability of a chart review–based quality assurance process using computerized screening based on explicit case parameters.

      Methods

      All patients presenting to an urban, tertiary care academic medical center emergency department (annual volume of 57,000 patients) between November 2012 and November 2013 were screened electronically. Cases were programmatically flagged for review according to explicit criteria: return within 72 hours, procedural evaluation, floor-to-ICU transfer within 24 hours of admission, death within 24 hours of admission, physician complaints, and patient complaints. Each case was reviewed independently by a 13-member emergency department quality assurance committee, all of whom were board certified in emergency medicine and trained in the use of the tool. No reviewer participated in the care of any patient whose chart he or she reviewed. Reviewers used a previously validated 8-point Likert scale to rate the (1) coordination of patient care, (2) presence and severity of adverse events, (3) degree of medical error, and (4) quality of medical judgment. Agreement among reviewers was assessed with the intraclass correlation coefficient (ICC) for each parameter.
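The abstract does not specify which ICC form the authors computed. As an illustrative sketch only, the following shows the ANOVA-based two-way random-effects single-rater coefficient, ICC(2,1), which is one common choice for a fixed panel of raters scoring the same cases; the function name and the assumption that ratings form a complete subjects-by-raters matrix are ours, not the paper's.

```python
import numpy as np

def icc2_1(ratings):
    """Two-way random-effects, single-rater ICC(2,1) for an
    n_subjects x n_raters matrix of ratings (no missing cells)."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means
    # Partition the total sum of squares into subject, rater,
    # and residual components (standard two-way ANOVA decomposition).
    ss_total = ((x - grand) ** 2).sum()
    ss_subj = k * ((row_means - grand) ** 2).sum()
    ss_rater = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_subj - ss_rater
    ms_subj = ss_subj / (n - 1)
    ms_rater = ss_rater / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    # Shrout & Fleiss (1979) ICC(2,1).
    return (ms_subj - ms_err) / (
        ms_subj + (k - 1) * ms_err + k * (ms_rater - ms_err) / n
    )

# Perfect agreement between two raters yields ICC = 1.0.
print(icc2_1([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

In practice a panel of 13 reviewers would supply 13 columns per flagged case; values near the paper's reported range (0.52–0.72) indicate moderate-to-good agreement on most benchmarks.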

      Results

      Agreement and the degree of significance for each parameter were as follows: coordination of patient care (ICC = 0.67; P < .001), presence and severity of adverse events (ICC = 0.52; P = .001), degree of medical error (ICC = 0.72; P < .001), and quality of medical judgment (ICC = 0.67; P < .001).

      Conclusion

      Agreement in the chart review process can be achieved among physician-reviewers. The degree of agreement attainable is comparable to or superior to that of similar studies reported to date. These results highlight the potential for the use of computerized screening, explicit criteria, and training of expert reviewers to improve the reliability and validity of chart review–based quality assurance.
