Peer Learning vs. Peer Review: One Hospital's Experience

Wednesday, Nov. 28, 2018

As vice chair for radiology quality and safety at Lahey Hospital and Medical Center in Burlington, MA, Jennifer Broder, MD, began to realize the peer review process in her department was not as productive as it could be.

The 40 radiologists at her hospital were using RADPEER, a traditional peer review program developed by the American College of Radiology that allows doctors to review and score their peers' work while reporting discrepancies and their clinical significance. Dr. Broder looked for an alternative that took a peer learning rather than a peer review approach.

In a Tuesday session, Dr. Broder described the successful results of that effort in her presentation, "Adopting Peer Learning: A Practical Approach for Improving Clinical Performance Feedback and Learning among Colleagues within a Radiologic Practice."

Traditional peer review methods in radiology involve assigning a score to a peer's work, a practice that can be divisive in the workplace and strain collegial relationships, particularly because radiologists worry the reviews will affect their professional evaluations, Dr. Broder said.

"In a scored peer review system, people will go through the motions of scoring, but they often won't really give honest feedback," Dr. Broder said. "That can mean skipping cases with significant errors because they do not want to submit a poor score. You end up with data that shows everyone is doing great, but that is not useful."

Dr. Broder said she experienced this problem firsthand, realizing that she would find mistakes in her own work that no one had pointed out to her. Overall, her department had a 1 percent reported discrepancy rate in RADPEER, which Dr. Broder knew could not be accurate; it simply meant many mistakes were going unreported.

At a prior RSNA Annual Meeting, Dr. Broder attended a session led by David Larson, MD, MBA, vice chair of education and clinical operations at Stanford University School of Medicine, who was instrumental in developing the Peer Learning Model as an alternative to the traditional format. She took the idea back to Lahey Hospital and Medical Center and started working with her colleagues and hospital leaders to develop and implement the process.

Increase in Sharing of Cases, Discrepancies Reported

Implemented in the radiology department at Lahey in April 2017, the Peer Learning Model eliminated scores attached to cases and created a system where radiologists anonymously submit errors or "great calls" on their peers' work. What makes a great call is subjective, but Dr. Broder said it is a way to highlight and learn from good work rather than just point out mistakes. The submitted cases are not used for radiologist performance evaluation.

Dr. Broder and colleagues studied the 10-month period before the Peer Learning Model was implemented and the 10 months after. Under the RADPEER model, the hospital's radiology department reported 64 discrepancies. Under the Peer Learning Model, 488 discrepancies were reported, along with 396 great calls and 157 cases submitted for further discussion.

The department also moved from monthly, whole-department morbidity and mortality conferences to quarterly, subspecialty-focused peer learning conferences that discuss cases in a more in-depth but still anonymous fashion. In total, 286 cases were shown in conferences under the Peer Learning Model, compared to only 47 under the traditional morbidity and mortality model.

Dr. Broder said she was happy with the results and hopes to see departments across the country implement the Peer Learning Model of review.

"If all of our mistakes stay in a black box, we can never get better," she said. "The only way to improve the work we do is to first understand where we're going wrong, and to understand where we are doing well and to amplify that."

