Recently, Louis Pangaro received the 2018 National Board of Medical Examiners Hubbard Award. Lou is known for his work on conceptual frameworks for assessment, including the RIME framework for the clinical assessment of learners in medical education. As Lou is one of the course directors for our ‘Systems Approach to Assessment in Health Professions Education’ course, and a leader in the assessment of learners in medical education, we thought we’d ask him to explain the RIME framework and how it came to be.

  • When did you first think about the RIME framework?

About 30 years ago, when I first became the clerkship director for internal medicine at the Uniformed Services University of the Health Sciences, I realized that we could do a better job of evaluating students in the clinical setting. I had added multiple-choice tests of knowledge and a free-response exam of analytic ability, but the major focus remained evaluations by teachers in the clinical setting, and the challenge was to get these teachers playing from the same sheet of music.

Since I had students on clerkships in hospitals thousands of miles away, I had to deal with close scrutiny from the Liaison Committee on Medical Education (LCME). As a result, consistency across teachers, especially consistency across clinical teaching sites, became an obsession of mine. My mentor Gordon Noel had introduced “formal evaluation sessions” in which we spoke with teachers face-to-face every few weeks to track student progress. One outcome of my regular discussions with teachers was the realization that the usual language on clerkship evaluation forms was not working well for clinical teachers.

  • How did you develop the RIME framework?

In 1987 I spent a month with Kelley Skeff and Georgette Stratos at the Stanford Faculty Development Center for Medical Teachers, becoming a facilitator in their faculty development program in clinical teaching, and I had the opportunity to read a lot about education in general and to talk with experts.

Reading the work of Benjamin Bloom and colleagues about educational objectives was a real prompt to my thinking about frameworks for assessment of competence. I loved the fact that Bloom’s frameworks had an explicitly developmental aspect, and this fit very well with our expectations for clerkship students, as they moved from the pre-clerkship world of basic science, to the clinical setting, and eventually to residency.

On the other hand, I found the refraction of expectations into knowledge, skills and attitudes (“K,S,A”) to be too limiting, even artificial, for how we asked teachers to evaluate in the clinical setting. So I evolved my concept of “synthetic” frameworks, in which students had to combine KSA in order to do a task, and contrasted this with “analytic” frameworks, in which these dimensions were articulated and assessed separately.

I think that recent work with milestones and EPAs reflects the realization that synthetic assessment frameworks work better for clinical teachers.

  • How does the RIME framework work?

Overall, RIME tries to capture the movement from understanding into action that is the essence of clinical education.

First of all, RIME is “synthetic” in the sense that success at one of its “levels” requires the student or resident to bring the right combination of knowledge, skills, and attitudes to each situation.

Second, RIME asks the teacher to draw a conclusion from what has been observed in the student’s work with a given patient: does the pattern fit with being a “reporter,” “interpreter,” “manager,” or “educator”? In this, we take advantage of the years of training our faculty have in making a diagnosis by comparing observations about a patient to pre-existing patterns held in memory. Our teachers are very well trained in asking themselves, “does what I see before me in this patient fit better with pneumonia, pulmonary embolism, or heart failure?” Our clinical teachers are physicians who save lives by drawing conclusions from sets of data, in which not all patients have every finding on a checklist. It seemed reasonable to think that they could do the same in looking at a student: “does what I see before me in this student fit better with reporter, interpreter, or manager?”

This “diagnostic” approach to assessment of students gets at one essential thing that RIME is trying to address - the emotional difficulty that teachers have in “giving” a grade. We do not say that we “give” the patient pneumonia or pulmonary embolism, but teachers often do say that they are “giving a grade”. RIME is trying to help teachers see the evaluation of students as similar to making a diagnosis - assigning a label to what they are seeing. It’s not something they “do” to a student; it’s a diagnostic judgment.

Third, and probably most important, the pattern that RIME asks teachers to look for is very familiar to them, almost intuitive. Reporting, interpreting, and managing are just terms for the steps in working with the patient: data gathering (history and physical) – data interpretation (clinical assessment) – planning (ordering tests and therapies). In other words, the rhythm of RIME is the rhythm that teachers have been using throughout medical school and residency.

When I asked a group of teachers, faculty and residents, to look at some videotapes of students in action, they had about 90% agreement in how they labeled the students, after only about 10 minutes of explanation of the RIME scheme.

  • What are the limitations of the RIME framework?

The meanings of words and terminology are not always self-evident. In the RIME scheme, “reporting” means getting the facts yourself, not simply repeating them in an oral case presentation. A TV personality who recites the news in prime time is doing only one part of being a reporter, and not the most important part. Being a reporter is much more than the repetition of facts, and it requires the teacher to directly observe bedside skills.

One other problem with RIME that I’ve always had to deal with is that some educators get confused and think that it is a developmental framework in which a student goes through four phases. It’s not. When one moves from reporter to interpreter, one doesn’t stop being a reporter; when one moves to manager, one must continue to gather the information and to interpret it.

Clinicians in practice are typically gathering information, interpreting it, and beginning to manage and educate their patients all at the same time. We don’t do it in phases. What RIME does is make clear the minimal level of entrustment for each level of training.

To the extent that the RIME framework is used for assessment, it is a diagnostic tool, and so teachers may see it as only about the student’s cognitive ability. But RIME is actually a synthetic model in which KSA have to be combined. You have to have the knowledge to know what questions to ask the patient. You have to have the communication skills, and often the patience to ask carefully. You need the manual skills to examine the patient. To be a “reliable reporter” you need to show up each day, an hour or two before rounds, so that you can accept responsibility and ownership for serving the patient by being their reporter to the team.

In other words, RIME is intended to be synthetic. Most teachers who use RIME get this, but they also want to emphasize “professionalism” as a prerequisite for almost anything else. So, while the RIME scheme includes professionalism in its synthesis and a sense of ownership for tasks, some clerkships like to call it P-RIME just to put professionalism “on the marquee”, so students know it’s important.

I should say that the most important thing to me is that we need to find a framework that works for teachers on the front lines of patient care. So whatever phrasing or acronym achieves that level of consistency across teachers and consistency across sites, I’m in favor of.

Whatever framework we ask teachers to use in assessing students, we have to spend resources - usually time - to get them to understand it. But this frame-of-reference training is not enough. Teachers also have to be trained to understand the dimensions of performance for each part of the framework. For instance, Performance Dimension Training gets the teacher to understand that a “reliable reporter” does not just recite clinical facts, but has to gather them with accuracy; and that for a clerkship student to be a consistent interpreter does not mean “being right”, but does mean offering a reasonable differential without a lot of prompting.

  • What do you want your colleagues in medical education around the world to know about the RIME framework?

My whole understanding of a professional’s competence is that it is the ability to embrace complexity but act with simplicity. I would like teachers and medical schools to ask themselves whether the RIME framework is simple, without being simplistic. Our own experience tells us that other, more elaborate assessment frameworks such as competencies, milestones, and entrustable professional activities may provide increasing levels of granularity, but at the cost of intuitive understanding.

I like the list of 13 pre-graduation EPAs proposed by the AAMC for all medical schools. But it is not a framework; it is simply a list of tasks that a student should do, or learn to do, and it’s not really well prioritized. On the other hand, each of the separate EPAs can be mapped onto the elements of the RIME scheme. The EPAs can be seen as the performance dimensions for each term in the RIME framework, which provides a simple, stable framework of understanding.

  • How does the RIME framework fit into a program of assessment?

Program assessment is the aggregate of the assessment of individual students or residents, systematized into a before-during-and-after model in which baseline characteristics and subsequent outcomes can be tracked. To do this in a comprehensive, systematic way, all schools need, alongside quantified assessments such as multiple-choice tests, OSCEs, clinical reasoning examinations, and task trainers, a robust system of descriptive evaluations by teachers in the clinical setting - and this is the focus of RIME. As I said before, clinical teachers have never been very good at rating students on a numerical scale; such ratings are notoriously inflated. But a large body of published data on the RIME scheme has shown that teachers are more able to use the full rating spectrum, including rating students at the novice level and avoiding grade inflation, if the evaluation framework is, like RIME, descriptive rather than numerical.

  • What would you like to see in the field of medical education moving forward?

There is so much we don’t know about how much effort to put into the different parts of the evaluation process. I would like to see a system that identifies marginal performers among medical students or residents, so that we don’t advance people who are not ready for increased responsibility. That is priority number one - we have to be fair to patients and society.

We also have to be fair and helpful to students, and give them a system that they easily understand, and which can be used with consistency across teachers and across sites. If there’s too much inconsistency or, even worse, a student’s grade is a crapshoot depending on whether you get one faculty member or another, then we’re not being fair to the students. This is one of the things that the RIME scheme is trying to address. If you can’t trust the evaluations of your teachers to be fair, dispassionate, and consistent, then the whole concept of professionalism is undermined.

Finally, I think that we have to be fair to teachers. We have to give them a system that they can use, easily and consistently. RIME takes advantage of their diagnostic skill and pattern recognition and gives them an underlying rhythm which is exactly what they have been using for years (H&P, assessment, plan).

  

Selected Publications:

Pangaro L. A New Vocabulary and Other Innovations for Improving Descriptive In-training Evaluations. 1999. Academic Medicine, 74: 1203-1207.

Pangaro L. Investing in Descriptive Evaluation: A Vision for the Future of Assessment. 2000. Medical Teacher, 22(5): 478-481.

Hemmer P, Hawkins R, Jackson J, Pangaro L. Assessing How Well Three Evaluation Methods Detect Deficiencies in Medical Students' Professionalism in Two Settings of an Internal Medicine Clerkship. 2000. Academic Medicine, 75: 167-173.

Gaglione MM, Moores L, Pangaro L, Hemmer PA. Does Group Discussion of Student Clerkship Performance at an Education Committee Affect an Individual Committee Member’s Decisions? 2005. Academic Medicine, 80: S55-S58.

Pangaro LN. A Shared Professional Framework for Anatomy and Clinical Clerkships. 2006. Clinical Anatomy, 19: 419-428.

DeWitt DE, Carline D, Paauw DS, Pangaro L. A Pilot Study of a "RIME" Framework-Based Tool for Giving Feedback in a Multi-specialty Longitudinal Clerkship. 2008. Medical Education, 42: 1205-1209.

Rodriguez R, Pangaro L. Mapping the ACGME Competencies to the RIME Framework. 2012. Academic Medicine, 87(12): 1781.

Pangaro L, ten Cate O. AMEE Guide: Frameworks for Learner Assessment in Medicine. 2013. Medical Teacher, 35: 524-537.

Pangaro LN. System Approaches to Student Assessment. 2015. In: Pangaro L and McGaghie W, editors, Handbook of Medical Student Assessment and Evaluation. Gegensatz Press.

For more resources on RIME, check out Dr. Pangaro’s Annotated Bibliography of related papers.

 

The opinions expressed are those of Dr. Pangaro and do not reflect the views of the Uniformed Services University of the Health Sciences or the Department of Defense.

 

Louis Pangaro, MD

Louis Pangaro, MD (Course Director, A Systems Approach to Assessment in Health Professions Education) is Professor of Medicine and Health Professions Education at the Uniformed Services University of the Health Sciences. Lou’s areas of professional interest include assessment, curricular reform, and faculty development.