The quality of Resident Rotational Evaluations by Faculty

From U of M Internal Medicine Wiki
Policy: The quality of Resident Rotational Evaluations by Faculty
Policy Nr: Sec 06 - 05
Target Review Date: 2017-12-31
Main Stakeholder: Managing Director


The Royal College of Physicians and Surgeons of Canada mandates documented evaluation of residents against specific objectives at the end of each clinical rotation. Its site review process requires documentation that these evaluations have been completed; however, there is little or no evidence concerning the quality or utility of the evaluation process itself.

The Department of Internal Medicine uses two CanMEDS-compliant standard forms for the evaluation of Core Internal Medicine residents: one developed by the Section of Critical Care, the other by the Core Internal Medicine Residency Training Program. These evaluations are completed online through VENTIS. Most are completed by individual faculty members, but a substantial number are submitted as group evaluations of individual trainees. The group evaluation process is used particularly heavily on the Clinical Teaching Unit and critical care rotations. Group evaluations are attributed to each individual faculty member whose name appears on the group evaluation form.

We analyzed all completed resident evaluations (3,400) from the most recent six-year period to identify a simple set of indicators that could be used to improve the quality and utility of these evaluations.


For each individual attending physician, the following information was produced:

  1. The total number of individual or group evaluations completed.
  2. The distribution of check-offs in categories 1 through 4 on the evaluation form.
  3. The average number of characters of text provided in the text section of the evaluation form.
  4. The number of face-to-face discussions of evaluations held with residents, as indicated by the check-off on the evaluation form.
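As a minimal sketch of how these four indicators could be tallied per attending physician, assuming each completed evaluation is available as a simple record (the `Evaluation` fields below are illustrative, not the actual VENTIS schema):

```python
from dataclasses import dataclass

# Hypothetical record of one completed evaluation; field names are
# illustrative stand-ins for the VENTIS export, not its real schema.
@dataclass
class Evaluation:
    attending: str      # faculty member credited with the evaluation
    checks: list        # category (1-4) checked for each item on the form
    comment: str        # free-text section of the form
    discussed: bool     # face-to-face discussion check-off

def attending_summary(evals, name):
    """Summarize the four indicators for one attending physician."""
    own = [e for e in evals if e.attending == name]
    dist = {c: 0 for c in (1, 2, 3, 4)}
    for e in own:
        for c in e.checks:
            dist[c] += 1
    return {
        "total_evaluations": len(own),
        "check_distribution": dist,
        "avg_comment_chars": (sum(len(e.comment) for e in own) / len(own)) if own else 0,
        "face_to_face_count": sum(e.discussed for e in own),
    }
```

A summary of this shape makes the patterns described below (all-threes columns, empty text sections, no discussions) directly visible per evaluator.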


A review of this information identified a number of disturbing patterns.

  1. A number of individuals frequently attached to clinical services had completed or participated in fewer than nine resident evaluations.
  2. Many evaluators credited with twenty or more evaluations had zero category one or category two check-offs.
  3. Several evaluators checked off category three for more than ninety percent of entries.
  4. A substantial number of evaluators checked off category four more than forty percent of the time.
  5. Many evaluators averaged fewer than forty characters of text per evaluation, and a few averaged zero.
  6. Many evaluators provided no evidence of face-to-face discussions with trainees concerning their evaluations.

In contrast, some individuals provided frequent evaluations, occasionally identified performance weaknesses by checking off category one or two, described some residents (but fewer than forty percent) as exceptional, consistently provided two or more points of commentary, and frequently held face-to-face discussions with residents concerning their evaluations. It is reasonable to assume that evaluations from these individuals are more helpful than those from evaluators who check off only category three, never identify a performance weakness or an exceptional characteristic, provide little or no text commentary, and never discuss an evaluation with the trainee.

Conclusions and Recommendations

The overall poor quality of evaluations provided by attending physicians has significant negative consequences. It is not unusual for us to fail to identify areas of resident performance requiring substantial improvement until several rotations or even years of training have elapsed. Even when such performance problems are identified, contradictory evaluations supplied by less conscientious evaluators provide ample justification for appeal by residents who would benefit from remediation programs.

There are several possible reasons for poor evaluation performance. Some individuals may regard evaluations as having little value and see no harm in signing the form and drawing a line through the column of threes as evidence of completion. A number of faculty members, both junior and senior, have indicated that they hesitate to criticize a resident's performance because the criticism might be challenged or the trainee might retaliate by giving them a poor evaluation. Others have indicated that they are uncomfortable discussing evaluations with trainees, especially when there is an area of concern.

Some of these concerns may explain why group evaluations are more likely to contain the positive elements described above, since the group process provides some level of anonymity.

Another reason for poor evaluation performance may be that, historically, good evaluators have not been rewarded.


Prior to the 2012-2014 cycle, the Department of Internal Medicine process for evaluating faculty as teachers and role models was based exclusively on resident evaluations of their performance as clinicians and teachers.

For the 2012-2014 cycle, the process was modified so that resident evaluations of faculty constitute eighty-five percent of the total score. The remaining fifteen percent is now based on an analysis of each faculty member's evaluation performance, using six negative indicators each valued at minus one mark and six positive indicators each valued at plus one mark. The baseline score is nine, and the total score is out of fifteen.

For the 2014-2016 teaching evaluation cycle, we will include all evaluations submitted by faculty from 2010-2016. All individuals start with a base score of nine out of fifteen. The following scoring system will be applied:

Positive Indicators

(+1 each indicator)

  1. Participation in thirteen or more individual or group evaluations.
  2. Evidence of at least one category one or category two check among the accumulated evaluations.
  3. Greater than an average of sixty characters in the text portion of the evaluation.
  4. Greater than an average of eighty characters in the text portion of the evaluation.
  5. Indication of at least two face-to-face discussions with residents concerning their evaluations.
  6. Indication of at least five face-to-face discussions with residents concerning their evaluations.

Negative Indicators

(-1 each indicator)

  1. Fewer than eight completed individual or group evaluations in a two-year cycle.
  2. Zero category one or two checks among the total evaluations.
  3. More than ninety percent category three checks.
  4. More than forty percent category four checks.
  5. An average of forty or fewer characters of text.
  6. Absence of any face-to-face discussions with residents concerning their evaluations.

A summary of each faculty member's performance assessment as an evaluator will be included as part of the standard biennial report. The cumulative teaching evaluation performance score will consist of the cumulative score from the resident evaluations multiplied by 0.85, plus the new evaluator performance score out of fifteen.
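Under the thresholds stated above, the scoring can be sketched as a small calculation. This is an illustrative sketch only, assuming each evaluator's indicator values have already been tallied into a dictionary; the function and key names are invented for the example:

```python
def evaluator_score(summary):
    """Evaluator performance score: base of 9, +1 per positive indicator met,
    -1 per negative indicator met, for a total out of 15.
    Thresholds follow the policy's indicator lists."""
    total = summary["total_evaluations"]
    dist = summary["check_distribution"]       # counts of category 1-4 checks
    low_checks = dist[1] + dist[2]             # category one or two check-offs
    all_checks = sum(dist.values())
    avg_text = summary["avg_comment_chars"]
    f2f = summary["face_to_face_count"]

    positives = [
        total >= 13,                           # 1. thirteen or more evaluations
        low_checks >= 1,                       # 2. at least one category 1/2 check
        avg_text > 60,                         # 3. average text over sixty characters
        avg_text > 80,                         # 4. average text over eighty characters
        f2f >= 2,                              # 5. at least two face-to-face discussions
        f2f >= 5,                              # 6. at least five face-to-face discussions
    ]
    negatives = [
        total < 8,                             # 1. fewer than eight evaluations
        low_checks == 0,                       # 2. zero category 1/2 checks
        all_checks > 0 and dist[3] / all_checks > 0.90,  # 3. >90% category three
        all_checks > 0 and dist[4] / all_checks > 0.40,  # 4. >40% category four
        avg_text <= 40,                        # 5. forty or fewer characters on average
        f2f == 0,                              # 6. no face-to-face discussions
    ]
    return 9 + sum(positives) - sum(negatives)

def teaching_score(resident_eval_score, evaluator_perf_score):
    """Cumulative teaching evaluation performance score: resident evaluations
    weighted by 0.85, plus the evaluator performance score out of fifteen."""
    return resident_eval_score * 0.85 + evaluator_perf_score
```

For example, an evaluator with twenty evaluations, one category two check, a seventy-character text average, and three documented discussions would meet four positive indicators and no negative ones, scoring thirteen out of fifteen.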

Policy approval

This policy was approved unanimously at the Department of Internal Medicine Executive Committee Meeting of Wednesday, January 12, 2011.

February 2017 update: positive indicator #1 and negative indicator #2 were prorated because WebEval is no longer being used as of April 7th, 2016.

Policy will be reviewed in two years to take into account the use of the new program, VENTIS.

Related articles
