In a follow-up to his earlier post about his gratitude for his high-school education:

A number of responses to my column about the education I received at Classical High (a public school in Providence, RI) rehearsed a story of late-flowering gratitude after an earlier period of frustration and resentment. “I had a high school (or a college) experience like yours,” the poster typically said, “and I hated it and complained all the time about the homework, the demands and the discipline; but now I am so pleased that I stayed the course and acquired skills that have served me well throughout my entire life.”

Now suppose those who wrote in to me had been asked when they were young if they were satisfied with the instruction they were receiving? Were they getting their money’s worth? Would they recommend the renewal of their teachers’ contracts? I suspect the answers would have been “no,” “no” and “no,” and if their answers had been taken seriously and the curriculum they felt oppressed by had been altered accordingly, they would not have had the rich intellectual lives they now happily report, or acquired some of the skills that have stood them in good stead all these years. . . .

“Deferred judgment” or “judgment in the fullness of time” seems to be appropriate to the evaluation of teaching. And that is why student evaluations (against which I have inveighed since I first saw them in the ’60s) are all wrong as a way of assessing teaching performance: they measure present satisfaction in relation to a set of expectations that may have little to do with the deep efficacy of learning.

This is exactly right. I have often argued over the years that if we must have such evaluations, students should be asked for their responses to a course at least one semester after completing it. Instead, they are asked for their judgments near the end of a semester, when they are probably busier and more stressed than at any other time, and when they haven’t completed their final work for the class or received their final evaluations. It’s a perfect recipe for useless commentary.

By the way, colleagues typically respond to my suggestion by arguing that if students have to wait a semester before evaluating courses, they won’t even remember what kind of experience they had. I counter, “If true, wouldn’t that be worth knowing?”

3 Comments

  1. Obviously, student evaluations should be taken in context, especially for younger students. However, course evaluations help the instructor to understand how the students are receiving his/her instruction.

    Comments like, "This is too hard!" may just be whining, or they may indicate that the content is over the students' heads or that the course material leaves them ill prepared to complete the assignments.

    Students also learn better when they are engaged, so comments like "This is boring" may indicate a poor pedagogical style.

    Students understand that education is not entertainment, but learning is fun if it challenges and engages them. A lazy professor may think students are just naturally lazy, excusing him/herself from innovating in his/her teaching style and updating the curriculum.

  2. Alan, I've fought that battle too, to no avail. You must have noticed (I'm sorry, I don't recall whether you posted about it) the story in the Washington Post about the statistical effects of the teachers whom students like? The results strongly corroborate your argument (and mine) about deferred evaluation.

    But I'm not holding my breath for institutions to change their evaluation schedule.

  3. I think the evaluation problem could also be addressed in one or two ways:

    1) Show the average grade the instructor gives in the class versus the average grade for all students taking that class over the last five years, whenever possible. If one instructor averages a "4" on evaluations while handing out a 3.5 GPA, and another averages a 3.8 while handing out a 3.0 GPA, who is more effective? Obviously the standard deviations will affect that answer, but this would at least show whether the evaluations are inflated by high grades.

    2) Show how students do in subsequent classes, ideally in the same department or subject. After taking English 101, do the students from Jones's class do better or worse than average in 102?

    That should be very easy to calculate; a rough sketch of both checks appears below. It might also give the professors themselves, and the administrators looking over their shoulders, a clearer view of what's actually happening.
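A minimal, purely illustrative sketch of the two checks proposed in that comment, written in plain Python with made-up records and hypothetical names (Jones, Smith, Lee, ENG101/ENG102); a real version would draw grades and evaluation scores from the registrar's data rather than a hard-coded list.

```python
# Sketch of the two checks from comment 3, on hypothetical data:
#   1) compare an instructor's evaluation average with the grades they give,
#      relative to the course-wide average grade;
#   2) compare how their former students fare in the follow-on course
#      against all students who take it.
from statistics import mean

# Hypothetical records: one row per student per course section.
records = [
    {"course": "ENG101", "instructor": "Jones", "grade": 3.5, "eval": 4.0, "student": 1},
    {"course": "ENG101", "instructor": "Jones", "grade": 3.7, "eval": 4.2, "student": 2},
    {"course": "ENG101", "instructor": "Smith", "grade": 3.0, "eval": 3.8, "student": 3},
    {"course": "ENG101", "instructor": "Smith", "grade": 2.9, "eval": 3.7, "student": 4},
    {"course": "ENG102", "instructor": "Lee",   "grade": 3.6, "eval": 4.1, "student": 1},
    {"course": "ENG102", "instructor": "Lee",   "grade": 3.2, "eval": 3.9, "student": 2},
    {"course": "ENG102", "instructor": "Lee",   "grade": 3.4, "eval": 4.0, "student": 3},
    {"course": "ENG102", "instructor": "Lee",   "grade": 2.8, "eval": 3.5, "student": 4},
]

def check_grade_inflation(course):
    """Print each instructor's evaluation average next to the grades they
    hand out, alongside the course-wide average grade."""
    rows = [r for r in records if r["course"] == course]
    course_avg = mean(r["grade"] for r in rows)
    for name in sorted({r["instructor"] for r in rows}):
        own = [r for r in rows if r["instructor"] == name]
        print(f"{name}: eval {mean(r['eval'] for r in own):.2f}, "
              f"grades given {mean(r['grade'] for r in own):.2f} "
              f"(course average {course_avg:.2f})")

def check_follow_on(first_course, second_course, instructor):
    """Compare the follow-on-course grades of one instructor's former
    students with the overall average in that follow-on course."""
    former = {r["student"] for r in records
              if r["course"] == first_course and r["instructor"] == instructor}
    second = [r for r in records if r["course"] == second_course]
    overall = mean(r["grade"] for r in second)
    theirs = [r["grade"] for r in second if r["student"] in former]
    if theirs:
        print(f"{instructor}'s former students in {second_course}: "
              f"{mean(theirs):.2f} vs overall {overall:.2f}")

check_grade_inflation("ENG101")
check_follow_on("ENG101", "ENG102", "Jones")
```

As the commenter notes, the arithmetic itself is trivial; the harder questions (section sizes, self-selection, standard deviations) are about how much weight to give the comparison, not how to compute it.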
