Student Teaching Evaluations

I mentioned that I wrote often during my career but published little.  Here is one example I found in my files.  I apparently wrote it at the start of my fourth year teaching, but never really shared it until now.

Yesterday (January 14, 2004) we had what for me was a very depressing department meeting. The message from the chair was essentially that we need to move to the next level, which I have no problem with. But then he defined the next level as getting student evals one standard deviation above the college average. And that frustrates the hell out of me. I have gotten horrible evals on some courses and terrific evals on others. And I have given both cases, all courses in fact, my best effort. The feedback from students often seems more random than useful. The written comments sometimes are helpful; I have done things differently in subsequent courses as a result of them. The numbers, however, seem to me a random crapshoot. And now I'm being told I need to improve them. I told the chair I feel like a rat in a maze; I keep getting random shocks. My instinct is to go sit in the corner and not move.

The first issue for me is in the title. The evals are explicitly called teaching evaluations. The goal, however, is to produce learning, and granted, that is more likely to happen if the teaching is good. We would probably do better if we attempted to measure directly whether learning is taking place. But no one apparently knows how to do that. So we'll continue to look for our lost car keys under the street lamp, because the light is better there?

The Yeats quote is relevant here: "Education is not about filling buckets, but rather about lighting fires." Maybe we should be asking questions like: Did this course inspire you to want to apply the techniques of xxx to business problems or opportunities? Did it give you some tools to attempt to do so? Our attempts at measuring learning seem mechanistic: we pour in some knowledge, then put the dipstick in to see how much actually flowed in. But sometimes the real benefit won't manifest for some time, or until after repeated exposures. Does that mean the initial effort was in vain?

The key challenge with measuring learning is that the learning takes place in students, and they are independent agents. I cannot directly effect learning in students; only each individual student can. Yet what I profess to want to accomplish is to produce learning, so I think trying to measure the amount of learning taking place is still appropriate. I'm okay with that. But what it means is that my effort needs to be partly conveying knowledge (content), but even more providing motivation, identifying techniques that are more effective, and setting up an environment conducive to learning. So the first question students should be asked is: Did learning take place? Then: What conditions helped or hindered that effort (the categories for which the theory applies, in Christensen's parlance)? That is not as simple and clean as a single number, but then nothing I've valued in my life ever has been. So it takes more effort, reflection, and judgment to do this. I don't see anywhere that our mission is to produce learning only if it's easily measured by one number. Our mission is to do it, whatever it takes. At least that's my feeling.

Okay, then there is the issue of simply playing the game while still trying to do what is needed. The example that comes to mind is that yesterday on the news they observed that Howard Dean started his campaign as the Washington DC outsider, yet he gains momentum by having the consummate insiders endorse his candidacy. The implication is that you have to "play the game," to compromise, in the real world. So do we indulge the dean in his little eval sport, game the system to produce better results, while still doing what we feel we need to do to be true to our mission? That is certainly one way we could approach this. What's the second right answer? My inclination is to continue to focus on what will help me accomplish the mission while ignoring what I consider the silliness around evals. That can work too, except for short interludes like the meeting yesterday, when the two come into a stark contrast that cries out for resolution, or, more typically for me, produces a defeating sense of depression.

I have been at this teaching stuff full time for three years. Some of it, particularly the challenge of identifying what will really serve students well to learn and how to help them do that, has been challenging and rewarding. To some extent I have ignored the politics: I was told after the first year that I need to publish some articles. I haven't done that, because I have nothing compelling to say and I have been driven by trying to produce learning better. So far it appears not to have hurt me. I've had no travel money, no lab money, token raises. But the same is true of everyone else in the college, as far as I can tell. Those perks aren't particularly motivating anyway. The challenge of helping students effectively is. A general IT course for MBAs is lacking even at the best schools. It appears no one has demonstrated a good way to get business students to embrace this stuff, even though we in the industry think it is the salvation of the world. So what are we missing?

The score in a sporting event can be motivating. We need three runs to win, and we know what we need to do to produce three runs. Or we need one touchdown, and we've practiced things to accomplish that. The score on an eval is for me confounding. I need to improve it by 10% next time around (this game is over, and we don't get the score until it's over), and the number gives me no clue what to do to improve it. Maybe the next class will be the same, so if I respond to the few verbal comments, things will improve. Or maybe it will be the opposite, so that the things that annoyed this class will work well next time. Who the hell knows?

I generally start a course with a great deal of optimism. I think I know things that will help this class learn more effectively and efficiently. Several weeks after the course I get the evals, and for me it is almost invariably a depressing experience. The number is not high enough, and the comments sometimes hurt. I clearly didn't completely succeed in what I wanted to do. So I put them away, eventually get over it, plan how the next round will be better, and repeat the cycle once more. And I'm beginning to think that with each cycle I spiral higher in what I actually accomplish and lower in the satisfaction I derive from it. The end point appears to be the same as in many previous endeavors: move on to something else?
