I received a phone call from a music teacher last night whose principal had recently marked her down on an unannounced, “drive-by” observation…because the students “weren’t engaged.”
When she asked how the principal had arrived at that rating, she was told the “evidence” behind the comment was the principal’s observation that “only 10% of her students were looking at her” while the beginning guitar class was learning a finger-picking exercise.
Mind you now, this principal acknowledged to the teacher that she “knew nothing” about music in general, and had never touched a guitar in her life. But there she was, making evaluative judgments about this certified, qualified music teacher’s ability to teach guitar.
A few thoughts…
1. There were 13 students in the class, so that means 1.3 kids were allegedly looking at her–I mean, data is data, right? If a teacher’s job security is going to be determined by a reliance on “data-driven assessment procedures,” and an obsession with “metrics,” then shouldn’t those data and metrics be precise, and not subject to “rounding errors”?
2. Why would beginning guitar students be looking at the teacher while learning how to pick strings? Wouldn’t we expect the kids to be looking at their fingers, or the guitar strings? And if the teacher wanted the students to have already learned which strings to pick, then why would she want the kids to look at *her* fingers?
3. How does counting up the number of students looking at a teacher have anything to do with students’ “engagement”? Any teacher who has spent more than a hot minute in the classroom knows that there are students who are fully engaged but not looking at them, and plenty of kids who look directly at the teacher all the time and are not engaged in any way, shape, or form. “Looking at” does not equal “engagement” any more than “listening to” equals “understanding.” That’s just not the way that teaching works. Not even close.
4. Shouldn’t the person doing the evaluating have some level of understanding of, or at least some familiarity with, the thing being evaluated? Wouldn’t a person supervising, say, a chef, have to know how to cook? And wouldn’t it make sense for an engineer’s evaluation to be conducted by a person with knowledge of engineering? For teacher evaluation to even begin to approach legitimacy, the evaluation process must be discipline-specific–that is, the evaluator needs to have some sort of subject-matter expertise. If a principal doesn’t know anything about music, or art, or social studies, then it stands to reason that a subject-matter expert should be brought in to assist with that music, or art, or social studies teacher’s evaluations.
Bottom line: Teacher evaluation in our country is badly broken. It’s based on business models of employee evaluation (i.e., “stack ranking”) that were abandoned in the business world years ago, but are considered cutting edge “accountability systems” in education.
We are often told that we “measure what we treasure.”
I’d suggest that exactly the opposite is actually true: That the things we love best, and value the most, are precisely those things that are the most resistant to being measured.
If you want to know how a teacher is doing in the classroom, then visit that classroom. A lot. And don’t do so unannounced. And don’t “rate” that teacher’s job performance with a set of rubrics not designed for that classroom, or subject, or school, or neighborhood, or students, or teacher. Because teaching is not generic–it’s wildly specific, and wonderfully nuanced. And generic, one-size-fits-all measurement tools work as well at evaluating teachers as a bathroom scale works at evaluating a person’s height and overall fitness.
And if you doubt any of this to be true, I invite you to try a little experiment tonight:
When you get home from work, or school, or whatever you spent your day doing, walk into your house or apartment or dorm, and tell your friend, partner, roommate, or spouse how they did today in meeting your needs, expressed on an “A to F” grade scale. You know, just the way that Republican-controlled state legislatures across the country think we should rate school districts–using a vague, generic, “one size fits all” measuring tool so blunt that it winds up having no real value at all.
And if you happen to be a parent, please assign the same sort of grades to your children based on how much you love them.
And then please let me know how that works out for you.
I’ll leave you with one of my favorite quotes about education, from one of my favorite thinkers, the late Elliot Eisner, professor of Art and Education at the Stanford Graduate School of Education:
“We study education through social science disciplines which were originally meant for rat maze learning… We have built a technology of educational practice… of commando raids on the classroom… So often what is educationally significant, but difficult to measure, is replaced with that which is insignificant, but easy to measure” (Eisner, 1985).