Teacher Evaluation Is Not Synonymous With Teacher Quality
In the debate over the use of value-added analysis of student data to evaluate teachers, there seems to have been an assumption that teacher evaluation alone is an effective way to improve teacher performance. Or, at its crudest level, there is an acceptance that the use of value-added data analysis will lead school administrators to replace bad or mediocre teachers with effective teachers. One of the reasons so many teachers are skeptical about this movement is that they realize teacher evaluation, at least as traditionally practiced, does not really make them better teachers.
Evaluation is perhaps one way to improve teacher quality. It requires reform and is worthy of focus. However, it is not the only way and not even the best way to improve teacher quality. Teacher coaching, collaboration and mentoring are much better vehicles and are currently being used to great effect by many districts.
Those who think that teacher evaluation is universally accepted as a method of professional dialogue that can consistently improve teacher practice miss the reality of the process in many school districts. Teacher evaluation tools are designed to pass judgment on teacher performance. Judgment is uncomfortable for many teachers, and it is difficult to build the trust needed for true change in such a scenario. Many teachers put up with the process as a necessary annoyance, but question the validity of the judgment passed by the administrator/evaluator. Minimal observations do not capture the complexity of the teaching and learning processes. Teachers question the validity of the evaluators' comments, as they know that so much of what they do can never show up in an evaluation. However, a bigger problem with teacher evaluation is that school systems may not have defined effective instruction, and so evaluators and teachers may not have a common understanding about what makes a good teacher. In the absence of this fundamental prerequisite, evaluations often become a difference of opinion, not a call to action.

Incorrect Evaluation Usage
Some would even say that this is the very reason to use student achievement data to evaluate teachers. Data provides hard facts about the efficacy of instruction. Well, maybe.
The value-added method seeks to use data from standardized test scores to determine the value that a teacher adds to student learning and also, by extension, whether a teacher is effective or not. The methodology has been questioned, as has the usefulness of the conclusions reached about teacher performance. At the very least, the process can provide a rough guide for administrators to do some further investigation. This method certainly has some value, but it is not without detractors. It is essential to analyze student performance, as measured by student data, as one of a variety of components in making decisions about student learning. The key is in how the information from the analysis is used: Is it used to measure student progress, for which it was designed, or as a sole means for evaluating teachers, for which it was not?
There are other holes in the theory that if school administrators could just do better evaluations, students would end up with better teachers. Even if schools develop a tool that effectively identifies the worst teachers, using student data in some way, and removes them from the classroom, there is no guarantee that they would replace a poor teacher with a better one. For the most part, schools would use the same inefficient process that placed the ineffective teachers there in the first place to find a replacement. The two- or three-year route to teacher tenure forces districts to be right about teacher selection almost 100 percent of the time, whereas in other industries workers serve a kind of trial period or apprenticeship, and success rates of 25 to 50 percent are acceptable.
Fewer Resources, Less Support
Even the best evaluation system will still result in a fairly predictable distribution of outstanding, good, average and bad teachers, a bell curve so to speak, with a sizeable number of teachers in the average range. With current resources, principals can only really concentrate on providing intensive support for poor teachers. There simply are not enough evaluators to "evaluate" average teachers into better teachers. Advocates for value-added analysis will say that the average teachers will be motivated to do better by themselves if their value-added scores show that improvement is needed. This assumes that teachers accept the value-added analysis; that fatigue will not set in even if there is an initial positive reaction to using value-added analysis; or that teachers can improve without support. "Value-added" is a bad word for some unions at the moment. The United Teachers of Los Angeles, for example, recently sought a court order to block a pilot evaluation program that uses student data.
It will take time to reform teacher evaluations. In the meantime, districts can improve teacher quality right now. Many districts are already making great strides to improve the quality of instruction and increase student achievement through collaboration and professional inquiry that has universal "buy-in." Many districts use data from standardized test scores not to pass judgment, but instead to analyze areas of relative strength and weakness in student learning. Teachers are central to this process and embrace it because it is nonevaluative. This happens parallel to, and usually independent of, the traditional evaluation process. In a truly collaborative culture, with time dedicated in the school week for this kind of teamwork, data becomes enlightening, not threatening. The focus is on analysis and improvement, not judgment, ranking, punishment or embarrassment. With effective school leadership, all teachers can get on board with this process and improve along with the collective.
Technology Can Help
Technology plays a crucial role. Teachers are pushed for time to review their practice, and technology can be a crucial time saver. Data management systems like DataDirector by Riverside Publishing or Acuity from CTB/McGraw Hill make it easy for teachers to collect and disaggregate data for student cohorts and individual students. These systems can combine data from standardized and teacher-made tests to provide a comprehensive picture of student achievement. Easy-to-produce reports give teachers valuable feedback about students' areas of relative strength and weakness. With planning, teacher teams can then analyze the causal relationship between their practice and student performance.
New developments in video playback technology make it easier for teachers to review their practice in their own classrooms. Working teachers have long found it hard to make time to use video to analyze practice, because setting up the equipment, recording, and reviewing the footage is time-consuming. Video technology has now advanced to a point where a computer program combines footage from multiple wide-angle lenses trained on the teacher and the students. Commercially produced packages like Reflect from Teachscape provide the cameras and software that make playback and location of particular scenes or frames quick and easy. Crucially, these packages provide a framework via planning guides, rubrics and matrices by which teachers can analyze student performance in a systematic way. If the footage is used in an inquiry-based manner to analyze what students are doing in response to instruction, avoiding judgment calls on the teacher, this can be a very effective way to improve instruction. A sideways step to avoid judgment of teachers makes the use of data, in this case video evidence, acceptable to teachers.
True collaboration between administrators and teachers on the common goal of improved student achievement, grounded in analysis of student data and student work, should take place parallel to teacher evaluation and holds more promise for success.
Eamonn O'Donovan is a former principal and an assistant superintendent of human resources in southern California.