Making Assessments Work

Your district just overhauled its assessments. Are you sure these improvements are reaching your students?

There's lots of buzz about formative assessment these days; some analysts say this latest byproduct of No Child Left Behind comprises the fastest-growing segment of the education testing market. It's not difficult to understand, given how fundamentally unhelpful most summative tests are when it comes to informing instruction and measuring growth.

"The reason the topic of formative assessment has become so popular in the last year or so is people have realized that testing once a year isn't taking them where they need to go," says Rick Stiggins, founder of The Assessment Training Institute in Portland, Ore.

But there's no accepted definition of what constitutes formative assessment. In theory, it's the use of periodic assessments of student mastery to modify instruction. In the classroom it can mean anything from weekly teacher-created quizzes to common assessments across a district to shortened forms of the state's summative test given in the months leading up to the high-stakes exam.

And behind the catchphrase is a slew of formative testing products that promise to improve summative results, along with a growing pool of case studies that bear this out. But there is also a growing minority of educators who argue that the current conception of formative assessment in the U.S. is merely an "early-warning summative system."

"This approach doesn't prevent failure, but gives you early warning which kids are going to fail," says Dylan Wiliam, the author of much of the leading research on the effectiveness of formative testing, and currently director of the learning and teaching research center at ETS.

Out of Alignment

The high stakes of failing to make adequate yearly progress mean two things: educators can't wait until the end of the year to gauge how they're doing, and they have to be sure that what they're teaching day-to-day equates to what will be required of their students on the state's summative test. Integral to this is the reality that "teachers still struggle to match up what they're doing in the classroom to what's being evaluated on state tests," says Hardin Daniel, vice president of sales and marketing at ThinkLink Learning. His company sells computer-based predictive versions of state summative tests that are given three times each year.

Wyoming approached this fundamental problem with a radical solution: it's throwing out its entire high-stakes testing system and designing a new one that will map closely to what's going on in classrooms. Wyoming will also become one of the first states to offer online formative testing tools to all its districts for free.

"We're trying to align all of our summative tests to state standards," says Cheryl Schroeder, coordinator of standards and assessment at the Wyoming Dept. of Education. "Teachers will finally know what type of material is available to be tested from. This takes the guessing out of it."

The other major advantage to schools, or districts, using these common formative assessments is the potential to reduce the subjectivity that is inherent in classroom assessments. "You can walk into any school system, large or small, and ask to see samples of work that's proficient from five different fourth-grade classrooms. You'll get five radically different qualities of work," says Doug Reeves, chairman of the Center for Performance Assessment. "The only antidote to that is common assessments. Standards, common curriculum, MAPs, are all great ideas, but impotent unless you have common assessments."

The Value of Prediction

Students at Elmore Park Middle School outside of Memphis began using ThinkLink's Predictive Assessment Series in December 2002. The 35- to 45-minute tests, which closely mirror the content tested on Tennessee's TCAP tests, immediately rate a child's performance as red, indicating serious deficits; yellow, indicating that progress is being made; or green, signaling mastery. Students take the ThinkLink tests three times each year: in the fall the tests measure content from the previous year; mid-year they test content that will be on that year's summative test; and about six weeks before the year-end summative test they predict whether the child will reach proficiency.

ThinkLink compares this periodic predictive testing to the painting technique pointillism.

"If you get real close to the painting you can see the individual brushstrokes," says ThinkLink's Daniel. "Every once in a while the teacher needs to back up and get that overall view of, 'How are we doing according to what the state is measuring?' "

Elmore Park Vice Principal John McDonald says the school chose ThinkLink because it needed to improve its value-added, or annual academic gain, scores.

"ThinkLink Learning was a way to identify and address students who were not making at least a year's academic growth in math, reading or English," McDonald says.

The tactic seems to be working. On the state report card Elmore Park raised its grade for value added in math from an 'F' to a 'B' in two years, and raised its value-added grade in reading from 'C' to 'A' in one year.

"We think [the extra testing] has been a significant reason for this improvement," McDonald says. "Not only has it allowed us to identify and remediate student deficits, but it has helped teachers identify gaps or deficits in their teaching."

Administrator buzz about the results has been so positive that what began as an initiative in Title I schools has led to all but three of the 41 elementary and middle schools in Shelby County purchasing ThinkLink from their individual school's budget, says Karen Woodard, testing supervisor for the district.

Every 10 Seconds

The results in Elmore Park are a persuasive argument in favor of using a series of formative tests throughout the year that mimic high-stakes tests. But the contrarians in the field argue that too often the results of this kind of testing aren't used to modify instruction on a day-to-day basis.

"What I mean by formative assessment is not assessment that takes place every five to six weeks, but assessment that takes place every 10 seconds," says Wiliam. He and Stiggins of the Assessment Training Institute argue that teaching teachers how to do more effective assessments on a daily basis is the real key to improving learning, and eventually test scores. "We acknowledge there's a place for these 10-week reviews, but in most instances I regard that as too late."

Here's an example Wiliam gives of how a teacher can improve classroom assessment: Imagine a teacher asks a class of 20 children a question hoping to gauge their understanding of a concept just covered. Six raise their hands to volunteer an answer; the teacher calls on one. Polling one child who volunteers an answer is not an accurate way to assess learning, he argues. In research ETS is conducting in New Jersey, Pennsylvania, Delaware and Maryland, each student is given a dry-erase board and a marker. When a teacher asks a question, each student must write an answer and then show the board.

"With whiteboards, you can't hide, everybody has to respond," Wiliam says. "Then the teacher has a very quick take on whether the class has understood something."

The idea can be adapted to use colored cards or other simple response mechanisms, and it also requires deep student participation in the assessment discussion. Stiggins calls this assessment for learning, as opposed to assessment of learning, and he argues that most teachers, and principals, don't know how to do it.

"The vast majority of teachers have never been given the opportunity to learn about sound assessment practices," Stiggins says. "And assessment training has been nonexistent in leadership training programs."

That may be changing somewhat. Both Ohio and Illinois have launched programs dedicated to professional development for assessment for learning, Stiggins says.

The bottom line on formative assessment seems to be that the problems driving the demand for formative testing are far too complex to be solved by a single product, or even a single assessment methodology.

"The whole point is not to have a 'Gotcha!' where people are surprised and embarrassed [by the results of summative tests]," Reeves says. "We need a seamless and morally fair link between assessment and what's happening in the classroom."

Rebecca Sausner is a contributing editor.
