The Richard Henry Dana Elementary School in Dana Point, Calif., houses 385 students on a bluff overlooking the Pacific Ocean. The demographics of the school would suggest a challenging scenario for raising student achievement: 78 percent of students participate in the free or reduced-price lunch program, and 59 percent of the students are English Language Learners. Yet by any measure, the students at this school demonstrate admirable growth in all areas assessed on California standardized tests. The overall Academic Performance Index (API) for the school has grown from 704 to 806 in two years (800 is the benchmark of excellence for the state). The data is even more impressive when disaggregated for subgroups of students: Hispanic/Latino students showed a 39-point gain, students from the disadvantaged socioeconomic subgroup gained 33 points, and the API for English Language Learners jumped 43 points.
Principal Chris Weber attributes this improvement to the use of assessment data to target instruction and intervention for all students at the school, particularly those below grade level in math and reading. California standardized test data is the end point, used to set goals and targets; it is one year-end measure of student performance, a summative assessment of learning. Weber and his staff, however, also use assessment data to guide instruction during the learning process. His teachers measure learning as it happens, guiding instruction in real time. This progress monitoring is conducted with a variety of formal and informal reading inventories, benchmark or periodic assessments, and teacher-made tests.
Learning from DIBELS
For example, primary grade teachers use Dynamic Indicators of Basic Early Literacy Skills (DIBELS), an assessment system used to measure student progress in the big ideas of early literacy development. It consists of short, one-minute fluency measures designed to regularly monitor the development of pre-reading and early reading skills. These quick assessments provide benchmark measures at the beginning, middle and end of the school year.
Weber asserts that the best way to evaluate students' reading ability is to listen to them read. His teachers use the Qualitative Reading Inventory, 4th Edition (QRI-4), to informally assess each student's reading fluency and comprehension. The QRI-4 is administered between the DIBELS assessments to get a running record of student reading. A data team analyzes the results from each assessment to design reading interventions for each student. Because the assessments are frequent, instruction is modified in a timely manner.
This regular cycle of assessment and adjustment of instruction was a big change for Weber's teachers. To get the ball rolling, he started doing the progress monitoring himself. He would go into classrooms, conduct the inventories, put the data into a spreadsheet, match the scores with content standards, and place students into intervention activities. He structured the daily assignment of reading intervention staff so that students could rotate into activities during scheduled reading groups. As teachers saw the growth in student learning from targeted instruction, they took over the process themselves. More importantly, students saw their progress and were motivated to read more! The system was not perfect at first, but Weber and his staff learned a lot from assessing performance and adjusting learning for students. As the school leader, he rolled up his sleeves and jumped in.
Somewhat counterintuitively, more assessment promoted higher levels of student learning. At R. H. Dana, the administration of assessments is not seen as an imposition on class time; rather, it is viewed as an essential means of obtaining feedback to guide learning. Data teams perform a checkup on student learning, not a postmortem.
R. H. Dana teachers make an important distinction between assessment for learning and assessment of learning. This is in line with the work of Rick Stiggins, Robert Marzano and others, who advocate the use of formal and informal assessment during the learning process to guide instruction and student behavior toward content and skill mastery. This formative assessment guides learning as it happens. There is also a place for summative assessment, of course, as an end-point measurement, but data from this type of assessment is often used to guide instruction for the following year's students. Formative assessment is frequent and provides regular feedback to current students and their teachers.
Weber and his team have developed excellent assessment practices:
1. Assessment is linked to California's content and performance standards, which set the basic competency level expected of students.
2. Initial assessment determines the current level of each student.
3. The gap between expected and current performance is identified.
4. Students are placed in intervention activities based on their current performance levels.
5. Frequent measurement using short, one-to-one inventories guides the movement of students into appropriate learning activities based on their current performance.
6. Students receive frequent and meaningful feedback on their performance.
7. Teams of teachers meet regularly to review data and make instructional decisions for learning units and for individual students.
8. End-of-year standardized tests provide feedback about how the school's students are doing overall.
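For readers who track this kind of data themselves, the core of steps 2 through 5 can be sketched as a small program. This is a minimal illustration only; the student names, scores and placement thresholds below are invented for the example, not drawn from R. H. Dana's actual data system.

```python
# Illustrative sketch: quantify each student's gap between expected and
# current performance, then place the student in an activity tier.
# All names and thresholds are hypothetical assumptions.

from dataclasses import dataclass


@dataclass
class StudentRecord:
    name: str
    current_score: int   # e.g., words read correctly per minute on a probe
    expected_score: int  # grade-level benchmark for this point in the year


def performance_gap(record: StudentRecord) -> int:
    """Gap between expected and current performance (positive = below benchmark)."""
    return record.expected_score - record.current_score


def assign_intervention(record: StudentRecord) -> str:
    """Place a student into an activity based on the size of the gap."""
    gap = performance_gap(record)
    if gap <= 0:
        return "core instruction"        # at or above benchmark
    elif gap <= 10:
        return "small-group practice"    # modest gap: targeted support
    else:
        return "intensive intervention"  # large gap: frequent one-to-one support


students = [
    StudentRecord("A", current_score=52, expected_score=50),
    StudentRecord("B", current_score=44, expected_score=50),
    StudentRecord("C", current_score=30, expected_score=50),
]

for s in students:
    print(s.name, performance_gap(s), assign_intervention(s))
```

Because the inventories are short and frequent, rerunning a tally like this after each round of progress monitoring is what lets students rotate between tiers as their scores change.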
Progress monitoring is the essential link to help teachers determine the extent to which students are closing the gap between expected levels of performance and actual performance. If teachers can see and feel this gap in a quantifiable way, they are more likely to seek ways to close the gap for their students.
Eamonn O'Donovan is assistant superintendent of special education services in Capistrano Unified School District in California.