ESSA and what it means for district administrators
The Every Student Succeeds Act raises the bar for what qualifies as an evidence-based educational activity. Below is a discussion of what that means, with practical guidance for district leaders.
Talk about “evidence-based interventions” under the Every Student Succeeds Act.
One of the significant differences between No Child Left Behind and the Every Student Succeeds Act is the “evidence-based” requirement for programs. Under ESSA, “evidence-based” is a critical factor in how state and local education agencies use federal funding to purchase interventions for students. ESSA also introduces levels of evidence (with higher levels considered “stronger” evidence). However, the levels speak more to the research methodology than to the quality of the evidence. For example, it is possible to conduct what is methodologically a randomized controlled study (ESSA Level 1) that is still not highly relevant or of high quality.
What are the implications for each level?
The federal government recommends that any intervention at least be informed by prior research and a theory of action in the field. For example, in the teaching of reading, there’s plenty of research evidence that a phonics-based approach to reading instruction is effective. If I use a theory of action and established practices in the development of my program, that doesn’t prove that my program works better than any other program. It just means I used these informed methods to develop it. That would be Level 4. Where it gets a little more interesting is establishing “correlational evidence with controls,” or Level 3. Level 3 is not as controlled as Level 2 or 1, but it’s considered “promising” and opens up access to funding sources that lower levels of evidence do not qualify for. For example, Title I schools choosing an intervention must have Level 3 evidence or above.
How should district administrators define successful research to meet ESSA requirements?
Show the evidence. If a program provider claims to have research evidence, you want to see it. Ideally, it should be published. Then dig into the evidence itself. That’s essential. Some programs have evidence that is more than 10 years old, and in education, a lot has changed in that time: many states have been through two rounds of evolving standards in the past decade. Also, ask yourself whether the evidence is relevant to your students. For example, if a school has a high population of English language learners, you want to make sure that the research included a similar population and that the programs you’re selecting work for that population.
Should district administrators be swayed by statistical significance in any research if it doesn’t have practical relevance as well?
In education research, there is a lot of focus on statistical significance. When evaluating the evidence for a program, you also want to know how much practical relevance it has. If you’re interested in getting students reading, you do not want an intervention that had a statistically significant impact but ultimately amounted to a minimal change in students’ reading performance. You might instead want to see that a reading program leads to a meaningfully higher number of students hitting reading proficiency on state tests.
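To illustrate the distinction above, here is a minimal sketch in Python (the study figures are entirely hypothetical, and the formulas are standard textbook statistics, not part of ESSA guidance): with a large enough sample, even a tiny difference in scores clears the conventional threshold for statistical significance, while the effect size shows the change is practically negligible.

```python
import math

def welch_t(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Welch's t statistic for two independent groups, from summary stats."""
    standard_error = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
    return (mean_b - mean_a) / standard_error

def cohens_d(mean_a, mean_b, sd_a, sd_b):
    """Cohen's d effect size, using a simple pooled-SD approximation."""
    pooled_sd = math.sqrt((sd_a**2 + sd_b**2) / 2)
    return (mean_b - mean_a) / pooled_sd

# Hypothetical study: 20,000 students per group, and the intervention
# group scores just 0.5 scale-score points higher (SD = 25 in both groups).
t = welch_t(500.0, 500.5, 25.0, 25.0, 20000, 20000)
d = cohens_d(500.0, 500.5, 25.0, 25.0)

print(f"t = {t:.2f}")  # above 1.96, so p < .05: "statistically significant"
print(f"d = {d:.3f}")  # about 0.02 standard deviations: practically negligible
```

The point of the sketch is that the t statistic grows with sample size while the effect size does not, which is why a large study can report a “significant” result that would move almost no students toward proficiency.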
For more information on this topic, visit curriculumassociates.com/ESSA-DA to download Curriculum Associates’ recently published white paper on how to evaluate evidence and how to look for practical significance.