Under the Every Student Succeeds Act, state education leaders have a lot of flexibility to design and test new interventions for struggling schools.
By Carrie Conaway
Now that policy makers across the country have begun to implement their Every Student Succeeds Act (ESSA) state plans, they must decide how best to meet the law’s requirements for the use of evidence-based educational practices. To date, most have focused on one part of the law in particular: The lowest-performing schools must implement improvement strategies that have already been shown to be effective — that is, with strong (Tier 1), moderate (Tier 2), or promising (Tier 3) evidence that they “work.”
But if state leaders aim to build stronger connections among research, policy, and practice, the best opportunity to do so comes from the parts of the law that allow states and districts to invest in Tier 4 strategies: programs and practices that are informed by research and seem reasonably likely to succeed, but don’t yet have the kind of evidence that would place them in Tiers 1 to 3. For example, such strategies might include new models for teacher-led professional development, reducing exclusionary discipline, or drug and violence prevention. They may not be listed in the U.S. Department of Education’s What Works Clearinghouse, but when implemented, evaluated, and refined, they may very well prove to be effective.
At the Massachusetts Department of Elementary and Secondary Education, we’ve had great success using a Tier 4-like approach to identify interventions, study their implementation, measure their effects, and modify them to fit our local context.
In 2010, then-Massachusetts Gov. Deval Patrick signed the Achievement Gap Act, which significantly expanded the state’s authority to intervene in low-performing schools. But while my agency had a responsibility to make immediate improvements, it wasn’t clear from prior research what interventions would be effective in these schools, given their differing contexts and needs. So we decided to work with local stakeholders to design and implement our own turnaround model — focusing on strong school leadership and professional collaboration, improved classroom instruction, individualized student support, and positive school climate — while also investing in research that would allow us to improve that model over time.
We commissioned an impact study (LiCalsi & Píriz, 2016) rigorous enough to qualify for Tiers 1 to 3, along with qualitative studies (Stein et al., 2016) designed to identify specific practices that appeared to contribute to the program’s success. Thanks to this iterative process of providing support, studying its impact, and making improvements, our turnaround model has become strong enough that more than half of our lowest-performing schools have exited turnaround status. Not only has this process allowed our state agency to get better at policy and program development but, over time, we’ve been able to gather enough evidence of the effectiveness of our turnaround model that it now meets the ESSA evidence requirement for implementation in our lowest-performing schools.
The problem with “doing what works”
Too often, the rhetoric around evidence-based practice starts from the simplistic premise that K-12 education will improve if only practitioners can be made to “do what works,” as though they were implementation automatons who should do as experts say, rather than seeking their own solutions to the problems they care about.
This flies in the face of scholarly investigation into how research is actually used in practice. This literature shows that research findings tend to be used as conceptual tools — that is, they influence how people think about or frame a problem — rather than being used instrumentally, directly influencing a particular decision. And for good reason: In local contexts, prior research often has little to say about what effective practice might be. For example, while we’ve gathered strong evidence that our turnaround strategy has been effective in Massachusetts, it may not be effective in a state with a very different governance structure or one where the lowest-performing schools tend to be in rural rather than urban areas.
While our practices might not work elsewhere, though, other states may find that our efforts can inform theirs, suggesting broad strategies that they can adapt to their specific contexts. To the “do what works” crowd, that wouldn’t count as evidence-based practice — but it is, in fact, deeply rooted in a thoughtful application of evidence to local conditions. If the goal of evidence-based policy is for states and districts to improve their work, then Tier 4 offers a way for them to get better at getting better.
LiCalsi, C. & Píriz, D.G. (2016, September). Evaluation of level 4 school turnaround efforts in Massachusetts. Part 2: Impact of school redesign grants. Washington, DC: American Institutes for Research.
Stein, L.B., Therriault, S.B., Kistner, A.M., Auchstetter, A., & Melchior, K. (2016, September). Evaluation of level 4 school turnaround efforts in Massachusetts. Part 1: Implementation study. Washington, DC: American Institutes for Research.
NOTE: The views expressed in this essay are the author’s own and do not necessarily reflect the official policy or position of the Massachusetts Department of Elementary and Secondary Education.
CARRIE CONAWAY (firstname.lastname@example.org) is chief strategy and research officer at the Massachusetts Department of Elementary and Secondary Education, in Malden, Mass.
Originally published in May 2018 Phi Delta Kappan 99 (8), 80. © 2018 Phi Delta Kappa International. All rights reserved.