Mapping America’s teacher evaluation plans under ESSA  

 

ESSA gave states more control over their teacher evaluation systems.  How have states responded? 

 

“A map provides no answers. It only suggests where to look,” says Miles Harvey, an author and map expert (2010, p. 38). Maps provide orientation and direction, but not full explanations of the landscape. At minimum, they can help us understand current phenomena, such as teacher evaluation systems after the federal government’s adoption of the Every Student Succeeds Act (ESSA).  

Let’s begin by taking a step backward, for some historical context. Eight years ago, in 2011, the federal government effectively required all states to adopt and use what they called “reformed” teacher evaluation systems. Put simply, the higher the consequences attached to the data derived through these systems, the more federal Race to the Top funds states received. Not surprisingly, given how much states rely on federal education funds, almost all states adopted such teacher evaluation systems, and their effects were consequential.  

For example, many states required that teachers be evaluated according to their students’ test scores, often using value-added models (VAMs), which compared the growth of students’ test scores with that of demographically similar students in other classrooms. If students’ test scores did not show improvement over time, their teachers’ professional files could be permanently flagged, or teachers could be denied merit pay, tenure, or continuing teaching contracts. The situation surrounding teacher evaluation got so ominous, in fact, that teachers across the country sought judicial remedies to stop the punitive use of their students’ test scores. By 2015, teacher-plaintiffs had filed at least 15 lawsuits across federal and state courts (Sawchuk, 2015), arguing, among other legal claims, that these teacher evaluation systems violated their constitutional rights, especially in states or districts that used VAMs as primary or sole factors in consequential decisions about teacher quality and compensation.  


In late 2015, the federal government adopted ESSA. Perhaps in part due to these lawsuits, as well as other protestations surrounding the uses of students’ test scores, ESSA retracted the federal government’s previous control over states’ teacher evaluation systems, permitting once again more local control (in line with the U.S. Constitution). After federal lawmakers signed ESSA into law, state departments of education had the opportunity to overhaul their teacher evaluation systems. But did they?  

Surveying the current landscape 

While ESSA enabled states to regain control over their teacher evaluation systems and, in turn, give more control to their local education authorities, the extent to which states actually revised their teacher evaluation systems has been unclear. We wondered whether states are actually breaking new ground or staying the course in terms of test scores and teacher accountability, as called for by such major organizations as the Council of Chief State School Officers (Burnette, 2015). And if states are making changes, we sought to understand the extent to which states are “all over the map,” per the title of this issue of Kappan. 

Information about teacher evaluation systems can be confusing. To map out the current landscape, we sifted through ESSA state plans and state websites, collected state information from surveys sent to state department of education personnel from all 50 states and the District of Columbia, and, for four state personnel who wanted to explain their teacher evaluation systems more carefully, conducted detailed phone calls to collect information. (For more information about our methods, see Close, Amrein-Beardsley, & Collins, 2018.) 

VAMs: Still on the map 

When ESSA was passed, some commentators opined that VAMs would be wiped from the map, while others predicted that states would not simply walk away from the evaluation systems in which they had already significantly invested. It appears neither side got things quite right. Our findings suggest that while states continue to use VAMs to hold teachers accountable for their levels of effectiveness, VAMs are losing traction among states. 

When we asked states if they still encouraged the use of VAMs, 15 of 51 (29%) indicated that they still did, and 23 of 51 (45%) indicated that they no longer did (see Figure 1). While this does demonstrate a clear decline in VAM use over time, especially since ESSA’s passage, the withdrawal has not been as rapid as some people anticipated.  

Ten states indicated that they were now encouraging more local control over teacher evaluation systems, increasingly deferring to others (e.g., districts) to make their own teacher evaluation system and policy decisions. Perhaps this is related to the lawsuits that states mandating highly consequential teacher evaluation systems tended to face in recent years (Sawchuk, 2015).  

Other states noted that they were still endorsing or using VAMs, but only for informational and formative purposes. Put differently, many states have backed off of their previously consequential uses of VAMs but have left VAMs in place so that the data can be used to provide actionable feedback, rather than to yield rewards or punishments for teachers.  

Two states noted that they were still developing their teacher evaluation systems and had yet to determine whether a VAM would be included. The bottom line is that districts can now, more than before, go their own directions, and they are beginning to do so. (For more on district roles under ESSA, see “The essence of ESSA: More control at the district level?” in this issue of Kappan.) 

Teacher observation: Across the landscape 

Teacher evaluation systems across the country continue to include teacher observations as a dominant feature. While teacher observational systems have always been a popular component of states’ teacher evaluation systems, the perception that they are subjective and potentially biased (see, for example, Weisberg et al., 2009) drove the move toward more “objective” measures, such as VAMs, in Race to the Top. Teacher observation, however, never went away; under ESSA, more than 70% of states reported to us that they are still using or encouraging teacher observations as part of their teacher evaluation systems.  

Two of the most often used observation systems are Danielson’s Framework for Teaching (Danielson, 2012; Danielson & McGreal, 2000) and the Marzano Causal Teacher Evaluation Model (Marzano & Toth, 2013). There is some overlap among states that use or encourage the use of Danielson’s framework and Marzano’s model, with eight states reporting using or encouraging teacher observations based on both of these models and/or others. For example, Alabama uses an observational framework based on a combination of the Alabama Quality Teaching Standards and the work of Danielson and Marzano. Alternatively, Alaska allows local school districts to select from several major frameworks including, but not limited to, Danielson and Marzano. Thus, teacher observations can also be described as “all over the map,” thanks to the local control now exercised by states and districts under ESSA. 

Student growth: An evolving presence 

While the legacy of VAMs as the “objective” student growth measure remains in place to some degree, the definition of student growth in policy and practice is also changing. Before ESSA, student growth in terms of policy was synonymous with students’ year-to-year changes in performance on large-scale standardized tests (i.e., VAMs). Now, more states are using student learning objectives (SLOs) as alternative or sole ways to measure growth in student learning or teachers’ impact on growth. SLOs are defined as objectives set by teachers, sometimes in conjunction with teachers’ supervisors and/or students, to measure students’ growth. While SLOs can include one or more traditional assessments (e.g., statewide standardized tests), they can also include nontraditional assessments (e.g., district benchmarks, school-based assessments, teacher and classroom-based measures) to assess growth. Indeed, 55% (28 of 51) of states now report using or encouraging SLOs as part of their teacher evaluation systems, to some degree instead of VAMs.

In other words, student growth still plays an important role in teacher evaluation; however, how states define student growth has been substantively expanded to include more definitions and conceptions of growth. While SLOs are not nearly as well researched or established as VAMs (see, for example, Reform Support Network, 2014), they certainly seem to be trending. The Nebraska Department of Education officially encourages SLOs, even though many local schools have been slow to adopt and use them. In Nevada, teachers and their supervisors use tools to create SLO metrics they call Student Learning Goals (SLGs), but the processes for creating SLGs vary significantly from school to school, again reflecting the local control being exercised by states and districts. Perhaps a next step for state departments will be to develop best practices for creating and using SLOs or similar measures. 

“Off map” indicators 

Our maps show that some state departments have more explicitly heightened their emphases on local control, a trend that also appeared when we asked state department of education personnel to discuss the strengths and weaknesses of their teacher evaluation systems under ESSA. Because this information can be sensitive, we can only talk about it in general terms to protect the identities of our state contacts. But what we learned through these conversations was nonetheless valuable.  


In short, two-thirds of state department personnel noted that local (district-level) control was a strength of their new teacher evaluation systems. These numbers match expectations, given the explicit push toward local control written into ESSA. However, because we were also interested in why state department personnel consider local control to be a strength, we probed further and found that states appreciated increased local control because they now had more freedom to increase stakeholder input into their teacher evaluation systems. State personnel reported that improved stakeholder input was a critical turning point post ESSA, in part because it helped states cultivate more cooperative and less combative relationships among teachers and state education leaders, policy makers, and other authorities (e.g., the state governor’s office).  

Our state-level informants also cited how important the increased calls for (in)formative uses of teacher evaluation were to their efforts. Some noted that they built their new systems with a genuinely “reformed” mind-set about how to evaluate teachers well. Under ESSA, their evaluations would not just hold teachers accountable for what they do or do not do well; instead, they would be more collaborative systems that support teachers’ professional advancement and improvement of their professional practices. Instead of using tools for measurement imposed in a commanding or consequential way, states have moved toward teacher evaluation systems that are informative and also supportive of teachers’ professional growth. Fully one-third of state department personnel discussed this shift as a primary strength of their ESSA evaluation programs.  

Where we are now, and where to go next

At minimum, these findings show promising directions in states’ approaches to teacher evaluation, a road leading away from a place many states and districts likely never really wanted to go. The most promising practices seem to surround states’ increased deference to district-level decision making, the collaborations with stakeholders that this local control seems to have sparked, and the use of data in less consequential and perhaps more worthy ways. We can also see that states are turning away from VAMs and toward SLOs (which may be the next area of the map to explore as they become better researched and vetted). Finally, our “off map” findings may give us some insight into the minds of the personnel from state departments of education. The fact that many state department personnel view teacher evaluation as a formative process may ultimately prove to be the most important key to a better approach to teacher evaluation across the United States. 

Like a good map, our findings can help education stakeholders orient themselves in this post-ESSA landscape. Some states’ paths are not that well mapped out yet, but some of the geography under ESSA is beginning to become clear. Maps are just a place to start; it’s up to the education community to use these maps to determine where to go and where to look next. We hope our maps also provide some ideas for future directions for states looking to chart their own courses in this area. 

References 

Burnette, D., II, (2015, December 11). State chiefs say they will stay the course with ESSA reauthorization [Blog post]. State Ed Watch at Education Week. blogs.edweek.org/edweek/state_edwatch/2015/12/state_chiefs_say_they_will_stay_the_course_with_essa_reauthorization.html 

Close, K., Amrein-Beardsley, A., & Collins, C. (2018). State-level assessments and teacher evaluation systems after the passage of the Every Student Succeeds Act: Some steps in the right direction. Boulder, CO: National Education Policy Center.   

Danielson, C. (2012). Observing classroom practice. Educational Leadership, 70 (3), 32-37. 

Danielson, C. & McGreal, T.L. (2000). Teacher evaluation to enhance professional practice. Alexandria, VA: ASCD. 

Harvey, M. (2010). The island of lost maps: A true story of cartographic crime. New York, NY: Broadway Books. 

Marzano, R.J. & Toth, M.D. (2013). Teacher evaluation that makes a difference: A new model for teacher growth and student achievement. Alexandria, VA: ASCD. 

Reform Support Network. (2014). Targeting growth using student learning objectives as a measure of educator effectiveness. Washington, DC: U.S. Department of Education. 

Sawchuk, S. (2015, October 6). Teacher evaluation heads to the courts. Education Week.  

Weisberg, D., Sexton, S., Mulhern, J., & Keeling, D. (2009). The widget effect: Our national failure to acknowledge and act on differences in teacher effectiveness. New York, NY: The New Teacher Project. 

 

Citation: Close, K., Amrein-Beardsley, A., & Collins, C. (2019, Sept. 23). Mapping America’s teacher evaluation plans under ESSA. Phi Delta Kappan, 101 (2), 22-26.

 

KEVIN CLOSE (kclose1@asu.edu) is a doctoral student in the Learning, Literacies, and Technologies Program at Mary Lou Fulton Teachers College at Arizona State University in Tempe.
AUDREY AMREIN-BEARDSLEY (audrey.beardsley@asu.edu) is a professor in the Educational Policy and Evaluation Program at Mary Lou Fulton Teachers College, Arizona State University. She is the author of Rethinking Value-Added Models in Education: Critical Perspectives on Tests and Assessment-Based Accountability (Routledge, 2014) and coeditor of Student Growth Measures in Policy and Practice: Intended and Unintended Consequences of High-Stakes Teacher Evaluations (Palgrave, 2016).
CLARIN COLLINS (clarin.collins@asu.edu) is director of scholarly initiatives at Mary Lou Fulton Teachers College, Arizona State University, Tempe.

