EdD graduate receives AERA outstanding writing award

By Jennifer Priest Mitchell

Ed Sloat (EdD, '15) was awarded the American Educational Research Association 2016 Division H Outstanding Publication Award in Category 4, Outstanding Dissertation, for his work, "Examining the Validity of a State Policy-Directed Framework for Evaluating Teacher Instructional Quality: Informing Policy, Impacting Practice."

"I was honored to receive this award and I'm excited that people are paying more attention to the issue of how we measure teacher quality," Sloat said.

A 2011 change in Arizona law led to the state's current teacher-evaluation requirements and prompted Sloat's research on the topic. He said the law (Senate Bill 1040) mandated a framework for evaluating classroom teachers across all school districts in the state. The legislation was, in part, a reaction to the federal government's announcement of a competitive education grant program. It also, said Sloat, reflected a national trend of imposing policy-directed teacher accountability systems.

When the law passed, Sloat was a doctoral student in the Leadership and Innovation Program at Mary Lou Fulton Teachers College. The program emphasizes conducting scholarly research to address applied problems of practice in organizational settings. He was also director of research in a large public school district and was charged with developing a measurement system that would meet the needs of the local district as well as comply with the new law. Sloat combined his work and his research to address whether the prescribed policy framework that districts were required to implement permitted making high-stakes judgments about teacher competency.

"This new law was a big change for our state," said Sloat, "breaking a long tradition of local oversight and community decision-making with regard to evaluating teachers. It also transformed what was traditionally seen as a capacity-building and professional growth activity into a high-stakes, consequential accountability system. The evaluation results are now placed in each teacher's permanent professional file and are accessible to future employers. In addition, state rules require teacher performance pay systems to be partially based on evaluations. Because of this, validating Arizona's new framework becomes critical at both the state and local implementation levels."

Though the law mandates a framework for assessing teacher competency, it does not specify the methods or metrics to use. Each district was required to independently develop its own analytic approaches that would assign its teachers one of four possible competency labels: ineffective, developing, effective or highly effective. Teachers designated "ineffective" must be placed on formal improvement plans, while teachers remaining in the "developing" category for two or more consecutive years also face sanctions, according to Sloat.

He conducted the research with a team of his peers (principals, teachers, district administrators), and reviewed published literature on measurement, evaluation systems, statistical modeling and professional development. "We used a number of sophisticated, multi-level models to measure the student-growth dimension," he said, "and adopted the Danielson Framework for Teaching, a nationally recognized set of instructional standards, for classroom observation."

The committee began discussions in 2010–2011, when the legislation was initially being considered. "The team was committed to having ongoing two-way communication with stakeholders," Sloat said, noting that they hosted many meetings, discussion groups, presentations, feedback surveys and interviews. He is quick to add that his district had an established research department with specialized staff to support the process and help inform decision makers, a benefit many school districts throughout the state lacked. He suggests this raises questions about the quality and validity of evaluation results generated across districts.

Sloat's research used data from the 2012–2013 academic year, and his findings suggest that the evaluation framework is insufficient for supporting high-stakes, consequential inferences about teacher instructional quality. The work highlights the value that information brokers provide to both design and decision-making processes, and the importance of having access to unbiased information when undertaking organizational change.

"What's omitted or underrepresented by the new evaluation framework is perhaps one of the most important findings of the study," Sloat said. "The impact a great teacher has on a student's personal relationship with learning is not adequately incorporated or valued by the framework. This injects bias into the judgments being made from the available measures. Stakeholders almost uniformly expressed this idea of transformational impact as one of the most critical attributes distinguishing a good teacher from a great teacher. This isn't being measured by test scores and isn't fully developed under standardized pedagogical competencies."