My colleague recently attended a meeting of school officials from around the state discussing approaches to solving problems in schools suffering the greatest academic distress. Lots of ideas were shared, considered, and dismissed. Consensus came when a superintendent stood up and pitched a program his former district had employed. At no point during the superintendent’s presentation was valid research cited to support the program. In fact, no research of any kind was cited. There were no data from the former district to suggest that the program worked there, nor was there a theoretical argument given to suggest that the program would work anywhere. Yet, in the end, it met with universal approval around the room and was slated for adoption.
One would wish this were an isolated case, but what we see in the papers, as well as in person while consulting with educators, suggests it is not uncommon. Too many articles make their way into the popular press touting educational programs adopted by local schools despite a lack of supporting research. We recently read about a school in Baltimore that was using meditation in place of detention despite the lack of empirical support for that approach to addressing student behavior problems. By the way, we’re not big fans of detention either. The point is, these stories make it into the press because the adopters are proud of the adoption and promote it ahead of anything else. One can imagine that the adoptions that don’t make the papers are likely worse; or maybe they are actually effective but mundane. Who knows?
More frustrating is the fact that educational programs WITH supporting research are too often validated by studies with such serious methodological problems that the programs’ effectiveness is, in reality, unsubstantiated. This is not a new problem. In a meta-analysis conducted 30 years ago to evaluate the efficacy of early intervention programs (one that I participated in and conducted analyses for), fewer than 10% of the 3,000 effect sizes were calculated from studies with very minor or no threats to internal validity. More than half of the 600 studies drew conclusions from observations that could easily be explained by anything BUT the independent variable. In other words, roughly one in ten studies was great, and more than half were appalling.
Unfortunately, there is no indication that things have gotten better since then. With more university faculty pressured to publish, it is likely that the newest and least willing participants are undertrained in research methods. Fewer still have the resources to design, implement, analyze, and disseminate high-quality educational research. If this weren’t true, would the pressure to publish still be necessary? In the most recent issue of Educational Researcher (Nov 2016), Malouf and Taymans examined only those interventions promoted by the What Works Clearinghouse and found that most had little or no support from technically adequate research. Even when the support was there, the effects were too small to meet education policy goals. These are supposed to be the “go-to” interventions for schools, and even they are suspect. If it weren’t bad enough before, certainly the noise has grown to the point where the signal can barely be heard.
So how in the world can we expect schools to adopt evidence-based practices when there are so many barriers to doing so? The scenario presented at the beginning of this post is more than common; given the current circumstances, it’s expected. However, there is hope.
Let us consider the steps a school or district should take to adopt an appropriate and effective school program/practice to meet a need.
- Using data, identify a target for improvement
- Identify at least one practice to aim at that target
- Obtain personnel/training and the equipment/materials for the implementation of that practice
- Set measurable goals
- Implement the practice with fidelity
- Monitor the implementation and track progress toward the measurable goals
- Using data, re-nominate or change the practice
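To make the goal-setting and progress-monitoring steps above a little more concrete, here is a minimal sketch in Python of what tracking progress toward a measurable goal might look like. The goal, the data source, and every number are hypothetical; this illustrates the bookkeeping involved, not a prescription.

```python
# Minimal sketch: tracking weekly progress toward a measurable goal.
# The goal and all numbers below are hypothetical, for illustration only.

GOAL = 0.90  # e.g., 90% of students on time to first period by May

# Weekly on-time rates pulled from (hypothetical) attendance records
weekly_on_time_rates = {
    "week_01": 0.78,
    "week_02": 0.81,
    "week_03": 0.85,
}

for week, rate in weekly_on_time_rates.items():
    if rate >= GOAL:
        status = "at or above goal"
    else:
        status = f"{GOAL - rate:.0%} below goal"
    print(f"{week}: {rate:.0%} on time ({status})")
```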
Although each step in this cycle is important, it’s the first two that I wish to focus on for this post. The remaining items are reminders of the SMART process and should be very familiar to most school personnel.
Using data, identify a target for improvement
Notice that I didn’t suggest that schools should identify a need or problem. For most schools, the need is higher test scores, but that doesn’t really help a school understand what it CAN do to meet that need. The same is true if the need is higher graduation rates, reduced bullying, or better community support. What schools really need to do is identify their target.
A target is most often a common student (or teacher) misbehavior or skill deficit that functionally relates to a need and can be affected by the school and its personnel. Schools can’t be expected to address, for example, the low incomes of families in their neighborhood, but there are things under their control that should give them encouragement. For disappointing test scores, the target could be anything from poor reading comprehension to excess tardies. For bullying, the target could be anything from student interpersonal skill deficits to widely ignored standards for student behavior in common areas. The list for any of these can be long, but there is always something the school can address that will help its students get more from their school experience.
Of course, it is imperative that schools use data to identify their target. This means both organizing extant data and collecting additional data aimed at anticipated targets. Although it is tempting for schools to just pick a target (e.g., homework or parent involvement) because it seems important or because other schools or districts have already chosen that target, that would be a bad idea. If that target doesn’t relate to a need, or there are no data to suggest that the target is in bad shape at your school or district, then precious time and resources have been wasted. This is why state or district mandates can be both helpful and frustrating.
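To suggest what “organizing extant data” might look like in practice, here is one possible sketch that compares candidate targets against acceptable baselines and ranks them by the size of the gap. The candidate targets, baselines, and current values are all invented for illustration; a real analysis would start from the school’s own records.

```python
# Hypothetical sketch: ranking candidate targets using extant school data.
# Targets, baselines, and current values are invented for illustration.

candidate_targets = [
    # (target, current value, acceptable baseline)
    ("tardies per student per month", 3.2, 1.0),
    ("office discipline referrals per day", 8.5, 4.0),
    ("proportion of students below reading benchmark", 0.42, 0.20),
]

# Rank by relative distance from baseline: larger gaps are stronger candidates.
ranked = sorted(
    candidate_targets,
    key=lambda t: (t[1] - t[2]) / t[2],
    reverse=True,
)

for target, current, baseline in ranked:
    gap = (current - baseline) / baseline
    print(f"{target}: {gap:.0%} above the acceptable baseline")
```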
Identify at least one practice to aim at that target
This step could have suggested that the practice be evidence-based, but as this post has pointed out, that is more convoluted than it seems. Fear not: by the end of this post I intend to provide some accessible options. Despite the growing pile of useless or misleading publications, great research has been, and still is being, conducted to inform many facets of school operations.
This step could also have used “program” in place of “practice.” Although schools are more oriented to programs, I prefer practice because it implies an explicit intervention with few moving parts that is tied to a specific measurable target. The idea of a program sounds like it comes in a pretty package with lots of moving parts that you can use, discard, or rewire as your circumstances allow. When the effectiveness of any program (or practice) relies on standardized and rigorous implementation, tossing or retrofitting parts is a sure way to dilute any effect the program might have had. Keeping the intervention focused and simple gives it a much better chance to be implemented with fidelity and to succeed.
As an aside, it may be helpful to understand what schools currently face when they adopt new programs. Yes, for this rant I will use “programs.” School leaders typically have years of experience examining new programs to adopt, and that experience often tells them that most of what they adopt won’t change outcomes much or for very long. There are three predominant reasons why school leaders have been let down in the past. First, as this post has pointed out, school leaders are likely (and innocently) to adopt interventions that won’t work no matter how well they’re implemented. The research said they’d work, but the research was wrong. Second, there was no way to get sufficient support from colleagues to implement the program with the rigor required to make it work as advertised. Those colleagues either don’t share the enthusiasm or have more faith in things they are already doing. Third, circumstances at their school were too different from the research context for generalization to occur. Sure, it works for affluent kids in the suburbs, but not so much for inner-city students from backgrounds with predictable basic-skill deficits. Thus, getting school leaders to muster enthusiasm for anything new is understandably difficult.
Also, there is a common belief that things come and go. That sounds right, but it’s only half true. In education, things come along, and they even come back around, but they never really seem to go. That is, schools and districts have a difficult time un-adopting programs even when they clearly don’t help. Thus, nothing is “new”; it’s only “more.” In this context, school leaders can only be enthusiastic about adoptions that don’t add too many new things for their colleagues to do, even if there are mountains of valid data to suggest they work. More to the point, new programs will be adopted if they seem very similar to things already being done at the school. Thus, leaders can claim to be doing the latest thing while avoiding change altogether. This is not the result of mismanagement; it’s actually wise given the circumstances. If experience doesn’t give much hope, why upset everyone for nothing?
Above, I promised to provide a guide to identifying practices, and although my advice sounds self-serving, I believe that the Conditions for Learning are valid evidence-based principles that give adopters a sound perspective for evaluating both new practices and current practices that might need retirement. To organize my advice, I refer to practices as interventions, something practices (and programs) “graduate” to when systematically administered by more than one person across classrooms, schools, or districts.
Below are four questions that can be asked when evaluating any intervention being considered or any program already being conducted in the school. Although it would be nice if the intervention were designed to build every condition, it will have value if it builds even one.
- Does the intervention clarify expectations for performance? Remember that we defined “target” as a behavior or skill; something that can be performed and, in turn, observed. This applies to all behaviors and skills: academic, social, motor, even the skills teachers need to engineer effective instruction. Every target requires clear expectations for performance to succeed.
- Does the intervention generate numerous opportunities to practice basic skills under the watchful eye of someone fluent in those skills? Motor skills are not the only skills that benefit from practice. Lots of reading makes for better readers, and this principle holds true for solving math problems, making friends, following instructions, praising students, and much, much more. Plus, having a fluent observer allows for the timely recognition of correct performance, or of steps toward correct performance.
- Does the intervention ensure recognition for meeting performance expectations? Nothing is more effective than catching someone doing something right. Unfortunately, and all too often, school professionals assume that the recognition systems schools currently employ are sufficient, but our experience suggests that is far from true. Sadly, very few schools have data to assess and monitor positive behavior support efforts, and when they do, they are initially disappointed with what they find. However, those schools and teachers that choose persistent measurement here experience quick improvement and dramatic effects (a brief sketch after this list shows what such measurement could look like).
- Does the intervention build relationships and create trust? Relationships between teacher and student based on trust and esteem make everything possible. But this is true for relationships between students, between teachers, between school staff and the principal, between the school and parents, and on and on. Trust is the most efficient fuel for communication and learning.
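Picking up the measurement point from the recognition question above: one commonly cited heuristic in positive behavior support work is the ratio of praise to corrective statements, often targeted at 4:1 or better. The sketch below shows what persistently measuring that ratio could look like; the teachers, the counts, and the specific threshold are assumptions for illustration, not a standard.

```python
# Hypothetical sketch: monitoring praise-to-correction ratios from
# classroom observations. All counts and the 4:1 target are illustrative.

TARGET_RATIO = 4.0  # praise statements per corrective statement (assumed)

observations = [
    # (teacher, praise count, correction count) -- invented data
    ("Teacher A", 12, 9),
    ("Teacher B", 30, 6),
    ("Teacher C", 8, 10),
]

for teacher, praise, corrections in observations:
    ratio = praise / corrections if corrections else float("inf")
    flag = "meets target" if ratio >= TARGET_RATIO else "below target"
    print(f"{teacher}: {ratio:.1f}:1 praise-to-correction ({flag})")
```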
Matthew J. Taylor, PhD