With an increased emphasis on collaborative teaming in schools, the adoption of common formative assessments, or CFAs for short, has spread accordingly. CFAs allow teachers who are covering the same curricular standards to assess concept comprehension or skill acquisition close on the heels of instruction. These assessments give collaborative teams something to talk about, to plan around, and to use in guiding re-instruction.
The idea is simple enough: deliver a lesson, then administer a short quiz covering that material to identify students who need clarification or re-instruction. Since the quiz is also given in other classrooms by other teachers (hence “common”), data can be compared and the most successful instructional strategies can be shared (hence “formative”). This strategy appears to be systematic and practical. However, the students who benefit most from this practice likely don’t need much additional support to catch what they missed initially, and “systematic” is a relative term.
Let’s start by defining terms. “Common” implies that the assessment given to students taught by a group of teacher collaborators has the same items in the same order. This is a less rigorous state than “standard,” which also considers the context of the assessment: the amount of time given to complete the quiz, the availability of help or hints from the teacher or other students, the presence of noise and other distractions, the method of scoring, primary-language considerations, and even the time of day, since hunger and fatigue can play a role in results. In the end, better results on a “common” assessment may say nothing about the efficacy of that classroom’s instructional practice. Worse, it really won’t tell the teacher which students could benefit from re-instruction. Many of the students who do poorly on a quiz suffer from skill deficits that require rigorous individualized interventions, not re-instruction.
“Formative” may be one of the more misused terms in education. In this context it refers to any data that informs an instructional decision: stuff for teachers. Unfortunately, that framing excludes the one person most likely to benefit from the data: the learner. Imagine practicing an electric piano when only the teacher is wearing headphones. Imagine practicing free throws when only your coach gets to see whether the ball goes in. Of course your piano teacher can hear and your coach can see, but their feedback is merely an ear- or eyewitness account of the behaviors that need to be tuned by that information. This doesn’t just apply to motor-skill acquisition; it is equally true for any academic or social skills teachers wish students to acquire. It is important to understand that “formative” should suggest “form,” not “inform.”
Finally, the practice of using common formative assessments assumes that teachers can produce valid assessments for the material they cover. For practical reasons, teachers are not trained to be psychometricians. Yet even when CFAs are valid, standardized, and designed to inform the learner, they cannot provide trend data, because the variability of content difficulty makes comparisons from week to week impossible. Worse yet, even the best CFAs often fail to suggest what to do about the deficits they uncover.
The solution is not to abandon the regular practice of collecting and analyzing data from learners; it is to consider which data best serve the functions of maintaining successful practices and improving other instructional practices and their consequences. Our research would suggest two options: data from students describing the instructional environment created and sustained by teachers, and data from learners demonstrating the fluency of basic literacy and numeracy skills that support learning across all content and in every context. Regular collection of these data would provide collaborative teams with information to adjust practices, information students can track, and information that is comparable from assessment to assessment. These CFA strategies will also increase teachers’ confidence in themselves, their colleagues, and the collaboration process overall.
Matthew J. Taylor, PhD