Chapter 3

THE IMPACT OF FEEDBACK ON ACHIEVEMENT

Although grading policies can be the subject of deeply held opinions, debates about grading are more constructive if we first agree on two important premises. First, we should be willing to agree that grading is a form of feedback. Second, we should be willing to agree that feedback is a very powerful instructional technique—some would say the most powerful—when it comes to influencing student achievement.

Evaluating the Evidence

Let’s look at the evidence. John Hattie’s (2009) synthesis of more than eight hundred meta-analyses evaluates the relative impact of many factors, including family structure, curriculum, teaching practices, and feedback, on student achievement. The measurement that Hattie uses is effect size, or, simply put, the effectiveness of particular interventions. An effect size of 0.4, according to Hattie, corresponds to about one year of learning. Therefore, any instructional or leadership initiative must at least pass this threshold. Many factors are statistically significant, as the following list will show, but statistical significance and practical significance are two different things. Because of the overwhelming burdens on the time and resources of every school (Reeves, 2011a), it makes little sense to invest in initiatives that fail to cross the 0.4 level in effect size. An effect size of 1.0, Hattie suggests, would be blatantly obvious, like the clearly observable difference between two people who are 5 feet 3 inches (160 cm) and 6 feet (183 cm) tall.
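
For readers who want the arithmetic behind such comparisons, here is a minimal sketch of the standard effect-size calculation (Cohen’s d, the standardized mean difference most of the meta-analyses Hattie synthesizes rely on; the labels for the groups and the pooled standard deviation are generic notation, not Hattie’s own):

\[
d \;=\; \frac{\bar{X}_{\text{intervention}} - \bar{X}_{\text{comparison}}}{SD_{\text{pooled}}}
\]

In words, the difference between the two groups’ average scores is expressed in pooled standard deviation units, so d = 0.4 means the groups differ by four-tenths of a standard deviation (Hattie’s threshold of roughly one year of learning), and d = 1.0 means they differ by a full standard deviation.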

Even small effect sizes can be meaningful, particularly if they are devoted to initiatives that save lives. For example, Robert Rosenthal and M. Robin DiMatteo (2001) demonstrate that the effect size of taking a low dose of aspirin in preventing a heart attack is 0.07—a small fraction of a standard deviation—yet this translates into the result that thirty-four out of every one thousand people would be saved from a heart attack by using a low dose of aspirin on a regular basis.
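
How a 0.07 effect size becomes thirty-four lives per thousand deserves a brief reconstruction. A rough back-of-the-envelope version, using the binomial effect size display that Rosenthal championed (the arithmetic below is my own approximation, not a calculation quoted from the study), runs as follows:

\[
r \;\approx\; \frac{d}{\sqrt{d^{2} + 4}} \;=\; \frac{0.07}{\sqrt{0.07^{2} + 4}} \;\approx\; 0.035,
\qquad
\text{success rates} \;\approx\; 0.50 \pm \frac{r}{2}.
\]

The two groups’ rates of avoiding a heart attack therefore differ by about r, or roughly 35 per 1,000, which is essentially the thirty-four-per-thousand figure cited above.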

The use of a common effect-size statistic helps busy teachers and school administrators evaluate alternative strategies and their impact on achievement compared to variables outside teachers’ and students’ control. For example, Hattie’s findings include the influence of the following factors on student achievement (Hattie, 2009).

• Preterm birth weight (0.54)

• Illness (0.23)

• Diet (0.12)

• Drug use (0.33)

• Exercise (0.28)

• Socioeconomic status (0.57)

• Family structure (0.17)

• Home environment (0.57)

• Parental involvement (0.51)

Most teachers would view these factors as outside of their control, although some would certainly argue that schools can do a better job of influencing diet, drug use, exercise, and parental involvement. During the eighteen hours every day that students are not in school, students and families make many decisions that influence learning in significant ways. But how important are these decisions compared to the variables that teachers and school administrators can control?

The Importance of Feedback

The effectiveness of any recommendation regarding teaching and education leadership depends on the extent to which the professional practices of educators and school leaders have a greater impact on students than factors that are beyond their control. The essential question is, Will this idea have a sufficient impact in helping students overcome any negative influences they face outside of school?

Fortunately, Hattie (2009) answers that question with a resounding affirmative response. He finds a number of teaching and leadership practices that, measured in the synthesis of meta-analyses, are more powerful than personality, home, and demographic factors when considering their impact on student achievement. Examples include teacher-student relationships (0.72), professional development (0.62), teacher clarity (0.75), vocabulary programs (0.67), creativity programs (0.65), and feedback (0.73).

Certainly, Hattie is not the first scholar to recognize the importance of feedback on student achievement. His findings are completely consistent with Robert Marzano’s (2007, 2010) conclusions that accurate, specific, and timely feedback is linked to student learning. Thanks to Hattie’s research, however, we can now be more precise than ever about how important it is. We can say that, based on the preponderance of evidence from multiple studies in many cultural settings, feedback is not only more important than most other instructional interventions but is also more important than socioeconomic status, drug use, nutrition, exercise, anxiety, family structure, and a host of other factors that many people claim are overwhelming. Indeed, when it comes to evaluating the relative impact of what teachers and education leaders do, the combined use of formative evaluation and feedback is the most powerful combination that we have. If we understand that a grade is not just an evaluation process but also one of the most important forms of feedback that students can receive, Hattie’s conclusion should elevate the improvement of grading policies to a top priority in every school.

Hattie (2009) also encourages a broadly based view of feedback, including feedback not only from teachers to students but also from teachers to their colleagues. We should recall that, as a fundamental ethical principle, no student in a school should be more accountable than the adults, and thus our feedback systems must be as appropriate for teachers and leaders as they are for students. Similarly, our standards for administrators, board members, and policymakers must be at least as rigorous as those we create for fourth graders. If that statement seems astonishing, then I invite you to obtain a copy of the fourth-grade academic standards for your area and lay beside them the standards that are officially endorsed for policymakers, such as legislators, members of parliament, members of Congress, or other educational authorities. You can then decide which standards are more demanding.

The Evidence–Decision Gap

It is therefore mystifying that a strategy with so great an impact on student achievement as feedback remains so controversial and so inconsistently applied. It is as if there were evidence that a common consumer practice created an environmental disaster, but people ignored it and persisted in the destructive practice. Of course, that is hardly a hypothetical example, as our national habits—such as persistent use of bottled water, dependence on gas-guzzling cars, and appetite for junk food—illustrate. Rather than embrace the evidence and use filtered tap water, take public transportation, and eat fresh vegetables, we often choose the convenient alternatives that are less healthy for our families and the planet.

In sum, our greatest challenge is how to transform what we know into action. Indifference to research, though also present in medicine, business, and many other fields (Pfeffer & Sutton, 2006a), is particularly striking in education. An alarming example is the persistent use of retention and corporal punishment. In both cases, decades of evidence suggest that these “treatments” are inversely related to student learning. Retention does not encourage work ethic and student responsibility but only creates older, frustrated, and less successful students (Hattie, 2009). Corporal punishment does not improve behavior but legitimizes violence and increases bullying and student misbehavior (Committee on School Health, 2000). Nevertheless, politicians from all parties have excoriated social promotion and urged retention in a display of belligerent indifference to the evidence. More disturbingly, nineteen states and many other nations continue to permit corporal punishment decades after the evidence established that it is counterproductive (Strauss, 2014).

Equipped with a rich literature on the theory and practice of change, educators and school leaders should be fully capable of acknowledging error, evaluating alternatives, testing alternative hypotheses, and drawing conclusions that lead to better results. Instead, decision-making processes are guided by personal convictions that are not only antiquated but perhaps even dangerous. We can be indignant about nineteenth-century physicians who were unwilling to wash their hands, but when the subject turns to education policies, we sometimes elevate prejudice over evidence.

Before we consider what quality feedback is, let us be clear about what feedback is not. Feedback is not testing.

Distinguishing Feedback From Testing

Consider two classrooms, both burdened by large class sizes and students with a wide range of background knowledge and skill levels. The role of the teacher in the first class is to deliver what, as a matter of school-system policy, has been described as a “guaranteed curriculum.” Administrators know that the curriculum is delivered because teachers list the instructional objectives on the board and post the details of the lesson plan supporting those objectives next to the door, where visiting leaders can easily inspect them. In this class, the most important feedback that students and teachers receive is on the annual test administered every spring. This feedback is very detailed, as it determines the success and failure of not only individual students but also the entire school, perhaps the entire school system. Moreover, external companies have established elaborate statistical formulas that give feedback to individual teachers, measuring the degree to which each teacher is adding value to each student.

When comparing students over three years, these analyses conclude that teachers whose students show gains in test scores have added value to their students, whereas teachers whose students do not make such gains have failed to add value. So ingrained is this sort of analysis that in the United States, one of the conditions states must meet in order to be competitive for federal funds is a commitment to link teacher evaluation to annual measures of student performance.
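
To make concrete what such value-added formulas typically compute, consider a deliberately simplified sketch (a generic gain-score model of my own devising, not any vendor’s or state’s actual formula): predict each student’s end-of-year score from the prior year’s score, and credit the teacher with the average amount by which her students beat that prediction.

\[
\hat{y}_{i,t} \;=\; \beta_{0} + \beta_{1}\, y_{i,t-1},
\qquad
VA_{j} \;=\; \frac{1}{n_{j}} \sum_{i \in \text{class } j} \left( y_{i,t} - \hat{y}_{i,t} \right)
\]

Operational models add many more controls and use several years of data, but the logic is the same: average gains above prediction count as value added, and average gains below it count as a failure to add value.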

There is no question that annual tests are important, if by important we mean that decisions involving the lives of students, teachers, and school administrators, along with billions of taxpayer dollars, are influenced by those tests. Ask the teacher and students in the first class how they know when they are succeeding, and the answer is, almost uniformly, “We’ll know when we get our state test results back.” However, the question at hand is whether these test results really provide feedback.

The second class is no less rigorous than the first. Indeed, it can be argued that this class is more rigorous. The teacher provides informal feedback to students every day, and each week students update their learning logs to identify where they are with respect to their learning targets and next steps for moving forward. Students, along with the teacher, are continuously assessing their learning but not with a single standardized test. Moreover, the teacher in the second class assesses skills that are never tested by the state, including collaboration, critical thinking, creativity, and communication. This teacher is not assessing less but assessing more to prepare students not only for the state test but also for the broader requirements students will encounter in the years ahead.

Reconsidering Feedback

In her landmark work comparing high- and low-performing nations and high- and low-performing state education systems, Linda Darling-Hammond (2010) comes to an astonishing and counterintuitive conclusion. Beginning in the 1980s, the three exemplars she considers—Singapore, South Korea, and Finland—made significant progress on international education comparisons over the following three decades. More than 90 percent of the students in these countries graduate from high school, and large majorities go to college—“far more than in the much wealthier United States” (p. 192), Darling-Hammond concludes. Detailed field observations reveal the rich, nuanced feedback that students and teachers receive daily and can apply immediately.

“Wait,” you may say. “Don’t Asian countries like South Korea and Singapore also have a test-focused environment? Aren’t those the examples that we tried to emulate to improve our academic performance in mathematics and science?” In fact, this does not comport with Darling-Hammond’s (2010) evidence. These successful nations:

eliminated examination systems that had previously tracked students for middle schools and restricted access to high school. Finland and Korea now have no external examinations before the voluntary matriculation exams for college. In addition to the “O” level matriculation examinations, students in Singapore take examinations at the end of primary school (grade 6), which are used to calculate value-added contributions to their learning that are part of the information system about secondary schools. These examinations require extensive written responses and problem solving, and include curriculum-embedded projects and papers that are graded by teachers. (p. 192, emphasis in original)

Effective education systems certainly use some system-level examinations, but notice the important distinctions. In these examples, even national examinations include deep teacher involvement and, therefore, offer the opportunity for feedback that is far more nuanced than a simple score. Most importantly, the vast majority of feedback comes from the daily interactions between students and teachers, not from scores on tests administered at multiyear intervals. Perhaps the most important consideration is how teachers and students evaluate their own success. While annual high-stakes testing leaves students and teachers wondering about their success (“We’ll know how we’re doing when we see the scores at the end of the year”), a system characterized by effective feedback offers a dramatically different view.

Darling-Hammond (2010) observes the dramatic difference between the “feedback as testing” model and the “feedback as breathing” model, with the latter characterized by feedback integral to the minute-to-minute reality of the classroom. The following words are not from a veteran teacher, nor are they from the graduate of a top-tier teacher-preparation program with several years of intensive mentoring. They are the words of a prospective teacher who was fortunate enough to see Darling-Hammond’s (2010) fieldwork but had not yet spent a day in the classroom. This teacher says:

For me the most valuable thing was the sequencing of the lessons, teaching the lesson, and evaluating what the kids were getting, what the kids weren’t getting, and having that be reflected in my next lesson … the “teach-assess-teach-assess-teach-assess” process. (as cited in Darling-Hammond, 2010, p. 223)

Bridget Hamre of the University of Virginia Curry School of Education notes that “high-quality feedback is where there is a back-and-forth exchange to get a deeper understanding” (as cited in Gladwell, 2009, p. 326). Bob Pianta, dean of the Curry School, reports on what a team he led observed in a class with high levels of interactive feedback:

“So let’s see,” [the teacher] began, standing up at the blackboard. “Special right triangles. We’re going to do practice with this, just throwing out ideas.” He drew two triangles. “Label the length of the side, if you can. If you can’t, we’ll all do it.” He was talking and moving quickly, which Pianta said might be interpreted as a bad thing, because this was trigonometry. It wasn’t easy material. But his energy seemed to infect the class. And all the time he offered the promise of help. If you can’t, we’ll all do it.

In a corner of the room was a student named Ben, who’d evidently missed a few classes. “See what you can remember, Ben,” the teacher said. Ben was lost. The teacher quickly went to his side: “I’m going to give you a way to get to it.” He made a quick suggestion. “How about that?” Ben went back to work. The teacher slipped over to the student next to Ben and glanced at her work. “That’s all right!” He went to a third student, then a fourth. Two and a half minutes into the lesson—the length of time it took [a] subpar teacher to turn on the computer—he had already laid out the problem, checked in with nearly every student in the class, and was back at the blackboard to take the lesson a step further.

“In a group like this, the standard MO would be: he’s at the board, broadcasting to the kids, and has no idea who knows what he’s doing and who doesn’t know,” Pianta said. “But he’s giving individualized feedback. He’s off the charts on feedback.” Pianta and his team watched in awe. (as cited in Gladwell, 2009, p. 329)

The danger in observing an exemplary teacher is that we can relegate these experiences to the realm of mystery. Why is he such a great teacher? Some people might conclude that it must be a combination of talent, intuition, mystical insight, and a knack—he just “has it” (it being those amazing qualities that all exceptional teachers share). However, we might not say that about a great physician, scientist, attorney, race car driver, violinist, or basketball star. Indeed, the overwhelming evidence is that talent is not a mystery but something developed with deliberate practice (Colvin, 2008; Ericsson, Charness, Hoffman, & Feltovich, 2006). Can we apply that generalization to teaching? Here, too, the evidence demonstrates convincingly that feedback, along with other effective teaching techniques, is a skill that can be observed, applied, practiced, and improved (Lemov, 2010).

The Four Elements of Effective Feedback

As we have seen, the clear preponderance of evidence shows not only that feedback is important in influencing student achievement but also that it is relatively more important than almost any other student-based, school-based, or teacher-based variable. It should be noted that evidence on the power of feedback is hardly restricted to the world of education. Dianne Stober and Anthony Grant (2006) and Alan Deutschman (2007) provide evidence from a wide range of environments that depend on feedback, including health care, prisoner rehabilitation, recovery from addiction, and education. Kerry Patterson, Joseph Grenny, David Maxfield, Ron McMillan, and Al Switzler (2008) add to the body of evidence, using cross-cultural examples in which people are engaged in significant and profound change, even though they cannot read or write.

In brief, it is not the provision of a data-driven, decision-making seminar that helps individuals, organizations, or communities change. Instead, it is the ability to use feedback in clear and consistent ways. However, even the clearest and most vivid feedback is useless if not applied with the FAST elements (fair, accurate, specific, and timely). Each of these is a necessary but insufficient condition for improvement. If information is accurate but not timely, it is unlikely to lead to any improvements. An autopsy, for example, is a marvelously accurate piece of diagnostic work, but it never restores the patient to health.

Almost every teacher I know labors to be fair, excluding any bias regarding gender or ethnicity, in their evaluations of student work, but the pursuit of fairness can impair accuracy. This is particularly true when teachers conflate a student’s attitude and behavior with the quality of his or her work. Many computer programs can provide rapid feedback, but if that feedback only informs students whether their performance is correct or incorrect, they will gain little information about how to improve the thinking process that led to an incorrect response or how to sustain the analyses that led to a correct one. Specificity is a component of effective feedback, but reams of data delivered months after students leave school are as ineffective as the detailed criticisms written on the high school English paper mailed to the student weeks after final grades are assigned.

Let’s take a closer look at how each of these FAST elements relates to feedback.

Fairness

My favorite lesson in fairness came from Mr. Freeman French, my junior high school orchestra conductor, who had students audition from behind a curtain. Neither students nor the teacher knew the gender, identity, ethnicity, or socioeconomic status of the player. We could only hear the music. While Mr. French’s commitment to fairness may seem extreme, it represents a commitment to principle that seems elusive in a context of bias that ranges from Olympic skating to World Cup soccer, arenas in which, to put it mildly, fairness is not always the primary value on display. Certainly the blind-audition approach had its limits; Mr. French ultimately had to look at his performers and give them feedback face to face. But the tone of fairness that he set in his classes conveyed the fact, as well as the impression, that our screeching strings—sharp or flat, too fast or too slow—elicited his feedback based solely on our work and not our appearance.
