Chapter 1

Preparation

A shift to evidence-based grading is the logical next step for teams committed to the work of proficiency-based assessment. Evidence-based grading and proficiency-based assessment work hand in hand: an evidence-based grading model supports the type of discussion and dialogue that proficiency-based assessments enable. In fact, we feel that evidence-based grading is the natural outgrowth of proficiency-based assessment. However, as we know from experience, this shift is a very different challenge for a school to manage, as it upends decades of how we’ve traditionally communicated about student abilities. It also demands that all stakeholders in students’ education be clear on this grading model’s value and understand its purpose. Meeting that demand requires clarity and preparation.

In our work with schools and teams planning to move toward an evidence-based grading model, we recognize that this change confronts past grading practices and undoes many routine approaches to teaching and learning. Preparing for the change therefore means preparing everyone connected to it—teachers, students, and parents and guardians. All these stakeholders must understand the value of this change and how it can foster better discussions around teaching and learning.

This chapter walks our team through the preparation phase and explores the following.

• How teachers begin to think differently about their own instructional practices

• How students begin to talk differently about their own learning

• How parents or guardians can come to understand education in a way that nurtures lifelong learning, seeing grading as a process of learning growth rather than a strict statement of measured ability

At our school, we ask ourselves, “How do we build from the good work we are already doing and make it better? What should we consider next?” In a culture of continuous improvement, teachers and students are always looking to improve on their current practices and find new and better ways to support learning. Evidence-based grading, and the conversations required for its successful implementation, fit well within such a culture because they treat learning as an ongoing discussion of growth and development. Yet this mindset is unfamiliar to students and families who are used to grades that denote success or failure; generations of students have grown comfortable with a system of grading based on accumulating points and averages that somehow reflect intelligence. Shifting away from this long-standing mindset challenges us because it changes the way we communicate about learning.

Although school leaders may work diligently to prepare for professional development, we often hear from those for whom professional learning has become a stand-alone event with little follow-through. We designed this chapter to examine and suggest ways to first prepare faculty for a change in grading practices and then to implement that change effectively. For this book’s purpose, we focus our attention on actually implementing an evidence-based grading model as the team grapples with its own questions and challenges.

Following are three key points to remember during the preparation phase.

1. To develop shared commitments, the collaborative team must be willing to question and challenge its current grading practices and then agree on more effective strategies for implementing an evidence-based approach.

2. For equity, the team must be able to develop consensus and inter-rater reliability around grading practices. Inter-rater reliability simply means the team is calibrated around how it actually assesses the evidence of student proficiency—what represents proficiency to one teacher should represent proficiency to all teachers on the team.

3. For clarity and communication, team members must fully understand why they are being asked to consider changing their traditional grading practices and be able to explain this change clearly to both students and their parents.

Preparing individuals for a change in grading practices goes beyond strong communication strategies. In many schools, every teacher might have his or her own grading policy and procedures; there might be multiple grading scales; and students might be graded differently depending on their teachers, not the subjects. These inconsistencies lead to inequitable grading practices. Overcoming the traditional practices that sustain such inequities and inconsistencies is one of the challenges evidence-based grading works to address. This shift means that a team must build a shared understanding of, and a shared commitment to, a change in which consistent evaluation is valued.

As you read about our team’s journey of moving to an evidence-based grading model, consider the ways the team prepares for change—learning, investigating, questioning, and fleshing out each member’s knowledge and understanding. Also, think about how the team considers implementing the change and makes the decisions to bring about this shift toward greater consistency and equity.

We created this team scenario with some of our best teachers in mind—some willing to change, some questioning change, and some holding back. Each teacher is a change agent. What does each change agent need? How do leaders support teachers’ efforts early in the change process? How does an organization create and sustain meaningful change? As you read our team’s story, ask yourself how the team answers the following challenges.

• Is every team member fully committed to the value of evidence-based grading, and is he or she clear on how to talk about its purpose and intention so students and parents clearly understand the change in grading practices?

• Is the team paying close attention to inter-rater reliability in its grading practices? Is each member implementing a shared and communicated agreement about what it means to meet or exceed the team’s stated learning targets?

• Is the team identifying ways in which a shift to evidence-based grading fosters better communication about teaching and learning practices?

Our Team’s Story

Toward the end of May, Mario and his team are considering their next action steps. The team has worked hard for the past year to implement best practices of proficiency-based assessment. Members see success in their approach to curriculum, instruction, and assessment. Moreover, they are getting students to discuss learning and learning targets more often, rather than fighting to earn points. Seeing this stride forward, they know their next step requires a different approach to grades and feedback. This year, Mario, Joni, Maya, Britney, and Kevin are positioned and determined to implement an evidence-based grading model as a natural extension of their proficiency-based assessment practices.

Mario, the team leader, has spent a lot of time grappling with evidence-based grading’s concepts, and he is eager to work with the team and lead discussions around its implementation. Maya is in her second year of teaching and feels more comfortable with the curriculum than she did at first. Britney is in her seventh year of teaching and is indifferent to adopting a new grading system. Based on past experiences, Joni and Kevin know that a change to evidence-based grading means breaking away from years of past practices. Joni also notes that the shift isn’t just going to be hard for teachers to fully understand—it is going to be difficult for students as well. Likewise, it will confuse parents who have only ever known a points-and-percentage-based grading system.

Luckily, a couple of the content-based curriculum teams in the school have already made the shift to evidence-based grading, so our team thinks it can gather some good advice from other faculty members about how they implemented and communicated the change. By no means does anyone claim to be an expert on the topic, but the teams that implemented the new model really like the outcome: discussing learning with students instead of confronting them about points and percentages.

Mario is excited but anxious, as team members are going to implement evidence-based grading in their subject area in a few months. At the end of May, the team gathers in a classroom to discuss its approach to implementation. John, director of assessment, and Kaori, assistant superintendent of curriculum, lead the meeting. These two school leaders have previously guided a number of teams through this challenging shift. John and Kaori welcome the team and begin to discuss evidence-based grading. They are up-front about the need to shift away from past grading practices and recognize the effort it will take. The teams that have already implemented evidence-based grading continue to encourage them to move forward with the change.

“As other teachers are saying,” notes John, “once you shift to evidence-based grading, you will never want to go back.”

John and Kaori help Mario’s team by introducing a clear protocol to follow during evidence-based grading implementation. The first step is to ensure the team has a clear understanding of the purpose of making this change.

Kaori begins the meeting with two questions: “Why are we moving to an evidence-based grading system anyway? What benefit does evidence-based grading have that our traditional grading practices lack?”

The team sits silently, and Kaori, not expecting an answer, continues, “It’s important to start with understanding why we are making this change. Our mission is to ensure the most accurate and clear communication about learning to promote success for all students. Evidence-based grading principles support this mission.”

John then takes his turn. “It’s frustrating, but our current system, which we’ve been using for decades, doesn’t support our mission of clear communication. This came to me when I thought about real-life student experiences with grading and reporting practices. Let’s start with a scenario. Let’s suppose a student gets the following grades on five exams for one six-week grading period: 40, 60, 80, 90, 90. What grade does the student deserve?”

Mario replies, “I know this is not the answer we will give by the end of this meeting, but I would say 72 percent based on the way we calculate grades now.”

“Well,” John says, “a few things come to mind when I hear that. First, the student’s last two assessments each yielded a score of 90, and the student never scored in the 70s at any point during the grading period. If we look through the student’s grades, it appears that over time, he made significant improvements. The student learned over time. If we calculate the student’s grade as an average, aren’t we really discounting his growth? Aren’t we assigning the student a grade based on what he wasn’t able to prove instead of what he is now able to prove?”

Maya speaks up. “I agree, but the student also has to be accountable for past mistakes. Averaging all those grades together is more accurate because the student was only at 90 percent for a short stretch of the grading period. Therefore, a 72 percent, or C grade, is really a good picture of the student during the course. He didn’t do well the whole time, just part of the time.”
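To make the arithmetic behind this exchange concrete, here is a minimal sketch of our own (not from the book) contrasting the straight average of the five scores with the most recent evidence; treating the last two assessments as the “current evidence” is simply an assumption chosen for illustration.

```python
# Illustrative sketch of the averaging debate above (not from the book).
scores = [40, 60, 80, 90, 90]  # the five exam scores in John's scenario

# Traditional grading: one mean over the whole grading period.
average_grade = sum(scores) / len(scores)
print(f"Averaged grade: {average_grade:.1f}")  # 72.0, the C that Mario and Maya describe

# A growth-oriented reading: look at the most recent evidence instead.
# (Using the last two assessments is an arbitrary window chosen for illustration.)
recent = scores[-2:]
print(f"Most recent evidence: {sum(recent) / len(recent):.1f}")  # 90.0
```

The 18-point gap between those two numbers is the growth that a simple average erases.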

John says, “That may be so, but think of it this way. Do you remember learning how to ride a bike? When you learned, did you factor in all the times you fell to determine whether you could ride, or did you just finally learn to ride? You didn’t just average your ability to ride the bike and say, ‘I ride a bike at 72 percent.’”

Team members nod in agreement. John continues, “Would you consider a student who can now fluently speak another language not fluent because she made many mistakes along the way? Of course not. Or how about a student who didn’t know algebra or chemistry at the beginning of the school year but learned it by the end?”

John moves to make his point. “All learning is based on growth. In fact, that is the definition of learning. Evidence-based grading is a growth-based learning model and supports the expression of skill acquisition and knowledge. Our current system of grading does not express anything but percentages or point earnings. It doesn’t communicate learning’s growth and development.

“Let’s consider another example. Suppose a student gets the following scores: 0, 0, 0, 100, 100, 100. What percentage will she receive?”

Kevin says, “Traditionally, the student would get a 50 percent. She would fail the class.”

“Correct,” John replies. “Now, how many 100 percent grades would the student need to get in order to offset all those zeros and earn an A?”

The group is slow to answer this time. Some members mumble a few answers, but nothing seems correct. John explains, “To earn an A, the student would need twenty-seven 100 percent grades set against those three zeros (twenty-four more than she already has) just to bring her average up to 90 percent. In other words, it is almost impossible to outpace a particularly low grade, especially a zero.”
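A short sketch of our own (for illustration, not part of the dialogue) verifies the offset arithmetic: starting from the scores 0, 0, 0, 100, 100, 100, the loop counts how many additional perfect scores it takes before the average reaches 90 percent.

```python
# Illustrative check of the zero-offset arithmetic (not from the book).
grades = [0, 0, 0, 100, 100, 100]
print(sum(grades) / len(grades))  # 50.0, the failing average Kevin names

# Count the additional perfect scores needed to lift the average to 90 percent.
extra = 0
while sum(grades) / len(grades) < 90:
    grades.append(100)
    extra += 1

print(extra)              # 24 additional perfect scores ...
print(grades.count(100))  # ... for 27 hundreds set against the 3 zeros
```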

“So, you are saying we shouldn’t use zeros. I get it. But what about the student who just doesn’t do the work?” asks Maya.

Kaori says, “Behavior and academics must not coexist in a single letter grade. Commingling behavior, attendance, attitude, work ethic, and skills performance creates a lack of clarity about why a student gets a certain grade. We then assume a lot about what is behind the grade. Think of the letter grade B. Some parents may read it as ‘smart, but the class is hard,’ while others may read it as ‘not as smart as the other students in the class.’ Some may think their child simply didn’t work hard enough. Too many assumptions make the grade less accurate. Therefore, it is not aligned with our mission of ensuring the most accurate assessment and communication of student growth and performance.”

John says, “We see this same problem in the assessments themselves. Let’s assume you have an exam this week. One student skips the test and another gets all the questions wrong … yet they both get a zero in the gradebook. So, what does that zero represent? Does it represent a lack of effort or a lack of knowledge? You really don’t know the grade’s intent on a report card.”

The team acknowledges his point. John continues, “So, can we agree now that zeros are not effective, averages do not work, and grade information must be reported separately from behavior as a way to communicate meaningfully?”

The team understands exactly what John is saying, and each member thinks about how the years of past grading practices might not have been equitable to students.

Kaori starts the next segment. “In evidence-based grading, we assess students on a gradation of learning that has four levels. Why only four? Let me explain. As assessors, we must be able to articulate a clear description of each level of achievement as well as the differences between those levels. This clarity about each level of achievement is important not only for equitable and just assessment of student performance but also for feedback, curriculum, and instruction. In fact, evidence-based systems are built on this gradation of achievement and the ability to articulate it. Without clarity, it is impossible to assess students accurately.

“There are four levels in our system,” Kaori continues. “Fewer levels mean that students are classified more consistently. As Thomas R. Guskey points out on page 36 of his 2015 book On Your Mark, ‘essentially, as the number of grade categories goes up, the chance of two equally competent judges assigning exactly the same grade to the same sample of a student’s work diminishes significantly.’ Let’s do a little exercise.”

“We currently use a one-hundred-point scale,” John says. “Together as a group, think about an assignment you recently gave to your students.”

Joni says, “A free-response writing assignment about the Battle of the Bulge.”

“Great!” says John. “Now think about the students’ grades, and ponder this question: What is the difference between a student who gets an 85 percent and a student who gets an 86 percent? Keep in mind that in order to have an accurate and fair grading system, the assessor must be able to articulate the difference between these two percentages.”

The team is silent for a few seconds, and then Mario laughs. “I can’t,” he says. “There is no real difference between 85 percent and 86 percent.” The others agree.

“The more levels we have,” John says, “the greater the risk of giving an incorrect rating or, even worse, inaccurate feedback. This is why we must have the fewest achievement levels possible that still promote quality feedback, and it is why we use four.”

Kaori says, “Evidence-based grading is based on achieving a level of proficiency—proficiency in a skill or proficiency in consolidating information into actionable thoughts. This proficiency is assessed by a gradation of achievement that represents an assessor’s expectations. Expectations have gradations, and you use them to evaluate the current evidence of performance. Does that make sense?”

Again, team members nod, and Kaori continues, “Therefore, we must attach levels to our expectations. We believe there are only four levels of an expectation, nothing more. There is no such thing as a ‘super-duper’ expectation or a ‘terribly, horribly not-even-close’ standard. Or, at least there shouldn’t be.”

“What about our A, B, C system? That is a five-level system, and we have been using that for years,” Maya points out.

“Yes, that’s true,” John says, “but what is the difference between a D and an F? Does a D student know a little bit more than an F student? Do schools not worry about students with D grades? When a student is doing D work, don’t we work to provide him or her with interventions?”

“So, is a 4 really just an A in this system?” Mario asks with a bit of confusion.

“That’s a good question,” Kaori says. “I believe a lot of teachers might think the same way—that evidence-based grading simply substitutes numbers for letters. It is hard to break from that way of thinking, but the numbers 4, 3, 2, and 1 have no numerical value; they are just positional markers that communicate where one stands relative to an expectation. You could use checks, pluses, animals, or letter combinations … it doesn’t matter. The preponderance of evidence is what matters in the evidence-based model, not numbers and scaled ranges of accumulated points. A 4 simply represents that a student is past the expected performance level, a 3 means the student is at the expected performance level, a 2 indicates he or she is approaching the expected performance level, and a 1 indicates that the student isn’t even close.”

Shaking her head, Britney asks, “Why isn’t 4 the expectation? Isn’t that what you want a student to ultimately achieve?”

John says, “For an evidence-based model, the expectation must never be the top rung of the ladder, so to speak. There is always space to go beyond the expectation. Expectations need levels to have context, and the expected level must sit at the third rung.”

Still not convinced, Joni says, “By this logic, a B is the expectation in our current system, but we don’t think that way. Students want an A. The A is the expectation, but there is nothing past an A.”

John says, “An A+ is past an A.” He pauses as he writes out the current A, B, C, D, F model’s plus/minus scale and then says, “If A is the expectation, A+ is the above and beyond. Then it would be all B and C, and then D and F.”

The team understands that this is a societal shift in thinking, not only an educational shift.

Kaori says, “In an evidence-based model, we judge students against a criterion, meaning if they show competency in certain criteria, we deem them competent. They would get the A, or the 3, or the checkmark, and so on. If they earn it, they deserve it. Actually, we have seen very little difference between evidence-based courses and non-evidence-based courses regarding grade distribution. In fact, they are almost identical, with the exception that in evidence-based courses, there are almost no failures. And this is what we want! Success for every student!”

Britney asks, “OK, I get all this, but if we can’t use points, what do we use to grade? I can’t seem to picture how we grade without points. Do I just give students a 4, 3, 2, or 1 on everything but use a letter grade for assignments in the gradebook?”

Having heard this question before, John says, “Gradebooks are set up with learning targets, not assignments or assessments. You are simply inserting a target and a number for the proficiency a student has demonstrated on that target.”

The group still seems confused, so Kaori begins writing the following on the board. “In our gradebooks now, we see this.”

• Assignment: Score

• Assignment: Score

• Assessment: Score

• Assignment: Score

• Assessment: Score

“So it looks like the following.” She continues writing.

• Homework 1: 10/12

• Formative worksheet: 10/10

• Quiz: 23/30

• Project: 36/40

• Test: 44/50

“However, in evidence-based grading, we see the following.” She writes on the board.

• Target: Proficiency score

• Target: Proficiency score

• Target: Proficiency score

“So, it would look like this.” She finishes writing on the board.

• I can explain … 4

• I can create … 3

• I can identify … 3

Kevin, looking a bit confused, asks, “What happens to all the assignments? We don’t report them?”

“In an evidence-based system, reporting focuses on acquiring proficiency, not completing a task. So we do not need to report an accumulation of tasks, only the most prominent or current state of proficiency.”

“So, I just replace the score based on the evidence I have to interpret?” Kevin asks.

“Yes!” Kaori says. “When you convert to evidence-based grading, your grading policy becomes the professional interpretation of evidence, nothing more. This is Guskey’s principle, and we feel it is the fairest and most accurate way to determine student grades.”
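As a rough sketch of the structural shift Kaori draws on the board, the snippet below contrasts a points-based gradebook with one keyed by learning targets, where newer evidence simply replaces the reported proficiency level. The dictionaries, the record_evidence helper, and the sample targets are hypothetical illustrations under that assumption, not part of the model described here.

```python
# Hypothetical sketch of the two gradebook shapes contrasted above.

# Traditional gradebook: assignments accumulate points toward one percentage.
traditional = {
    "Homework 1": (10, 12),
    "Formative worksheet": (10, 10),
    "Quiz": (23, 30),
    "Project": (36, 40),
    "Test": (44, 50),
}
earned = sum(e for e, _ in traditional.values())
possible = sum(p for _, p in traditional.values())
print(f"Traditional grade: {earned}/{possible} = {earned / possible:.0%}")

# Evidence-based gradebook: each learning target holds only the current
# proficiency level (4 = beyond, 3 = meets, 2 = approaching, 1 = far below).
evidence_based = {
    "I can explain ...": 4,
    "I can create ...": 3,
    "I can identify ...": 3,
}

def record_evidence(gradebook, target, level):
    """Replace the reported level with the most current evidence of proficiency."""
    gradebook[target] = level

# Newer evidence on a target replaces the old level rather than averaging with it.
record_evidence(evidence_based, "I can identify ...", 4)
print(evidence_based)
```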

The team members like the grading policy’s simplicity, but they feel nervous about the policy’s subjectivity.

“When teams collaboratively vet expected evidence from student performance,” John explains, “they also are collaboratively vetting their expectations. This clarifies feedback and instruction for students. In terms of assessment, the curriculum team has deeply calibrated and scrutinized student performance so it is far less subjective than a non-evidence-based grading system.”

“When a team attempts to do all this outside of an evidence-focused grading system,” Kaori says, “it must write the exam together. That’s easy, but how does the team decide how many questions to put on the exam? How many points each question is worth, and how those points relate mathematically to total points for the semester or term? How to reward answers with points, and how to award those points as teachers observe performance nuances? What assumptions can the team make about borderline answers or performances? All these layers are at play in non-evidence-based grading courses.”

“More important,” John says, “as you begin your journey, we need to ensure that we calibrate all perspectives, use high-quality assessments, and give all parties the right evidence. In this way, we make feedback purposeful and useful. We work to create these elements when implementing evidence-based grading.

“Remember, we are moving to this model for two reasons: First, the traditional grading model gives students a false sense of mastery because it asks teachers to treat short-term acquisition of knowledge as learning. Second, and even worse, students are not developing the skills to identify and articulate their current state of learning. We feel evidence-based grading successfully addresses both concerns.”

Kaori asks the next logical question in order to move the team to its next phase of learning: “Can anyone tell me what a good learning target looks like?”

Team members spend the rest of the meeting discussing next steps and set a date to launch the new process. The team knows summer is a good time to rest before the challenge of implementing its new grading model.

The Four Commitments in Evidence-Based Grading

In the preparation phase of team learning, members must commit to certain practices and perspectives. They must first resolve central issues and achieve coherence and clarity in order to build a solid foundation from which to learn. Committing to these perspectives is the first step to implementing evidence-based grading. Before team members can move forward, they must come to a consensus on the following four commitments of evidence-based grading.

1. Agree that the percentage system is a flawed grading model.

2. Eliminate four specific grading errors.

3. Focus on grading proficiency.

4. Use student-produced evidence.

Agree That the Percentage System Is a Flawed Grading Model

One of the more interesting questions we get when talking with colleagues about grading is, Where did the current grading system come from anyway? The answer is quite a story.

It all began in the 19th century. The few students lucky enough to forgo the farm and go to school typically attended a small, one-room schoolhouse with a teacher who usually taught the same students for many years. The teacher would deliver oral feedback on student performance a few times a year during home visits with students and their parents. Meanwhile, in the first half of the 19th century, most colleges and universities provided students with written feedback consisting mostly of descriptive adjectives (Durm, 1993).
