
2

Evidence and Research Supporting the Collaborative Common Assessment Process

As noted in Collaborative Common Assessments (Erkens, 2016), in a culture of assessment fatigue:

Collaborative common assessments provide a powerful mode of inquiry-based professional development that seeks to improve student achievement and professional practice. For teams to develop the shared knowledge and skills of assessment literacy and instructional agility, they must work together to ask the right questions, explore their own results, and create solutions to complex challenges. (p. 5)

Collaborative common assessments require teachers’ involvement in the entire process—from accurate design to effective use of classroom assessment information. Research and evidence show that, when teachers do this well, the full process benefits learners, teachers, and schools and systems.

A Win for Learners

When everyone fully participates in the consistent and systematic process of collaborative common assessment, the learners win, no question. Educational researchers and experts (Chenoweth, 2008, 2009a; Gallimore, Ermeling, Saunders, & Goldenberg, 2009; Hattie, 2009; Levin, 2008; Odden & Archibald, 2009) as well as practitioners consistently find that when teams use collaborative common assessment strategies, their schools experience remarkable change. See, for example, www.allthingsplc.info/evidence, which showcases the tremendous results that K–12 schools of all sizes and socioeconomic circumstances, from all parts of the United States and other countries, can achieve when they fully embrace the common assessment process as PLCs. The examples featured on this site highlight how student achievement dramatically increases when teams have consistent, clear work patterns and maintain a laser focus on the practices necessary for collaborative common assessment. These achievement results have driven experts to unpack and analyze the strategies teams in these schools use, which include collaborating, narrowing the curriculum and aligning it to standards, employing formative assessments for frequent results monitoring, and using data to inform instruction. Two such schools, Hawk Elementary in Texas and Rutland High School in Vermont, made great gains using the common assessment process in unique ways. While their stories began some time ago, the conditions under which they launched the work are worth noting.

In 2012, all grade-level teams at Mildred M. Hawk Elementary School (affectionately known as Hawk Elementary), a K–5 building in Texas’s Denton Independent School District, set about raising student achievement in mathematics through the use of collaborative common assessments. While Hawk Elementary didn’t have terrible aggregate scores compared with the state, they weren’t at 100 percent success, and they clearly had groups of learners who were struggling. The staff wanted to make certain that they did not simply focus on the results of the State of Texas Assessments of Academic Readiness (STAAR) test but instead prepared their learners to be career and college ready. They developed a schoolwide goal to increase the learners’ proficiency levels in the areas of problem solving and critical thinking, as they firmly believed that if their learners could do that level of rigorous work, they would perform well on any state test they encountered. That year, the third-grade team received its 2012 STAAR test results for mathematics, which table 2.1 shows.

Based on these state data, each team from kindergarten to grade 5 established improvement goals, commonly identified as SMART goals, that aligned to the building-level goal to improve mathematics scores (see chapter 3, page 41, for an in-depth explanation of SMART goals). All the teams then monitored learners’ growth, using their ongoing and grade level–appropriate classroom mathematics assessments. The entire building used common formative and summative mathematics assessments. Together, the teams created rubrics for three mathematical areas—(1) computational accuracy, (2) mathematical language, and (3) problem solving—that they would consistently use across all the grade levels.

Table 2.1: Hawk Elementary School’s Third-Grade STAAR Mathematics Results, 2012

Students | Total Number of Students | Percentage of Students Passing
All | 122 | 73
Economically Disadvantaged | 7 | 29
Asian | 8 | 100
Black or African American | 7 | 57
Hispanic | 22 | 68
Biracial or Multiracial | 3 | 67
White | 82 | 73
Female | 57 | 74
Male | 65 | 72
Students Receiving Special Education | 14 | 36

Source: © 2016 by Susannah O’Bara. Used with permission.
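
The subgroup figures in table 2.1 are straightforward tallies from a class roster, and a data team could reproduce them directly. As a minimal sketch (the roster records and field names here are hypothetical illustrations, not Hawk Elementary’s actual data), a team might compute per-subgroup pass rates like this:

```python
from collections import defaultdict

def pass_rates(roster):
    """Compute per-subgroup totals and percentage passing.

    roster: list of (subgroup, passed) pairs, where passed is a bool.
    Returns {subgroup: (total_students, percent_passing)}.
    """
    totals = defaultdict(int)
    passes = defaultdict(int)
    for subgroup, passed in roster:
        totals[subgroup] += 1
        if passed:
            passes[subgroup] += 1
    return {g: (totals[g], round(100 * passes[g] / totals[g]))
            for g in totals}

# Hypothetical roster mirroring two rows of table 2.1:
# 8 Asian students all passing (100%), 15 of 22 Hispanic students passing (68%).
roster = ([("Asian", True)] * 8
          + [("Hispanic", True)] * 15
          + [("Hispanic", False)] * 7)
print(pass_rates(roster))  # → {'Asian': (8, 100), 'Hispanic': (22, 68)}
```

The same tally, repeated per subgroup and per assessment cycle, is what lets a team monitor progress toward a SMART goal over time rather than waiting for annual state results.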

As vertical K–5 teams, teachers practiced scoring student work together to monitor student learning, calibrate scoring for common data, align their expectations across all the grade levels, and ultimately improve their targeted instructional decision making. Each teacher was randomly assigned a learner, whose work he or she always brought to the monthly staffwide data team meetings for vertical scoring (for example, kindergarten teacher A always brought student 3’s work to the team meetings). Simultaneously, all teachers monitored learners in all classrooms (not just the student whose work they brought to every team meeting) and engaged all their learners in the various common assessments, using the exact same measurement tools for all their learners in their grade levels. Vertical teams reviewed work samples during monthly meetings, and they posted the results as evidence to monitor progress toward their overall student achievement goal.

Gradually, teams increased the rigor of their expectations. For instance, once the kindergarten teachers realized the caliber of work their learners would face in third grade, they were able to better align their expectations for their kindergarten learners. Over time, teams noticed a significant improvement in the quality of all their learners’ work in mathematics. All the teams posted significant gains (S. O’Bara, personal communication, July 2016). As an example, table 2.2 (page 24) features the third-grade team’s results.

Table 2.2: Hawk Elementary School’s Third-Grade State STAAR Mathematics Results, 2012–2014


Source: © 2016 by Susannah O’Bara. Used with permission.

The qualitative data were equally rewarding. At Hawk Elementary, teachers commonly voiced appreciation for their peers’ work. For example, in the spring of 2014, fifth-grade mathematics teachers noted their surprise and delight at the deep problem solving and rigorous work the kindergarten students generated in mathematics (S. O’Bara, personal communication, July 2016). Moreover, the principal could stop in any classroom and have conversations with random students that revealed rigorous thinking in their mathematics work. Even though the teachers had experienced great results, they knew their work was not yet done. All the teams had similar SMART goals, and all continued their efforts in mathematics while adding other focus areas (such as reading) with equal commitment and diligence.

Another school, Rutland High School in Rutland, Vermont, established itself as a PLC and started using collaborative common assessments after it learned the school needed improvement (B. Olsen, personal communication, July 2016). The staff could have found it challenging to develop collaborative common assessments when the teams were so small (just one or two people per course), but the staff members worked together to develop a consistent set of rubrics that they could use schoolwide, while still assessing their individual departments’ content standard expectations. Rutland High found innovative ways to organize small teams at the secondary level, such as the following.

• Ninth-grade mathematics and earth science teachers meet as an interdisciplinary team that focuses on science, technology, engineering, and mathematics (STEM).

• English 1 and World History 1 teachers meet as an interdisciplinary team that focuses on global studies.

• English 2 and World History 2 teachers meet as an interdisciplinary team that focuses on global studies.

• Special educators and paraeducators are integrated into the core-subject teams.

• Singletons who don’t have colleagues to collaborate with on-site instead collaborate off-site with colleagues in other schools.

The teams also found innovative ways to use common assessments with interdisciplinary subjects. They began with rubrics in technical reading and writing and, over time, added rubrics in cross-cutting skills and processes, like public speaking, analytical thinking, creative thinking, and researching. Teams meet for an hour every Wednesday, and they regularly use the schoolwide rubrics to monitor student achievement through the common assessment process within their individual curricula. New England Common Assessment Program (NECAP) data indicate that their hard work has improved their students’ learning in all tested areas. In addition, they continue to make significant gains in learning for all students, including the economically disadvantaged students who qualify for free and reduced-price lunch (FRPL). Students have demonstrated significant gains in reading, the area where teams began their focal work with common assessments across the content areas (as shown in table 2.3, page 26).

Table 2.3: Percentage of Students Meeting the NECAP Reading Standard


Source: © 2016 by Bill Olsen. Used with permission.

In 2015, Rutland High moved from lagging behind the state average in reading, writing, mathematics, and science to matching or—more often—exceeding the state average with consistency, even as the state average had increased in all but one area. While the gaps between Rutland High’s students and the state’s students have narrowed since 2013, Rutland High’s overall trajectory continues to go in an upward direction for all students, especially the economically disadvantaged—a group that continues to increase in size each year.

These two brief case studies—(1) Hawk Elementary, from a large urban district of approximately 25,000 students at the time, and (2) Rutland High, from a small rural district of approximately 2,200 students at that time—offer student achievement gain stories, and they are only a sampling of the repeated success stories educators can find in the literature and on websites like AllThingsPLC (www.allthingsplc.info). It makes sense that when educators work together to solve a complex problem, such as addressing gaps in student achievement, amazing things can happen. Learners win when teachers collaborate on their behalf.

A Win for Teachers

Richard DuFour, Rebecca DuFour, and Robert Eaker (2008) believe that the practice of using common assessments is critical in the work of PLCs. In fact, it is the engine that drives success. They highlight how the practice ultimately impacts student achievement but also offers teachers and their teams additional advantages; it increases their efficiency, promotes equity, improves monitoring, informs and refines teacher practice, and develops teacher capacity (DuFour et al., 2008). Collaborative common assessment helps teachers work smarter, not harder. The early stages of any new process can feel laborious and time consuming, but as with any process that becomes a standard operating procedure, time and experience can increase a team’s level of comfort, knowledge, and skills in a manner that increases efficiency and effectiveness.

Undoubtedly, teachers make a difference. “Educational researchers have proposed that teachers themselves are one of the most important determinants of their teaching practices and students’ achievement” (Guo, Connor, Yang, Roehrig, & Morrison, 2012, p. 4). But schools face the challenge of finding ways they can develop all teachers’ abilities to have the same powerful and positive impact on student learning. Through the collaborative common assessment process, teachers work smarter, highlight and share early successes and performance satisfaction, and develop a collective strength in navigating challenging situations. One of the greatest benefits, then, of the collaborative common assessment process is seemingly intangible and long term: it increases collective teacher efficacy.

When teachers have efficacy (the belief in one’s ability to reach desired outcomes), it has a tremendous impact on student learning. In fact, as Anita Woolfolk points out in a 2004 interview:

Teachers who set high goals, who persist, who try another strategy when one approach is found wanting—in other words, teachers who have a high sense of efficacy and act on it—are more likely to have students who learn. (as cited in Shaughnessy, 2004, pp. 156–157)

In her research on teacher efficacy, Nancy Protheroe (2008) notes that:

Teachers with a stronger sense of efficacy—

• Tend to exhibit greater levels of planning and organization;

• Are more open to new ideas and more willing to experiment with new methods …;

• Are more persistent and resilient when things do not go smoothly;

• Are less critical of students when they make [mistakes]; and

• Are less inclined to refer a [challenging] student for special education. (p. 43)

Imagine, then, the power of a team of teachers exhibiting collective efficacy. Researchers who have studied the phenomenon note that some schools demonstrate a collective sense of efficacy (Goddard & Skrla, 2006; Hoy, Sweetland, & Smith, 2002; Ross & Gray, 2006; Supovitz & Christman, 2003). In such schools, teachers are less likely to shift blame for poor student performance to the students themselves or outside contributing factors (such as economic limitations, limited English proficiency, and lack of parent involvement) and are more likely to instead take responsibility with a positive attitude, willingly accept challenging student achievement goals, and persist in accomplishing those goals (Goddard, Hoy, & Hoy, 2000). Collaborative common assessments create the constructs that support the development of collective efficacy. Dana Brinson and Lucy Steiner (2007) indicate that although the research is in its early stages, the following constructs for leadership and teacher teams improve collective efficacy:

• Build instructional knowledge and skills [such as plan the common formative and summative assessments needed to guide instruction].

• Create opportunities for teachers to collaboratively share skills and experience [for example, map and execute instruction, intervention, and enrichment strategies to monitor and address results].

• Interpret results and provide actionable feedback on teachers’ performance [such as review data and student evidence to find opportunities for continued learning and action steps for closing achievement gaps].

• Involve teachers in school decision making [for example, use results to design, modify, and improve response to intervention strategies for behavioral and academic needs]. (p. 3)

Teaching is challenging work, and when teachers operate in collaborative teams, individual teachers can move away from confronting seemingly insurmountable challenges with individual learners and instead collaboratively monitor student needs, strategize, and ultimately problem solve and find solutions.

Albert Bandura (1977), an early theorist and researcher of teacher efficacy, defines efficacy as “the conviction that one can successfully execute the behavior required to produce the outcome” (p. 193). He also identifies beliefs that efficacious teachers hold regarding their impact on student learning (Bandura, 1997), the following four of which can be directly linked to and impacted by collaborative common assessment. Efficacious teachers believe they can:

1. Influence decisions made in school

2. Overcome the influence of adverse community conditions on student learning

3. Create pathways that make students enjoy coming to school

4. Help students believe they can do well on schoolwork

Through the collaborative common assessment process, teachers influence decisions made in school. They gather data to answer complex questions such as the following.

• “What SMART goals will we write to address our areas for growth?”

• “What priority standards will we need in order to address our areas of concern?”

• “How will we need to modify the curriculum so it better aligns with our standards?”

• “What assessments must we modify or create to track progress toward our SMART goals?”

At the very core of their work, collaborative teams must make critical decisions with students in order to guarantee learning, and they anchor those decisions in data they gather from common assessment processes. Moreover, such decisions at the classroom and grade or department levels have a schoolwide impact.

Efficacious teachers understand that their task to help all learners succeed in their school requires them to think outside the box so they can work around hurdles over which they have no control. In their collaborative efforts as a team, and often as an entire school community, teachers address and find answers to demanding questions such as, “How can we re-engage intentional nonlearners?” “How can we support the learners who struggle to keep track of their homework, who have difficulty focusing during class time, or who have limited access to resources like parent support once they leave the school?” and “How can we improve the effectiveness of our pyramid of interventions?” Many of these concerns extend beyond the teachers’ direct contact with learners during class time, yet the answers to these concerns directly impact the learning that happens in class.

Researchers Ronald Gallimore, Bradley A. Ermeling, William M. Saunders, and Claude Goldenberg (2009) find that teachers can better attribute student success to their teaching, especially in situations where students do not initially learn, when they engage in “(1) focusing on concrete learning goals, (2) tracking progress indicators, and, most critically, (3) getting tangible results in student learning” (p. 542). When collaborative teams look at their data with an eye toward ensuring student learning, they engage in a form of instructional inquiry that draws teachers’ attention to and helps them discover “causal connections between their teaching and student performance” (Gallimore et al., 2009, p. 542). Teachers engaged in the collaborative common assessment process often extend their assessment practices to inquiry-based strategies that help them gather additional information to better understand their learners’ needs. They become, as DuFour et al. (2008) assert, action researchers while they seek the best ways to ensure all students learn at high levels. Empowered, efficacious, and collaborative teacher teams do the work that Hattie (2009) finds best influences student outcomes: they establish challenging student achievement goals, engage in conversations that contest the status quo of achievement, seek current and new ways to address emerging concerns, design and implement strategies intended to enhance achievement, and monitor progress and effectiveness of teaching.

When teams experience success in their current work, including their assessment work, they are more apt to stretch themselves and their goals for students. Success with early assessment experiences provides teams with the insight and motivation to challenge the quality of their own assessments. Teams can challenge and stretch their assessments’ quality by asking themselves questions such as, “What will engage the learners in meaningful ways? What will a true representation of learning look like? If the demonstration of learning must be performance oriented, how could we make that happen? What assessment strategies best promote true learning and retention?”

Interestingly, the more excited teachers feel about preparing their learners for the planned assessments, the more exciting they make the assessments for their learners. It becomes self-fulfilling. When teachers strive to design and employ accurate, meaningful, and interesting collaborative common assessments, they are better able to enjoy the assessment process.

Growth in achievement data helps teachers engage individual learners in believing they can do well in their schoolwork. Success breeds success; positive momentum in collective achievement results fosters a collective belief that the work is doable and all learners can succeed. In many cases, the entire class pitches in to encourage mastery on everyone’s behalf; learning becomes collaborative and creates an environment of success and social celebration. When teams use the right data in the right ways, the data can empower learners.

A Win for Schools and Systems

If collaborative common assessment increases student achievement, involves working smarter rather than harder when ensuring learning happens, and positively impacts teachers’ collective efficacy, then the process must benefit schools too. When schools employ collaborative common assessments with consistency across all buildings and programs within the organization, every grade level and department experiences the same process, and student achievement increases at the building level. Allan R. Odden and Sarah J. Archibald (2009) conducted research at effective schools that have doubled their student achievement. They note that teachers at these schools consistently team up at critical junctures and use the common assessment process to isolate and collaboratively respond to what their learners do and do not understand. These teachers intentionally focus their professional dialogue on what matters most: student learning.

The collaborative common assessment process leads to systemic change within a school. It impacts critical systems, like curriculum and assessment, in positive and profound ways. The following sections will show how using collaborative common assessments supports a guaranteed and viable curriculum, assessment literacy, accurate assessment design, and effective data use.

Guaranteed and Viable Curriculum

Discrepancies have long existed between the intended curriculum, the implemented curriculum, and the attained curriculum (DuFour & Marzano, 2011; Marzano, 2003). A curriculum becomes guaranteed when teachers prove that they have delivered the intended curriculum through the results of the attained curriculum. Unfortunately, there are many opportunities for inaccuracy or error in this work.

• The intended curriculum, often articulated through curriculum maps, may not be viable if it gives teachers too many things to teach. In addition, the selected curricular materials the teachers use may not even connect to the standards’ requirements.

• Even though specific pacing guides and agreed-on standards spell out the implemented curriculum, interpretations can still differ from teacher to teacher. Left to their own professional judgment, classroom teachers may choose to emphasize, add, or remove key features in their daily instruction (DuFour & Marzano, 2011).

• The attained curriculum, as measured through evidence of student learning on the assessments, may show either inconsistencies in content (questions asked do not adequately match expectations from the intended curriculum) or insufficiencies in mastery (too many learners do not achieve mastery for each target expectation).

In essence, what teachers assess and how they assess it are as important as what interventions they employ when students do not attain the desired learning. Grant Wiggins and Jay McTighe (2007) advise:

The job is not to hope and assume that optimal learning will occur, based on our curriculum and initial teaching. The job is to ensure that learning occurs, and when it doesn’t, to intervene in altering the syllabus and instruction decisively, quickly, and often. (p. 55; emphasis added)

When the educational system accepts all scores that learners receive—including failing scores—without providing interventions that support all learners in reaching proficiency, it is fair to say that the system is not concerned about guaranteeing the curriculum becomes attained.

Teachers face an essentially insurmountable amount of curriculum. In a 2000 keynote address, curriculum design expert Heidi Hayes Jacobs states:

Given the limited time you have with your students, curriculum design has become more and more an issue of deciding what you won’t teach as well as what you will teach. You cannot do it all. As a designer, you must choose the essential. (as cited in Ainsworth, 2003a, p. 12)

But when this is left to individual teachers, schools cannot get to a guaranteed and viable curriculum. According to Richard DuFour and Robert J. Marzano (2011):

If schools are to establish a truly guaranteed and viable curriculum, those who are called upon to deliver it must have both a common understanding of the curriculum and a commitment to teach it. PLCs monitor this clarity and commitment through the second critical question that teachers in a PLC consider, “How will we know if students are learning?” That question is specifically intended to ensure that the guaranteed curriculum is not only being taught to students but, more importantly, is being learned by students. (p. 91)

The collaborative common assessment process requires that collaborative teams come together to determine their priority standards, the learning targets within those standards, the assessments required to measure the intended learning, the pacing of their work, and their re-engagement plans for learners who’ve yet to attain the expectations. Schools that engage teams in the work of collaborative common assessment are far more likely to attain a guaranteed and viable curriculum than those that choose to follow premade curriculum programs (Hattie, 2009).

Assessment Literacy

Most teachers in North America have had insufficient formal training, practice, feedback, and ongoing support regarding the principles of sound assessment. As Rick Stiggins (2008) notes, education has primarily relied on textbook and testing companies to design high-quality assessments. Both undergraduate and graduate teacher-preparation programs have an obvious and alarming absence of courses regarding effective assessment design and use (Stiggins & Herrick, 2007). When teachers do not understand the theory and practice of valid and reliable assessments, they have no option but to use predesigned assessments from their textbooks or to make up their own. Often, they replicate the poor assessment practices that they themselves experienced as K–12 students.

Unless a teacher uses sound assessments, the teacher has no way to ensure that the teaching has actually transferred into learning. Assessment is a core teaching process. Teaching in the absence of constant sound assessment practices is really just coverage of content. The only way teachers can guarantee learning is if they all use sound assessment practices effectively.

If that’s the case, wouldn’t it suffice for teachers to simply use the provided assessments in the way the assessment materials advise? Absolutely not. Time and again, when teachers use existing assessments, gaps in teaching and learning inadvertently emerge because teachers rarely analyze a prewritten assessment carefully before they administer it and then accept the resulting data at face value. Table 2.4 includes some of the inaccuracies and insufficiencies that result from this lack of analysis.

Table 2.4: Possible Inaccuracies and Insufficiencies Resulting From Assessment Errors

Item or Task Quality

• Inaccuracy: The items or tasks are often set up to gather data in the quickest possible manner. When that happens, the assessment falls short of truly measuring the full intent of the standards it is designed to assess (for example, many performance-oriented standards are assessed with the more easily scored pencil-and-paper test).

• Insufficiency: The items or tasks fall short of deep application or higher-order reasoning. Many assessments stop at ensuring students possess content information and in some cases can execute the algorithms that accompany the knowledge. Few assessments move to the level of requiring students to integrate knowledge or construct new solutions and insights in real-world applications.

Sampling

• Inaccuracy: An assessment might include standards that are not within the expectations of the teachers engaged in that curricular material.

• Insufficiency: An assessment might include too many standards and not have enough samples of each standard to ensure any reliability.

Results

• Inaccuracy: All items have equal weighting—even items that the curricular resource itself might have deemed nonsecure goals. Teachers tally and report the full data for decision making even though the final results should not include some of the generated data. The data may result in learners unnecessarily receiving interventions.

• Insufficiency: Error analysis of what went wrong in an individual student response (reading error, concept error, or reasoning error) frequently stops at the point of the resulting percentage or score. Item analysis is limited to whether the item or prompt was of high quality based on the responses it generated. The data do not offer insight into student thinking or inform next instructional steps.
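
The insufficiency that table 2.4 flags in the Results row (error analysis stopping at a raw score) can be made concrete. As a minimal sketch (the response records and error-type labels here are hypothetical illustrations, not a prescribed protocol), a team moving past the percentage might tag each incorrect response with an error type and tally the pattern per item:

```python
from collections import Counter

def error_profile(responses):
    """Tally error types per assessment item, ignoring correct answers.

    responses: list of (item_id, error_type), where error_type is None
    for a correct response or a label such as "reading", "concept",
    or "reasoning" for an incorrect one.
    Returns {item_id: Counter of error types}.
    """
    profile = {}
    for item_id, error_type in responses:
        if error_type is None:
            continue  # correct answers carry no error to classify
        profile.setdefault(item_id, Counter())[error_type] += 1
    return profile

# Hypothetical class set: item 3 mostly trips concept errors, which
# points toward reteaching the concept rather than rewording the item.
responses = [
    (1, None), (1, "reading"),
    (3, "concept"), (3, "concept"), (3, None), (3, "reasoning"),
]
print(error_profile(responses))
```

A profile like this tells a team what kind of re-engagement each item calls for, which a single percentage score cannot.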

According to Helen Timperley (2009), “Knowledge of the curriculum and how to teach it effectively must accompany greater knowledge of the interpretation and use of assessment information” (p. 23). Teachers must experience assessment development and deployment in order to understand it. Designing assessments in advance of teaching creates a laser-like focus on and comprehensive understanding of the instruction required to attain mastery. This does not mean that teachers should only use assessments they themselves create; instead, it means that they can no longer depend solely on the assessments that come premade from outside testing vendors with their curricular materials and software item banks.

When teacher teams design and employ assessments and interpret their results, they build shared knowledge regarding assessment accuracy and effectiveness. Teachers who engage in the collaborative common assessment process learn both how to design assessments accurately and use assessment data effectively.

Accurate Assessment Design

Teams are better able to create accurate assessments when they agree to design their assessments so that they align to standards; have clear, uniform targets; feature accurate prompts and measurement tools; include varied assessment methods and data points; and foster increased rigor and relevance. The adage “many hands make light work” is as relevant here as the notion that many eyes can bring multiple perspectives into clear focus.

Alignment to Standards

From the late 1980s until the early 2000s, schools and districts told teachers to follow the pacing guide and implement the curriculum with fidelity. In some schools and districts, mandates and monitoring made it dangerous for teachers to deviate from the prescribed curriculum plan. As of 2019, no single curriculum has fully aligned with any state’s standards. While textbook companies can demonstrate that their curricular materials address a state or province’s standards, they cannot prove that the materials address every standard, that they do so at the depth that the state or province’s testing system requires, or that their curriculum-based assessments match the types of questions the state or province might ask. When teachers develop collaborative common assessments, they begin with the standards, not the curriculum, to make their instruction and assessment decisions. That early alignment process can better support accurate design.

Clear, Uniform Targets

When teachers unpack standards together, they develop a shared understanding of the target expectations that the standards require. It is imperative that teachers agree to the specific learning expectations outlined in the standards. It is equally imperative that they agree on the meaning of key verbs. For example, they might ask, “What exactly does summarize mean? Is summarize similar to or different from generalize? What type of task would best engage learners in the process of summarizing, and what quality criteria would guarantee high-quality summaries in every classroom?” If teams are clear on the individual terms and the specific demands of the standards, they can provide more consistent and accurate instruction leading into the assessments. They can also make individual decisions that allow for variances in the assessments but that remain contingent on clear, agreed-on lists of learning targets unit by unit.

Accurate Prompts and Measurement Tools

It is impossible to write a perfect assessment task, item, or rubric; it is sometimes hard to even write a good one. However, when teams work collaboratively, they generally develop such prompts and measurement tools in a more thoughtful way. They often seek clarifying examples, challenge each other’s personal schemas, refine their work based on the evidence it generates over time, and, most importantly, calibrate their expectations so they have consistency from classroom to classroom.

Varied Assessment Methods and Data Points

A deep exploration into standards and target language engages teams in exploring the proper questions, prompts, or tasks that will truly assess students’ expected attainment and mastery of the content. This exploration makes it apparent that one assessment, or even one type of assessment, will not suffice to accurately certify a learner’s degree of mastery of a standard. For example, it is important to assess the small, specific tasks (such as identifying text-specific details) of a large concept or skill (for example, drawing conclusions or making predictions) to verify that learners are ready to engage in the larger concept or skill, but it is equally important to engage learners in a comprehensive assessment that certifies that they can put all the parts together. Multiple assessment methods and multiple data points provide a more comprehensive and accurate picture of student learning.

Increased Rigor and Relevance

Teachers can find it hard to write a high-quality assessment, much less a high-quality, rigorous, and relevant assessment. Too often, an assessment—which seeks to assess what teachers taught—misses the importance of the learning’s larger context. For example, the Common Core State Standards (CCSS; National Governors Association Center for Best Practices [NGA] & Council of Chief State School Officers [CCSSO], 2010a) require students to understand text features. But after teaching text features, how should teachers assess students’ understanding of them? Should teachers simply verify that students can point to features, identify them with appropriate labels, and explain how those features make a text easier to comprehend? Many summative assessments stop at the level of knowledge and skill instead of reaching strategic or extended thinking. While it is necessary to ensure such basics are in place, a check on the basics can take place in the formative stages. Summative assessments that stop there, however, miss the mark: they never ask students to create meaning within a complex text by accessing and interpreting the document’s text features. When teacher teams collaborate to write assessments, they question the relevance of the learning and the rigor of the potential tasks or items. They challenge their assumptions, materials, and practices as they explore rigor and relevance while writing the assessment.

Effective Data Use

While assessment literacy requires that teams design assessments accurately, it is equally imperative that teams use the resulting data effectively. For example, an adult could take a child’s temperature over and over with a thermometer and generate an accurate reading of 103 degrees, but accurate data alone do not address the problem. The adult would need to act on the finding—administering an aspirin, for instance—in response to the data showing that the child has a fever. In the same way, teachers need to understand how to respond effectively to their classroom assessment findings. To do this, teams must agree on the best ways to respond to their assessment results; possible responses include identified error analysis, targeted instructional responses, effective feedback, dynamic student involvement, systemic reflection, and positive cultural change.

Identified Error Analysis

Teachers could easily look at assessment data by the resulting percentages and then sort learners based on their percentage scores into groupings for re-engagement and extension opportunities. Percentage-based scores, however, never suffice. They should serve only as an indicator requiring deeper exploration, not as an exact conclusion requiring on-the-spot decision making. When teachers deeply own the results of their team-created assessments, they look at percentages as indicator data, which drives them more deeply into the actual student work so they can gain insights into what went wrong. Did students make reading errors? Concept errors? Reasoning errors? Collaborative teams use data to launch deeper investigations through error analysis.

Targeted Instructional Responses

A deeper exploration into data points and student work can offer significant insights into the appropriate instructional responses. For example, a student might have scored 65 percent on the learning target of “drawing conclusions,” but what went wrong? Teams using the collaborative common assessment process engage in error analysis to find critical answers to questions like, Did the student identify explicit evidence but neglect to identify implicit evidence before drawing a conclusion? Did the student have insufficient evidence before drawing a conclusion? Did the student have sufficient implicit and explicit evidence but employ faulty reasoning when drawing the conclusion? Clearly, it would be a mistake to reteach all of this learning target to a student who scored 65 percent. Teams would find it far better to analyze the type of error the learner has made and then identify a targeted instructional response. With such information in their hands, teacher teams can close learning gaps in short order because they directly match their interventions to the type of error the learner made.

Effective Feedback

When collaborative teams share unified commitments to consistent learning targets, employ uniform criteria for evaluating quality, and collectively explore the specific types of errors students make in the learning process, they can better provide the necessary feedback that supports learners in reducing the discrepancy between where they currently are and where they need to be. In the absence of such work, individual teachers provide feedback of varying degrees of quality. John Hattie and Helen Timperley (2007) argue that feedback “is most powerful when it addresses faulty interpretations, not a total lack of understanding” (p. 82). In order to identify the right feedback to offer, teachers should work together to find and analyze learners’ faulty interpretations of concepts. According to Hattie (2009), feedback is one of the most powerful instructional strategies a teacher can employ. Yet assessment experts continue to cite research that indicates teachers often misunderstand this strategy and seldom employ it with a high degree of effectiveness (Chappuis, 2009; Hattie, 2009; Hattie & Timperley, 2007; Ruiz-Primo & Li, 2011, 2013; Wiliam, 2013).

Feedback is a two-way street—teacher to students and students to teacher. Both directions require tremendous clarity and energy. Hattie (2009) reminds teachers that:

Feedback to students involves providing information and understanding about the tasks that make the difference in light of what the student already understands, misunderstands, and constructs. Feedback from students to teachers involves information and understanding about the tasks that make the difference in light of what the teacher already understands, misunderstands, and constructs about the learning of his or her students. It matters when teachers see learning through the lens of the student grappling to construct beliefs and knowledge about whatever is the goal of the lesson. This is never linear, not always easy, requires learning and over learning, needs dollops of feedback, involves much deliberative practice, leads to lots of errors and mis-directions, requires both accommodating and assimilating prior knowledge and conceptions, and demands a sense of excitement and mission to know, understand, and make a difference. (p. 238)

A deep analysis of common assessment data leads collaborative teams to better understand the errors their learners are making so they can help their learners autocorrect. Likewise, the process enables teams to stand behind their proffered feedback with focused instruction and ongoing monitoring.

Dynamic Student Involvement

Assessment should never be something that teachers do to learners; rather, they must do it with and for learners. This requires that they utilize even common assessments with the ultimate end user—the learner—in mind. Jan Chappuis (2009) asserts:

Formative assessment is a powerful tool in the hands of both teachers and students and the closer to everyday instruction, the stronger it is. Classroom assessment, sensitive to what teachers and students are doing daily, is most capable of providing the basis for understandable and accurate feedback about the learning, while there is still time to act on it. And it has the greatest capacity to develop students’ ability to monitor and adjust their own learning. (p. 9)

Teams that develop assessment literacy through common understandings and practice embrace the conversation of engaging learners in analyzing and responding to their own results. In this way, an instructional re-engagement strategy exponentially increases in power as teachers target their responses to specific gaps in understanding while students rally their attention and energy around mastering their own gaps.

Systemic Reflection

The work of collaborative common assessments is ultimately about professional inquiry and learning. In teams, teachers gather the necessary data to explore their instructional impact, identify and deliver strategic responses, and reflect on their practices and beliefs. Timperley (2009) observes:

When teachers … interpret assessment data in order to become more responsive to their students’ learning needs, the impact is substantive. Teachers, however, cannot do this alone…. Creating the kinds of conditions in schools in which teachers systematically use data to inform their practice … requires that they teach in contexts in which such practice becomes part of the organisational routines. (p. 24)

The entire system engages in reflective practice; the findings of a few inform the work of many. Collaborative common assessment engages the entire system as a professional learning community.

Positive Cultural Change

Strategies and initiatives may come and go, whereas a school’s culture can seem steadfast—almost to the point that it serves as an insurmountable wall of resistance when the school requires change. Educational literature has clearly and forthrightly shown that any school-improvement effort that does not positively impact culture is doomed to failure (DuFour & Fullan, 2013; Muhammad, 2018). The work of collaborative common assessments embeds itself in teacher teams’ routines, practices, and even beliefs—that is, it can impact culture in significant ways. The resulting gains in student achievement lead to changes in belief and practice. Richard F. Elmore (2003) says:
