The Handbook for Collaborative Common Assessments, by Cassandra Erkens


1

Understanding Collaborative Common Assessment

It’s critical that educators ensure every learner graduates prepared to thrive in the complex world that awaits. Toward that end, educators must vigilantly monitor the arc of learning over time. Checklists and tools for designing and monitoring standards, assessment, curriculum, and instruction are key. How do schools and districts equip teachers to harness the power of assessment while bringing joy and passion back to teaching? The answer lies in the collaborative common assessment process. When teacher teams properly design, deliver, and analyze collaborative common assessments, they build instructional agility, the ability to quickly adjust instruction so it responds to learners’ needs. Done well, collaborative common assessments are the educators’ formative assessments; the resulting information gives educators, like students, additional opportunities to improve their results over time.

In summarizing his synthesis of over eight hundred meta-analyses in education, John Hattie (2012) provides a ringing endorsement of the power of common assessments to generate excellence in education when he concludes:

a major theme is when teachers meet to discuss, evaluate, and plan their teaching in light of the feedback evidence about the success or otherwise of their teaching strategies and conceptions about progress and appropriate challenge. This is not critical reflection, but critical reflection in light of evidence about their teaching. (p. 19)

Using such evidence can increase precision, flexibility, and responsiveness among teachers, making common assessments the vehicle for creating teachers who are instructionally agile and teams that are collectively efficacious.

As teams begin the journey of implementing the collaborative common assessment process, they will find it helpful to understand certain foundational concepts of the process. To begin, it’s important teams have a clear, working definition and established criteria for collaborative common assessments. Fortunately, there are many protocols and tools that can help teams determine whether they are meeting quality indicators for their work.

Defining Collaborative Common Assessments

Experts agree that common assessments yield data that educators can use to improve learning (Ainsworth & Viegut, 2006; Bailey & Jakicic, 2012; DuFour et al., 2016; Hattie, 2009; Reeves, 2006). Every author on this subject offers a slightly different definition of common assessments, but all authors—even those who do not classify themselves as professional learning community (PLC) experts—stick with the same theme; namely, common assessments provide the real-time evidence required for educators to reflect critically on their impact so they can then design targeted responses to move learning forward for their students (Ainsworth & Viegut, 2006; Bailey & Jakicic, 2012; DuFour et al., 2016; Hattie, 2009; Reeves, 2006).

The collaborative common assessment process puts educators in the driver’s seat and provides teachers with the necessary opportunity to assess according to their learners’ needs. The process needs to remain as close as possible to the classroom for teachers and their learners. When teachers reference their local classroom assessment results with their observations, experience, and curricular expertise, they tend to have a higher degree of clarity regarding what comes next in the learning for the students they serve. Likewise, schoolwide interventions can miss the mark if the classroom teacher’s concerns and insights are ignored. Teachers must drive the assessment and intervention decisions at the classroom level first.

A collaborative common assessment is any assessment that meets all five of the following criteria.

1. Formative or summative

2. Team created or team endorsed

3. Designed or approved in advance of instruction

4. Administered in close proximity by all instructors

5. Dependent on teamwork

Each of these criteria is integral to the collaborative common assessment process.

Formative or Summative

The goal of using formative assessments is to provide information that improves a learner’s ability to be successful, whereas the goal of using summative assessments is to prove a learner’s level of proficiency at the conclusion of the learning journey (Chappuis, Stiggins, Chappuis, & Arter, 2012; Erkens, Schimmer, & Vagle, 2017; Wiliam, 2011, 2018). Because both are necessary to support learning, common assessments should be both formative and summative in nature (Erkens, 2016). A team requires a common summative assessment (CSA) in order to ultimately certify mastery on a predetermined priority standard. If teams do not start by framing a collaborative summative assessment, then their common formative assessments (CFAs) serve as loose pebbles on a meandering pathway, rather than sequential rungs on a ladder with a clear trajectory and targeted destination. A team requires CFAs to discover and address areas needing improvement before the summative assessment is given. It is far better to intervene during the unit of instruction than to re-engage students in learning after the summative has been given. Teams that develop and effectively employ CFAs typically find that they need fewer and fewer re-engagement strategies following a CSA.

Team Created or Team Endorsed

The entire team must either write the assessment together or co-review and endorse the assessment that it has selected for use. This detail matters greatly. Asking teachers to give an assessment over which they have little ownership is like asking them to ride a city bus and care deeply about the road signs the bus encounters along the way. They will care deeply about the many road signs only if they are driving the bus. Moreover, if one person writes the assessment for the team and something goes wrong with the assessment process, the team generally blames the author. The entire team must take an active role in determining the assessments that it will use to monitor its instruction.

Designed or Approved in Advance of Instruction

Everyone loses when teachers retrofit assessments to the instruction that preceded the testing experience. Since instruction is the visible and immediate actionable step in the teaching and learning process, it feels natural to plan it first. However, a closer look reveals how that practice costs teachers and students time and learning opportunities. Teachers lose because they have to try to remember all the things they said during instruction and then begin the time-consuming process of prioritizing what’s important to test. Many times, this leads to inaccurate assessments, primarily because they don’t align to the standards. Instruction that wanders without a known, specific target has no chance of hitting its desired mark for teachers or their learners (Erkens et al., 2017; Hattie, 2009; Heritage, 2010, 2013; Wiliam, 2011, 2018). When teachers don’t frame the assessment road map or architecture in advance of the instruction, the instructional designs can misfire, and learners then miss critical components and interconnected concepts.

The greatest concern when teachers retrofit assessment to instruction, however, is that inaccurate assessments yield inaccurate results. In such a case, both the teacher and the learner draw conclusions based on dirty data. Dirty data contain inaccuracies, hide truths with oversimplifications, or mislead with false positives or false negatives. Such data can only lead to inaccurate feedback. When that happens, learners cannot receive the appropriate support they need to master not only what they learn but also how they learn. Conversely, when teams clarify summative assessments in advance of instruction, teams are often able to find instructional time, instead of waste it, because they can strategically determine what it will take for each learner to be successful on the assessment, they can ensure alignment of their assessment and curricular resources, and they can respond more accurately and with a laser focus in their intervention efforts. While the educational literature has recognized this model—backward design—since the 1990s (Jacobs, 1997; McTighe & Ferrara, 2000; Wiggins & McTighe, 2005), it is still not a prevalent practice.

Administered in Close Proximity by All Instructors

While most teams succeed in having all students take a common assessment on the same day, that isn’t always doable, as many things (school cancellations, emergency drills, and so on) can easily interrupt the school day. If teams are to respond to learners who have not yet achieved mastery and learners who need extension, then individual teachers must give the assessment in a relatively short time frame so that they can collaboratively respond in a timely fashion.

Imagine that a team has designed an assessment task that requires students to use the school’s only computer lab, so the team members’ students take turns using it (for example, teacher A’s students use the lab and complete the task in September, teacher B’s in October, and teacher C’s in November). This is the same assessment, but it does not function as a common assessment should. The team members provide the exact same task with the same criteria and grade-level content. However, the team members are on their own for strategizing how to intervene or extend the learning for their individual classrooms. They miss the power of the collective wisdom and creativity of their peers in addressing the challenges that emerge from their individual results. In a case where teachers do not give the same assessment in the same time frame, teams can only look at the data in hindsight and then produce program-level improvements that answer the following questions.

• “Was the assessment appropriate and engaging?”

• “Were the scoring criteria accurate, consistently applied, and sufficient?”

• “Did the curriculum support the learners in accomplishing the task?”

• “Were the instructional strategies successful overall? Do we need to make any changes moving forward?”

The pace of data collection in this case cannot support instructional agility. The learners in September will not benefit from the team’s findings in November, when all the learners have finished the task.

Dependent on Teamwork

The collaborative common assessment process requires teamwork to help ensure accurate data; timely re-engagement; consistent scoring; and alignment between standards, instruction, and assessment so all students learn. Collaboration is central to the process as teams examine results, plan instructionally agile responses, analyze errors, and explore areas for program improvement.

Collaboratively Examined Results

When teachers use a common assessment, that does not guarantee it will generate common results. The notion of common data implies a high degree of inter-rater reliability, meaning the data generated are scored similarly from one rater to the next. Even when using test questions that have clear right and wrong answers, teachers can generate uncommon results. For example, teachers may interpret student responses differently, or some teachers may offer partial credit for reasoning while others only offer credit for right answers. Many variables impact the scoring process, and many perceptions lead teachers to different conclusions, which can create data inconsistency from classroom to classroom. No matter the test method, teachers must practice scoring together on a consistent basis so that they can build confidence that they have inter-rater reliability and accurate data.

Instructionally Agile Responses

The purpose of using collaborative common assessments is to impact learning in positive, responsive, and immediate ways, for both students and teachers as learners. When teachers analyze assessment data to inform real-time modifications within the context of the expected learning, they improve their instructional agility and maximize the assessment’s impact on learning. It seems logical that teams of high-quality instructors will have more instructional agility than an individual teacher for the following reasons.

• More accurate inferences: Teams have more reviewers to examine the results, conduct an error analysis regarding misconceptions, and collaboratively validate their inferences.

• Better targeted instructional responses: Teams have more instructors to problem solve and plan high-quality extension opportunities for those who have established mastery, as well as appropriate corrective instruction for those who have various misconceptions, errors, or gaps in their knowledge and skills.

• Increased opportunities for learners: Teams simply have more classroom teachers surrounding the learner who can provide informed interventions and skilled monitoring for continued follow-up.

This is not to suggest that teams will always develop better solutions than individual teachers might, especially if an individual teacher has reached mastery in his or her craft, knowledge, and skill. Rather, it is to suggest that educators can increase the likelihood of accuracy, consistency, and responsiveness over time if they collaboratively solve complex problems with the intention to increase their shared expertise and efficacy.

Error Analysis

There is no such thing as a perfect test; all tests will have some margin of error. So typically, before teachers employ a measurement tool (such as a scale, rubric, or scoring guide) or an assessment (such as a test, an essay, or a performance task), the designers must attempt to find, label, and address the potential errors in the measurement tool, the assessment, or the administration process itself, noting that a margin of error could exist in the findings. This practice helps trained test designers review the results for any potential dangers in students’ resulting inferences. By using a similar error-analysis process, classroom teachers—not trained as assessment experts—can identify potential mistakes and misconceptions in their classroom assessments. Error analysis involves examining various students’ responses to an individual task, prompt, or test item and then identifying and classifying the types of errors found. Identifying the learners’ errors is critical to generating instructionally agile responses that guide the learners’ next steps, as the type of error dictates the appropriate instructional response.

Program Improvements

A benefit of engaging in collaborative common assessments involves gathering local program improvement data. When teachers do not create, use, and analyze assessments collaboratively and commonly, they have only isolated data to offer. Such data are filled with more questions than answers: What happened in that classroom? Was it an anomaly? Or, did the instruction, the chosen curricular resources, the pacing, the use of formative assessments, or the student engagement practices cause it? The data from one classroom to the next will have too many variables to provide valid and reliable schoolwide improvement data. When data are common and teams assemble them in comparative ways, however, patterns, themes, and compelling questions emerge. These allow teams to make more informed, strategic decisions and establish inquiry-based efforts to answer complex problems. Using common data, teams may focus their program improvements in the following areas.

• Curriculum alignment and modifications: Teams make certain that they have selected a rigorous curriculum that aligns with the standards. For example, using collaborative common assessment data, team members might discover they need to increase their focus on nonfiction texts, which alters their future curricular choices.

• Instructional strategies and models: Having teams analyze instructional strategies and models or programs does not mean teachers must teach in the exact same ways. It does mean, however, that teachers must isolate the strategies (which they can deliver with their own creative style) that work best with rigorous content, complex processes, or types of challenges that learners may be experiencing.

• Assessment modifications: When assessment results go awry, teams often engage in improving the assessment before they examine curriculum or instructional implications. In doing so, teams can accidentally lower the assessment’s rigor to help learners meet the target when the assessment may not have caused the issue. For this reason, teams should explore needed assessment modifications only after they explore curriculum alignment and instructional implications. Still, it is always important that teams examine the assessment itself; sometimes, weak directions or confusing questions or prompts are the variables that cause common student errors.

The more valid, reliable, and frequent local improvement data become, the more likely teams and schools can manage program improvements in significant and timely ways without relying so heavily on external testing data.

Overall, the collaborative common assessment process requires a far greater commitment to teamwork, instruction, and results than the simplistic, popular notion that teams give benchmark assessments and look at the results together.

Ensuring Safety and Shared Commitments

Without a doubt, learning in a public setting by exposing personal successes and failures is risky business. Because of this, the mere suggestion of common assessments may terrify teachers. Without clarity of purpose and commitments of support from administrators, teachers may fear that a negative motivation underpins the organization’s intent.

In truth, common assessments were always only meant to serve as a promising practice to increase teacher success and student achievement. But if teams, schools, and districts don’t handle the common assessment process with thought and great care, concerns regarding uniformity, competition, compliance, or overtesting could become a reality. Think of the common assessment process like any other tool; for example, a hammer could help build a house, but it could also help tear a house down.

When all levels of the organization—teams, schools, and even districts—manage common assessments in collaborative ways, and when teachers receive clear expectations and participate in generating and endorsing shared commitments, then teachers can feel safe to take intellectual risks and explore what deep learning looks like in their content area and grade level. This, then, makes collaborative common assessments the most promising practice teachers can use to support job-embedded, real-time learning regarding the complex issues they face daily in the classroom.

To support the right work, leaders must be transparent. Transparency, however, must exceed simply clarifying purpose, as that rarely removes suspicion of motivation. It’s extremely helpful when leaders engage teachers in generating shared commitments to allow for an ego-free zone. Shared expectations provide clarity of purpose, but truly shared commitments provide teachers with the language and tools to keep each other safe and hold each other mutually responsible for the work at hand. It is only when teams feel safe on the journey that they will launch into the risk taking necessary to learn from their experiences.

Shared commitments establish clear understanding and develop parameters to guide the work at all levels of the organization. Such commitment statements offer the organizational promises necessary to create the culture of safety required for intellectual risk taking among professionals.

The following examples of shared commitment statements highlight the kinds of agreements teams might create in order to guide their future decision making and hold each other mutually accountable to the work of common assessments.

• Team commitment statements:

° We will strive to set preferences aside and come together collaboratively to examine best practices and appropriately adapt in data-based ways to address individual student learning needs. Ultimately, we will increase student achievement.

° We will use the collaborative common assessment process to become more reflective and to improve our core instruction, our assessment practices and tools, and our curricular resources.

° We commit to provide extensions and interventions for all of our learners, ensuring they receive the targeted support required to move them forward. We will continue to work with them to ensure mastery on our prioritized learning expectations.

• School commitment statements:

° We will use collaborative common assessments within our teams and across this school to generate evidence of learning. We will use the evidence to reveal successes, learn about improvements, and create supporting learning structures for our students.

° We will build a system of interventions to target the instructional needs that emerge from common assessments. We will monitor the effectiveness of our intervention system and commit to improve it when and where necessary.

° We will improve and refocus instruction based on emerging evidence from common assessments so we can better prepare our learners to succeed beyond our school walls and ultimately to contribute to a global and competitive society.

• District commitment statements:

° We commit to employ rigorous and relevant benchmark assessments with stakeholder input and to monitor the consistency of opportunity from school to school and classroom to classroom.

° We will empower PLCs to elicit, analyze, and act on evidence of student learning for the purpose of continuous improvement in teaching and learning. A shared ownership of learning is critical to the success of both teachers and students.

° We will work with schools to identify areas of concern so that we can support teachers in understanding and implementing the work.

Collective commitments, developed by all who have a hand in doing the work, provide the safety net teachers require during the common assessment process. Figure 1.1 (page 14) outlines a general process that organizations can use to develop commitment statements.

While developing commitment statements can take time, the work allows all stakeholders to become clear and comfortable with the changes the statements ask them to make. Commitment statements increase buy-in and provide the assurances needed to encourage intellectual risk taking. Shared agreements naturally form the parameters for all future decision making.


Figure 1.1: Transforming collective statements into commitment statements.

Visit go.SolutionTree.com/assessment for a free reproducible version of this figure.

Navigating the Process

Seeing the big picture of the process of collaborative common assessments does for teams what the global positioning system (GPS) does for drivers: it helps teams see the path ahead so they can anticipate next steps. And, just as most maps offer no straight line or single option to get from point A to point B, the road map for collaborative common assessments is recursive and iterative. Teams may find themselves moving from the foundation to monitor learning and going back to the foundation for clarification, and so on. Figure 1.2 offers a pictorial representation of the collaborative common assessment process. This figure maintains the same elements of the collaborative common assessment process defined in Collaborative Common Assessments (Erkens, 2016); however, inside arrows have been added to show the relationships among different parts of the process, and some terminology is different within the individual boxes because, as with any quality process, improvements must occur.

Teams find it helpful to keep figure 1.2 on hand during their work, especially as they initially learn the process. In addition, it helps when teams consider the criteria for quality during the design, delivery, and data phases.

Using Quality Indicators

Because the quality of the common assessment system can vary based on how teams implement it, it’s helpful to have a set of quality indicators to guide teams as they develop and continuously monitor their common assessment system. Figure 1.3 (pages 16–17) offers a tool for discussing, planning, and monitoring quality indicators during the design, delivery, and data phases of the collaborative common assessment process. Teams can use the listed indicators to guide decision making during the planning phases or to evaluate their current efforts. The tool includes a rating scale for teams that prefer to use it as a discussion tool about quality levels for each criterion.


Figure 1.2: A pictorial representation of the collaborative common assessment process.

Visit go.SolutionTree.com/assessment for a free reproducible version of this figure.



Figure 1.3: Quality indicators for collaborative common assessments.

Visit go.SolutionTree.com/assessment for a free reproducible version of this figure.

Teams can use such a tool in many ways. At the school level, teachers could submit their individual responses anonymously to designated leaders (teacher leaders or administrators) so those key leaders have a sense of how the collaborative common assessment process is going across the building. Or, a team with high levels of trust and rapport could simply conduct a round-robin in which teachers share how they individually scored each item and then discuss their final results. If they do not use this as a survey tool, teams could use each indicator as a discussion point as they strive to revise and continuously improve their design, delivery, and data systems. Healthy and productive teams consistently self-evaluate their processes and then make the necessary modifications or refinements to guarantee their ultimate success (Erkens & Twadell, 2012).

Conclusion

Confusion and mistrust reign when the common assessment process is approached as a series of disparate activities rather than a cohesive and integrated system. Staff need to see the big picture of the common assessment process. Members need to understand the rationale, co-create commitments to one another, and then, with a high degree of comfort in place, make themselves vulnerable and available to learning. The teaching and learning process should never be something educators reserve exclusively for the classroom. When collaborative team meetings involve exploring the team’s impact from a place of openness, inviting intellectual risk taking for creative problem solving, and sharing responsibility for challenges that loom ahead, then teachers engage the teaching and learning process within their team meetings.

Team Reflections

Take a few moments to reflect on the following questions.

• What benefits do we anticipate having, or what benefits have we already experienced, from the process of engaging in collaborative common assessment?

• Do the staff (or does our team) clearly understand the work of collaborative common assessment? If so, what evidence do we have to support our belief? If not, what evidence do we have to support our belief, and how can we help them understand it?

• Do staff (or does our team) feel safe? How do we know? Are there promises we need to provide to create a safety net? If so, what might those be? What process or processes would we have to use to create and share those promises?

• How will our current teamwork help us fully utilize the collaborative common assessment process? Are there things we might need to improve on? If so, what, and how?

• Is our team, school, or district system ready for the collaborative common assessment process? Have we identified the elements that we will need to add, modify, or delete as we embark on the collaborative common assessment process? If so, what are they?
