The New Art and Science of Teaching, by Robert J. Marzano
CHAPTER 2
Using Assessments
At its core, assessment is a feedback mechanism for students and teachers. Assessments should provide students with information about how to advance their understanding of content and teachers with information about how to help students do so.
The desired mental states and processes for assessment are that:
• Students understand how test scores and grades relate to their status on the progression of knowledge they are expected to master.
To achieve these outcomes in students, there must be a transparent relationship between students’ scores on assessments and their progress on a proficiency scale. The following elements are important to effective assessment.
Element 4: Using Informal Assessments of the Whole Class
Informal assessments of the whole class provide a barometer of how the whole class is performing regarding the progression of knowledge articulated in a specific proficiency scale. Informal whole-class assessments typically don’t involve individual students’ recorded scores. The specific strategies associated with this element appear in table 2.1 (page 22).
The strategies in table 2.1 provide teachers with a wide array of options for informal assessment. Teachers can execute voting techniques quickly and repeat them multiple times. For example, the teacher asks a series of multiple-choice questions on score 2.0 content from a proficiency scale using PowerPoint slides. Students then use voting devices (such as clickers) to signify their answers. The teacher keeps track of the number of students who vote for the correct answer but does not record individual student scores. However, the teacher does report on the percentage of students with correct answers and uses that percentage as a barometer of how well the class as a whole is doing on score 2.0 content.
Response boards are similar to voting techniques. However, they provide more information. With this technique, students record their responses on erasable boards that are small enough for them to handle individually. Response boards allow students to write short constructed-response answers. Upon the teacher’s direction, students hold their response boards up so that only the teacher can see them. The teacher quickly surveys student responses and reports on what percentage of the class seems to know the correct answer.
Table 2.1: Using Informal Assessments of the Whole Class
Strategy | Description |
Confidence rating techniques | The teacher asks students to rate how confident they are in their understanding of a topic using hand signals (thumbs-up, thumbs-sideways, or thumbs-down) or using technology (for example, clickers or cell phones). |
Voting techniques | The teacher asks students to vote on answers to specific questions or prompts. |
Response boards | The teacher asks students to write their responses to a question or prompt on an erasable response board or response card. |
Unrecorded assessments | The teacher administers an assessment and immediately has students score their own tests. The teacher uses scores as feedback but does not record them. |
Source: Adapted from Marzano Research, 2016s.
When the strategies in this element produce the desired effects, teachers will observe the following behaviors in students.
• Students readily engage in whole-class assessment activities.
• Students can describe the status and growth of the class as a whole.
• Students seem interested in the entire class’s progress.
• Students appear pleased as the whole class’s performance improves.
Element 5: Using Formal Assessments of Individual Students
Formal assessments of individual students provide accurate information about their status at a particular point in time on a specific topic. To obtain such information, the teacher designs assessments based on the proficiency scale for a unit or a set of related lessons. In effect, the proficiency scale is the foundation for any and all assessments. A specific assessment might focus on all the content levels of a proficiency scale (scores 2.0, 3.0, and 4.0 content) or it might focus on only one level of a proficiency scale (such as score 2.0 content).
The various strategies that teachers might use to address this element appear in table 2.2.
Table 2.2: Using Formal Assessments of Individual Students
Strategy | Description |
Common assessments designed using proficiency scales | Teachers who are responsible for the same content taught at the same level work together to design common assessments on specific topics, which they express as proficiency scales, and use those assessments to provide formative and summative feedback to students. |
Assessments involving selected-response or short constructed-response items | The teacher creates and scores traditional assessments that employ selected-response and short constructed-response items. |
Student demonstrations | The teacher asks students to generate presentations that demonstrate their understanding of a topic. Teachers typically use student demonstrations with skills, strategies, or processes. |
Student interviews | The teacher holds conversations with individual students about a specific topic and then assigns each student a score that depicts his or her knowledge of the topic. |
Observations of students | The teacher observes students interacting with the content and assigns a score that depicts their level of knowledge or skill regarding the specific topic observed. |
Student-generated assessments | The teacher invites students to devise ways they will demonstrate competence on a particular topic at a particular level of proficiency. |
Response patterns | The teacher identifies response patterns at scores 2.0, 3.0, and 4.0 as opposed to adding points to create an overall assessment score. |
Source: Adapted from Marzano Research, 2016o.
Many of the strategies in this element represent different ways to assess students. For example, common assessments are those that collaborative teams create around a specific proficiency scale (see Marzano, Heflebower, Hoegh, Warrick, & Grift, 2016). To illustrate, assume that a collaborative team of three teachers is designing a common assessment. The teachers start by creating a proficiency scale like the one in figure 2.1.
Source: Marzano Research, 2016o.
Figure 2.1: Proficiency scale for common assessment.
Creating a proficiency scale is always the first order of business when designing a common assessment. As described in chapter 1, if the district has created proficiency scales for each subject area and grade level, this work is already done for collaborative teams.
The next step is to design an assessment that addresses scores 2.0, 3.0, and 4.0 content from the scale. Such an assessment appears in figure 2.2 (page 24).
The assessment in figure 2.2 includes items and tasks for score 2.0 content in section A, items and tasks for score 3.0 content in section B, and items and tasks for score 4.0 content in section C. Other assessments individual teachers generate might follow this same format. However, there are a variety of other forms assessments might take. For example, interviews are a type of assessment that involves teacher-led discussions during which the teacher asks questions that address score 2.0, 3.0, and 4.0 content. Based on students’ oral responses, the teacher assigns an overall score.
Source: Marzano Research, 2016o.
Figure 2.2: Assessment with three sections.
Student-generated assessments are those that individual students propose and execute. This particular strategy provides maximum flexibility to students in that they can select the assessment format and form that best fit their personality and preferences.
Probably the most unusual strategy in element 5—response patterns—involves different ways of scoring assessments. To illustrate this strategy, consider figure 2.3.
Source: Marzano Research, 2016o.
Figure 2.3: The percentage approach to scoring assessments.
Figure 2.3 depicts an individual student’s response pattern on a test that has three sections: (1) one for score 2.0 content, (2) one for score 3.0 content, and (3) one for score 4.0 content. The section for score 2.0 content contains five items that are worth five points each for a total of twenty-five points. The student obtained twenty-two of the twenty-five points for a score of 88 percent, indicating that the student knows score 2.0 content. The student acquired 50 percent of the points for score 3.0 content and only 15 percent of the points for score 4.0 content. This pattern translates into an overall score of 2.5 on the test, indicating knowledge of score 2.0 content on the proficiency scale and partial knowledge of score 3.0 content.
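The response-pattern logic above can be sketched in code. This is a minimal illustration, not Marzano's own algorithm: the mastery and partial-knowledge thresholds (80 percent and 40 percent) are my assumptions chosen to reproduce the figure 2.3 example, and in practice the translation involves teacher judgment rather than fixed cutoffs.

```python
def pattern_score(pct_2, pct_3, pct_4, mastery=0.80, partial=0.40):
    """Translate per-section percentages into a 0-4 proficiency-scale score.

    Each section's percentage is interpreted against its own level of the
    scale, rather than summing points into one overall percentage.
    Thresholds are illustrative assumptions, not fixed rules.
    """
    score = 0.0
    for base, pct in ((2.0, pct_2), (3.0, pct_3), (4.0, pct_4)):
        if pct >= mastery:
            score = base          # student knows this level's content
        elif pct >= partial:
            score = base - 0.5    # partial knowledge of this level
            break
        else:
            break                 # no credit at or beyond this level
    return score

# The student in figure 2.3: 88% on score 2.0 items, 50% on score 3.0
# items, and 15% on score 4.0 items.
print(pattern_score(0.88, 0.50, 0.15))  # 2.5
```

Note how the section-by-section pattern, not the total point count, determines the scale score: the same raw points distributed differently across sections would yield a different proficiency score.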
When the strategies in this element produce the desired effects, teachers will observe the following behaviors in students.
• Students can explain what the score they received on an assessment means relative to a specific progression of knowledge.
• Students can explain what their grades mean in terms of their status in specific topics.
• Students propose ways they can demonstrate their level of proficiency on a scale.
Planning
The design question pertaining to using assessments is, How will I design and administer assessments that help students understand how their test scores and grades are related to their status on the progression of knowledge they are expected to master? The two elements that pertain to this design area provide specific guidance regarding this overall design question. Teachers can easily turn these elements into more focused planning questions.
• Element 4: How will I informally assess the whole class?
• Element 5: How will I formally assess individual students?
The teacher can address the planning question for element 4 in an opportunistic manner in that he or she might simply take advantage of situations that lend themselves to informal assessments of the whole class. For example, a teacher is conducting a lesson on level 2.0 content. She decides to employ electronic voting devices to keep track of how well students are responding to the questions. As the lesson progresses, she notices that more and more students are responding correctly to questions. She uses this information as an opportunity to celebrate the apparent growth in understanding of the class as a whole. While she could have planned for this activity, the opportunity simply presented itself, and she acted on it.
The planning question for element 5 generally requires more formal planning of the assessments teachers will administer over the course of a unit or set of related lessons. Typically, teachers like to begin a unit with a pretest that addresses scores 2.0, 3.0, and 4.0 content in the proficiency scale. They must plan for this. It is also advisable to plan for a similar post-test covering the same content but using different items and tasks. Although teachers may plan for one or more other tests to administer to students in between the pre- and post-tests, it is also advisable for the teacher to construct assessments as needed and administer them. As long as they score all assessments using the 0–4 system from the proficiency scale, teachers can compare all scores, providing a clear view of students’ learning over time.
Implications for Change
The major change this design area implies is a shift from an assessment perspective to a measurement perspective. This is a veritable paradigm shift that has far-reaching implications. Currently, teachers view assessment as a series of independent activities that gather information about students’ performance on a specific topic that has been the focus of instruction. Teachers score most, if not all, of these assessments using a percentage score (or some variation thereof). At some point, teachers combine all students’ individual scores in some way to provide an overall score for the students on each topic. Usually, teachers use a weighted average, with scores on some tests counting more than others. They then translate the overall score to some type of overall percentage or grade.
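The conventional averaging approach described above can be made concrete with a short sketch. The scores and weights here are hypothetical, chosen only to show how combining percentages into one weighted average obscures which content a student actually knows.

```python
# Three test scores for one student, as percentages, and hypothetical
# weights in which the final test counts twice as much as the others.
scores  = [0.88, 0.50, 0.15]
weights = [0.25, 0.25, 0.50]

# The conventional overall grade: a weighted average of percentages.
overall = sum(s * w for s, w in zip(scores, weights))
print(round(overall, 3))  # 0.42
```

The resulting "42 percent" reveals nothing about the student's response pattern: a student who scored 88 percent on basic content and poorly on advanced content receives the same grade as one with a very different profile of strengths and gaps.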
This process tells us very little about what specific content students know and don’t know. In contrast, scores teachers generate from a measurement perspective provide explicit knowledge about what students know and don’t know. This is because a measurement approach translates scores on assessments into scores on a proficiency scale. No matter what type of assessment a teacher uses, it is always translated into the metric of a scale. For example, a teacher uses a pencil-and-paper assessment and assigns a score of 2.0 on the proficiency scale. A few days later, the teacher has a discussion with the student about score 3.0 content and concludes that the student has partial knowledge of that content. The teacher assigns a score of 2.5 on the proficiency scale based on that interaction. A week later, the teacher administers a test on the 3.0 content and concludes that the student demonstrates no major errors or omissions. Based on this assessment, the teacher assigns a score of 3.0 on the proficiency scale. This process employs a measurement perspective like that shown in figure 2.4.
Figure 2.4 indicates that assessments can take many forms, including tests, discussions, student-generated assessments, and so on. These different types of assessment might each have their own format-specific scores. For example, a teacher might initially score 2.0 content on a percentage basis. This percentage score is a format-specific score. Teachers can then translate format-specific scores into a score on a proficiency scale. This is the essence of the measurement process—assessments of differing formats and scoring protocols are always translated into a score on a proficiency scale. Measurements over time provide a picture of students’ status at a particular time and students’ growth. I believe this process allows teachers to gather more accurate, more useful information about students’ status and growth than the current practice of averaging test scores.
Source: Adapted from Marzano, Norford, Finn, & Finn, in press.
Figure 2.4: The measurement process.
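The measurement perspective can also be sketched as a simple record-keeping structure. The names and the three-entry history below are my own illustration (mirroring the pencil-and-paper, discussion, and test sequence described earlier), not a prescribed implementation: the point is only that every assessment, whatever its format, lands on the same 0–4 metric, so status and growth can be read directly.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    day: int            # day within the unit
    fmt: str            # "test", "discussion", "student-generated", ...
    scale_score: float  # format-specific score translated onto the 0-4 scale

# One student's measurements over a unit, per the example in the text.
history = [
    Measurement(1,  "test",       2.0),  # pencil-and-paper assessment
    Measurement(4,  "discussion", 2.5),  # partial knowledge of 3.0 content
    Measurement(11, "test",       3.0),  # no major errors on 3.0 content
]

status = history[-1].scale_score          # current status on the scale
growth = status - history[0].scale_score  # growth across the unit
print(status, growth)  # 3.0 1.0
```

Because every entry shares the proficiency-scale metric, a test and a discussion held days apart are directly comparable, which is precisely what a series of unrelated percentage scores cannot offer.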