
CHAPTER 2

The Consumer-Validation Approach

Research Into Practice


The clinical supervision framework continued to be the focus of my thinking about instructional improvement well into the mid-1970s. In the fall of 1972, I joined the faculty at Middle Tennessee State University. I was sure I would stay at the university for four or five years and then move to a larger university. Instead, I remained for the next forty-one years!

There were many reasons. Some were personal. We were settled in the Murfreesboro community, and our extended family was in nearby Chattanooga. Other reasons were professional. I eventually worked for five university presidents, all of whom treated me wonderfully. From the time I joined the faculty until I became dean of the College of Education, I was mentored by Ralph White, chair of the Department of Educational Leadership. More than a mentor, Ralph became a great friend. And importantly, each dean, vice president, and president not only allowed me to continue my consulting work with K–12 schools but also encouraged me to do so. Although in subsequent years I had opportunities to move to the University of Georgia and the University of Virginia, both of which I seriously considered, in the end, I realized I could not find another university culture as supportive as Middle Tennessee State.

In the early 1970s—along with Peabody College, Tennessee State University, and the Metropolitan Nashville Public Schools—Middle Tennessee State joined a consortium, the Teacher Education Alliance for Metro, to supervise and develop preservice teachers and provide in-service opportunities for educators. When I joined the university faculty, my initial assignment was with the consortium. During my second year on the faculty, I was joined by my good friend and colleague Jim Huffman, who had also received his doctorate from the University of Tennessee. Working daily in Nashville’s schools gave us the opportunity to expand our public school experiences, particularly in inner-city schools, even though we were full-time university faculty. Although the consortium was our primary responsibility, we were also assigned to teach graduate classes.

The combination of working in schools within a large metropolitan district, consulting in suburban districts in Chicago and Long Island, and teaching graduate classes composed primarily of teachers from surrounding suburban and rural districts in Tennessee provided a perfect laboratory for our interest in research-based instructional improvement.

As a university faculty member, my interest in the research on effective teaching practices became more focused. I learned of efforts by the U.S. Department of Education and the educational research community, particularly at Michigan State University and the University of Texas, to improve student achievement through research to identify specific teaching behaviors that directly affected student learning and behavior—what was at the time referred to as the teacher effects research. One of the researchers who had an early impact on our thinking was Jacob S. Kounin (1970).

The Kounin Research on Classroom Management

The work of Kounin (1970) was groundbreaking in two ways: first, it changed the way both researchers and practitioners viewed classroom management, and second, it fueled the emerging interest among researchers in objectively observing classroom instruction and its effect on student learning and behavior.

Kounin’s (1970) interest in classroom management began quite accidentally. During one of his classes, he reprimanded a student for reading a newspaper during the lesson. He noticed that although he had reprimanded only one student, his reprimand had an effect on other students in the class. He later asked, “Why were students who weren’t targets of the reprimand affected by it? Do differences in the qualities of the reprimand produce different effects, if any, on non-target students?” (Kounin, 1970, p. iii).

Kounin’s curiosity eventually led to years of research on the subject of classroom discipline—specifically, the effects of how teachers handle student misbehavior. He sought to discover whether some discipline techniques are more effective than others in affecting the behavior of an entire class. He also wanted to learn whether and how the discipline techniques of teachers who are perceived as good disciplinarians differ from the techniques of those who are perceived as weak disciplinarians (Eaker & Keating, 2015).

After five years of study, Kounin did not find many, if any, differences in the effects of various disciplinary techniques on the larger classroom environment. He found that the manner in which teachers handled misbehavior made no difference in how audience students (those who witnessed a reprimand but were not its target) reacted. He was unable to predict any ripple effect from the quality of a disciplinary event. Further, he found that a teacher’s actions after a student misbehaves (desist techniques) “are not significant determinants of managerial success in classrooms” (Kounin, 1970, p. 71).

Kounin was not deterred. He sought to know why some teachers are generally viewed as better disciplinarians than others. What differentiates good disciplinarians from weak disciplinarians? To answer this question, he began another research project. This second study differed in that Kounin and his colleagues collected data from videotapes. The use of videotapes allowed the researchers to gather data about the larger issues related to how teachers manage their classrooms, rather than only data focusing on the more specific issue of how teachers respond to misbehavior (Eaker & Keating, 2015). Kounin and his researchers were able to analytically review teacher and student behavior in real classrooms without relying on classroom observations in real time.

The findings in this second study were significant. Kounin and his colleagues found that how teachers managed their lessons prior to student misbehavior had a far more powerful effect on student behavior than teachers’ disciplinary actions after student misbehavior occurred. In other words, what teachers were doing prior to misbehavior to manage the whole classroom was more significant than how they dealt with individual incidents.

Kounin (1970) was able to group various classroom management behaviors into four categories.

1. Withitness and overlapping: Kounin (1970) defines teacher withitness as “a teacher’s communicating to the children by her actual behavior (rather than by simple verbal announcing: ‘I know what’s going on’) that she knows what the children are doing, or has the proverbial ‘eyes in the back of her head’” (p. 82). Associated with withitness is overlap: “what the teacher does when she has two matters to deal with at the same time. Does she somehow attend to both issues simultaneously, or does she remain or become immersed in one issue only, to the neglect of the other?” (p. 85).

2. Smoothness and momentum: Kounin and his colleagues found that effectively managing instructional and noninstructional transitions and movement reduced student misbehavior:

A teacher in a self-contained classroom, then, must initiate, sustain, and terminate many activities. Some of this involves having children move physically from one point of the room to another, as when a group must move from their own desks to the reading circle. At other times it involves some psychological movement, or some change in props, as when children change from doing arithmetic problems at their desks to studying spelling words at the same desks. (Kounin, 1970, p. 92)

There are any number of teacher behaviors that affect smoothness and momentum in classrooms. One such behavior identified by Kounin and his team is stimulus-boundedness—an event in which the teacher:

Behaves as though she has no will of her own and reacts to some unplanned and irrelevant stimulus as an iron filing reacts to some magnet: she gets magnetized and lured into reacting to some minutia that pulls her out of the main activity stream. (Kounin, 1970, p. 98)

Another such behavior they identify is dangle. This occurs when “a teacher started, or was in, some activity and then left it ‘hanging in mid-air’ by going off to some other activity. Following such a ‘fade away’ she would then resume the activity” (Kounin, 1970, p. 100). Kounin also found that smoothness and momentum were affected by flip-flops during transitions, when a teacher stops one activity, starts another, then returns to the original activity. Momentum is also affected by slowdowns, and slowdowns are, in turn, affected by such behaviors as over-dwelling and fragmentation. Kounin (1970) defines fragmentation as “a slowdown produced by a teacher’s breaking down an activity into subparts when the activity could have been performed as a single unit” (p. 105).

In short, Kounin found that teachers who exhibit behaviors that contribute to smoothness and momentum have fewer student behavior problems and receive the added benefit of keeping students more involved and focused on their work (Eaker & Keating, 2015).

3. Group alerting and accountability; valence and challenge arousal: Teachers who have fewer student behavior problems can keep the class focused on the lesson using a skill Kounin refers to as group alerting, which is the degree to which teachers are able to involve nonreciting students in the recitation task, maintain their attention, and keep them on their toes or alerted. Kounin also cited an additional dimension of maintaining group focus, formatting, which concerns what other students are required to do when a person or a small group is called on to perform or recite (Eaker & Keating, 2015).

Kounin found that misbehavior is decreased when students are motivated to engage in lessons and feel appropriately challenged by classroom activities. He grouped these findings into the category of valence and challenge arousal.

4. Seatwork, variety, and challenge: Closely associated with motivating and challenging students are behaviors related to variety. Teachers who plan for variability in lessons have fewer classroom management problems than teachers who have a limited repertoire of instructional approaches and tend to rely on the same approaches time and again (Eaker & Keating, 2015).

The research of Kounin was seminal for me and, I assume, many others. First, it shifted my professional focus from the clinical supervision process to research-based instructional effectiveness. I did not abandon the idea that observing teachers is important; rather, I found Kounin’s findings provided much-needed support for teachers—especially when linked with classroom observation. I began to see that observers could be armed with a toolbox of proven research strategies they could share with teachers to enhance the likelihood of instructional improvement.

Second, Kounin’s work differed from much of the existing research base in that it was practical, focusing on specific teacher behaviors that practitioners could easily grasp and understand. The findings simply made sense in the real world of schools. Rather than focusing on areas of schooling over which teachers had little or no control—for example, the impact of poverty or the lack of parental support—Kounin’s findings focused on specific things teachers could do or avoid doing to improve their classroom management skills and, thus, improve student behavior.

The work of Kounin was part of a rapidly growing body of research exploring new and exciting prospects for improving what happens in classrooms. My interest in Kounin’s research led me to increasingly focus on the correlation between what teachers do and say in classrooms and student achievement. This led me and Jim Huffman to a new area of interest—again rather accidentally—that would have a significant impact on my professional career.

The Consumer-Validated Research Model

As findings from a growing body of research became available, mainly through journals, newsletters, and meetings, Jim and I began sharing findings from the teacher effects research with teachers. We had a significant number of teachers in our graduate courses and in-service programs who were interested in improving their instructional effectiveness, and they saw research findings as a viable resource.

Initially, our work with teachers involved simply sharing a specific research finding and engaging in discussions that included question-and-answer sessions and teachers sharing their personal experiences. These meetings were somewhat helpful. They created an awareness about the research on teaching and enhanced teacher interest in and appreciation for research findings as a helpful tool. The nonthreatening discussions created a climate in which teachers felt comfortable reflecting on and sharing ideas and issues related to the effectiveness—or lack thereof—of teacher behaviors in certain classroom situations.

But our early approach did not result in teachers trying, to any great extent, to implement any of the research findings in their classrooms (Eaker & Huffman, 1980). Simply informing teachers about research findings was not making an impact on their instructional practices. We began to question the larger process of how practitioners acquire and use research findings.

The Limitations of Traditional Dissemination Processes

Even though most teachers genuinely want to improve their instructional effectiveness and value research findings, the impact of such findings on classroom behavior has traditionally been weak. Most dissemination approaches have relied on the expository mode (both verbal and print) to distribute research findings to teachers. Such approaches are only marginally effective, at best.

Although we learn through experience and the use of multiple senses, teachers are often expected to change personal and complex teaching behaviors by simply reading about research findings or listening to someone present the findings. Moreover, the research in question has usually been conducted by college professors. While professors might be best suited to conduct research studies, teachers often perceive many, if not most, college professors as having an unrealistic understanding of real-world preK–12 classrooms, and their findings therefore lack credibility with teachers. Further, teachers find many research conclusions to be vague or even contradictory. As a result, they often have difficulty thinking of specific things they can do in their classrooms to benefit from those conclusions (Eaker & Huffman, 1980).

Jim and I sought to develop a new process for disseminating research findings, one in which teachers would actively engage with the findings on effective instructional practices in their own classrooms. We felt that higher education’s findings should be validated by the teachers intended to use them. Just as Consumer Reports researchers test claims of product effectiveness, teachers would test instructional practices, following the consumer-validation model.

A New Model for Testing and Implementing Research Findings

Using our previous work with teachers in the Murfreesboro, Tennessee, school system, Jim and I decided to create a dissemination model that would go beyond informing and actually affect teachers’ classroom behavior. Our primary source of research studies was the Institute for Research on Teaching at Michigan State University. At that time, the institute was heavily involved in researching such areas as teacher decision making; reading diagnosis and remediation; classroom management strategies; instruction in language arts, reading, and mathematics; teacher education; teacher planning; effects of external pressures on teachers’ decisions; socio-cultural factors; and teachers’ perceptions of student affect. These studies, and many others, were conducted by some of the most highly respected researchers in the United States, including Lee Shulman, Jere Brophy, Christopher Clark, Andrew Porter, and Larry Lezotte. Additionally, we utilized research from others outside the institute, such as Thomas Good, Barak Rosenshine, and Jacob Kounin.

As we conceptualized this new dissemination model, our goals began to emerge (Eaker & Huffman, 1980).

• Increase awareness among the participant teachers of current research findings in the area of teacher behavior and student achievement.

• Improve individual teaching skills by having teachers apply research findings in their individual classrooms.

• Help teachers become more analytical and reflective about their own teaching behavior.

• Help teachers critically evaluate research findings in terms of their applicability to the classroom.

We planned our framework and activities based on certain assumptions. First, we assumed that the quality of interpersonal relations between us and the teachers, as well as within the participant group, would be a key factor in the success of the project. We knew that if our goal was to encourage experimentation, creativity, imagination, and a willingness to try new things, the process would need to be as nonthreatening as possible.

Second, teachers would need to feel secure and confident in their knowledge and understanding of the research. Simply put, we recognized that unless teachers developed a clear and accurate understanding of the findings, implementing them would be problematic and the odds of changing teacher behavior slim.

Third, we recognized that to be effective, the dissemination plan would have to focus on teacher behavior in K–12 classrooms. We would have to shift the focus from the university classroom, where teachers were informed of research findings, to K–12 classrooms, where teachers could apply, use, and test the practicality of specific research findings—an approach that in later years came to be known as action research.

And, finally, we assumed we would need a tool for collecting data from teachers’ experiences as they tried new instructional behaviors:

If teachers were to reflect on the effects of their teaching, and if they were to share information with others, then some sort of format needed to be developed in which teachers’ ideas, activities, insights, criticisms, attitudes, and feelings could be recorded. (Eaker & Huffman, 1980, p. 5)

The plan that eventually emerged contained four types of activities, or steps.

1. Seminars

2. Implementation

3. Classroom visitations

4. Sharing sessions

Seminars

Seminars provided deep, rich discussion around specific research findings related to instructional effectiveness. To avoid information overload, we decided to limit our seminars to four research areas.

1. Planning and organization of classroom activities

2. Student planning and time on task

3. Discipline and classroom management

4. Affective teaching skills

Jim and I synthesized the research findings in each of these four areas and made them as clear and concise as possible. Gathering, synthesizing, translating, and discussing research findings so that teachers could more easily understand them became a central aspect of the consumer-validation approach (Eaker & Huffman, 1980).

Implementation

Once teachers felt confident they understood the specifics and implications of a set of research findings, they implemented the findings in their day-to-day classroom instruction and recorded their observations and outcomes on a simple form:

• Section One contained a brief description of the specific research findings that the teacher was integrating into classroom instruction.

• Section Two was a blank page with the heading “Description of Classroom Behaviors Engaged in While Implementing the Above Research Findings.” Teachers were asked to list and briefly describe the things they did in their classrooms to implement specific research findings.

• Section Three was another blank page with the heading “Analyze What You Think and Feel About What Happened When You Tried Each of the Behaviors Listed in Section Two.” The purpose of this section was to prompt teachers to reflect on the results or impact of their implementation. We wanted the teachers to analyze what occurred and evaluate the classroom efficacy of specific research findings. Recording their perceptions also enabled teachers to be better equipped to share their experiences with the other participant teachers. (Eaker & Huffman, 1980, p. 7)

After each seminar, teachers worked to integrate the research findings into their regular classroom routines and keep the record form up to date.

Classroom Visitations

We recognized that teachers would have questions during implementation. Some would need simple reassurance, while others would need more technical assistance. To a great extent, the success of the consumer-validation project hinged on our classroom visitations to support teachers.

My years of focus on the clinical supervision process with its emphasis on objective classroom observation provided me with a wealth of experience in visiting teachers’ classrooms, and those earlier experiences proved invaluable to me as I now visited classrooms for an entirely different purpose.

The classroom visits that Jim and I made in the consumer-validation project were unlike those that were part of the clinical supervision process. These visitations were not so much observational as they were collegial and assistance-oriented. Often, a great deal of time was spent with a single teacher in a school. On other occasions, one of us met with a small group of teachers in the same school. There was no set way of conducting these visits. The goal was simply to assist teachers during the implementation phase.

Sharing Sessions

After teachers spent two or three weeks (depending on the complexity of the research focus) implementing research findings in their classrooms, they met in post-implementation seminars to share and discuss what had occurred during the implementation phase. The purpose of these seminars was threefold.

First and most obvious, the sessions were geared to engage teachers in sharing how they approached implementing each research finding. Sharing ideas, activities, and materials significantly increased the number and variety of instructional ideas that each teacher learned beyond those tried in his or her individual classroom. For example, as a group, the teachers tried thirty-four distinct ways of improving classroom organization and planning for instruction (Eaker & Huffman, 1980).

The second purpose of these seminars was to share conclusions about the instructional impact of specific research findings. Teachers reported that some ideas had a very positive impact on the effectiveness of their instructional practice, while others had a marginal impact. Some activities and approaches did not work well at all. Teachers learned to evaluate research findings, not after studying those findings, but rather after using those findings in their own classrooms. In this regard, teachers were becoming wise consumers of research findings.

The third purpose of the post-implementation seminars was to improve teachers’ instructional effectiveness through high-quality interactions with other teachers who had experimented with the same set of research findings. In short, the seminars provided a setting in which teachers engaged in rich dialogue with their professional colleagues. One teacher remarked that teachers rarely get to engage in extensive discussions with other teachers about teaching in an organized setting (Eaker & Huffman, 1980). It was interesting to learn that teachers perceived the experiences and perceptions of other teachers who had implemented research findings to be more credible than those of university professors.

We made significant discoveries about teacher perspectives as a result of the consumer-validation experiments (Eaker & Huffman, 1980). First, we learned that teachers valued research findings that focused on classroom instruction, and they believed research findings could have a positive impact on improved teaching. However, teachers often viewed findings as contradictory, and they did not perceive principals, faculty meetings, supervisors, in-service meetings, or professional meetings as resources providing them with useful research findings focused on effective classroom instructional practices.

Likewise, they did not believe undergraduate teacher preparation programs provided effective, specific information regarding classroom-related research. (However, they had a more positive view of graduate programs in this regard.) Teachers received most of their information about research on effective teaching practices from professional journals, though they felt even this avenue of dissemination should be expanded (Eaker & Huffman, 1980).

We also learned how to more effectively disseminate educational research findings. For example, we learned there must be a person with specific responsibility for studying relevant research, interpreting the findings for small groups of teachers who teach the same or similar content, demonstrating how specific findings can be implemented effectively in classrooms, and then monitoring and analyzing, with teachers in a collaborative setting, the effects of their efforts to improve their instructional practice.

This is a complex responsibility requiring special skills. First, people in these roles must be genuinely interested in and knowledgeable about research efforts that focus on instructional effectiveness. They do not need to possess the sophisticated skills required to generate research findings, but they must have the skills to accurately interpret such research. In addition, they must be familiar with and appreciate the challenging world of the classroom teacher. Researchers often lack recent preK–12 teaching experience, which leads to a general lack of credibility with classroom teachers. Finally, effective interpersonal and communication skills are a must. A person who works with teachers to enhance their instructional effectiveness through the implementation of proven research findings must be sensitive and empathetic and possess the skills that can lead to a climate of trust (Eaker & Huffman, 1980).

Our consumer-validation experiments reinforced our belief about the necessity of trust-based collaboration with teachers. The term consumer-validated research implies that the consumers of research findings—classroom teachers—play an important role in validating both the degree to which such findings are workable and relevant to the day-to-day world of the classroom and how such findings can be effectively implemented, modified, and improved on. This undertaking is collaborative in nature and, if successfully implemented, can result in a body of research findings that are teacher validated and thus more likely to be useful to other teachers.

Jim and I thoroughly enjoyed our work with classroom teachers as we sought to find ways to effectively implement research findings in classrooms, but perhaps the most professionally significant outcome, like most of the major stages in my professional life, occurred rather serendipitously. Another accidental friendship developed—this time with Judith Lanier, the dean of the College of Education at Michigan State University and co-director, along with Lee Shulman, of the Institute for Research on Teaching. Judy’s friendship was a professional life changer for me.

The Institute for Research on Teaching: Michigan State University

From the mid-1970s through the 1980s, the College of Education at Michigan State University, led by Judy Lanier, was widely considered the top educational research center in the United States—and justly so. The college was home to the Institute for Research on Teaching, which was funded by a $3.6 million grant from the U.S. Department of Education, in addition to funds from various other sources. Specifically, the focus of the institute was to investigate a broad spectrum of teacher behaviors and their effects on student learning, student behavior, and student motivation.

As mentioned earlier, the institute was home to many of the most highly respected researchers, representing a wide variety of backgrounds and areas of interest and expertise. Additionally, public school teachers worked in the institute as half-time collaborators in research initiatives. Judy Lanier and Lee Shulman were co-directors of the institute, with Larry Lezotte and Andrew Porter serving as associate directors. Larry had additional responsibilities for communication and dissemination. Although the institute published research reports, notes from conference proceedings, and occasional papers, it was its quarterly newsletter that became the vehicle for my association with the institute generally, and with Judy and Larry specifically.

I don’t recall ever requesting the newsletters, but I do recall a newsletter showing up in my mail one day. As I was glancing through it, I became particularly interested in a short column written by Judy. I was impressed by her comments about the important role teachers played in the work of the institute. I wrote a letter to her, thinking, perhaps, she might be interested in knowing of the work that Jim and I were doing to enhance teachers’ instructional effectiveness through helping teachers use research findings in their classrooms. I had no expectation of a response, much less her high degree of interest.

Shortly after receiving my letter, Judy left a message for me to call her. We chatted a bit, and she expressed genuine interest in and enthusiasm for our work. Fairly soon afterward, she invited Jim and me to visit the institute in Lansing and share our work at one of the regular faculty convocations. She explained that the purpose of these meetings was for faculty to share and discuss their work and their thinking with each other, and that occasionally, researchers other than the institute faculty were invited to speak.

Of course, we were excited and accepted the invitation. I clearly recall a great degree of anxiety, too. After all, we were simply two faculty members from a comprehensive mid-level university who had a background in using, rather than producing, original research results. In retrospect, I believe this was what fascinated Judy the most. She was genuinely interested in the question, How can we at the institute make a positive difference in the improvement of classroom instruction beyond merely engaging in and writing about research on teaching effectiveness?

Our initial visit to Michigan State could not have gone better. Judy, Larry, Jere Brophy, and the entire faculty were welcoming and hospitable. In addition to our presentation to the group, Judy had arranged for many one-on-one conversations with faculty. Although they were very busy with their teaching and their research, they were welcoming and engaging, on both a professional level and a personal level. (We learned early on in our conversation with Jere Brophy that he was a country music fan!)

Judy also hosted an informal gathering at her home that evening where we could mix and mingle with the faculty and her friends. It was truly delightful, and I like to think that it was at this gathering that we really began our friendship with Judy and with Larry. Larry drove us back to the airport, and as we chatted, I remember thinking, This is one of the nicest people I have ever met. Little did I know that Larry and I would be working much more closely together in the future as the research on effective schools began to gain national prominence.

I don’t recall how many times we visited the institute, but each time, I felt we became closer friends with Judy. I remember thinking that a position offer might be in the works and wondering how I would respond. On a personal level, my wife, Star, and I wanted very much to remain within driving distance of our family in Chattanooga, and I was concerned about the high-pressure culture of a large, research-oriented university. No such offer ever materialized, although I believe Judy did give the possibility some thought.

A Summing Up

My experience with the Institute for Research on Teaching and Judy Lanier had a positive impact on my career (and personal life) in a number of ways. First, I gained a much deeper knowledge of and interest in research-based approaches for improving classroom effectiveness. In retrospect, I think this played a significant part in the emphasis Rick DuFour and I placed on gaining shared knowledge about best practices and on collaborative teams engaging in action research in the Professional Learning Communities at Work process.

Rick and I both saw the potential for connecting the findings from the effective teaching research to the clinical supervision observation process. In his inimitable, witty way, Rick frequently pointed out the problem of the Now what? question in post-observation conferences. Rick remarked that observations of teachers, even if done well and accurately recorded, are of little value if the person conducting the observation has little to offer when a teacher asks, “I see that I need to improve my instruction in certain areas. What do you suggest I do?” Rick, with what Becky DuFour and I came to refer to as his dripping sarcasm, would observe that at this point most principals and supervisors are left to say, “Well, actually, I don’t have a clue about how you can improve your teaching or classroom management practices. You see, my skills are in observing and recording, not in the knowledge of effective teaching practices. Sorry.”

The research findings on effective teaching didn’t just help teachers; they also armed observers with a knowledge base that enhanced the effectiveness of the clinical supervision process. Jerry Bellon quickly saw the value in this emerging field of research and began including examples of the research findings in his consulting work with districts. In fact, within a few years, Jerry coauthored one of the first books that synthesized many of the research findings into a handbook for improving classroom instruction (Bellon, Bellon, & Blank, 1992).

Second, I gained confidence in working with schools and school districts and in speaking to groups about effective, research-based teaching practices. This knowledge and confidence enabled me to work with many more districts across the United States. Interestingly, this also changed my relationship with Jerry Bellon. He had viewed me, appropriately, as an associate who helped him work with districts that were interested in improving teaching through the clinical supervision process. As a result of my association with the Institute for Research on Teaching and my friendships with Judy Lanier and Larry Lezotte, as well as my individual work with districts regarding the teacher effects research, he came to view me as a person with my own area of expertise and, to some extent, a growing national reputation.

Third, my professional reputation was given a huge boost by an interview, conducted by Willard Duckett, assistant director of the Phi Delta Kappa Center on Evaluation, Development, and Research, that appeared in the Phi Delta Kappan in 1986. Duckett (1986) began the article by noting that the center and the Kappan were undertaking an initiative to introduce readers to individuals “who make exemplary contributions to research or who make effective, practical applications of research in the administration of public schools” (p. 16). Although I never asked, I always felt the interview was the result of some intervention by Judy Lanier. For this, and for much more, I have always been grateful for Judy’s friendship and support during this period of my professional journey.

The article certainly enhanced my national exposure. I think the point that received the most attention was the idea of “legitimizing” research, which Jim Huffman and I had picked up from Herbert Lionberger:

Duckett: By legitimizing, you obviously mean something more than passive acceptance.

Eaker: Exactly, it’s the process of becoming convinced, as opposed to being informed. Being informed doesn’t motivate one to do much. Legitimizing an idea is a process of dispelling fears or inhibitions and coming around to a favorable disposition leading to acceptance. When an idea has been legitimized, one is willing to act on it.

Duckett: Give me an example of how a consumer might legitimize, or go through experiential validation, with regard to research data.

Eaker: Manufacturers often do extensive testing of their products in both the development and the production stages; it is generally to their advantage, in a competitive market, to at least advertise supporting data. General Motors, for example, will have elaborate data on the performance capabilities of Car X. Those data might reveal that Car X can be expected to get between 16 and 31 miles per gallon of fuel (quite an indeterminate range, incidentally). Regardless of how thoroughly Car X was tested, you and I, as consumers, are probably skeptical about how closely “laboratory” conditions at GM match our own local driving conditions. Without questioning the validity of the GM tests, you and I will probably prefer to legitimize or validate such data at the experiential level. So, we ask our neighbors or co-workers about their experiences with Car X. We seek to validate the data in terms of our normal use of a car in everyday driving. To what extent will our driving habits approximate those of the professional drivers at the GM test ranges? How many variables in our environment approximate those at GM? In short, can we expect to average closer to 31 miles per gallon than to 16? If not, why not?

Thus, in seeking to validate the research data from a consumer’s point of view, we do not repeat the manufacturer’s tests, nor do we mount a formal challenge to the methodology. We simply set out to validate by determining the appropriateness of the data for use in our own specific situations.

Duckett: Does the analogy hold for teachers?

Eaker: Yes, we [Jim Huffman and I] think that teachers react to educational research reports in much the same way. Teachers can easily be informed about the research coming from Stanford, the University of Texas, or Michigan State University. But they want to know whether their own classrooms are similar enough to those of the experimental groups to lead them to expect similar results. Their questions go something like this: “Okay, these classroom variables worked well for the research group, but will they work in my classroom?” Our answer is simply, “Let’s check them out. Let’s check the applicability of that research for your situation, your instructional context.” That is what I mean by experiential validation of research. (Duckett, 1986, p. 14)

In addition to the publication of the Kappan interview, in 1986 I accepted the position of dean of the College of Education at Middle Tennessee State University. The university had a large teacher education program (the largest in Tennessee) that educated approximately one-fourth of all the new teachers in the state. My interactions with the people at Michigan State, and especially my observations of Judy Lanier in her role as dean, helped me think of ways the College of Education at Middle Tennessee State could partner with school districts not only to prepare more effective teachers but also to use research findings to improve the instructional effectiveness of classroom teachers.

Last, these experiences enabled me to meet and become friends with people who had a huge impact on me professionally and personally. I developed a close professional relationship and a personal friendship with Lynn Canady at the University of Virginia during this time. I knew of Lynn from my teaching days in Chattanooga. While I was at Brainerd High School, Lynn was the principal at Dalewood Jr. High School in the same suburban area. Under his leadership, Dalewood developed a growing reputation for innovation, particularly in the area of flexible scheduling. Lynn became a faculty member at the University of Virginia and was widely regarded for his work on block scheduling and grading and for assisting school districts with school improvement initiatives.

I don’t recall exactly when Lynn contacted me—perhaps it was a result of the Kappan interview—but regardless, he invited me to present to a group of public school administrators and teachers at the University of Virginia. Lynn had created a consortium between Virginia school districts and the university through which the university would assist districts in school improvement efforts, especially by arranging for a variety of presenters on school and instructional improvement topics.

Called the School Improvement Program (or SIP), the consortium rested on a simple premise: it would be more cost effective for each participating district to pay a rather small membership fee and have access to a number of presenters throughout the year than for each district to bear the costs of bringing speakers to its own schools. It was through efforts such as the SIP initiative, as well as his willingness to assist districts himself, that Lynn became highly respected in Virginia. And, through this work—and his writing—Lynn developed a strong national reputation.

After my first visit to the University of Virginia, Lynn invited me to work with him and numerous districts in Virginia, particularly in the valley that ran from Bristol, Tennessee, to Roanoke, Virginia. Through my work with Lynn, I consulted in Winchester, Virginia, on occasion. Interestingly, my work in Winchester led me to become acquainted with a former superintendent of the district who had retired and become an executive with one of the largest concrete manufacturing companies in Virginia, the Shockey Precast Group.

We enjoyed each other’s company very much, and he decided the Shockey Precast Group could benefit from the concepts that were the focus of my work in Winchester—building an organizational culture through focused, collaboratively developed mission, vision, values, and goals. I had dinner with the owner of the company and the top executives, which led to my consulting with the company on a number of occasions and eventually being asked to present at the annual meeting of the Virginia Concrete Association.

I mention this not only because it was a great experience and I met some wonderful people but also because it was the first time I realized that the work Rick and I were doing, and the ideas we were talking and writing about, were applicable to organizations in general. Through the years, Rick and I were asked to speak to quite a few corporate executives and business groups.

The larger point is this: these experiences, which broadened my thinking and enhanced my confidence, would not have happened if Lynn Canady had not invited me to work with him at the University of Virginia. More important, Lynn and I would not have met, become friends, and remained so for the next forty years or so. I am much indebted to and thankful for Lynn Canady.

As I’ve noted, through my association with the Institute for Research on Teaching, I became friends with Larry Lezotte. While most of the time I spent at the institute was with Judy Lanier or researchers such as Jere Brophy, Larry usually sat in on meetings, and he was also nice enough to drive Jim Huffman and me to and from the airport. Little did I know that Larry and I would become friends and spend considerable time together in the next phase of my professional journey—more about that in the next chapter.
