
1 Introduction: Smart People


I’m not sure when I found out that some kids had high IQs. When I did find out, I’m not sure I much cared. When we were kids, we had our own ideas about “smart,” and they had very little to do with IQs. The third-grade boys, for example, had developed their own distinct intellectual hierarchy: it consisted in small part of baseball trivia, in small part of the aptitude for petty crime, and in very substantial part of the skills—cognitive and otherwise—needed for insulting our peers (and, of course, their families). The girls, meanwhile, probably had their own hierarchy, but in the third grade, that was a mystery we boys had no interest in solving.

In the three-part hierarchy in which the boys subsisted, the ability to insult was undeniably the most important branch of intellect. It was also the most elaborate, itself consisting of three developmental stages: the first came with the recognition that curse words could be used as insults; the second was marked by the ability to use some curse words (one in particular) as participles to modify other curse words; and the third arrived with the realization that almost any curse word could be made doubly insulting by adding -face, -head, or -breath as a suffix. Progress through these stages, it seems to me now, was as much art as science: I remember one poor kid whose social fate was sealed the day he called me a “f—ing ass-head.”

There was one insult we used quite a bit, and it was about the only time we showed any interest in the IQ concept. For no specific reason, or at least not for reasons having anything to do with perceptions of intelligence, we found it immensely gratifying to call one another “retards.” We had, of course, no idea who or what a “retard” was, and we were fairly liberal in constructing synonyms: “reject” was thought to convey the same message, as were the more elaborate “mental retard,” “mental reject,” or, less elaborately, “mental.” All we knew about any of these terms was that they had all the ingredients for a good insult: they were apparently somehow demeaning; they had quite a funny sound to them; and no one, as far as we knew, would ever confront us with the embarrassing revelation that what we intended as an insult was in fact an accurate description.

All of this changed sometime in the third grade, when we discovered Mrs. Sweeney’s “special” class. We had wondered for some time why the window in Mrs. Sweeney’s door was covered with cardboard, wondered specifically why we kids weren’t supposed to look in. I suppose it never occurred to us that the cardboard also kept the kids inside from looking out, but then, lots of that kind of stuff never occurred to us. What did occur to us was that Mrs. Sweeney’s kids had to be “special” in some very strange way, strange enough that we had to be prevented from seeing them. Our imaginations ran wild with the possibilities, and we were not at all disappointed the day Dicky Hollins told us that he knew the secret to those kids, that his mom knew the mother of some kid in Mrs. Sweeney’s class, and the kid was, honest-to-god, a retard.

Just what that meant remained a mystery. For all we knew, “retards” were circus freaks or juvenile delinquents or some barely imaginable combination of the two. We deduced that they must be somehow pathetic and perhaps somehow frightening; we knew for sure that they were different from other kids, and that the difference was wildly fascinating.

For months, our school days were preoccupied with the effort to catch a glimpse of the retards. We’d linger outside Mrs. Sweeney’s door at lunch time; we’d knock on her door and hide just around the corner; we’d come to school early in the hopes of seeing the retards arrive and stay late to catch them leaving; and through it all, we never saw more than Mrs. Sweeney’s disapproving frown. And then Mrs. Sweeney failed to show up for school one morning, and we were sure it was because the retards had killed her, and we anxiously awaited the showdown, the cops versus the retards. But she only had a cold, and she was back early the next day, with the cardboard over her window, preserving the great mystery inside.

The spell was broken on a spring morning. We had a substitute teacher that day, and he was either more gullible or more lazy than most, so when we told him that it was physical fitness week, and that instead of geography we were having extended recess in the morning, he dutifully took us outside to play kickball at 10:30 in the morning, a full ninety minutes before our scheduled break. It did not occur to him, nor did it occur to us, that 10:30 might have been the time set aside for some other kids’ recess, and that some other kids might have been on the playground, playing kickball, when we arrived.

But 10:30 was Mrs. Sweeney’s time, and she was there when we got to the playground. So too was her class.

“It’s them.” Dicky Hollins, now our resident authority, made the matter-of-fact pronouncement, and all the boys knew exactly what he meant. We all stood there, transfixed, and watched them play. I recall thinking that some of them looked a little different, but I’m not quite sure how. And that some of them moved a little differently, though again, I could not explain how.

On they played, oblivious, it seemed, to our presence.

We stood silently and watched.

They kicked the ball. They ran. They laughed. They celebrated.

One kid dropped a ball kicked right at him.

We all heard him when he cussed.

And it occurs to me now, as I think about it for the first time, that no other kid called him a name.

Our substitute said something to Mrs. Sweeney, and then, with a very serious look on his face, he said something to our class. The kids in our class started to file back into school, but some of us boys lagged behind, and somebody grabbed me by the arm, and dragged me up the walk to the school, and I kept turning around, just looking. When we all got back to our classroom, the substitute handed out maps of the United States, and he told us to color in the Middle Atlantic states, and when we complained that we didn’t have crayons, he told us to use our pencils. He gave us an hour and a half to finish the exercise, and I spent the last eighty-five minutes drawing pictures of Frankenstein, and football players, and World War II fighter planes. And all the time I was thinking about Mrs. Sweeney’s kids, and I looked around the room at the other boys in the class, and I knew they were thinking about the same thing too.

I don’t remember ever seeing any of Mrs. Sweeney’s kids again. Nor do I remember ever saying a word about them to any of my friends, or hearing a word about them from anybody. It was as if the whole day never happened. Except for one thing: after that day, for some reason, none of us ever called any kid a “retard” again.

Carrie Buck was a retard. That, at least, was the prevailing opinion of her in 1924, when the director of the Virginia Colony for the Feebleminded concluded that the eighteen-year-old resident of the Colony was “feebleminded of the . . . moron class.” Carrie’s mother was also of limited intellect, a moron as well, according to the director. Carrie was born out of wedlock and, it was assumed, had inherited both her mother’s intellectual disabilities and her moral defects: Carrie too, after all, had conceived an illegitimate daughter. For her mental and moral failings, Carrie’s foster family arranged to have the young mother institutionalized in the Virginia Colony in January 1924. That September, the Colony, acting under the authority of a Virginia state law, sought to sterilize Carrie Buck.

The director of the Colony, Albert Priddy, had been the chief architect and sponsor of Virginia’s sterilization law. The law found its scientific support in eugenics theory, still in vogue in 1920s America, but compulsory sterilization depended upon more than the mere belief in the genetic perfectibility of humanity. For that drastic measure, some odd combination of moral and political values was necessary: a bit of social Darwinism, a bit of political Progressivism, some economic conservatism, a little thinly disguised racism, and, for men like Priddy, a certain priggish disdain for the sexual habits of the poor. Armed with this intellectual grab bag, Priddy had won the near unanimous approval of the Virginia legislature for his sterilization law in March 1924.

But his advocacy was not ended. Similar laws had been struck down by courts in other states, some because they did not afford sufficient procedural protection for their subjects, others because they unfairly targeted only the residents of state institutions. But with his counsel and friend, Aubrey Strode, Priddy had carefully drafted the Virginia law to meet these objections; now, they were determined to find the test case that would secure judicial approval. The case they settled on was Carrie Buck’s.

The Virginia law provided for the sterilization of inmates of state institutions where four conditions were met. First, it had to appear that the “inmate is insane, idiotic, imbecile, feeble-minded or epileptic” and, second, that the inmate “by the laws of heredity is the probable potential parent of socially inadequate offspring likewise afflicted.” Third, sterilization must not harm “the general health” of the inmate, but rather, as the fourth and final requirement, must promote “the welfare of the inmate and of society.” Carrie Buck, the young unwed mother, provided an easy case under the terms of this statute, particularly given the way the deck was stacked.

Priddy’s petition for the sterilization of Carrie Buck was approved by the Special Board of Directors of the Colony; under the Virginia law, Carrie was entitled to appeal that decision to the Virginia state courts. Her trial was held on November 18, 1924. Aubrey Strode called eight lay witnesses to testify that Carrie was feebleminded and immoral and that her mother and daughter were “below the normal mentally”; he called two physicians to testify to the medical advantages of sterilizing the feebleminded; he called a eugenicist to testify by deposition as to the value of eugenic sterilization as “a force for the mitigation of race degeneracy”; and he called Priddy himself to testify that, for Carrie and society at large, compulsory sterilization “would be a blessing.”

Irving Whitehead, Carrie’s appointed attorney, called no rebuttal witnesses.

The court approved the sterilization order, and the highest court in Virginia affirmed this decision. Carrie’s attorney dutifully appealed to the United States Supreme Court. On May 2, 1927, the Supreme Court, by a vote of eight justices to one, approved the involuntary sterilization of Carrie Buck.

Justice Oliver Wendell Holmes wrote the opinion for the Court. Holmes had already served on the Supreme Court for a quarter century; for twenty years before that, he had been a justice of the Massachusetts Supreme Judicial Court, the last three years as chief justice. He had been educated in private schools, at Harvard College, and at Harvard Law School. He was, by common consensus, a very smart man.

He was able to dispose of Carrie Buck’s claim in a few pithy sentences.

We have seen more than once that the public welfare may call upon the best citizens for their lives. It would be strange if it could not call upon those who already sap the strength of the State for these lesser sacrifices, often not felt to be such by those concerned, in order to prevent our being swamped with incompetence. It is better for all the world, if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind. The principle that sustains compulsory vaccination is broad enough to cover cutting the Fallopian tubes. Three generations of imbeciles are enough.

Carrie Buck was sterilized on October 19, 1927. Not long after, she was “paroled” from the Colony into the care of a family in Bland, Virginia, for whom she worked as a domestic servant. She married; she and her husband had, of course, no children. Her husband died after twenty-four years of marriage. Carrie eventually remarried and in 1970 moved back to her hometown of Charlottesville, Virginia. For ten years, she and her husband lived there in a one-room, cinder block shed. In 1980 Carrie was hospitalized for exposure and malnutrition; later, she and her husband were taken to a nursing home where, on January 28, 1983, Carrie Buck died at the age of seventy-six.

Not long before her death, Carrie Buck was interviewed by Professor Paul Lombardo of the University of Virginia. He writes:

Throughout Carrie’s adult life she regularly displayed intelligence and kindness that belied the “feeblemindedness” and “immorality” that were used as an excuse to sterilize her. She was an avid reader, and even in her last weeks was able to converse lucidly, recalling events from her childhood. Branded by Holmes as a second generation imbecile, Carrie provided no support for his glib epithet throughout her life.

Carrie Buck, it appears, was no “imbecile” at all. She was poor, she was uneducated, and these no doubt contributed to her “diagnosis.” But even under the crude categories of the day, under which “imbeciles” ranked below the various grades of “morons” in the grand hierarchy of “feeblemindedness,” Carrie was no “imbecile” and probably was not “feebleminded” at all.

Carrie Buck’s attorney might have known better, might have known that Carrie was no imbecile, was no moron, and was perhaps not feebleminded at all. He might have explained all this to the reviewing courts. But Carrie Buck’s attorney apparently had other plans. Irving Whitehead, it evolves, was a former member of the Board of Directors of the Virginia Colony for the Feebleminded and a longtime associate of Strode and Priddy’s. Indeed, a building at the Colony named in Irving Whitehead’s honor was opened just two months before the arrival of a young mother named Carrie Buck.

Irving Whitehead might also have known the truth behind Carrie’s moral failings. Carrie’s illegitimate daughter was conceived in neither a moral lapse nor an imbecile’s folly; she was conceived when Carrie was raped by the nephew of her foster parents. Carrie Buck was institutionalized not to protect her welfare, but to preserve her foster family’s good name.

In the end, Carrie Buck was a victim not of nature, but of the people around her. The eventual debunking of the sham that was eugenics merely confirmed what should have been obvious all along: the “science” that dictated Carrie’s unwelcome trip to the Colony infirmary was in reality only politics, the cruel politics of inequality.

There is, finally, the matter of Carrie’s daughter, the third of the three generations of imbeciles. Relatively little is known of her life, save this: Vivian Buck attended regular public schools for all of her life, before dying of an infectious disease at the age of eight. And in the next to last year of her short life, Carrie Buck’s daughter earned a spot on the Honor Roll.1

There are no more imbeciles in America, no more morons, no more feebleminded of any type or degree. We eliminated them all, installing in their place people with varying degrees of mental retardation: at first, some were educable or trainable; now their retardation is mild or moderate or severe or profound. And when we determined that we had too many people with mental retardation, we tightened the general definition of the class, eliminating half the mentally retarded population in a single bold stroke that would have made the eugenicists proud.

But some things have not changed. In contemporary America, we still sterilize people with low IQs. When they escape sterilization, we routinely deny them the right to raise their own children. Systematically, too, we deny them the right to marry, to vote, to choose their residence, to live on their own. We have made a history for people with mental retardation that is replete with the normal horrors of discrimination—stigmatization, segregation, disenfranchisement—but we have added to their lot the unique horrors of involuntary sterilizations and psychosurgery. In our words and in our deeds we have been relentless in our efforts to diminish them, to make them lesser people. All of this, because they are not sufficiently smart.2

The remarkable furor that followed the publication of Richard Herrnstein and Charles Murray’s book The Bell Curve tended to obscure the altogether unremarkable thesis of that text. Simply put, its thesis was this: in American society today, smart folks get ahead, and not-so-smart folks don’t. As their critics pointed out, Herrnstein and Murray relied on a whole lot of questionable material to make this point, and stretched the bounds of science to posit a slew of weak correlations among various “biological” traits, “intelligence,” and assorted indicia of “success.” Still, the basic empirical proposition of the text has survived most critical scrutiny: if you are smart, then indeed, you get ahead; if you are not, chances are, you won’t.

This, of course, came as good news to smart people throughout the country, and they were not reluctant to express their satisfaction. For them, it was not merely that the inevitable equation of smartness and success ensured their fortunes; what was more important, rather, was that they could feel downright good about their prospects.

There was, after all, a subtext to The Bell Curve’s simple story that is almost of moral dimensions. The people who have made it have done so because they are smart; they, in a very clear sense, deserve their success. Conversely, the people who have not made it have failed because they are not-so-smart; they, in an equally clear sense, deserve their failure.

Understandably, then, The Bell Curve was not perceived as bringing very good news for the not-so-smart people, who, to the extent that they could understand the text’s rather simple message, had to be forgiven for finding it just a bit depressing. For these people, after all, there were to be no smiling fortunes; destiny promised them less wealth, less status, less comfort. The Bell Curve offered to the not-so-smart people little more than a single lesson in civics: hereafter, they should no longer labor under the illusion that smart people were to blame for their misfortunes.

Indeed, the worst news for the not-so-smart people came in the political subtext of the book, and it was this reading that generated some of the most heated debate. For Herrnstein and Murray, there were clear policy implications to their findings. If smart people get ahead, almost no matter what, and if not-so-smart people fall behind, almost no matter what, then it does not seem to make a great deal of sense to devote massive amounts of energy and resources to the pursuit of social and economic equality. From a pragmatic viewpoint, those efforts were simply futile; moreover, if the moral lesson of their work was correct, then such rampant egalitarianism was simply unjust. New Deals, Great Societies, New Covenants and the like would never alter the basic social hierarchy; they would only flatten the pyramid by unfairly limiting the potential of the gifted and unnaturally rewarding the foibles of the inept.

Thus with one brutally simple idea, The Bell Curve, following centuries of “scientific” tradition, undermined the very foundations of the struggle for equality. The preoccupations of welfare state social engineers were no longer justifiable; their emphasis on, in The Bell Curve’s words, “changes in economics, changes in demographics, changes in the culture” and solutions founded on “better education, more and better jobs, and specific social interventions” seemed untenable in the face of this natural order. What mattered instead was “the underlying element that has shaped the changes: human intelligence.”

Not surprisingly, then, The Bell Curve set its sights on what should be easy targets: the practical tools of egalitarians—lawyers and the law. It is law, they suggested, that most clearly embodies our unnatural preoccupation with equality, law that redistributes our resources, levels our opportunities, and reduces our culture to the least common denominator. The Bell Curve challenged the fairness and practical wisdom of the full range of legislative enactments and judicial decisions designed to make America a more equal nation. While acknowledging the central place of equality in America’s political mythology, The Bell Curve called into serious question the realizability of this goal. Antidiscrimination laws are inefficient, desegregation counter-productive, affirmative action unwise, unfair, and perhaps immoral. In the worldview of The Bell Curve, the legal devotion to equality must sit in an uneasy tension with the combined effects of liberalism’s commitment to individual freedom and the immutable differences in human aptitude. The idea is as old as the Federalists, but now it comes with “new” scientific support: all men, it seems, were not created equal after all; it is only the law that pursues this quixotic vision.

Smart people succeed. From this simple empirical proposition emerged a counterrevolutionary policy prescription: law’s egalitarian ideal must invariably accommodate, or yield to, those inexorable commands of nature that distinguish the smart from the not-so-smart. Only smart people should succeed.

But The Bell Curve eluded a vital dilemma that inheres in its marvelously elegant empirical proposition: it is either tautological or wrong. It evolves that this central proposition holds true only because the terms of the equation, “smartness” and “success,” are not just empirical correlates, but definitionally synonymous: the culture rewards smartness with success because “smartness” is, definitionally, the ability to succeed in the culture. And, if any effort is made to imbue the terms with some independent meaning—to define “smartness” without reference to success, or “success” without reference to evidence of smartness—then the whole proposition falls apart: the equation becomes hopelessly confounded by the variables of class and culture, and whatever causal relationship remains between “smartness” and “success” begins to look, at the very least, bidirectional.

As the empirical proposition collapses, so too does the moral and political framework of The Bell Curve’s “natural” order, as well as its regressive critique of the law. It is simply not true that, throughout the history of this nation, law has been the great social equalizer, bucking the tides of natural justice. On the contrary, law has been and remains the great defender of the natural order, protecting the bounty of the “smart” from the intrusions of the “not-so-smart” while eluding all insight into the actual construction of those terms.

The Bell Curve got it backwards: law does not impose an artificial equality on a people ordered by nature; on the contrary, law preserves the artificial order imposed on a people who could be, and should be, of equal worth. Because it is culture, not biology, that makes people different. It is culture, not nature, that generates the intellectual hierarchy. And law maintains rather than challenges the smart culture.

I did pretty well in my early years of school. From the first through fifth grades, I got almost all As, and never anything less than a B +, except for in penmanship, where I tended to get mostly Ds. This last wasn’t for lack of effort, but for the life of me, I just could not master the cursive style. The disorder persists to this day.

When I was eleven my mom remarried, and we moved from our brick rowhouse into a completely detached split-level home with a driveway, a patio, and a backyard that seemed at the time large enough to get lost in. I changed schools at the same time, and got my first experience with what I now know is called academic tracking.

At my new school, the sixth grade was divided into four sections, A through D, with section A being for the really “smart” kids, B for the less smart kids, and so on down the line. Though I had a section A type record, I got assigned to section B; this, I figure, reflected either a skepticism about the academic standards at my old school or an emphasis on penmanship at the new one.

I did not fare well in section B. In section B, we were expected to talk about stuff, and most of the kids—feeling, I guess, at ease among friends—found this activity not the least bit challenging. I was another story. Most of the talk focused on current events, and while I sort of knew what was going on, and think I understood when I was told, the simple fact was that I could not bring myself to say much about the matter. And so I got mostly As on my homework, and even As on the tests, but when called on in class I was completely unresponsive. Day after day the sixth-grade teacher would call on me, sometimes for opinions, sometimes just to repeat the received wisdom of a prior lesson, and day after day I would sit in silence, staring at my desk, waiting for the teacher to move on.

I had lots of conferences with the teacher, and at least two that I can recall with the principal. They were not terribly productive. Yes, I could hear the teacher’s questions; yes, I knew the answers; yes, I knew the importance of sharing the answers with the teacher and the rest of the class. No, I was not trying to embarrass the teacher; no, I was not afraid of being wrong; no, I certainly did not cheat on my homework or on the tests. And no, I was sorry, but I did not know what the problem was, or what anyone should do to fix it.

I guess the principal came up with his own solution, because I spent a couple of days with the kids of section C. The move may have been punitive or it may have been remedial, but, in either event, I loved it. The kids of section C did not bother with current events; our focus was on drawing—and I loved to draw. In science, we drew pictures of solar systems and molecules; in social studies, we drew pictures of historical figures; in math, we drew pictures of numbers, then added anatomical features to convert them into animals or people. Precisely how the kids of section C were expected to contribute to the war against communism I do not know, but I do know our training for service was a heckuva lot more fun than section B’s.

At the same time, it worries me some in retrospect that section C’s drawing lessons were so thoroughly unencumbered by any actual knowledge of the things to be drawn. I don’t remember ever learning anything at all about the physical appearance of molecules or solar systems, let alone anything about what they did or why they were important. And about the only math I remember from my time in section C is that a 6 is versatile enough to be any animal from a giraffe to a turtle, and a 9 can be the same animal in extreme distress, but a 2 isn’t worth a damn for anything but a snake.

We learned just as much about the historical figures. I remember a Thanksgiving lesson that required each of us to draw a picture of Pocahontas, an easy task for me, I having studied at my old school from a textbook that featured a very nice picture of the Thanksgiving heroine. The image stuck with me—she looked like a movie star, and I think I had a crush on her—and so I finished the assignment with ease, producing a credible rendition of Sophia Loren in buckskins with a feather sticking up out of her head. Some of the other kids at my drawing table—in section C we did not use individual desks—did not know Pocahontas as well as I did, and a couple of the boys drew Pocahontas as a very fierce, and very male, Indian warrior, which certainly would have made Captain John Smith’s story a more interesting one, but was, as far as I know, largely inconsistent with the historical record. But we all got the same grade on the assignment, except for the one kid who drew Pocahontas holding a bloody scalp, an image, I guess, that ran counter to the sentiments of the holiday.

I did not get to stay in section C all that long. I spent half a day with some other principal-type person taking a slew of tests; a week later, I was in section A. In section A, we seldom talked about current events, and we hardly ever drew. Instead, we diagrammed sentences (kind of like turning numbers into animals, but with correct answers), learned the periodic table (there really is a krypton), bisected triangles (with compasses and protractors), argued about who started the War of 1812 (it was the British, of course), and even wrote and performed a play (based on Romeo and Juliet, to every boy’s dismay). We had lots of tests in section A, and some were like the ones I took in the principal’s office, multiple-choice tests with separate answer sheets where you had to be careful not to mark outside the little circles with your number 2 pencil. Sometimes kids would leave section A, and sometimes new kids would arrive, and always we kept taking the tests.

I did not do all that great my first few weeks in section A, but I eventually got the hang of things and, with help from my teacher, once again started getting As. I made friends in section A, and some of them would be friends clear through high school. I sometimes missed the kids in section B, and also the kids in section C, but I lost touch with all of them. From time to time I wonder what happened to them, and to the kids in section D, whom I never even knew.

I learned a lot in section A, acquired a lot of new skills, gained a lot of new knowledge. We didn’t get to draw much or talk about current events, but we learned to think and to write, and we learned lots of new concepts and new words and new phrases. Maybe it was in section A that I learned the meaning of “self-fulfilling prophecy.”

George Harley and John Sellers wanted to be police officers. In the District of Columbia, applicants for positions in the Metropolitan Police Department were required to pass a physical exam, satisfy character requirements, have a high school diploma or its equivalent, and pass a written examination. Successful applicants were then admitted into Recruit School, a seventeen-week training course. Upon the completion of their training, recruits were required to pass a written final examination; those who failed the final examination were given assistance until they eventually passed.

The initial examination given to all Department applicants was known as Test 21, an eighty-question multiple-choice test prepared by the U.S. Civil Service Commission. The test purported to measure “verbal ability”; a few sample items follow:

Laws restricting hunting to certain regions and to a specific time of the year were passed chiefly to

a. prevent people from endangering their lives by hunting

b. keep our forests more beautiful

c. raise funds from the sale of hunting licenses

d. prevent complete destruction of certain kinds of animals

e. preserve certain game for eating purposes

BECAUSE is related to REASON as THEREFORE is related to

a. result

b. heretofore

c. instinct

d. logic

e. antecedent

BOUNTY means most nearly

a. generosity

b. limit

c. service

d. fine

e. duty

(Reading) “Adhering to old traditions, old methods, and old policies at a time when circumstances demand a new course of action may be praiseworthy from a sentimental point of view, but success is won most frequently by facing the facts and acting in accordance with the logic of the facts.” The quotation best supports the statement that success is best attained through

a. recognizing necessity and adjusting to it

b. using methods that have proved successful

c. exercising will power

d. remaining on a job until it is completed

e. considering each new problem separately

PROMONTORY means most nearly

a. marsh

b. monument

c. headland

d. boundary

e. plateau

The police department had determined that a raw score of forty on Test 21 was required for entrance into Recruit School; applicants who failed to attain that score were summarily rejected.

George Harley and John Sellers failed to score at least a forty on Test 21 when they took the test in the early 1970s; as a consequence, they were denied admission into Recruit School. Both Harley and Sellers are black, and it turned out that they were not the only black applicants to “fail” Test 21. From 1968 to 1971, the failure rate for black applicants was 57 percent; in the same time frame, by contrast, 13 percent of the white applicants failed Test 21.

In 1972 Harley and Sellers joined a lawsuit challenging the hiring and promotion practices of the Metropolitan Police Department. They contended, among other things, that reliance on Test 21 amounted to discrimination against black applicants in violation of the Constitution and federal civil rights laws. Test 21, they noted, had never been validated as a predictor of job performance: it was true that high scores on Test 21 were positively correlated with high scores on the Recruit School final examination, but neither Test 21 nor the final examination had been validated with reference to the Recruit School curriculum or the requirements of the job. Neither test, in short, bore any necessary relationship to police training or police work.

But the trial judge, Gerhard Gesell, of the U.S. District Court in the District of Columbia, rejected Harley and Sellers’s claim. Judge Gesell ruled, first, that “reasoning and verbal and literacy skills” were significant aspects of work in law enforcement: “[t]he ability to swing a nightstick no longer measures a policeman’s competency for his exacting role in this city.” Gesell then rejected the argument that Test 21 was an inappropriate measure of those skills. “There is no proof,” he wrote, that Test 21 is “culturally slanted to favor whites. . . . The Court is satisfied that the undisputable facts prove the test to be reasonably and directly related to the requirements of the police recruit training program and that it is neither so designed nor operates to discriminate against otherwise qualified blacks.”

It was true, Gesell granted, that “blacks and whites with low test scores may often turn in a high job performance.” But “[t]he lack of job performance validation does not defeat the Test, given its direct relationship to recruiting and the valid part it plays in this process.” The police department, he concluded, “should not be required on this showing to lower standards or to abandon efforts to achieve excellence.”

The U.S. Court of Appeals reversed Gesell’s decision. It was clear, the court first held, that the use of Test 21 did amount to racial discrimination. The statistical disparity was itself enough to establish that claim; moreover, it arose amid a growing body of evidence suggesting that, as a general rule, “blacks are test-rejected more frequently than whites.” “This phenomenon,” the court noted, “is the result of the long history of educational deprivation, primarily due to segregated schools, for blacks. Until arrival of the day when the effects of that deprivation have been completely dissipated, comparable performance on such tests can hardly be expected.”

The court also rejected the suggestion that the use of the test—and its racially discriminatory effects—could be justified by some objective job-related requirements, that, in legal terms, the discrimination was necessary to advance a “compelling governmental interest.” “The assertion of predictive value of Test 21 for achievement in Recruit School is based upon a correlation between Test 21 scores and scores on written examinations given during a 17-week training course,” the court noted. “We think this evidence tends to prove nothing more than that a written aptitude test will accurately predict performance on a second round of written examinations, and nothing to counter this hypothesis has been presented to us.” “As long as no one with a score below 40 enters Recruit School,” the court concluded,

as long as all recruits pass Recruit School, as long as the Department’s actions concede that Recruit School average has little value in predicting job performance, and as long as there is no evidence of any correlation between the Recruit School average and job performance, we entertain grave doubts whether any of this type of evidence could be strengthened to the point of satisfying the heavy burden imposed by [the law].

In 1976, the U.S. Supreme Court reversed yet again, reinstating Judge Gesell’s decision. In an opinion that altered the basic fabric of constitutional law—and impossibly hindered, in some views, the legal struggle for equality—the Court held that racially discriminatory effects were not enough to establish a constitutional violation. Rather, the guarantee of “equal protection of the laws” was abridged only by intentional discrimination. Only “purposeful discrimination” could create the type of inequality that required some compelling justification; discriminatory effects required no justification at all. There was, then, no constitutional inequality when black applicants failed Test 21 at four times the rate of their white counterparts; in the absence of proof that the Metropolitan Police Department intended this result, the Constitution was not implicated at all.

Justice Byron White wrote the opinion for the Court. Justice White was the valedictorian of the class of 1938 at the University of Colorado, a Rhodes scholar, and a graduate with high honors from Yale Law School. He was—and is—a very smart man. But Harley and Sellers’s claim, he wrote, left him befuddled: “[W]e have difficulty understanding how a law establishing a racially neutral qualification for employment is nevertheless racially discriminatory and denies ‘any person . . . equal protection of the laws’ simply because a greater proportion of Negroes fail to qualify than members of other racial or ethnic groups.”

Nowhere in his opinion did White explain how he knew that Test 21 was “racially neutral.”

Near the close of his opinion for the Court, White did explain why evidence of a racially disparate impact could not suffice to establish a constitutional claim:

A rule that a statute designed to serve neutral ends is nevertheless invalid, absent compelling justification, if in practice it benefits or burdens one race more than another would be far reaching and would raise serious questions about, and perhaps invalidate, a whole range of tax, welfare, public service, regulatory, and licensing statutes that may be more burdensome to the poor and to the average black than to the more affluent white.

There are, in short, too many racial disparities for the Constitution to redress without proof of an unlawful intent. The unhappy coincidence that black applicants failed Test 21 at four times the rate of their white counterparts could not alone offend the Constitution: validated or not, Test 21 was “race-neutral” because the Court could not afford to believe otherwise.3

Before the Civil War, every southern state except Tennessee prohibited the instruction of slaves. After a brief period of promise during Reconstruction, black education was effectively suppressed by the violent reactions of Redemption and the gradual entrenchment of the Jim Crow system. Some of the tools of racial hierarchy were legal, some extralegal. As to the former, racial segregation, combined with grotesque disparities in the allocation of educational resources and radical differences in the focus and depth of the curricula, was both pervasive and effective. As to the latter, a relentless scheme of orchestrated violence, directed principally at educated black Americans, achieved for white supremacy what laws alone could not.

Today, America’s white citizens are more likely than its black citizens to receive undergraduate and graduate education, more likely to attend primary and secondary schools in districts with superior resources, and more likely to be enrolled in “advanced” or “college preparatory” courses; its black citizens are more likely to be suspended, expelled, or failed from high school, are more likely to attend overcrowded and underfunded primary and secondary schools, and are more likely to be assigned to remedial education classes, or labeled “mentally retarded.” America’s black citizens are offered fewer math and science courses as primary and secondary school students, are forced to learn with smaller supplies of texts and equipment, materials that are, in any event, more apt to be hopelessly outdated, and are more likely to be led in their educational efforts by underpaid and underqualified “substitute” teachers.4

And white people, for some reason, keep doing better on “race-neutral” tests.

The stories of Carrie Buck and of George Harley and John Sellers are the stories the law usually tells about “smart.” They are not stories of unbridled egalitarianism: no wealth is redistributed, no incompetence rewarded, no unqualified applicant gets the prize, no loser suddenly wins. The stories told by the law are the stories told by the culture at large: the smart people get ahead, the not-so-smart people don’t. The law, truth be told, ensures this result.

This book is about being “smart”—about its meaning and its consequences. It is about attempts to expand its meaning and make it more inclusive, and it is about attempts to preserve its conventional meaning, to maintain its exclusivity. It is a book about the relationship between “intelligence” and “race,” and the way the two phenomena have been created together. It is about the relentless interplay between science and politics in shaping the conventional meaning of both constructs, and the vital role played by law in shielding those conventional meanings from critical scrutiny.

This book, then, is about the deeply rooted cultural myths that surround the concept of smartness: the myths of biology, the myths of merit, and the myths of equality under law. It is about the myths that persuade us, over our better moral judgment, that not all people—and maybe only very few—are smart. It is about, then, the “smart” culture.

The mythology of “smartness” is old: it is an original part of our national fabric. It found full expression during the very founding of the Republic, as a vital part of the effort to reconcile the lofty rhetoric of universal liberty and equality with the undeniable realities of social caste, political exclusion, and chattel slavery. Not all people were in fact created equal, endowed with inalienable rights, and meant to share in the blessings of liberty. What distinguished the included from the excluded were the natural differences in “the faculties of men”: Indians, Africans, women, and the poor all were differentiated by “nature,” and relegated to the lower rungs of the “natural” order.

That was in the beginning. Four score and a few years later, a reconstructed nation abolished slavery and promised all persons the “equal protection of the law.” But the architects of Reconstruction—as a collective whole—were intensely ambivalent, and the promise they offered—of legal equality—was maddeningly ambiguous. Even that promise withered in the face of assertions of natural superiority: separate but equal was in truth only separate, and the inequality was entirely in keeping with the natural order. By the end of the nineteenth century, a new evolutionary science seemed to confirm the inevitability of the American hierarchy: even in a land of unrestrained liberty—and perhaps especially in such a land—only the fittest will thrive. Over a century into the American experiment, social caste and political exclusion remained the general rule, and while chattel slavery yielded to sharecropping and debt peonage and wage labor, the economic order was essentially the same. And whenever it was called upon, the Supreme Court would be there to confirm that it was all perfectly natural.

Another century later, and much finally has changed. Suffrage is now genuinely universal. Public or private discrimination based on race, gender, or disability now violates federal law. The promise of legal equality, at least, is now a reality.

Yet by every social, political, and economic measure, the hierarchies of race, gender, and disability endure. And to explain the reality of inequality in the face of professed equality, we make recourse still to the same old myths:

• The myth of identity: that the salient differences among groups of people—race, gender, disability—are biological.

• The myth of merit: that our social, political, and economic markets are free and neutral, and only occasionally corrupted by the bias of individual discrimination.

• The myth of intelligence: that the unequal outcomes of social, political, and economic competition reflect the inborn inequities of nature.

• The myth of equality under law: that equality can never transcend the empty realm of form, for the law is limited by tradition and powerless in the face of the natural order.

Thus the mythology of smartness endures. And it is all untrue. And the real tragedy is this: by now, we should know better.

We should know that the biological differences among groups of people are trivial, and that the salient differences are generated through the processes of social interaction.

We should know that our markets reflect the preferences of the people who have structured and maintained them, and that these biases—structural and unconscious—constitute the real discrimination.

We should know that unequal outcomes—in education, in employment, and yes, on tests of smartness—reflect the cumulative advantages and disadvantages of centuries of discrimination, and the same biases that pervade all of our culture.

We should know that our laws and traditions are only what we choose to make them, and that equality can be as real as we dare.

All of this we should now know, yet somehow refuse to believe. And in rejecting the liberation offered by contemporary understanding, we have rejected as well the very best of our national heritage. We abandon the egalitarian vision of the people who founded and reconstructed our nation; we embrace instead their tragically flawed mythology.

Smart people do get ahead. They stay ahead. But it is not only natural.

One question haunts this book: for all the talk about “socially constructed this” and “culturally determined that,” for all the critiques of the “natural order” and all the appeals to equality, isn’t it undeniably true that some people—and perhaps some groups of people—are just plain smarter than others?

The answer is simple and obvious: yes, some people—and perhaps some groups of people—are smarter than others.

It’s the explanation that’s complicated. Because the fact is that both the question and the answer are meaningless unless we are clear about what we mean by “smart.” The problem is that we often are not very clear, and we often are not in agreement, and so our assertions about the relative smartness of some people as compared to others are too easily misunderstood, and it becomes far too easy to assume that their profound smartness—and other people’s lack of it—is more natural, more inevitable, and more inherently meaningful than it really is.

So let me try to be clear about what I mean when I say that some people—and perhaps some groups of people—are smarter than others.

Some people are less “smart” than others for identifiable physiological reasons. Neurological disorders often have direct effects on cognitive ability; sometimes these disorders may so affect a cognitive ability that we will say that the person is cognitively impaired. If the impairment is spread among a wide enough range of cognitive abilities, it may be possible to say that—in most cultural contexts—the person will be less smart than the norm. Here, however, a certain note of caution is in order: in some discrete contexts, our cognitively impaired person may be quite smart after all—smart, that is, at some things, if not at most.

Some people don’t do as well as other people on standardized measurements of “intelligence.” Ideally, “intelligence” means the ability to succeed in the culture; standardized measurements of intelligence should thus measure the relative ability to achieve cultural success. Someone with less measured intelligence should then have—if everything goes according to plan—less ability to succeed in the culture. Again, it may be possible to say that in most cultural contexts, the person will be less smart than the norm.

Here, many notes of caution are in order. It is easy to assume that these intelligence differences that we have measured represent natural variations among people, variations that are fixed in the biological makeup of the individual. But that is not necessarily—and probably not often—the case.

Nature, after all, does not dictate which qualities will correlate with cultural achievement. It is for us to decide which aptitudes—which skills and knowledge, talents and abilities, cognitive and affective traits—are valuable and which ones are not. We could exalt formal deduction, or creative analogic reasoning, or practical problem-solving skills, or moral reasoning, or empathic judgment and interpersonal skills. We decide, in other words, what will count as “intelligence.”

Nature does not dictate which people will be afforded the optimal chances to acquire the aptitudes for cultural success. It is for us to decide who will receive the optimal chances—the cultural environment, the formal education, the social opportunity—to acquire intelligence. Research now consistently documents the profound effects of environmental stimulation on cognitive development and the equally profound effects of environmental deprivation. It is a social fact that the probabilities of growing up in comparatively stimulating and deprived environments are not equally distributed among race and class: successful people—smart people—are uniquely situated to perpetuate their advantages. And we keep them there. We decide, in other words, who will be afforded the best chance to get “smart” and stay smart.

Nature does not dictate our response to measured differences in intelligence. We decide whether those differences should be simply ignored, actively countered, or preserved as justifications for the prevailing inequities. In the United States, we long ago stopped talking about regional differences in “IQ,” as well as most ethnic disparities. The gender disparities, meanwhile, we eliminated by modifying the tests. The disparities of race, however, retain a singular legitimacy. We give them that. We decide, in other words, whether we actually like our hierarchies of “smartness.”

All of which is to say that “superior” and “inferior” intelligences are not entirely natural. On the contrary, it is substantially our decisions that make people either more or less “smart.”

There is something concededly counterintuitive about all this. We have come to believe in smartness as an inherent quality, as something people are either born with or not. We have come to believe that it is fairly immutable, that individual limitations are pretty much fixed. And we can hardly be faulted for conceiving of it as something universal; it is hard to imagine choosing other things to count as “smart” beyond the things “we” have chosen. So the suggestion that smartness is “made” strikes us as, well, a not-very-smart suggestion.

But then again, we know that people disagree about smartness, about whether a student or a teacher or a politician or a neighbor is “smart.” Maybe, then, smartness is not entirely inherent; maybe it does require our subjective assessment.

And we know that people can get smart. They learn knowledge, and skills, and even learn how to learn: even the vaunted “IQ” is not stable. Maybe, then, smartness is not immutable; maybe it depends on our efforts, as both teachers and learners.

And we know that some pretty smart people are not universally smart. The most gifted Japanese haiku poet may be unable to write an instruction manual for English-speaking purchasers of Japanese-made VCRs. And no matter how good the manual, the most brilliant American brain surgeon may never master the art of programmed recording. Law students are trained to “think like lawyers”; medical and nursing students, thank goodness, are not. Maybe smartness is not an abstract, universal entity; maybe it depends on the contexts we construct.

So the idea that smartness is partly “made” is not entirely counterintuitive; on the contrary, it actually confirms our practical experience with the concept. Still, something about the notion of a constructed intelligence seems slightly incredible: too fantastic, perhaps too optimistic. We can’t quite shake our skepticism. “Okay,” we might say, “you socially constructed wiseguy, answer me this: If people are really as smart as we make them, then do you mean to tell me that a person with mental retardation can be made smart enough to be, say, a nuclear physicist?”

Well, here’s one honest answer: probably not. I don’t know what it takes to be a nuclear physicist; I don’t know whether it takes the kind of aptitudes that are measured by IQ tests. But if it does, then the person with mental retardation—who, by definition, did badly on an IQ test—has farther to go to be a nuclear physicist than the person who is not mentally retarded. She may, in fact, have farther to go than our patience, our resources, and our skill are capable of taking her. If that’s the case, then she cannot be a nuclear physicist—or, at least, not a very good one.

But here’s the key: not much of this—and maybe not any of it—is natural. We—society, culture, who- or whatever is in charge here—figure pretty heavily in the determination whether a person with mental retardation, or anyone else for that matter, can be a nuclear physicist. Consider:

Being a nuclear physicist is not a natural state: it’s a job that we made, requiring attributes that we define.

Competence in that job is not a naturally defined condition: there are questions of degree and subjective judgments that inhere in the determination whether someone is a “qualified” nuclear physicist (or a lawyer, or a judge, or a vice president of a company, or a vice president of the United States).

Training for that competence is not a natural process: our cultural talents and commitments determine who we will train, and how well.

Even the mental retardation that necessitates special training is not a natural condition: we make “mental retardation”—as we make the intelligence of all people—in the complex interactions between the individual and the society in which she lives, interactions that shape her opportunities, the perceptions of her, and even, we now know, the very physiology of her brain, all in a relentless gestalt of intellectual advantage, or disadvantage.

So maybe she can’t be a nuclear physicist. We just need to acknowledge, even in this most extreme of examples, that it’s at least partly our doing, that with some will or ingenuity, an intervention here, a cultural change there, things might, just might, turn out differently. And as the scenario gets more commonplace—as either the job or her measured intelligence grow closer to the norm—the gaps between what might be and what could be and ultimately what should be grow more narrow, and it becomes increasingly likely that if anything stands in the way of our mentally retarded subject—our neighbor, our friend, our sister—it’s something that we put there, and something that we can remove.

If it all sounds too altruistic, or too utopian, then it is perhaps important to remember this: not so long ago, we were fairly certain that a woman’s aptitudes did not embrace skills from the political realm. “Race” was a disqualifying characteristic throughout social and economic life, due to the perceived cognitive incapacities of some racial groups. We restricted the immigration of certain ethnic groups—most, in fact, except those from Britain and northern Europe—because of the genetic inferiority of the immigrant stock. Feebleminded people were so inferior that we institutionalized them, and sterilized them, to prevent our being swamped with incompetence. In each case, arguments against the conventional wisdom seemed too altruistic, too utopian.

It seems the conceit of each generation that it has reached the state of ultimate enlightenment: each age is a progressive one, each society the most perfectly egalitarian. I know a husband and wife who had a baby boy; the state took their baby away before they could even leave the hospital. They had done nothing wrong except not be smart enough: they both were mentally retarded. A generation or so ago, they would have been simply sterilized; in their day—in our day—they lost their newborn baby to the state. It’s an odd kind of progress.

But they got their baby back; they became a family after all. They will need help to succeed; their boy will need help. It is hard to know what will happen to him, hard to know how smart he will be. Maybe, in the next generation, sterilization will be back in vogue. Or maybe his daughter will be a nuclear physicist.

There’s one last thing that I think we need to acknowledge, and it’s maybe the most important of all. Even if somebody can’t be a good nuclear physicist, and even if it is somehow due entirely to her own “natural” limitations, it absolutely does not mean that she is not smart. Here, I think, is the greatest danger in the concept, the most insidious aspect of “smartness” and “intelligence” and “IQ” and “mental retardation.” From one perceived inability we induce a general inferiority: someone who doesn’t do well on standardized tests becomes “dumb” or even “mentally retarded,” and that means that not only will they not become very good nuclear physicists, they also won’t become very good citizens, or parents, or people. Being not smart at that one thing means that they are just plain not smart—at anything. And that means that they deserve—in terms of cultural success—nothing.

But it means nothing of the sort, or rather, it should mean nothing of the sort. Because there are many kinds of smartness, and people can be smart in many different ways, and the fact that they are not smart—or are not made smart—in one way does not mean that they cannot be smart in many other ways. Really bad nuclear physicists can be really good nurses; really bad nurses can be really good lawyers; really bad lawyers can be really good auto mechanics; really bad auto mechanics can be really good teachers; and any of them—but not necessarily all of them—can be really good mothers and fathers.

Here too we have made the decisions: to ignore the different kinds of smartness; to collapse it all into one general, abstract concept; and to order all the differences, as matters of degree, as more smart or less, as superior and inferior. Here too, in this final crucial way, we make some people smarter than others, by rewarding the smartness of some people and ignoring the smartness of others. We make some people smart, in short, just by choosing to call them that.

So some people are smarter than others. It would be wrong not to admit it. But wrong too not to admit that in most cases, and in most respects, we made them that way.

The remainder of this book examines in detail the mythology of smartness: as it was initially conceived by the founders of 1787 and the reconstructors of 1868; as it persists today in American science and politics; and as it has been maintained by American law. In the process, it confronts one of the most vicious myths of smartness: the myth of “races” of people that are, by nature, intellectually superior and inferior. That myth, it evolves, is an old myth, but not an ancient one; an outmoded myth, but a durable one. And it has been made durable by American law.

This book also examines a competing vision—one also promised by the founders, adopted by the reconstructors, confirmed by science, and realized, in fleeting moments, in American politics and American law. It is a vision of a nature that blesses all people—and all groups of people—and of a community in which equality is not merely a legal concept but a lived condition. It is a vision of a truly smart culture, one in which “smart” means all of us.

