CHAPTER 2 The Human Machine

After fifty-six days of hard skiing, Henry Worsley glanced down at the digital display of his GPS and stopped. “That’s it,” he announced with a grin, driving a ski pole into the wind-packed snow. “We’ve made it!” It was early evening on January 9, 2009, one hundred years to the day since British explorer Ernest Shackleton had planted a Union Jack in the name of King Edward VII at this precise location on the Antarctic plateau: 88 degrees and 23 minutes south, 162 degrees east. In 1909, it was the farthest south any human had ever traveled, just 112 miles from the South Pole. Worsley, a gruff veteran of the British Special Air Service who had long idolized Shackleton, cried “small tears of relief and joy” behind his goggles, for the first time since he was ten years old. (“My poor physical state accentuated my vulnerability,” he later explained.) Then he and his companions, Will Gow and Henry Adams, unfurled their tent and fired up the kettle. It was −35 degrees Celsius.

For Shackleton, 88°23' south was a bitter disappointment. Six years earlier, as a member of Robert Falcon Scott’s Discovery expedition, he’d been part of a three-man team that set a farthest-south record of 82°17'. But he had been sent home in disgrace after Scott claimed that his physical weakness had held the others back. Shackleton returned for the 1908–09 expedition eager to vindicate himself by beating his former mentor to the pole, but his own four-man inland push was a struggle from the start. By the time Socks, the team’s fourth and final Manchurian pony, disappeared into a crevasse on the Beardmore glacier six weeks into the march, they were already on reduced rations and increasingly unlikely to reach their goal. Still, Shackleton decided to push onward as far as possible. Finally, on January 9, he acknowledged the inevitable: “We have shot our bolt,” he wrote in his diary. “Homeward bound at last. Whatever regrets may be, we have done our best.”

To Worsley, a century later, that moment epitomized Shackleton’s worth as a leader: “The decision to turn back,” he argued, “must be one of the greatest decisions taken in the whole annals of exploration.” Worsley was a descendant of the skipper of Shackleton’s ship in the Endurance expedition; Gow was Shackleton’s great-nephew by marriage; and Adams was the great-grandson of Shackleton’s second in command on the 1909 trek. The three of them had decided to honor their forebears by retracing the 820-mile route without any outside help. They would then take care of unfinished ancestral business by continuing the last 112 miles to the South Pole, where they would be picked up by a Twin Otter and flown home. Shackleton, in contrast, had to turn around and walk the 820 miles back to his base camp—a return journey that, like most in the great age of exploration, turned into a desperate race against death.

What were the limits that stalked Shackleton? It wasn’t just beard-freezingly cold; he and his men also climbed more than 10,000 feet above sea level, meaning that each icy breath provided only two-thirds as much oxygen as their bodies expected. With the early demise of their ponies, they were man-hauling sleds that had initially weighed as much as 500 pounds, putting continuous strain on their muscles. Studies of modern polar travelers suggest they were burning somewhere between 6,000 and 10,000 calories per day—and doing it on half rations. By the end of their journey, they would have consumed close to a million calories over the course of four relentless months, similar to the totals of the subsequent Scott expedition of 1911–12. South African scientist Tim Noakes argues these two expeditions were “the greatest human performances of sustained physical endurance of all time.”
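A rough check on that million-calorie figure, assuming an average burn of about 8,000 calories per day (the midpoint of the range above) sustained over roughly 120 days of marching:

\[
8{,}000\ \text{kcal/day} \times 120\ \text{days} \approx 960{,}000\ \text{kcal},
\]

which is indeed very nearly a million calories.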

Shackleton’s understanding of these various factors was limited. He knew that he and his men needed to eat, of course, but beyond that the inner workings of the human body remained shrouded in mystery. That was about to change, though. A few months before Shackleton’s ship, the Nimrod, sailed toward Antarctica from the Isle of Wight in August 1907, researchers at the University of Cambridge published an account of their research on lactic acid, an apparent enemy of muscular endurance that would become intimately familiar to generations of athletes. While the modern view of lactic acid has changed dramatically in the century since then (for starters, what’s found inside the body is actually lactate, a negatively charged ion, rather than lactic acid), the paper marked the beginning of a new era of investigation into human endurance—because if you understand how a machine works, you can calculate its ultimate limits.

The nineteenth-century Swedish chemist Jöns Jacob Berzelius is now best remembered for devising the modern system of chemical notation—H2O and CO2 and so on—but he was also the first, in 1807, to draw the connection between muscle fatigue and a recently discovered substance found in soured milk. Berzelius noticed that the muscles of hunted stags seemed to contain high levels of this “lactic” acid, and that the amount of acid depended on how close to exhaustion the animal had been driven before its death. (To be fair to Berzelius, chemists were still almost a century away from figuring out what “acids” really were. We now know that lactate from muscle and blood, once extracted from the body, combines with protons to produce lactic acid. That’s what Berzelius and his successors measured, which is why they believed that it was lactic acid rather than lactate that played a role in fatigue. For the remainder of the book, we’ll refer to lactate except in historical contexts.)

What the presence of lactic acid in the stags’ muscles signified was unclear, given how little anyone knew about how muscles worked. At the time, Berzelius himself subscribed to the idea of a “vital force” that powered living things and existed outside the realm of ordinary chemistry. But vitalism was gradually being supplanted by “mechanism,” the idea that the human body is basically a machine, albeit a highly complex one, obeying the same basic laws as pendulums and steam engines. A series of nineteenth-century experiments, often crude and sometimes bordering on comical, began to offer hints about what might power this machine. In 1865, for example, a pair of German scientists collected their own urine while hiking up the Faulhorn, an 8,000-foot peak in the Bernese Alps, then measured its nitrogen content to establish that protein alone couldn’t supply all the energy needed for prolonged exertion. As such findings accumulated, they bolstered the once-heretical view that human limits are, in the end, a simple matter of chemistry and math.

These days, athletes can test their lactate levels with a quick pinprick during training sessions (and some companies now claim to be able to measure lactate in real time with sweat-analyzing adhesive patches). But even confirming the presence of lactic acid was a formidable challenge for early investigators; Berzelius, in his 1808 book, Föreläsningar i Djurkemien (“Lectures in Animal Chemistry”), devotes six dense pages to his recipe for chopping fresh meat, squeezing it in a strong linen bag, cooking the extruded liquid, evaporating it, and subjecting it to various chemical reactions until, having precipitated out the dissolved lead and alcohols, you’re left with a “thick brown syrup, and ultimately a lacquer, having all the character of lactic acid.”

Not surprisingly, subsequent attempts to follow this sort of procedure produced a jumble of ambiguous results that left everyone confused. That was still the situation in 1907, when Cambridge physiologists Frederick Hopkins and Walter Fletcher took on the problem. “[I]t is notorious,” they wrote in the introduction to their paper, “that … there is hardly any important fact concerning the lactic acid formation in muscle which, advanced by one observer, has not been contradicted by some other.” Hopkins was a meticulous experimentalist who went on to acclaim as the codiscoverer of vitamins, for which he won a Nobel Prize; Fletcher was an accomplished runner who, as a student in the 1890s, was among the first to complete the 320-meter circuit around the courtyard of Cambridge’s Trinity College while its ancient clock was striking twelve—a challenge famously immortalized in the movie Chariots of Fire (though Fletcher reportedly cut the corners).

Hopkins and Fletcher plunged the muscles they were studying into cold alcohol immediately after whatever tests they performed. This crucial advance kept levels of lactic acid more or less constant during the subsequent processing stages, which still involved grinding up the muscle with a mortar and pestle and then measuring its acidity. Using this newly accurate technique, the two men investigated muscle fatigue by experimenting on frog legs hung in long chains of ten to fifteen pairs connected by zinc hooks. By applying electric current at one end of the chain, they could make all the legs contract at once; after two hours of intermittent contractions, the muscles would be totally exhausted and unable to produce even a feeble twitch.

The results were clear: exhausted muscles contained three times as much lactic acid as rested ones, seemingly confirming Berzelius’s suspicion that it was a by-product—or perhaps even a cause—of fatigue. And there was an additional twist: the amount of lactic acid decreased when the fatigued frog muscles were stored in oxygen, but increased when they were deprived of oxygen. At last, a recognizably modern picture of how muscles fatigue was coming into focus—and from this point on, new findings started to pile up rapidly.

The importance of oxygen was confirmed the next year by Leonard Hill, a physiologist at the London Hospital Medical College, in the British Medical Journal. He administered pure oxygen to runners, swimmers, laborers, and horses, with seemingly astounding results. A marathon runner improved his best time over a trial distance of three-quarters of a mile by 38 seconds. A tram horse was able to climb a steep hill in two minutes and eight seconds instead of three and a half minutes, and it wasn’t breathing hard at the top.

One of Hill’s colleagues even accompanied a long-distance swimmer named Jabez Wolffe on his attempt to become the second person to swim across the English Channel. After more than thirteen hours of swimming, when he was about to give up, Wolffe inhaled oxygen through a long rubber tube, and was immediately rejuvenated. “The sculls had to be again taken out and used to keep the boat up with the swimmer,” Hill noted; “before, he and it had been drifting with the tide.” (Wolffe, despite being slathered head-to-toe with whiskey and turpentine and having olive oil rubbed on his head, had to be pulled from the water an agonizing quarter mile from the French shore due to cold. He ultimately made twenty-two attempts at the Channel crossing, all unsuccessful.)

As the mysteries of muscle contraction were gradually unraveled, an obvious question loomed: what were the ultimate limits? Nineteenth-century thinkers had debated the idea that a “law of Nature” dictated each person’s greatest potential physical capacities. “[E]very living being has from its birth a limit of growth and development in all directions beyond which it cannot possibly go by any amount of forcing,” Scottish physician Thomas Clouston argued in 1883. “The blacksmith’s arm cannot grow beyond a certain limit. The cricketer’s quickness cannot be increased beyond this inexorable point.” But what was that point? It was a Cambridge protégé of Fletcher, Archibald Vivian Hill (he hated his name and was known to all as A. V.), who in the 1920s made the first credible measurements of maximal endurance.

You might think the best test of maximal endurance is fairly obvious: a race. But race performance depends on highly variable factors like pacing. You may have the greatest endurance in the world, but if you’re an incurable optimist who can’t resist starting out at a sprint (or a coward who always sets off at a jog), your race times will never accurately reflect what you’re physically capable of.

You can strip away some of this variability by using a time-to-exhaustion test instead: How long can you run with the treadmill set at a certain speed? Or how long can you keep generating a certain power output on a stationary bike? And that is, in fact, how many research studies on endurance are now conducted. But this approach still has flaws. Most important, it depends on how motivated you are to push to your limits. It also depends on how well you slept last night, what you ate before the test, how comfortable your shoes are, and any number of other possible distractions and incentives. It’s a test of your performance on that given day, not of your ultimate capacity to perform.

In 1923, Hill and his colleague Hartley Lupton, then based at the University of Manchester, published the first of a series of papers investigating what they initially called “the maximal oxygen intake”—a quantity now better known by its scientific shorthand, VO2max. (Modern scientists call it maximal oxygen uptake, since it’s a measure of how much oxygen your muscles actually use rather than how much you breathe in.) Hill had already shared a Nobel Prize the previous year, for muscle physiology studies involving careful measurement of the heat produced by muscle contractions. He was a devoted runner—a habit shared by many of the physiologists we’ll meet in subsequent chapters. For the experiments on oxygen use, in fact, he was his own best subject, reporting in the 1923 paper that he was, at thirty-five, “in fair general training owing to a daily slow run of about one mile before breakfast.” He was also an enthusiastic competitor in track and cross-country races: “indeed, to tell the truth, it may well have been my struggles and failures, on track and field, and the stiffness and exhaustion that sometimes befell, which led me to ask many questions which I have attempted to answer here.”

The experiments on Hill and his colleagues involved running in tight circles around an 85-meter grass loop in Hill’s garden (a standard track, in comparison, is 400 meters long) with an air bag strapped to their backs connected to a breathing apparatus to measure their oxygen consumption. The faster they ran, the more oxygen they consumed—up to a point. Eventually, they reported, oxygen intake “reaches a maximum beyond which no effort can drive it.” Crucially, they could still accelerate to faster speeds; however, their oxygen intake no longer followed. This plateau is your VO2max, a pure and objective measure of endurance capacity that is, in theory, independent of motivation, weather, phase of the moon, or any other possible excuse. Hill surmised that VO2max reflected the ultimate limits of the heart and circulatory system—a measurable constant that seemed to reveal the size of the “engine” an athlete was blessed with.
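To make the idea of the plateau concrete, here is a minimal sketch of how you might spot it in data from an incremental test: oxygen uptake rises with each faster stage until a further increase in speed produces essentially no further rise. The stage speeds, uptake readings, and the 0.15 L/min cutoff below are invented for illustration; they are not Hill's measurements or any standardized protocol.

```python
# Illustrative sketch only: spotting a VO2 plateau in incremental-test data.
# Speeds, uptake values, and the plateau cutoff are assumed, not historical data.

def find_vo2_plateau(speeds_kmh, vo2_l_per_min, min_rise=0.15):
    """Return (speed, uptake) at the first stage where running faster no
    longer raises oxygen uptake by more than `min_rise` liters per minute."""
    for i in range(1, len(vo2_l_per_min)):
        if vo2_l_per_min[i] - vo2_l_per_min[i - 1] < min_rise:
            return speeds_kmh[i], vo2_l_per_min[i]
    return speeds_kmh[-1], vo2_l_per_min[-1]  # uptake still rising: no plateau yet

# Hypothetical test: uptake climbs with speed, then levels off near 4 L/min.
speeds = [10, 12, 14, 16, 18, 20]            # km/h
uptake = [2.4, 2.9, 3.4, 3.8, 4.0, 4.05]     # liters of O2 per minute

speed_at_plateau, vo2max = find_vo2_plateau(speeds, uptake)
print(f"Plateau reached near {speed_at_plateau} km/h; VO2max ~ {vo2max} L/min")
```

In a real test the stages and the cutoff differ, but the logic is the same: what defines VO2max is the level where the curve flattens, not the final speed reached.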

With this advance, Hill now had the means to calculate the theoretical maximum performance of any runner at any distance. At low speeds, the effort is primarily aerobic (meaning “with oxygen”), since oxygen is required for the most efficient conversion of stored food energy into a form your muscles can use. Your VO2max reflects your aerobic limits. At higher speeds, your legs demand energy at a rate that aerobic processes can’t match, so you have to draw on fast-burning anaerobic (“without oxygen”) energy sources. The problem, as Hopkins and Fletcher had shown in 1907, is that muscles contracting without oxygen generate lactic acid. Your muscles’ ability to tolerate high levels of lactic acid—what we would now call anaerobic capacity—is the other key determinant of endurance, Hill concluded, particularly in events lasting less than about ten minutes.

In his twenties, Hill reported, he had run best times of 53 seconds for the quarter mile, 2:03 for the half mile, 4:45 for the mile, and 10:30 for two miles—creditable times for the era, though, he modestly emphasized, not “first-class.” (Or rather, in keeping with scientific practice at the time, these feats were attributed to an anonymous subject known as “H.,” who happened to be the same age and speed as Hill.) The exhaustive tests in his back garden showed that his VO2max was 4.0 liters of oxygen per minute, and his lactic acid tolerance would allow him to accumulate a further “oxygen debt” of about 10 liters. Using these numbers, along with measurements of his running efficiency, he could plot a graph that predicted his best race times with surprising accuracy.
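In spirit, the calculation behind that graph is a simple oxygen budget: covering a distance costs a certain amount of oxygen, and the cost can be paid aerobically at a rate of up to VO2max plus a one-time anaerobic withdrawal (the oxygen debt). The sketch below uses the two figures Hill reported for himself, 4.0 liters per minute and a roughly 10-liter debt, but the oxygen cost per kilometer is an invented round number, so the printed times are illustrative rather than a reconstruction of Hill's actual predictions.

```python
# Toy version of a Hill-style oxygen budget (not Hill's actual model):
# total oxygen cost of a race = aerobic supply (VO2MAX * time) + anaerobic debt.
VO2MAX = 4.0        # liters of O2 per minute (Hill's reported value for himself)
OXYGEN_DEBT = 10.0  # liters of O2 available anaerobically (Hill's estimate)
COST_PER_KM = 18.0  # liters of O2 per km: an assumed, illustrative running cost

def predicted_time_min(distance_km):
    """Fastest time at which aerobic supply plus the full debt covers the cost."""
    total_cost = COST_PER_KM * distance_km
    if total_cost <= OXYGEN_DEBT:
        return None  # the budget logic breaks down for very short sprints
    return (total_cost - OXYGEN_DEBT) / VO2MAX

for km in (1.6, 3.2, 5.0, 10.0):
    t = predicted_time_min(km)
    if t is not None:
        print(f"{km:>4.1f} km -> about {t:.0f} minutes")
```

Note what such a budget implies at very long distances: the fixed debt matters less and less, and the predicted average pace flattens toward whatever pace the aerobic ceiling can sustain, which is exactly the near-horizontal record curve Hill expected for ultramarathon distances (and, as described below, the one the real-world data refused to produce).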

Hill shared these results enthusiastically. “Our bodies are machines, whose energy expenditures may be closely measured,” he declared in a 1926 Scientific American article titled “The Scientific Study of Athletics.” He published an analysis of world records in running, swimming, cycling, rowing, and skating, at distances ranging from 100 yards to 100 miles. For the shortest sprints, the shape of the world record curve was apparently dictated by “muscle viscosity,” which Hill studied during a stint at Cornell University by strapping a dull, magnetized hacksaw blade around the chest of a sprinter who then ran past a series of coiled-wire electromagnets—a remarkable early system for precision electric timing. At longer distances, lactic acid and then VO2max bent the world-record curve just as predicted.

But there was a mystery at the longest distances. Hill’s calculations suggested that if the speed was slow enough, your heart and lungs should be able to deliver enough oxygen to your muscles to keep them fully aerobic. There should be a pace, in other words, that you could sustain pretty much indefinitely. Instead, the data showed a steady decline: the 100-mile running record was substantially slower than the 50-mile record, which in turn was slower than the 25-mile record. “Consideration merely of oxygen intake and oxygen debt will not suffice to explain the continued fall of the curve,” Hill acknowledged. He penciled in a dashed near-horizontal line showing where he thought the ultra-distance records ought to be, and concluded that the longer records were weaker primarily because “the greatest athletes have confined themselves to distances not greater than 10 miles.”

By the time Henry Worsley and his companions finally reached the South Pole in 2009, they had skied 920 miles towing sleds that initially weighed 300 pounds. Entering the final week, Worsley knew that his margin of error had all but evaporated. At forty-eight, he was a decade older than either Adams or Gow, and by the end of each day’s ski he was struggling to keep up with them. On New Year’s Day, with 125 miles still to go, he turned down Adams’s offer to take some weight off his sled. Instead, he buried his emergency backup rations in the snow—a calculated risk in exchange for a savings of eighteen pounds. “Soon I was finding each hour a worrying struggle, and was starting to become very conscious of my weakening condition,” he recalled. He began to lag behind and arrive at camp ten to fifteen minutes after the others.

On the eve of their final push to the pole, Worsley took a solitary walk outside the tent, as he’d done every evening throughout the trip before crawling into his sleeping bag. Over the course of the journey, he had sometimes spent these quiet moments contemplating the jagged glaciers they had just traversed and distant mountains still to come; other times, the view was simply “a never-ending expanse of nothingness.” On this final night, he was greeted by a spectacular display in the polar twilight: the sun was shaped like a diamond, surrounded by an incandescent circle of white-hot light and flanked on either side by matching “sun dogs,” an effect created when the sun’s rays are refracted by a haze of prism-shaped ice crystals. It was the first clear display of sun dogs during the entire journey. Surely, Worsley told himself, this was an omen—a sign from the Antarctic that it was finally releasing its grip on him.

The next day was anticlimactic, a leisurely five-mile coda to their epic trip before entering the warm embrace of the Amundsen-Scott South Pole Station. They had done it, and Worsley was flooded with a sense of relief and accomplishment. The Antarctic, though, was not yet finished with him after all. Worsley had spent three decades in the British Army, including tours in the Balkans and Afghanistan with the elite Special Air Service (SAS), the equivalent of America’s SEALs or Delta Force. He rode a Harley, taught needlepoint to prison inmates, and had faced a stone-throwing mob in Bosnia. The polar voyage, though, had captivated him: it demanded every ounce of his reserves, and in doing so it expanded his conception of what he was capable of. In challenging the limits of his own endurance, he had finally found a worthy adversary. Worsley was hooked.

Three years later, in late 2011, Worsley returned to the Antarctic for a centenary reenactment of Robert Falcon Scott and Roald Amundsen’s race to the South Pole. Amundsen’s team, skiing along an eastern route with 52 dogs that hauled sleds and eventually served as food, famously reached the Pole on December 14, 1911. Scott’s team, struggling over the longer route that Shackleton had blazed, with malfunctioning mechanical sleds and Manchurian ponies that couldn’t handle the ice and cold, reached it thirty-four days later only to find Amundsen’s tent along with a polite note (“As you probably are the first to reach this area after us, I will ask you kindly to forward this letter to King Haakon VII. If you can use any of the articles left in the tent please do not hesitate to do so. The sledge left outside may be of use to you. With kind regards I wish you a safe return …”) awaiting them. While Amundsen’s return journey was uneventful, Scott’s harrowing ordeal showed just what was at stake. A combination of bad weather, bad luck, and shoddy equipment, combined with a botched “scientific” calculation of their calorie needs, left Scott and his men too weak to make it back. Starving and frostbitten, they lay in their tent for ten snowy days, unable to cover the final eleven miles to their food depot, before dying.

A century later, Worsley led a team of six soldiers along Amundsen’s route, becoming the first man to complete both classic routes to the pole. Still, he wasn’t done. In 2015, he returned for yet another centenary reenactment, this time of the Imperial Trans-Antarctic Expedition—Shackleton’s most famous (and most brutally demanding) voyage of all.

In 1909, Shackleton’s prudent decision to turn back short of the pole had undoubtedly saved him and his men, but it was still a perilously close call. Their ship had been instructed to wait until March 1; Shackleton and one other man reached a nearby point late on February 28 and lit a wooden weather station on fire to get the ship’s attention and signal for rescue. In the years after this brush with disaster, and with Amundsen having claimed the South Pole bragging rights in 1911, Shackleton at first resolved not to return to the southern continent at all. But, like Worsley, he couldn’t stay away.

Shackleton’s new plan was to make the first complete crossing of the Antarctic continent, from the Weddell Sea near South America to the Ross Sea near New Zealand. En route to the start, his ship, the Endurance, was seized by the ice of the Weddell Sea, forcing Shackleton and his men to spend the winter of 1915 on the frozen expanse. The ship was eventually crushed by shifting ice, forcing the men to embark on a now-legendary odyssey that climaxed with Shackleton leading an 800-mile crossing over some of the roughest seas on earth—in an open lifeboat!—to rugged South Georgia Island, where there was a tiny whaling station from which they could call for rescue. The navigator behind this remarkable feat: Frank Worsley, Henry Worsley’s forebear and the origin of his obsession. While the original expedition failed to achieve any of its goals, the three-year saga ended up providing one of the most gripping tales of endurance from the great age of exploration—Edmund Hillary, conqueror of Mount Everest, called it “the greatest survival story of all time”—and again earned Shackleton praise for bringing his men home safely. (Three men did die on the other half of the expedition, laying in supplies at the trek’s planned finishing point.)

Once more, Worsley decided to complete his hero’s unfinished business. But this would be different. His previous polar treks had covered only half the actual distance, since he had flown home from the South Pole both times. Completing the full journey wouldn’t just add more distance and weight to haul; it would also make it correspondingly harder to judge the fine line between stubborn persistence and recklessness. In 1909, Shackleton had turned back not because he couldn’t reach the pole, but because he feared he and his men wouldn’t make it back home. In 1912, Scott had chosen to push on and paid the ultimate price. This time, Worsley resolved to complete the entire 1,100-mile continental crossing—and to do it alone, unsupported, unpowered, hauling all his gear behind him. On November 13, he set off on skis from the southern tip of Berkner Island, 100 miles off the Antarctic coast, towing a 330-pound sled across the frozen sea.

That night, in the daily audio diary he uploaded to the Web throughout the trip, he described the sounds he had become so familiar with on his previous expeditions: “The squeak of the ski poles gliding into the snow, the thud of the sledge over each bump, and the swish of the skis sliding along … And then, when you stop, the unbelievable silence.”

At first, A. V. Hill’s attempts to calculate the limits of human performance were met with bemusement. In 1924, he traveled to Philadelphia to give a lecture at the Franklin Institute on “The Mechanism of Muscle.” “At the end,” he later recalled, “I was asked, rather indignantly, by an elderly gentleman, what use I supposed all these investigations were which I had been describing.” Hill first tried to explain the practical benefits that might follow from studying athletes but soon decided that honesty was the best policy: “To tell you the truth,” he admitted, “we don’t do it because it is useful but because it’s amusing.” That was the headline in the newspaper the next day: “Scientist Does It Because It’s Amusing.”

In reality, the practical and commercial value of Hill’s work was obvious right from the start. His VO2max studies were funded by Britain’s Industrial Fatigue Research Board, which also employed his two coauthors. What better way to squeeze the maximum productivity from workers than by calculating their physical limits and figuring out how to extend them? Other labs around the world soon began pursuing similar goals. The Harvard Fatigue Laboratory, for example, was established in 1927 to focus on “industrial hygiene,” with the aim of studying the various causes and manifestations of fatigue “to determine their interrelatedness and the effect upon work.” The Harvard lab went on to produce some of the most famous and groundbreaking studies of record-setting athletes, but its primary mission of enhancing workplace productivity was signaled by its location—in the basement of the Harvard Business School.

Citing Hill’s research as his inspiration, the head of the Harvard lab, David Bruce Dill, figured that understanding what made top athletes unique would shed light on the more modest limits faced by everyone else. “Secret of Clarence DeMar’s Endurance Discovered in the Fatigue Laboratory,” the Harvard Crimson announced in 1930, reporting on a study in which two dozen volunteers had run on a treadmill for twenty minutes before having the chemical composition of their blood analyzed. By the end of the test, DeMar, a seven-time Boston Marathon champion, had produced almost no lactic acid—a substance that, according to Dill’s view at the time, “leaks out into the blood, producing or tending to produce exhaustion.” In later studies, Dill and his colleagues tested the effects of diet on blood sugar levels in Harvard football players before, during, and after games; and studied runners like Glenn Cunningham and Don Lash, the reigning world record holders at one mile and two miles, reporting their remarkable oxygen processing capacities in a paper titled “New Records in Human Power.”

Are such insights about endurance on the track or the gridiron really applicable to endurance in the workplace? Dill and his colleagues certainly thought so. They drew an explicit link between the biochemical “steady state” of athletes like DeMar, who could run at an impressive clip for extended periods of time without obvious signs of fatigue, and the capacity of well-trained workers to put in long hours under stressful conditions without a decline in performance.

At the time, labor experts were debating two conflicting views of fatigue in the workplace. As MIT historian Robin Scheffler recounts, efficiency gurus like Frederick Winslow Taylor argued that the only true limits on the productive power of workers were inefficiency and lack of will—the toddlers-on-a-plane kind of endurance. Labor reformers, meanwhile, insisted that the human body, like an engine, could produce only a certain amount of work before requiring a break (like, say, a weekend). The experimental results emerging from the Harvard Fatigue Lab offered a middle ground, acknowledging the physiological reality of fatigue but suggesting it could be avoided if workers stayed in “physicochemical” equilibrium—the equivalent of DeMar’s ability to run without accumulating excessive lactic acid.

Dill tested these ideas in various extreme environments, studying oxygen-starved Chilean miners at 20,000 feet above sea level and the effects of jungle heat in the Panama Canal Zone. Most famously, he and his colleagues studied laborers working on the Hoover Dam, a Great Depression–era megaproject employing thousands of men in the Mojave Desert. During the first year of construction, in 1931, thirteen workers died of heat exhaustion. When Dill and his colleagues arrived the following year, they tested the workers before and after grueling eight-hour shifts in the heat, showing that their levels of sodium and other electrolytes were depleted—a telling departure from physicochemical equilibrium. The fix: one of Dill’s colleagues persuaded the company doctor to amend a sign in the dining hall that said THE SURGEON SAYS DRINK PLENTY OF WATER, adding AND PUT PLENTY OF SALT ON YOUR FOOD. No more men died of heat exhaustion during the subsequent four years of construction, and the widely publicized results helped enshrine the importance of salt in fighting heat and dehydration—even though, as Dill repeatedly insisted in later years, the biggest difference from 1931 to 1932 was moving the men’s living quarters from encampments on the sweltering canyon floor to air-conditioned dormitories on the plateau.

If there was any remaining doubt about Hill’s vision of the “human machine,” the arrival of World War II in 1939 helped to erase it. As Allied soldiers, sailors, and airmen headed into battle around the world, scientists at Harvard and elsewhere studied the effects of heat, humidity, dehydration, starvation, altitude, and other stressors on their performance, and searched for practical ways of boosting endurance under these conditions. To assess subtle changes in physical capacity, researchers needed an objective measure of endurance—and Hill’s concept of VO2max fit the bill.

The most notorious of these wartime studies, at the University of Minnesota’s Laboratory of Physiological Hygiene, involved thirty-six conscientious objectors—men who had refused on principle to serve in the armed forces but had volunteered instead for a grueling experiment. Led by Ancel Keys, the influential researcher who had developed the K-ration for soldiers and who went on to propose a link between dietary fat and heart disease, the Minnesota Starvation Study put the volunteers through six months of “semi-starvation,” eating on average 1,570 calories in two meals each day while working 15 hours and walking 22 miles each week.

In previous VO2max studies, scientists had trusted that they could simply ask their subjects to run to exhaustion in order to produce maximal values. But with men who’ve been through the physical and psychological torment of months of starvation, “there is good reason for not trusting the subject’s willingness to push himself to the point at which a maximal oxygen intake is elicited,” Keys’s colleague Henry Longstreet Taylor drily noted. Taylor and two other scientists took on the task of developing a test protocol that “would eliminate both motivation and skill as limiting factors” in objectively assessing endurance. They settled on a treadmill test in which the grade got progressively steeper, with carefully controlled warm-up duration and room temperature. When subjects were tested and retested, even a year later, their results were remarkably stable: your VO2max was your VO2max, regardless of how you felt that day or whether you were giving your absolute best. Taylor’s description of this protocol, published in 1955, marked the real start of the VO2max era.

By the 1960s, growing faith in the scientific measurement of endurance led to a subtle reversal: instead of testing great athletes to learn about their physiology, scientists were using physiological testing to predict who could be a great athlete. South African researcher Cyril Wyndham argued that “men must have certain minimum physiological requirements if they are to reach, say, an Olympic final.” Rather than sending South African runners all the way across the world only to come up short, he suggested, they should first be tested in the lab so that “conclusions can be drawn on the question of whether the Republic’s top athletes have sufficient ‘horse-power’ to compete with the world’s best.”

In some ways, the man-as-machine view had now been pushed far beyond what Hill initially envisioned. “There is, of course, much more in athletics than sheer chemistry,” Hill had cheerfully acknowledged, noting the importance of “moral” factors—“those qualities of resolution and experience which enable one individual to ‘run himself out’ to a far greater degree of exhaustion than another.” But the urge to focus on the quantifiable at the expense of the seemingly abstract was understandably strong. Scientists gradually fine-tuned their models of endurance by incorporating other physiological traits like economy and “fractional utilization” along with VO2max—the equivalent of considering a car’s fuel economy and the size of its gas tank in addition to its raw horsepower.

It was in this context that Michael Joyner proposed his now-famous 1991 thought experiment on the fastest possible marathon. As a restless undergraduate in the late 1970s, Joyner had been on the verge of dropping out of the University of Arizona—at six-foot-five, and with physical endurance that eventually enabled him to run a 2:25 marathon, he figured he might make a pretty good firefighter—when he was outkicked at the end of a 10K race by a grad student from the school’s Exercise and Sport Science Laboratory. After the race, the student convinced Joyner to volunteer as a guinea pig in one of the lab’s ongoing experiments, a classic study that ended up demonstrating that lactate threshold, the fastest speed you can maintain without triggering a dramatic rise in blood lactate levels, is a remarkably accurate predictor of marathon time. The seed was planted, and Joyner was soon volunteering at the lab and embarking on the first stages of an unexpected new career trajectory that eventually led to a position as physician-researcher at the Mayo Clinic, where he is now one of the world’s most widely cited experts on the limits of human performance.

That first study on lactate threshold offered Joyner a glimpse of physiology’s predictive power. The fact that such an arcane lab test could pick the winner, or at least roughly predict the order of finish, among a group of endurance athletes was tantalizing. And when, a decade later, Joyner finally pushed this train of thought to its logical extreme, he arrived at a very specific number: 1:57:58. It was a ridiculous, laughable number—a provocation. Either the genetics needed to produce such a performance were exceedingly rare, he wrote in the paper’s conclusions, “or our level of knowledge about the determinants of human performance is inadequate.”
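The arithmetic behind a number like that is, at bottom, three quantities: how much oxygen an athlete can process (VO2max), what fraction of it they can sustain for about two hours (roughly, the lactate threshold), and how much oxygen it costs them to cover each kilometer (running economy). The sketch below follows that logic with placeholder values chosen purely for illustration; they are not the specific figures Joyner plugged in to arrive at 1:57:58.

```python
# Joyner-style back-of-the-envelope marathon prediction (illustrative values only):
# sustainable speed = (VO2max * sustainable fraction) / oxygen cost per km.
MARATHON_KM = 42.195

def predict_marathon(vo2max_ml_kg_min, threshold_fraction, economy_ml_kg_km):
    sustainable_vo2 = vo2max_ml_kg_min * threshold_fraction  # ml O2 / kg / min
    speed_km_per_min = sustainable_vo2 / economy_ml_kg_km    # km covered per minute
    total_minutes = MARATHON_KM / speed_km_per_min
    hours, rem = divmod(total_minutes * 60, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{int(hours)}:{int(minutes):02d}:{int(seconds):02d}"

# Assumed inputs for a hypothetical, exceptionally gifted runner:
print(predict_marathon(vo2max_ml_kg_min=85.0,    # big aerobic engine
                       threshold_fraction=0.85,  # fraction sustainable for ~2 hours
                       economy_ml_kg_km=205.0))  # oxygen cost of each kilometer
```

Tweak any of the three inputs by a few percent and the answer swings by minutes, which is part of what made Joyner's very specific 1:57:58 such an effective provocation.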

By Day 56, the relentless physical demands of Henry Worsley’s solo trans-Antarctic trek were taking a toll. He woke that morning feeling weaker than he’d felt at any point in the expedition, his strength sapped by a restless night repeatedly interrupted by a “bad stomach.” He set off as usual, but gave up after an hour and slept for the rest of the day. “You have to listen to your body sometimes,” he admitted in his audio diary.

Still, he was more than 200 miles from his destination and already behind his planned schedule. So he roused himself that night, packed up his tent, and set off again at ten minutes after midnight under the unblinking polar sun. He was approaching the high point of the journey, slogging up a massive ice ridge known as the Titan Dome, more than 10,000 feet above sea level. The thin air forced him to take frequent breaks to catch his breath, and a stretch of sandy, blowing snow bogged his sled down and slowed his progress for several hours. By 4 P.M., having covered 16 miles in 16 hours, he was once again utterly spent. He had hoped to cross from the 89th degree of southern latitude—the one closest to the South Pole—into the 88th, but he was forced to stop one mile short of his goal. “There was nothing left in the tank,” he reported. “I had completely run empty.”

The next day was January 9, the day that Shackleton had famously turned back from his South Pole quest in 1909. “A live donkey is better than a dead lion, isn’t it?” Shackleton had said to his wife when he returned to England. Worsley was camped just 34 miles from Shackleton’s turnaround latitude, and he marked the anniversary with a small cigar—which he chomped with a gap-toothed grin, having lost a front tooth to a frozen energy bar a few days earlier—and a dram of Dewar’s Royal Brackla Scotch whiskey, a bottle of which he had hauled across the continent.

Of the many advantages Worsley had over Shackleton, perhaps the most powerful was the Iridium satellite phone he carried in his pack, with which he could choose at any moment to call for an air evacuation. But this blessing was also a curse. In calculating his limits, Shackleton had been forced to leave a margin of error due to the impossibility of predicting how the return journey would go. Worsley’s access to near-instantaneous help, on the other hand, allowed him to push much closer to the margins—to empty his tank day after day, after struggling through the snow for 12, 14, or 16 hours; to ignore his increasing weakness and 50-pound weight loss; to fight on even as the odds tilted further against him.

Eventually, it became clear that he wouldn’t make it to his scheduled pickup. He’d been trying to log 16-hour days to get back on schedule, but soft snow and whiteouts combined with his continuing physical deterioration to derail him. He contemplated a shorter goal of reaching the Shackleton glacier, but even that proved out of reach. On January 21, his seventieth day of travel, he made the call. “When my hero Ernest Shackleton stood 97 [nautical] miles from the South Pole on the morning of January 9, 1909, he said he’d shot his bolt,” Worsley reported in his audio diary. “Well today, I have to inform you with some sadness that I too have shot my bolt. My journey is at an end. I have run out of time, physical endurance, and the simple sheer ability to slide one ski in front of the other.”

The next day, he was picked up for the six-hour flight back to Union Glacier, where logistical support for Antarctic expeditions is based, and then airlifted to the hospital in Punta Arenas, Chile, to be treated for exhaustion and dehydration. It was a disappointing end to the expedition, but Worsley appeared to have successfully followed Shackleton’s advice to remain a “live donkey.” In the hospital, though, the situation took an unexpected turn: Worsley was diagnosed with bacterial peritonitis, an infection of the abdominal lining, and rushed into surgery. On January 24, at the age of fifty-five, Henry Worsley died of widespread organ failure, leaving behind a wife and two children.

When avalanches claim a skier, or sharks attack a surfer, or a puff of unexpected wind dooms a wingsuit flier, it’s always news. Like these other “extreme” deaths, Worsley’s tragic end was reported and discussed around the world. There was a difference, though. There had been no avalanche, no large, hungry predator, no high-speed impact. He didn’t freeze to death, he wasn’t lost, and he still had plenty of food to eat. Though it may never be clear exactly what pushed him over the edge, he seemed, in essence, to have voluntarily driven himself to oblivion—a rarity that added a grim fascination to his demise. “In exploring the outer limits of endurance,” Britain’s Guardian newspaper asked, “did Worsley not realize he’d surpassed his own?”

In a sense, Worsley’s death seemed a vindication of the mathematical view of human limits. “The machinery of the body is all of a chemical or physical kind. It will all be expressed some day in physical and chemical terms,” Hill had predicted in 1927. And every machine, no matter how great, has a maximum capacity. Worsley, in trying to cross Antarctica on his own, had embarked on a mission that exceeded his body’s capacity, and no amount of mental strength and tenacity could change that calculation.

But if that’s true, then why is death by endurance so rare? Why don’t Olympic marathoners and Channel swimmers and Appalachian Trail hikers keel over on a regular basis? That’s the riddle that Tim Noakes, a South African physician and scientist, posed to himself as he prepared to deliver the most important talk of his life, a prestigious honorary lecture at the 1996 annual meeting of the American College of Sports Medicine: “I said, now hold on. What is really interesting about exercise is not that people die of, say, heatstroke; or when people are climbing Everest, it’s not that one or two die,” he later recalled. “The fact is, the majority don’t die—and that is much more interesting.”
