Inside Intel - Tim Jackson - Page 9
GREAT CHANGES WERE HAPPENING in the world during the long, hot summer of 1968. In Paris, students set up barricades against their college professors and the government. In Chicago, demonstrations against the war in Vietnam turned into riots that marred the Democratic National Convention. But the foundation of Intel, which has had a bigger effect on the lives of people in the world than either of these, took place without being noticed outside the narrow confines of the Bay Area’s electronics industry.
That was just how Gordon Moore liked it. The very opposite of a self-publicist, he instinctively avoided talking about his achievements until they were complete.
When Moore was asked by one of the local papers at the beginning of August what kind of devices he and Robert Noyce intended to build at their new company, his response was about as uninformative as it could be: ‘We particularly are interested in the product areas that none of the manufacturers are supplying,’ he said.
Come on, Gordon, that’s hardly news. You wouldn’t be setting up a new company just to copy something already on the market, would you?
When asked to elaborate, Moore would offer only two further thoughts. First, the new company would avoid direct government business. Second, the company would keep to the industrial end of the business rather than trying to develop products for end consumers.
From anyone else, this apparent vagueness might have indicated indecisiveness. But Moore hadn’t spent nine years at the helm of Fairchild’s extensive R&D operation for nothing. His principle was straightforward: there was no need to give away any information, however trivial, to potential competitors. If the people who had backed him and Noyce to the tune of several hundred thousand dollars each could do without detailed information on their technical plans, then so could the rest of the world.
This was no surprise to those who had known him at Fairchild. Moore was always hard to pin down on technical detail, even when he was giving a speech in public. When he was once sitting on a panel at a technical conference devoted to semiconductor materials, someone in the audience raised a question about the silicon nitride layers that Moore’s technical team were rumoured to have been experimenting with. What had the results been?
‘We got exactly the effect we were predicting,’ he answered. There was just one piece of crucial information he withheld from the audience. The engineering team had come to the conclusion that silicon nitride deposits would have no useful technical effect, and some careful tests had proved them right. But Moore thought that if it had taken his own researchers weeks to discover that the technology was a dead end, there was no reason to tell the world. Let the competition waste some time, too.
Behind the veil of vagueness, however, Noyce and Moore knew exactly what business they were going to go into. They would build memory devices. Across America, companies were buying mainframe computers to manage their accounting systems or payroll or medical records. In every case the computer needed a place where programs and data from work in progress could be stored, and then retrieved at high speed. Yet while integrated circuits were increasingly being used in the logic devices that carried out the calculations themselves, memory storage was stuck in the pre-transistor age. The cheapest form of computer memory was ‘magnetic core’, a tiny magnetic doughnut which stored information in the form of ones and zeros depending on the way it was magnetized. If only a way could be found to integrate memory cells on to the circuits that Noyce had pioneered, then computer memory could become far more compact and speedier to operate. Once one computer company began to use integrated-circuit memory, the rest would follow suit. Then a virtuous circle of increasing volumes and falling costs would follow, ending up with the complete replacement of core memory with the new semiconductor devices. The potential market was millions upon millions of units a year.
A good reason for keeping so quiet about Intel’s plans was that Moore and Noyce were not alone in seeing this potential. The idea that core memory was eventually going to be replaced by semiconductors wasn’t a wild-eyed obsession held only by eccentrics. It was the received wisdom inside the industry. Route 101, the highway that ran south from San Francisco through the Valley and then onwards to Los Angeles, was dotted with laboratories racing to be the first to develop semiconductor memory devices.
Gordon Moore, though, had a head start. Shortly before his departure from Fairchild, a gifted young Italian semiconductor scientist in his department named Federico Faggin had invented a new variation on the standard integrated-circuit manufacturing process known as metal oxide on silicon. By 1968 the new technique, called ‘silicon gate’, was working stably in the laboratory, but it was still far from being a commercial product. With $2m in the bank, and a team of good engineers behind them, Moore knew that he and Noyce could make as good a stab as anyone else in the world at taking this technology and developing it to the point where memory devices could be mass produced at low cost.
But silicon-gate MOS was only one of three promising approaches to the problem of building integrated memory circuits. Another was to build multichip memory modules; a third was to use a process known as Schottky bipolar. Moore and Noyce decided that they would pursue all three simultaneously – and sell whichever they were able to mass-produce first. (Since Schottky proved easily replicable by competitors and multichip modules too hard even for Intel, Moore came to refer to the decision to pursue three lines of research simultaneously as the ‘Goldilocks’ strategy. Like the bowls of porridge left by the bears, only silicon gate turned out just right.)
The scientists left behind at Fairchild after the departure of Noyce and Moore would later respond with hurt pride to the news that their former boss and his new company were trying to commercialize an invention made in their laboratory. They put up a large placard, visible to all comers to the R&D department, emblazoned with the words SILICON GATE WAS INVENTED HERE. One former employee of both companies, looking back, put the accusation baldly: ‘Intel was founded to steal the silicon gate process from Fairchild’. Another was more forgiving: ‘What [we] brought with us was the knowledge that some had been built, and the knowledge of the device physics … We didn’t bring with us recipes, mask sets, device designs, that sort of stuff … What we brought was a lot of knowledge’.
In any case, Fairchild Semiconductor had only itself to blame for the loss of one of its key secrets to a new competitor. For a couple of years now, the best technologies developed in its research labs in Palo Alto had not been making it to Fairchild’s Mountain View manufacturing facility. Instead, they seemed to be attracted by some osmotic principle to Charlie Sporck’s manufacturing aces at National Semiconductor.
This was partly because the rules were different in the 1960s. The days had not yet arrived when technology companies would use patents, trade secrets and other forms of intellectual property as commercial weapons. Scientists were happy to assign to their employer the rights in any patents they earned, in return for a token dollar and a framed copy of the first page of the patent. Why should they be any less generous when it came to scientists in other companies? After all, these were exciting times. Trying to hold back the spread of information at a time when things were moving ahead so fast was not only self-defeating, since any competing technologist worth his salt could design his way around a patent. It also felt unsporting.
Every Friday night, engineers from different companies would assemble at the Wagon Wheel, a local watering hole, to exchange gossip. Not just who was sleeping with whom, but also who was working on what and who was having which problems with which designs and which processes. Prominently displayed on the wall of the bar was a huge enlargement of the innards of an integrated circuit, created by popping the top and using an industrial-strength camera to record the secrets inside. The image served almost as a religious icon, looking down with approval as scientists threw their employers’ secrets across the table as casually as they would pay for a round of drinks.
The selection process Noyce and Moore used in assembling their team was simple. The pair asked everyone they respected, particularly in the electronic engineering departments of universities, for the names of the brightest research scientists they knew. Noyce or Moore would make contact with a phone call, and the candidate would be invited over for a chat – either at Noyce’s house or at some modest local restaurant like the International House of Pancakes. They would chat over a lunch or a breakfast, the candidate sitting on one plastic banquette, the Intel founders sitting opposite on the other, and then Noyce and Moore would make their decision. In addition to being a brilliant engineer, you had to pass two tests to get a job at Intel. You had to be willing to come to work for Bob and Gordon for no more than your current salary with your existing employer – and sometimes, if they thought you were overpaid, for 10% less. In return, you’d be promised stock options, which you would have to trust the two founders would be adequate compensation for the pay rise forgone. Also, you had to be willing to take a demotion. If Intel was going to grow as fast as its founders hoped, its first round of hires would soon be responsible for running much larger teams of people. In the meantime, they would have to spend a few months doing work that was actually more junior than the job they had come from. An engineer who was currently running an entire division with 5,000 staff to order about and sales of $25m a year would find himself moving to a new job at Intel in which he was once again managing a single fabrication plant, or ‘fab’, and where the big issue of his day might be a maladjustment of a single machine.
The consolation was the strong sense that things would not stay this way for long. Ted Hoff, a brilliant postdoctoral researcher at Stanford who was recommended to Noyce by a professor in his department, reminded the Intel founder during his interview that there were more than half a dozen other new companies already in the market trying to develop semiconductor memory. Was there any need for another semiconductor company? What were the chances of success?
Noyce’s reply exuded quiet confidence. ‘Even if we don’t succeed,’ he said, ‘the founders will probably end up OK.’
Intel’s new hires found that this confidence was equally shared by people outside the company. Gene Flath, a product group general manager hired in from Fairchild to a senior job in the fledgling company’s manufacturing operation, decided to spend the week’s holiday he was owed by his former employer down in Los Angeles looking over new chip manufacturing equipment at a trade show on behalf of Intel. When a couple of pieces took his fancy, it seemed only natural to put in an order for the equipment then and there. And it seemed equally natural that the vendors, hearing that Flath had signed up with Noyce and Moore, were willing to give him immediate credit. Noyce and Moore? That’s OK. They’ll have the money.
There was something infectious in the evident confidence of Noyce and Moore. As their first working space, they chose an old Union Carbide plant, 17,000 square feet on Middlefield Road in the town of Mountain View, an hour south of San Francisco. When the deal was signed, Union Carbide hadn’t quite moved out. Intel got the front office of the building immediately, with the right to hang a big sign outside bearing its logo – the company name, printed in blue all in lower-case Helvetica letters, with the ‘e’ dropped so that its crossbar was level with the line. The idea was that the lower-case letters showed that Intel was a modern, go-ahead company for the 1970s; the dropped ‘e’ was a reminder to its customers that its name was a contraction of ‘integrated electronics’. Some employees, but not all, took that ‘e’ to mean that the word Intel should be pronounced with the emphasis on the second syllable.
Over the succeeding weeks Union Carbide cleared more equipment from the back of the building and Intel brought more people into the front, until one day late in the fall of 1968, Intel Corporation found itself at last the sole occupant of a large industrial shell, ready-plumbed for the heavy-duty power, water and gases that were essential to the process of making silicon chips.
Fabricating silicon chips was the modern world’s answer to medieval alchemy, the turning of base metals into gold. Except here, the raw material was sand, turned into crystalline silicon that arrived at the fab moulded into a long sausage, two inches in diameter. The silicon would then be sliced into thin ‘wafers’ a fraction of an inch thick. By a series of secret, almost magical processes, each wafer would be coated with scores of identical miniature circuits, neatly stepped in rows and columns. Then the wafers would be scored with a diamond-cutter, and the individual chips would be sawn away from their neighbours and wired individually into black ceramic packages, often with a line of metal pins down each side. It was impossible to convey to your children what an achievement those circuits represented; when one engineer showed the completed chips in their packaging to his kids, they referred to them as ‘Barbie combs’. But if you were in the industry, you knew that each one could sell for a dollar, or ten dollars, or even more, depending on what was inside.
It was the guy given the job of laying out the floor design for manufacturing who was the first to realize the scale of the ambitions of Intel’s two founders. When he asked what capacity the fab should plan for, the figure he was given was 2,000 wafer starts a week. Two thousand clean silicon wafers, each one starting its way through the production process. Each one etched with 100 or more circuits on its surface. Two hundred thousand circuits a week; 10 million a year. Of course in those days you’d be lucky if 10% of them came out right. But for a startup, which had not yet developed either a circuit design or a process to build it with, such investment in capacity was unheard of. Even Fairchild, which had become the world’s leading semiconductor manufacturer, could handle only five times as much. Who did Noyce and Moore think they were?