1. The race for CMX
In January 1970 a Pan American World Airways flight from New York to London became the Boeing 747’s first commercial flight. So began the Seventies.
Eric Peters was the first person from his home town of North Vassalboro to attend an Ivy League school. Peters wanted to complete two degrees at Cornell University, in engineering and computer science, but Cornell didn’t teach the latter. Peters recalls his advisor’s reaction:
He said, ‘I guess we will have to put something together for that because we don’t have a computer sciences program right now.’ Cornell was very flexible and accommodating and created one.
Peters graduated with both degrees and started as a software engineer at Digital Equipment Corporation (DEC) in the Small Systems Engineering Group. DEC, or Digital as it was often called, was a major American company in the computer industry from the 1950s to the 1990s. It eventually became second only to IBM.
It was an intensely creative, challenging and open working environment; I guess DEC back then was somewhat like Google is today. My relatives asked me where I worked and I would say, ‘Digital’ and they would say, ‘Oh you make watches!’ It was the only thing they knew that was digital, a digital watch.
While it became famous for the VAX line of computers, Digital's financial success was built on the PDP (Programmed Data Processor) minicomputers. It created the PDP-1 in 1959 and then iterated faster, more capable versions through the next decade.
I worked on a number of real time operating systems projects and was part of the four-man team that created RT-11, the real time (RT) operating system for DEC’s PDP-11 minicomputers. I gained a very good background in real time programming, which turned out to be very important for editing!
Which is certainly real time to the extreme, because there aren't many applications as hard as playing real time media.
Eric Peters would later become a key contributor to Avid, but with the release of the PDP-11/20 in January 1970 he had already helped create the platform for the world's first nonlinear computer editing system.
The joint venture between Memorex and CBS Television, CMX Systems, transitioned from analog to digital. Group manager Ken Taylor hired more staff to design the control systems, digital circuitry, disk pack switching and user interface for the unnamed editing system. Jerry Youngstrom recalls:
Obviously we needed some sort of program to ‘run’ the system. Memorex had put in an IBM 360 computer to aid the hardware group but the only person in the media group that knew how to program it was a statistician. For that matter, in the wider engineering community there was almost nobody writing code. Programming was a brand new discipline. We were exceptionally lucky to find Dave.
David (Dave) W. Bargen worked at the Medical Diagnostics Operation of Xerox Corporation on N. Halstead Street, Pasadena, California. He recalls:
A friend heard about the new joint venture between CBS and Memorex and told me before he left Xerox to work for Memorex.
Bargen started at CMX in May 1970, and among his first tasks was to choose a computing platform.
I selected the DEC PDP-11 computer.
The PDP-11 computer, released just a few months earlier, had 16,000 words of core memory (16 bit), no hard drive and ran at 3 to 5 microseconds per instruction (about 1,000 times slower than a typical PC today).
The PDP-11 was a new computer model. It was 16-bit but also handled 8-bit bytes efficiently, which would be helpful for hardware control, and it had good performance for the money. The original CBS system had used a DEC PDP-8, but that machine was at the end of its model life-span.
Bargen went to Bill Butler and Ken Taylor with his recommendation. He recalls:
At the time it was the preferred mini-computer. DEC was the original mini and a huge success until the advent of the Apple and PC. The PDP-11 had adequate power and speed, and supported all the peripherals we needed, including a punched paper tape reader/writer and the graphic interface which was quite new at that time.
Bargen's next decision was how the software would be coded.
Assembly (language) was used because of the need for speed and the limited memory capacity of the day.
He now needed software coders, who were uncommon in the Bay Area.
James (Jim) C. Adams Jr. had moved from Fairchild Semiconductor to Link General Precision. Link had created flight simulators for the Apollo Lunar Lander and then the F-111 fighter aircraft.
I was using a Xerox Data Systems (XDS) Sigma 5, developing the radar simulator software for the F-111. With this, and all of my previous jobs, I was sent to the factory schools to learn the software development process as well as the underlying hardware philosophy of the specific computer.
DEC for the PDP-8, Scientific Data Systems for the SDS 930, Xerox for the Sigma 5, as well as others such as Control Data Corporation (CDC) for their systems and Texas Instruments for analog/digital circuit design. At XDS the key take-away for me was how to process interrupts. An interrupt is a signal sent by a device to indicate that it needs attention. This is very different from a processor interrogating a device periodically to see if it either has data or is able to receive data.
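The distinction Adams draws can be sketched in a few lines of C. This is purely illustrative; the register addresses and bit names below are hypothetical, not taken from any actual PDP-11 or Sigma 5 peripheral.

#include <stdint.h>

volatile uint16_t *const DEV_STATUS = (uint16_t *)0xFF10; /* hypothetical status register */
volatile uint16_t *const DEV_DATA   = (uint16_t *)0xFF12; /* hypothetical data register */
#define DEV_READY 0x0080                                  /* "data available" bit */

/* Polling: the processor interrogates the device until it has data. */
uint16_t read_by_polling(void)
{
    while ((*DEV_STATUS & DEV_READY) == 0)
        ;                                /* busy-wait, burning CPU time */
    return *DEV_DATA;
}

/* Interrupt: the device signals the processor; this handler runs only
   when there is actually work to do, leaving the CPU free otherwise. */
void device_interrupt_handler(void)
{
    uint16_t word = *DEV_DATA;           /* reading the data register clears the request */
    (void)word;                          /* hand the word to the main program, e.g. via a ring buffer */
}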
Adams soon left Link.
I answered an ad in the San Jose Mercury-News for a programmer 'to work on a new innovative system'.
Adams became the first outside hire by CMX. He adds:
Once onboard I met my immediate manager, Dave Bargen, and was introduced to the concept of video tape editing. Specifically, the process of many camera takes, selected parts of which were to be linked together to create a deliverable package.
The older technology was to cut the desired part out of the original tape and splice it to a piece cut from another. The newer scheme involved using a frame counting code recorded on the tape and recording a number of frames from that point to another tape. This required two machines able to synchronize to the correct frame code for each machine.
This concept had now been carried over to a computer, rather than a dedicated hardware device that was quite operator intensive.
Adrian Ettlinger’s proof-of-concept system, employing a PDP-8, could perform a single recording, but only the one segment buried in the computer program. CBS wanted to create a two-part system: the first part was to determine the segments and sequence of the original takes at low resolution, what we now call Offline editing, and the second part was to copy the edit decisions onto a blank tape at high resolution for review and eventual delivery to the broadcast stations, what became known as Online editing.
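The decisions captured offline and then executed online amount to what would later be formalized as an edit decision list. A minimal C sketch of one entry gives the idea; the field names here are invented for illustration and are not the CMX format.

/* One event in an edit decision list: copy the source material between
   two frame codes onto the record tape at a given point. */
struct edl_event {
    int  event_number;   /* order in the final program */
    char source_reel[8]; /* which original tape or disk pack */
    long source_in;      /* first frame to copy (frame code) */
    long source_out;     /* last frame to copy (frame code) */
    long record_in;      /* where it lands on the master tape */
};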
Bargen adds:
The first product priority was development of the low resolution offline editing system that became known as the CMX 600, because that was the most innovative and had the most unknowns.
Adams continues:
From a control viewpoint, the offline process was pretty straightforward. Each frame of the original material had a frame code associated with it; this code tallied the hours, minutes, seconds and frames associated with the time at which it was originally recorded. This frame count information was penciled into the program script such that any segment of tape could be quickly located from the script notes.
The frame count codes came in two variants: straight 30 frames per second (fps) for the black and white low resolution video and a somewhat different method to accommodate the color video at 29.97 fps.
The complexity came with the desire to display the selected frames at other than 30 fps; faster, slower and frame-by-frame. A further complexity was that only the first field of each frame was on the disk; the second field for interlaced video came from an auxiliary disk which recorded the first field and delayed it to provide the second field.
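The two frame-code variants Adams mentions can be sketched in C. Straight 30 fps divides the frame count directly into hours, minutes, seconds and frames; for 29.97 fps color, the SMPTE drop-frame scheme skips frame labels 00 and 01 at the start of every minute except each tenth minute, keeping the labels close to wall-clock time. This sketch illustrates the arithmetic only and is not CMX code.

#include <stdio.h>

/* Straight 30 fps: frame count -> hours, minutes, seconds, frames. */
void frames_to_timecode_30(long frame, int tc[4])
{
    tc[3] = (int)(frame % 30);
    frame /= 30;                 /* whole seconds */
    tc[2] = (int)(frame % 60);
    frame /= 60;                 /* whole minutes */
    tc[1] = (int)(frame % 60);
    tc[0] = (int)(frame / 60);   /* hours */
}

/* 29.97 fps drop-frame: skip labels 00 and 01 each minute, except
   every tenth minute, then label as if it were straight 30 fps. */
void frames_to_timecode_drop(long frame, int tc[4])
{
    const long drop_min = 30L * 60 - 2;          /* 1798 frames in a "drop" minute */
    const long ten_min  = 9 * drop_min + 1800;   /* 17982 frames per ten minutes */
    long tens = frame / ten_min;
    long rest = frame % ten_min;
    frame += 18 * tens;                          /* 18 labels dropped per ten minutes */
    if (rest >= 1800)                            /* past the first, non-drop minute */
        frame += 2 * (1 + (rest - 1800) / drop_min);
    frames_to_timecode_30(frame, tc);
}

int main(void)
{
    int tc[4];
    frames_to_timecode_drop(17982, tc);          /* ten minutes of real time */
    printf("%02d:%02d:%02d:%02d\n", tc[0], tc[1], tc[2], tc[3]);  /* prints 00:10:00:00 */
    return 0;
}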
Dave Bargen wrote:
King Anderson did much of the digital hardware design for control and interface. Later, he headed the manufacturing of the products.
Jim Adams continues:
Dave had selected the PDP-11 as the computer to use in this system and I believe that turned out to be the best decision made in the entire project. This computer had many attributes of larger systems and the machine assembly code was extremely versatile.
Add-on (interface) boards for the PDP-11 were designed to link the necessary control features of the disk players, video switching circuitry and light pen position to the processor.
I was provided with a high-speed paper-tape reader and punch and an ASR-33 teletype for my software development. The printing speed of the ASR-33 is 10 characters per second. After a couple of months, I was able to persuade the company to get a Memorex printer which was about ten times as fast, but I still edited my programs on the ASR-33.
The analog team was still dealing with issues, as Cal Strobele recalls:
I spent many days, in a fog, trying to figure out how to apply what we already knew about compression and recording to this project. In the end I came up with an idea to skip every other video field, in black and white, and to record the audio for the missing field on the back porch of the video.
The ‘back porch’ is the portion of the signal between the horizontal sync pulse and the next active portion of the video signal. This is the area where the audio signal was stored.
The Memorex disc pack was typically driven at 1,800 rpm so that each revolution took 1/30th of a second. Since it took one-sixtieth of a second to generate one video field, exactly two fields of video were recorded on a disc corresponding to one revolution of the disc pack.
In skipfield recording only one field of video is recorded per frame. When replayed, each field is duplicated (and interlaced) to produce a fairly accurate reproduction of the original video sequence.
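In software terms, skip-field replay amounts to repeating each stored field, which is roughly what the one-field delay drive described below did in hardware. A minimal illustrative sketch, with a single int standing in for a whole field of video:

/* Skip-field replay: only half the fields were stored; on playback each
   stored field is repeated a field-time later to stand in for the one
   that was skipped. */
void replay_skipfield(const int *stored, int num_stored, int *out)
{
    for (int i = 0; i < num_stored; i++) {
        out[2 * i]     = stored[i];   /* the recorded field */
        out[2 * i + 1] = stored[i];   /* duplicate fills the skipped field */
    }
}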
Gene Simon continues:
Because bandwidth was such a premium, only every other video field was recorded on the Memorex drives and they were only low bandwidth monochrome images. A separate single platter “skip field” disk drive was used to synthetically create the missing fields by creating a one-field delay.
Yves Faroudja recalls:
This method was the same as what Sony used with their helical scan tape machines at the time. There are of course ways to extrapolate better data and achieve better-looking disk images with more heads, but that was more complicated and more expensive than CMX wanted. The skip field method more or less preserved the video and allowed editing in the most expedient way.
Strobele recalls:
The sync coming from the original video feed then drove the servo and we could treat the whole system as a closed circuit. Then we could destroy the sync pulse and the back porch, remove all of that, and insert our signal in there with only a narrow pulse on the leading edge.
The skip field method had solved the storage issue by enabling more video to be crammed onto the disk platters, but it created another problem. Bargen, Strobele, Anderson, Adams, Youngstrom, Faroudja, Scaggs and Eppstein cited it in their subsequent patent:
… the visual quality of a reproduced skipfield recording at the editor's monitors is sufficiently high that editing functioning is possible. However, while it is satisfactory to eliminate one-out-of-two or more fields of video, the same is not true for audio. In fact, it has been found that for satisfactory audio reproduction, the audio samples associated with each field of video must be retained and ultimately reproduced.
Jerry Youngstrom recalls:
Every field of audio had to be retained and kept in sync with the skip field video.
Faroudja adds:
Tony Eppstein and Lee Scaggs designed a very smart scheme to keep the audio in sync and at high quality.
Gene Simon continues:
The missing fields of audio could not synthetically be created, so both fields of a single audio channel were recorded on the back porch of each video line (where color burst would have resided) with a pulse amplitude modulation scheme. The skip field disk was used in both record and playback to delay every other field of audio so it could be reconstructed.
Since the video line rate is 15,750 Hz, the line-rate amplitude-modulated audio pulses could be filtered to become reasonable sounding audio.
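Simon's description works out to one audio sample riding on each line's back porch, an effective sample rate equal to the 15,750 Hz line rate; low-pass filtering then smooths the pulse train back into continuous audio. A rough sketch using a one-pole filter, which is only a stand-in for the actual CMX filtering circuitry:

#include <math.h>

#define LINE_RATE 15750.0   /* NTSC line rate: one audio pulse per video line */

/* Smooth the per-line pulse amplitudes into audio with a one-pole
   low-pass filter (illustrative only, not the CMX hardware). */
void recover_audio(const double *pulses, double *out, int n, double cutoff_hz)
{
    double a = 1.0 - exp(-2.0 * 3.141592653589793 * cutoff_hz / LINE_RATE);
    double y = 0.0;
    for (int i = 0; i < n; i++) {
        y += a * (pulses[i] - y);   /* exponential smoothing of the pulse train */
        out[i] = y;
    }
}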
The system of placing audio signals into video control pulses, in a manner which did not disturb the control function to be carried out by those pulses, had been devised and patented by 3M engineer Fred Hodge in 1965.
Strobele recalls:
The pulse amplitude modulation scheme had a great side benefit because when it was displayed in video, an editor could ‘see’ the audio. The sound was displayed for the editor to work with and while it was never intended to be a feature, once we showed CBS editors this, they loved it.
Gene Simon adds:
I’m not sure if this was by design or not, but having still frame audio turned out to be a great feature because edit points during silence (e.g., between words) could easily be found when jogging frame by frame.
On the other hand, if you were not the editor trying to do actual creative work, the buzz of still frame audio would drive you crazy. The hardware and media development was critical but a significant amount of software development was needed to drive the system.
Jim Adams adds:
As time went on, it became apparent to me that the hardware side of the 600 was far more difficult than had been originally surmised. Adapting the digital disk drives to handle analog video, sound and frame count signals, as well as converting the drive motors from standard AC motors to servo-controlled DC motors, was very challenging to the electronic design engineers.
Then another audio issue arose. The previous method to reduce noise introduced along a transmission path, be that a coaxial cable, telemetry link or videotape, was to use pre-emphasis and de-emphasis circuitry. However, such an approach caused audio distortion and image streaking, and could not guarantee equal time delays for signals.
Strobele recalls:
Because it was an FM signal and we had to ensure that the bandwidths of the two signals didn’t impinge on each other, we had to come up with a technique of limiting the level. The answer was to create a hard limit with a slow decay, but we didn’t limit the sound except for the first cycle.
Lee Scaggs created, and patented, a new method that logarithmically amplified low level signals and logarithmically de-emphasized the original input signals. Then further video issues arose, as Strobele recalls:
There were some image issues with artefacts when using 24fps footage. You would see it particularly on footage where zooms clashed with keystoning from the lens, but because we knew this was going to be purely for offline editing, and it was a technical limitation of the field and frame conversion that was impossible to fix, we let it go.
Faroudja recalls:
My job was to get decent pictures for the two black and white monitors that were planned for the system. The picture had to be vaguely recognizable but not use too much storage space. Through trial and error we discovered that with about one quarter of the original camera data being used you could edit well.
Strobele’s work on defining the analog signal process and its circuitry was almost done.
We had made sure the electronics played RF to RF, and the RF reproduction of the disks was good enough. Once Tony (Eppstein) could reliably run the packs at 3,600 rpm, we could put the first field’s audio on and the second field’s audio on and then combine them and do all of that processing and have a data stream ready to put onto the disks.
The digital team was now in a position to improve and finesse the process: better flying heights for the heads, and vertical recording for increased bandwidth.
Faroudja continues:
Remember this is 1970 and I was fascinated by how easily you could click and click, decide this shot or that and then you would see the shots on the screen as an edited version.
Across America, Harvey Dubner was out of work.
He had graduated as an Electrical Engineer in 1949 and was then employed by companies to work on military projects during the Cold War.
In 1958 I was involved with a missile system that required extensive optical calculations, so we rented a new thing called a digital computer. To make a long story short, that changed my life.
Dubner joined Computer Applications Inc., then the largest software company in the East, and became a Vice President in charge of all special hardware projects.
All my projects included a computer, programming the computer, and designing and building peripheral equipment for the computer. As an example, we designed and built Ticketron, a ticket reservation system (1966) that could service 1200 ticket selling devices in real time.
Dubner's Ticketron work, done with Joseph Abate, defined a generational change in using minicomputers (the CDC 1700) to solve problems previously assigned to mainframes. Dubner explained the frugal 5,000 lines of code for the entire Ticketron system as a design in which sophistication and generality had to be traded for economy and reliability.
Their work was seminal: "Ticketron - A Successfully Operating System Without an Operating System."
Dubner continues:
Unfortunately, CAI was forced into bankruptcy in November 1970. Without planning it, I suddenly found myself starting a new company because several of the companies for which I was doing projects wanted to continue with me.
This was the start of Dubner Computer Systems (DCS).