

Recycling the Past


It is that time of year when we seem to look forward and backward in equal measure. As part of this transitional time, I decided to clean out my studio after 17 years of “organic growth.”

by Wayne Cole

I ended up taking four computer systems to the recycle yard: two that had controlled A/B roll linear editing systems, and two that had supported NLEs which have since disappeared from the marketplace. While taking these journeys through the history of desktop computing and video (and to the local recycling yard), I began to think about how “new ideas” in our field often seem more like remakes with a technological twist.


I remember when 3D was a big deal in 1950s monster movies, comic books and View Master “reels.” Now it has come back to movies with a vengeance.

More than 40 3D titles are scheduled for release during the next three years. Some of the first “nouveau 3D” movies have hit Blu-ray already, and are expected to drive an increase in the sale of 3D TVs in the coming year. And several companies have sprung up that offer to make 3D content from existing 2D media.

The primary difference today is that digital 3D technology no longer requires two synchronized projectors and two reels of film. In the “analog” days, that process was tricky and expensive, and it produced 3D with a gimmicky, unnatural appearance, so the fad quickly faded. Current proponents suggest that 3D in the ‘50s was an idea tried before the right technology existed. Having seen some of the new 3D that uses one projector and one “reel,” where each frame contains the simultaneous views for both the left and right eyes, I have to admit that it looks much more natural. Used properly, it can be a valuable story-telling tool. One can even make the case that stereoscopic 3D could be a boon to battlefield video and surveillance systems once glasses-free displays become economically feasible.
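To make the “one reel” idea concrete, here is a minimal sketch in Python of side-by-side frame packing, with frames modeled as simple 2D lists of pixels. The layout and function names are my own illustration, not any particular codec’s format:

```python
# Sketch of "one reel" stereoscopic packing: each stored frame holds both
# eye views side by side; the player splits them back out for display.
# Frames are modeled as 2D lists (rows x columns) -- purely illustrative.

def pack_side_by_side(left, right):
    """Join left- and right-eye frames into one double-width frame."""
    return [l_row + r_row for l_row, r_row in zip(left, right)]

def unpack_side_by_side(frame):
    """Split a packed frame back into left- and right-eye views."""
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    return left, right

left = [["L"] * 4 for _ in range(2)]   # toy 2x4 left-eye frame
right = [["R"] * 4 for _ in range(2)]  # toy 2x4 right-eye frame
packed = pack_side_by_side(left, right)
restored_left, restored_right = unpack_side_by_side(packed)
assert restored_left == left and restored_right == right
```

A real player would do this split on every frame at display time, routing each half to the corresponding eye.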


Movie film was initially shot at 12 fps, and later at 16 fps, before analog electro-mechanical technology allowed the industry to settle on 24 fps. The goal was a frame rate high enough to eliminate visible flicker and judder in the serial display of the individual “stop motion” frames produced by movie cameras. Animators had the tricky and expensive job of setting up, or drawing, each frame of motion and then committing it to one frame of film or video. Television standardized on 30 interlaced frames (60 fields) per second as a means of analog compression that also provided the best motion-artifact filtering at such “slow” frame rates.
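The marriage of 24 fps film and 30 fps interlaced television was handled by the 3:2 pulldown cadence, which can be sketched in a few lines of Python. Letters stand in for film frames here; this shows the idea, not a production implementation:

```python
# Sketch of 3:2 pulldown: spreading 24 fps film across 60 interlaced
# fields per second (30 frames). Film frames alternately contribute
# 3 fields and 2 fields, so 4 film frames -> 10 fields -> 5 video frames.

def pulldown_32(film_frames):
    """Expand a film-frame sequence into a field sequence, 3:2 cadence."""
    fields = []
    for i, frame in enumerate(film_frames):
        repeats = 3 if i % 2 == 0 else 2  # alternate 3 fields, 2 fields
        fields.extend([frame] * repeats)
    return fields

fields = pulldown_32(["A", "B", "C", "D"])
assert fields == ["A", "A", "A", "B", "B", "C", "C", "C", "D", "D"]
assert len(fields) == 10  # 4 film frames become 5 interlaced video frames
```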

Now, even though we easily have the capability to display 60 fps digital video, users have pressured the industry to provide ways to “dumb down” video capture and display devices to mimic the 24 fps “look,” as if it were something desirable. At the same time, moviemakers like George Lucas have eschewed film altogether for the freedom, quality, and artistic look provided by high-frame-rate, high-resolution video. High-speed scientific and industrial cameras are now common in broadcast advertising and Hollywood movies for those ultimate slow-motion shots. Meanwhile, consumer TV manufacturers have pushed display frequencies as high as 240 Hz to eliminate motion smearing, yet even with the advent of digital theater projection, “24p” just won’t die.


On the computer side of things, people got excited when teletype, punched paper tape and card decks gave way to time-sharing terminals for I/O and program execution. No more waiting in line to punch and submit your card deck, or for the night shift to run it one at a time in sequence with other “jobs.” With time sharing, large mainframes could process multiple jobs seemingly at once.

But then those systems became saturated, and the input-response cycle bogged down to unacceptable levels. The PC promised to change all that by putting the actual computational engine right on your own desk. But it, too, became saturated, in terms of both storage and computational power. Multiple CPUs in a single box and the introduction of local area networks (LANs) attempted to distribute the load and speed up processing.

Then the Internet happened, and now the old time-sharing idea is back with a twist and a new name: “cloud computing.” In this model, the mainframe is replaced by collections of servers connected to the Internet where various applications reside. Clients from all over the world can access the same application server at the same time, and as in the old time-sharing model, the server runs and responds to client “jobs” concurrently. Some video-centric developers, like Maximum Throughput (now owned by Avid) and Maximo, have dabbled on the fringes of the cloud with online video editing and 3D character-rigging applications.
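The parallel between time sharing and the cloud can be sketched in a few lines of Python: many client “jobs” submitted to one shared worker pool that services them concurrently. The names here are my own illustration, not any vendor’s API:

```python
# Sketch of the time-sharing idea behind cloud computing: one shared
# "server" (a worker pool) handles many client "jobs" concurrently
# instead of one at a time. Purely illustrative.
from concurrent.futures import ThreadPoolExecutor

def run_job(client_id):
    """Stand-in for an application task submitted by a remote client."""
    return f"result for client {client_id}"

# Eight clients share the same pool, like jobs on an old mainframe;
# map() returns the results in submission order.
with ThreadPoolExecutor(max_workers=4) as server:
    results = list(server.map(run_job, range(8)))

assert results[0] == "result for client 0"
assert len(results) == 8
```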

PAY AS YOU GO

Instead of licensing individual users, software vendors began implementing “copy protection” that locked applications to one specific dongle or computer. At the same time, video rentals and music sales moved to online downloads, while cable and satellite providers pushed pay-per-view services.

Cloud computing is the technology that will allow content providers and software developers to make “pay-per-use” the dominant consumption model for movies, music and applications. Software can run behind secure corporate firewalls, so its code is never exposed to pirates.
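What might metered “pay-per-use” licensing look like? Here is a minimal sketch with made-up rates, kept in integer cents so the arithmetic stays exact; the numbers and names are invented for illustration only:

```python
# Hypothetical metered licensing: bill by measured usage with a monthly
# minimum, like a metered phone plan. All rates are invented.
RATE_CENTS_PER_MINUTE = 5    # assumed rate: 5 cents per minute of use
MONTHLY_MINIMUM_CENTS = 200  # assumed floor: $2.00 per month

def metered_charge_cents(minutes_used):
    """Charge for a month of measured application use, in cents."""
    return max(minutes_used * RATE_CENTS_PER_MINUTE, MONTHLY_MINIMUM_CENTS)

assert metered_charge_cents(10) == 200   # light use hits the minimum
assert metered_charge_cents(100) == 500  # heavy use is billed per minute
```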

I believe that we will see more “cloud” applications for content production and distribution over the next few years. Licensing will be based either on a yearly fee for unlimited use or on measured usage, and it will require a nearly constant Internet connection to run those applications. It’s like the old party-line telephone, where you shared the line with your neighbors and were billed by the time and distance of each call. It seems the future is made by simply adding a twist of new to a cup of old. Cheers!