In the century or so since the birth of electronic moving pictures, the process of getting them from one place to another has gone from science experiment to industry to a device that slips into a pocket. Even a decade or two ago, it would have seemed like science fiction to propose transmitting pictures across continents from something as affordable and pocket-sized as a cellphone. It would also have been tempting to foresee a downside: might common access to global video transmission make live pictures from faraway places seem somehow unspectacular?
Of course, the opposite is true: new markets for streamed media have emerged, with commercial newsgathering now using much the same infrastructure as someone streaming a video game. As an example of efficient technical convergence, this is hard to object to, but the advent of mass citizen broadcasting raises political issues as well as engineering ones. We use friendly terms like “democratization” to describe the situation, but there are limits on what the public and the law will tolerate, as well as limits on what the infrastructure, and the businesses that maintain it, can facilitate.
The avalanche of data produced by a camera’s sensor is difficult to handle, regardless of what you want to do with it.
Real-time transmission of media dates back to the 1920s and George Owen Squier. Among other inventions, Squier patented an approach for sending music long distances without using radio, which was expensive and unreliable at the time. The outcome was the Muzak company, which transmitted music over wires. Initially positioned to provide entertainment to consumers, it was squeezed out of the home market by radio in the 1930s. Muzak then changed its approach, providing music for, yes, elevators, and also factories, where the rhythm and instrumentation were designed to increase worker productivity. It’s perhaps surprising that the company name lasted until as recently as 2013.
Streaming as we currently know it is a child of the early 1990s—a time when the hardware to make it really practical was still a way off. With home users still on dial-up modems capable of receiving a few tens of kilobits per second, network infrastructure was naturally a concern, although CPU horsepower was also a controlling issue. Networks were gigantically outpaced by emerging CD-ROM media at a comparatively healthy 1200 Kb/s, but it hardly mattered if the computer couldn’t handle the decompression fast enough.
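The gap between those two pipes is easy to underestimate. A back-of-the-envelope comparison makes it concrete; the 28.8 Kb/s modem speed is an illustrative assumption for a typical mid-1990s connection, while the CD-ROM figure comes from the rate quoted above:

```python
# Back-of-the-envelope comparison of mid-1990s delivery pipes.
# 28.8 Kb/s is an assumed typical dial-up speed of the era;
# 1200 Kb/s is the single-speed CD-ROM rate quoted in the text.
MODEM_KBPS = 28.8
CDROM_KBPS = 1200.0

# How long would one minute of CD-ROM-rate video take over a modem?
video_kbits = CDROM_KBPS * 60               # one minute at 1200 Kb/s
seconds_over_modem = video_kbits / MODEM_KBPS

print(f"CD-ROM outpaces dial-up by roughly {CDROM_KBPS / MODEM_KBPS:.0f}x")
print(f"One minute of CD-ROM video needs about "
      f"{seconds_over_modem / 60:.0f} minutes of modem time")
```

In other words, a network roughly forty times too slow for the media of the day: small wonder that early streaming codecs had to sacrifice so much picture quality.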
In 1992, SuperMac Technologies released the Cinepak video compression system, which quickly became part of Apple’s then-new QuickTime architecture. Cinepak was primitive by 2016 standards, and not suitable for streaming, but it had been designed as much to minimize CPU load as to maximize quality. It solved the problem, enabling full-motion video on CD-ROM, but consumer-targeted live streaming video wasn’t feasible until the late 1990s, with improved network bandwidth and RealNetworks’ use of the H.263 compression standard. Intended for teleconferencing, H.263 is, as the name suggests, a precursor to the modern H.264, and requires vastly more of the host computer’s resources than the likes of Cinepak.
The improved performance of modern electronics is as important to streaming as improved network bandwidth.
Many current applications of streaming highlight this need for both processing power and bandwidth. A radio data modem talking to the cellphone system, as supported by companies such as JVC in cameras including the GY-HM650, is an attractive option for broadcasters eager to avoid the prodigious expense of satellite uplinks. It doesn’t offer the same transcontinental coverage as a satellite, of course, but if a local network affiliate needs to cover a city, it can work incredibly well. As well as the cellphone network, it’s reliant on the camera’s ability to take the enormous, uncompressed bulk data produced by its sensor and compress it into packets small enough to fit in the pipe provided by the radio link. The HM650 can actually do this twice: once for the internet, and once again, at a different data rate, for recording.
The compromise is in picture quality, but it’s a matter of degree. Twelve megabits per second, representing the maximum capability of the HM650’s wireless feature, pushes the real-world capabilities of many 4G LTE cellphone data connections to their limits. In only slightly congested circumstances, even this bandwidth often can’t be maintained. For comparison, some broadcasters require in-camera recordings of at least 50 Mb/s for new material. Bit rates below a quarter of that, encoded by a small, battery-powered, cost-constrained camera, are likely to have problems visible to laypeople, particularly where fast motion is involved.
To understand the willingness of journalists to tolerate that compromise to get the scoop, consider that the first attempts at video streaming involved ISDN satellite phones. These enjoyed international coverage but offered a microscopic 64 Kb/s uplink, on the order of a late-1990s modem. Pictures made small enough to fit down a pipe that narrow were visibly compromised, with motion and sound artifacts that made the results suitable only for the rawest edge of current affairs. Technology has improved, and practically all video streaming now uses the advanced H.264 codec. Even in the most favorable of conditions, though, it’s still not possible to wander the streets with a Handycam and stream images that can unequivocally be called “broadcast quality.”
The rise of desktop computers, tablets and phones as consumer media devices has been meteoric.
Updates to the cellphone network, and maybe even better mathematics, will soon change that. Useful as it is, though, internet contribution to broadcast television is not a technology you’ll find disruptive unless you own a communications satellite. Broadcast contribution represents far too little traffic to slow down the global internet, and functionally, it’s just a cheaper, easier way to do an old job. That’s not to disparage the efforts of companies like JVC, with their cameras and network receivers, or of companies like Zixi that offer supporting services. Their technologies might well change the world of live news. To change the world as seen by the audience, we must apply the internet not just to acquisition, but to distribution.
It seems almost redundant to mention the big players. Netflix and YouTube have, between them, occasionally accounted for more than half of all internet traffic in North America. That just two companies can have such an effect is an indication not so much of their individual success, considerable though it is, but of the sheer weight of video data. That’s where concerns arise over just how well internet economics ensure that those who profit from the huge reach and capability of the network pay their way.
While industrial-scale data handling is generally billable per unit of data transferred, most consumers expect access to a home internet connection that’s either unmetered or metered only up to a fairly generous maximum. Because of this situation, the most expensive part of the network, the intricate web of hardware that connects a few telephone exchanges to many, many homes, has the least access to the funds of the people who are making the greatest demands. This is only one of the many arguments advanced by companies who would like to charge differently for passing different kinds of data.
Streaming in the modern context essentially means the real-time transport of media over the infrastructure of the internet.
Some types of data-specific charging were outlawed in the United States in June 2015 after the FCC reclassified broadband as what the relevant legislation refers to as a “common carrier.” Legal arguments persist, but various other countries have also made rules designed to prevent infrastructure providers from holding people who would like to use that infrastructure to virtual ransom. Ultimately, anti-competition issues and concerns over the funding of network infrastructure can both be valid at once. It’s a massively complex argument that’s beyond the scope of this article, but it’s an issue that’s been raised, in the main, by streaming.
Politics aside, probably the biggest changes made by streaming are sociological, not technical. Adoption of the internet as a carrier for professionally produced content has finally made video-on-demand possible, but the approach of Netflix is still fundamentally that of a supplier distributing to a consumer, operating as the mass media always has. Organizations such as Dailymotion and Amazon Instant Video, YouTube and Netflix, or Vimeo and the BBC’s iPlayer do many technically similar things.
The difference is the mass consumption of user-generated content. Unlike any broadcast medium that has ever existed, the internet goes both ways, and with cellphones offering instant upload of video to sharing sites, it does so at the touch of a button. If YouTube’s staggering success wasn’t sufficient evidence, Amazon’s high-end program-making ambitions and its $970 million acquisition of the Twitch game streaming service in 2014 make it clear that the industry would like to play both sides.
Studio installations are likely to benefit significantly from video over IP.
Crucially, it doesn’t seem to be a zero-sum game. User-generated content can get deep into a subject in a way that would make traditional media nervous about mass appeal. Traditional production, conversely, can achieve results that most YouTube auteurs can’t match. There’s room for both in the market, even if there might not be room on an increasingly congested internet. Technically, it seems likely that processing power and bandwidth will continue to improve, so all of this will only become easier still.
In the end, radio didn’t kill books. Cinema didn’t kill radio, and television didn’t kill cinema. There can be more than one art form in the world, and more than one way of making art. It seems unlikely that a new media delivery technique, even a global video uplink box that’ll fit in a back pocket, is going to destroy more than it creates.