New Thinking About Networks: Considering the Benefits of Hardware as a Service

Many tasks, such as video compression, could be sent out for cloud processing quite easily, and potentially without user intervention.

Today, someone made a valiant attempt to show me a piece of video shot on an iPhone. It depicted—or at least it apparently depicted—the impenetrable fog of North Atlantic weather between an aircraft window and a spectacular view of the Icelandic landscape. However, we'll have to take that description on faith because said iPhone had decided the video would be much better if it were removed from onboard storage and sent to ... somewhere. As ever with cloud storage, it isn't particularly clear where these things go, but the video data in question was more than far enough away that it couldn't be retrieved in time for prompt playback.

The thing is, that's not so much "cloud computing" as "cloud storage," and it's such a gold rush at the moment that it's causing us to take our collective eyes off the ball. However they describe themselves, a lot of the services currently promoted under the cloud banner really work on the cloud-storage model. Anyone with a webmail account has effectively been using cloud e-mail since the mid-to-late 1990s, and what that really represents is the remote storage of e-mail and its transmission to a local terminal for display. With metaphorical propeller-topped cap firmly in place, we might point out that this is a return to the mainframe-and-terminal architecture of decades past.

[Image: Big racks full of hard disks still need to exist, but companies can avoid having to physically own them.]

It's a model that's being pursued enthusiastically, and it's easy to see why: moving data off our phones and onto the network fits our current use preferences incredibly well. We want small, cheap devices with long battery lives, and we want to watch amazing audio-visual content on them. Far better, then, to offload the storage to a location with more space and more power, where costs can be amortized over a lot of clients. The trouble with this model is that it makes consumers of us all, and in so doing, it completely overlooks the main event of the cloud: not remote storage, but remote processing. Consumers just get given stuff. Creators build things.

Putting an iPhone video on a remote server is nothing more than a convenience, one pursued because flash memory is vastly more expensive than the spinning disks still present in most server farms. We might wonder—in fact, we might gape incredulously—at the fact that it's actually more convenient to send data over a mind-bendingly complicated transnational radio data pipeline than it is to put more storage in the phone, but that's a discussion for another day. The point is that the remote server isn't often doing very much with that video other than sending it back to the phone.

[Image: Cellphone networks are beginning to make highly mobile cloud applications possible.]

It's a much more powerful idea to offload computational work to remote locations. Under current circumstances, this makes sense mainly in a workstation environment, since few people are doing mathematically heavy lifting on highly portable devices. Despite slick videos from Microsoft predicting a technological utopia in which we'll all work from tablets, the keyboard-and-mouse paradigm seems reasonably entrenched, and projects involving fluid dynamics simulation, global illumination for graphics and other seriously long-winded tasks are still built on desktop computers. Those are the jobs best suited for dispatch to the cloud.

This approach is already common in fields such as visual effects, where business models are very much job-based and the requirement for computing power can oscillate wildly. Yes, cloud-based computing power can seem like it costs more than a render farm in the back room on a per-hour basis, but that calculation is reliant on the assumption that the back-room render farm is in constant use. Cloud computing offers a massively convenient opportunity to offload the costs of building and running the computational resource to a third party.
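
To make that concrete, here's a back-of-the-envelope sketch, in Python, of where the break-even point sits between an owned render farm and rented cloud capacity. Every figure below is a hypothetical placeholder rather than a quote from any real vendor; it's the shape of the result that matters.

```python
# Back-of-the-envelope: owned render farm vs. rented cloud capacity.
# All figures are hypothetical placeholders, not quotes from any vendor.

FARM_PURCHASE_PRICE = 50_000.0      # capital cost of a back-room farm (USD)
FARM_LIFETIME_HOURS = 3 * 365 * 24  # assume a three-year useful life
FARM_RUNNING_COST = 2.0             # power, cooling and admin per busy hour (USD)
CLOUD_RATE = 8.0                    # rented equivalent capacity per hour (USD)

def owned_cost_per_busy_hour(utilization: float) -> float:
    """Effective hourly cost of the owned farm at a given utilization (0 < u <= 1)."""
    busy_hours = FARM_LIFETIME_HOURS * utilization
    return FARM_PURCHASE_PRICE / busy_hours + FARM_RUNNING_COST

for u in (1.0, 0.5, 0.25, 0.1):
    print(f"utilization {u:>4.0%}: owned ${owned_cost_per_busy_hour(u):6.2f}/h"
          f" vs. cloud ${CLOUD_RATE:.2f}/h")
```

With these invented numbers, the back-room farm wins comfortably when it's busy around the clock, but once utilization falls below roughly a third, the cloud's hourly rate starts to look like a bargain.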

[Image: Demanding processing (such as encoding a DCP, here with Fraunhofer’s software) can move to the cloud, given enough bandwidth.]

As we've said, this sort of workflow is mainly applied to large render jobs, but the distinction between tasks that can reasonably be farmed out to the cloud and those that can't is largely semantic. To indulge in some blue-sky thinking, we can easily imagine an operating system with the ability to move a task—any task—to the cloud as required. The catch is latency: the cloud is an environment in which we might wait several seconds to retrieve and view a rendered frame, and there's certainly no immediate prospect of playing uncompressed 4K video streams over the internet, at least outside the most specialized environments.
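
That bandwidth claim is easy to sanity-check with a few lines of arithmetic. The sketch below assumes UHD "4K" frames at 10-bit RGB and 24 frames per second; other formats will give different figures, but not different conclusions.

```python
# Quick bit-rate check: uncompressed UHD "4K" video.
width, height = 3840, 2160   # UHD frame size in pixels
bits_per_sample = 10         # a common professional bit depth
samples_per_pixel = 3        # R, G and B, with no chroma subsampling
frames_per_second = 24

bits_per_second = (width * height * bits_per_sample
                   * samples_per_pixel * frames_per_second)
print(f"{bits_per_second / 1e9:.2f} Gb/s")  # prints 5.97 Gb/s
```

At nearly 6 Gb/s for a single stream, even a very good domestic connection falls short by an order of magnitude, which is why compression always sits between the cloud and the viewer.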

Many tasks, though—such as video compression—could be sent out for processing quite easily, and potentially without user intervention. Get really good at this and we can all have a simple slider trading off cost per minute against compute performance. Get the network good enough, and improve operating systems to the point at which work can be handed off to remote CPUs with zero user intervention, and suddenly anyone can have more power for any task at any time. It's an expansion of the rent-a-render farm approach that's existed for a while, usually revolving around a small suite of specific pieces of software, but it's definitely progress, and tools to manage it are becoming standardized.
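
As an illustration of what that slider might look like, the Python sketch below picks between a local machine and a rented cloud backend by blending normalized cost against normalized time. Everything in it, names, prices and time estimates alike, is hypothetical; no real scheduler or cloud API is being described.

```python
# A sketch of the cost-versus-performance "slider": all backends, prices
# and timings below are hypothetical, not a real scheduler or cloud API.
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    cost_per_minute: float    # what the user pays per minute of runtime
    minutes_estimate: float   # predicted wall-clock time for the task

def choose_backend(backends: list[Backend], urgency: float) -> Backend:
    """Pick a backend; urgency in [0, 1]: 0 favors cheap, 1 favors fast."""
    max_cost = max(b.cost_per_minute * b.minutes_estimate for b in backends) or 1.0
    max_time = max(b.minutes_estimate for b in backends)

    def score(b: Backend) -> float:
        # Weighted blend of normalized total cost and runtime; lower wins.
        cost = (b.cost_per_minute * b.minutes_estimate) / max_cost
        time = b.minutes_estimate / max_time
        return (1 - urgency) * cost + urgency * time

    return min(backends, key=score)

# Example: compressing an hour of video locally versus on a rented farm.
options = [
    Backend("local workstation", cost_per_minute=0.00, minutes_estimate=90),
    Backend("cloud, 32 cores", cost_per_minute=0.40, minutes_estimate=12),
]
print(choose_backend(options, urgency=0.2).name)  # cheap matters: local wins
print(choose_backend(options, urgency=0.9).name)  # speed matters: cloud wins
```

A real dispatcher would also have to weigh upload time and data-transfer charges, which is exactly why network quality keeps coming up in this discussion.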

[Image: Software such as Blackmagic DaVinci Resolve requires high-bandwidth access to media as well as lots of processing power, and isn’t an obvious choice for the cloud.]

We're some way, though, from Microsoft Windows being able to arbitrarily move our already-running software to execute in some remote data center. It's technically plausible that it could be done. Whether it's a good idea or not is a somewhat more complicated issue. Yes, it's enormously convenient, but things distant are intrinsically less accessible than things nearby. The iPhone video example was a connectivity problem, and those are common; radio outages are ultimately impossible to completely control, but you know you're in trouble when an industry becomes this attached to prefixing all advertised bit rates with the phrase "up to." It's always easy to quote author and futurist Arthur C. Clarke in these situations—"Any sufficiently advanced technology is indistinguishable from magic"—but users transported here from mere decades ago would be forgiven for assuming that the process of watching a YouTube video on a cellphone must involve ritual sacrifice and the casting of bones. It's practically witchcraft, and it's no surprise that it occasionally breaks.

Still, network technology continues to improve, bringing with it ever-lower latency and making cloud computing acceptable for ever more time-critical tasks. If there's a real problem, it's the deceptive difference between capital and revenue expenditure. Complaints over the idea of software such as Windows itself as a service are widespread, principally because people are unused to being aware of a cost for the operating system at all. Paying a monthly fee for an OS is apparently less palatable to users than paying a monthly fee for other software, although the same concerns and benefits attend both situations. That's software as a service, though. What we're talking about here is the entire computer, other than a very lightweight terminal, potentially becoming hardware as a service. It's far from impossible.
