Talking about what's coming up always feels dangerously like creating a technology buyer's guide. Giving advice about the current situation is bad enough in an industry where equipment can go out of date in less than a year, but it's even worse when we're literally trying to predict the future. Still, just as a surfer needs to start paddling at the moment the water begins to rise in front of a wave, making the best out of tech requires us to start working on the problem before it hits the mainstream. The difference is that it's usually pretty clear to a surfer that there's a wave coming, and how big it is. That, and surfing doesn't involve a five-figure investment in boxes covered in blinking lights.
Many recent developments revolve around streaming, which we could broadly define as sending video over networks that weren't ever really intended to carry video. In distribution, that's old news, and the swing from traditional terrestrial and satellite broadcasting toward the on-demand platforms has been accelerating for long enough now that nobody will be surprised if it keeps on going. In production, the situation is slightly different. The standards process required for high-end video over IP networks has started to come together recently, and it certainly seems inevitable that the sheer flexibility and commodity hardware pricing of the IT world will eventually make IP interconnection almost universal. Even so, it's hard to imagine it happening to any great extent over the next year. New installs might do it, but otherwise it's likely that few people will be ripping out SDI for Ethernet in 2018. Even if they do, the difference to anyone but the engineering staff will be firmly in the details.
Improving public internet infrastructure could begin to replace satellite links.
Photo by Al Powers
What's more widely applicable is the fact that cellphone networks, at least in the more developed parts of the world, are starting to become a practical means to get video back to base. 4G LTE is theoretically capable of 50 Mb/s upload, with the more advanced versions capable of up to 150 Mb/s, and more is planned. Real-world situations achieve more like 25 Mb/s upload, but that's still sufficient for reasonably respectable HD video, and companies such as JVC have been offering cameras with network connectivity over Wi-Fi and 4G LTE for some time. This functionality is mainly of interest to news crews, who are embracing cellular transmission to the detriment of satellite operators. Still, there are plenty of places on the planet where a cellphone won't work well enough to transmit video, and that's likely to be the case for years yet.
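To put those figures in perspective, here's a back-of-the-envelope headroom check. The contribution bitrates below are illustrative ballpark assumptions, not figures from any particular encoder or broadcaster:

```python
# Rough check: what fits down a ~25 Mb/s real-world LTE uplink?
# The contribution bitrates here are ballpark assumptions only.
uplink_mbps = 25

streams = {
    "1080p H.264 contribution (~8 Mb/s)": 8,
    "1080p HEVC contribution (~5 Mb/s)": 5,
    "UHD HEVC contribution (~25 Mb/s)": 25,
}

for name, mbps in streams.items():
    headroom = uplink_mbps - mbps
    verdict = "fits" if headroom > 0 else "marginal at best"
    print(f"{name}: {verdict} ({headroom} Mb/s spare)")
```

On those assumptions, HD contribution fits comfortably with room for overhead and signal dropouts; UHD is where a single cellular link starts to look shaky.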
The increasing capability of mobile communications networks is driven by the huge amount of work that's going into cellphones and related technology. In context, it's no great surprise that portable technology, principally that relying on ARM processors, is receiving the lion's share of attention. In some ways that's actually an alarming development given the importance of workstations in postproduction; ARM processors, to date, lack the sheer punch of Intel's most powerful options. The balance may be shifting, with AMD's Ryzen and particularly the Ryzen Threadripper range of processors moving back into realistic competition, but the lull in x86 performance gains may also have allowed developers using ARM designs to steal a march on the market.
NewTek's TriCaster supports its NDI protocol, making computer networks part of a studio install—but that's not really streaming as most people understand it. (Pictured, NewTek TriCaster at TwitchCon.)
Just a few weeks ago, the first 64-bit ARM server CPUs were shipped by Qualcomm under the Centriq name. The reason this is happening is pretty clear: cloud computing is growing at an amazing pace. If we want cloud services, the computing work has to be done somewhere, which means big data centers full of server racks. The cost of the servers is one thing; a significant ongoing cost is that of power, and the promise of ARM was always that it would deliver significantly better performance per watt (or do the same work for less electricity). To be fair, it hasn't always been clear that this would actually be the case, but there's keen interest in the new chips on the assumption that they really will do more work for fewer electrons.
Whether they're useful for workstations is another matter. The Centriq line is built to run a great many tasks at once, a capability that suits servers far better than workstations. There are already problems writing well-optimized applications for multi-threaded computers; while work is underway to make that easier, much of it remains experimental. If ARM CPUs are to displace Intel in markets other than servers, there's a lot more work to be done.
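The multi-threading problem is usually quantified with Amdahl's law: whatever fraction of a program stays serial puts a hard cap on what extra cores can deliver. A quick sketch, with an illustrative 95% parallel fraction and a 48-core count of the sort the new server chips offer:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: the serial remainder caps multi-core speedup."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even if 95% of an application parallelizes perfectly, 48 cores
# deliver nowhere near a 48x speedup:
print(round(amdahl_speedup(0.95, 48), 1))  # ~14.3
```

That remaining 5% of serial work is exactly where a few very fast cores beat a great many modest ones, which is why the workstation question is harder than the server one.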
Big racks full of hard disks still need to exist, but companies can avoid having to physically own them.
Returning to ARM's primary domain, tablets and smartphones, that market is naturally a lot more mature. It's no longer any surprise that phones do quite advanced things that have nothing to do with telecommunications. With the iPhone X, Apple introduced Face ID 3D facial scanning technology as a way to secure the phone, which is a great achievement for a consumer product. Even more impressive, the information collected for the Face ID depth map can be used for live facial tracking. Apple uses it to drive the animation of characters called Animoji, though others have experimented with the tech and have been able to capture both the shape and color of a face and make it available to the 3D graphics application SideFX Houdini. Facial scanning has been done before using consumer equipment: the Kinect sensor accessory for the Xbox was used in the same way, though in both cases there are pretty visible limits on the resolution of the resulting 3D model. Still, if it's not enough that our phones are now used for communication, navigation, photography and all the other things we do, look out for ever-broadening horizons in the world of 3D scanning.
Speaking of horizons, the VR world seems poised to act on the fact that VR products can't establish a foothold in the consumer market at a $500+ price point. Oculus, the Facebook company that is perhaps most responsible for the current surge in interest in VR, has announced a $200 headset, Oculus Go, to be released early next year. The company isn't entirely abandoning the high end; a commercial version of its Santa Cruz prototype, which has more and better head position tracking options, should appear at some point in the next 12 months, price currently unknown.
A streaming studio setup at the NAB Show.
A price around $200 seems more reasonable for establishing a sustainable consumer market. With a 2560 x 1440 resolution, the new device is also an example of very high density TFT-LCD displays. With phone displays beginning to exceed the optical resolution of the human eye, at least at normal viewing distances, there was very little demand for more resolution until smartphone-based VR headsets came along. Whether the headset market alone can provoke and sustain development of these very high resolution displays remains to be seen. It could be, or perhaps has been, something of a chicken-and-egg situation, with sharper displays needed to drive sales, but sales needed to fund the necessary R&D.
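A quick angular-resolution calculation shows why the same class of panel can out-resolve the eye in a phone yet fall far short in a headset. The panel size, viewing distance and field-of-view figures below are illustrative assumptions:

```python
import math

def pixels_per_degree(ppi, viewing_distance_inches):
    # How many pixels fall within one degree of visual angle?
    inches_per_degree = 2 * viewing_distance_inches * math.tan(math.radians(0.5))
    return ppi * inches_per_degree

# A hypothetical 5.5-inch 2560 x 1440 phone panel works out to ~534 ppi.
phone_ppi = math.hypot(2560, 1440) / 5.5

# Held at about 12 inches, it comfortably beats the eye's roughly
# 60 pixels per degree, so extra pixels go to waste:
print(round(pixels_per_degree(phone_ppi, 12)))  # ~112

# Spread ~1280 px per eye across a ~100-degree headset field of view
# and the density collapses to around 13 pixels per degree:
print(round(1280 / 100))
```

The lens in a headset stretches a phone-sized panel across most of your visual field, which is why the headset market suddenly wants resolutions the phone market had stopped asking for.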
Still, with its significantly beyond-HD resolution and reasonable price, the Oculus Go is promising. What's far less certain is whether VR will escape the niche of computer games. It'll be no disaster if it doesn't; games are a ready-made application for VR, which has always been primarily associated with simulated environments. Games need very little modification to work well as VR applications, and the experience is widely enjoyed.
For a complete transition to IP, a wide variety of common devices, such as this multiviewer, must be updated for IP compatibility.
Alternative VR content such as 360° live video and pre-rendered experiences has been demonstrated convincingly. The Raid, a tie-in to the FOX series 24, was demonstrated at the 2017 NAB Show. In this 360° experience, the viewer accompanies a military unit on a night operation, with firefights and gunplay galore. Despite several high-profile successes, the relative difficulty of producing 360° material and the almost completely untried nature of the art form have left it playing second fiddle to the world of interactive entertainment.
Finally, while it's difficult to point to any recent advance in artificial intelligence that might have propelled it to prominence, IBC this year was one of a few places where AI-based technologies were talked about (a lot) and shown (a little). The fundamentals of neural networks are a bit too much to go into here, but their key advantage is that they allow us to break certain rules, or at least bend them, by bringing in outside information. For instance, it's impossible to sharpen a poorly focused shot because the blur is a problem of missing information; the high frequency detail of the shot has been lost forever. An artificial intelligence can, in theory, solve this problem and clear up the image based on its knowledge of what the world should look like, just as a human retoucher could.
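The point about lost high-frequency detail can be made concrete with a toy example: two different signals that blur to exactly the same result, so no deconvolution, however clever, could tell them apart without outside knowledge. This is a minimal 1-D sketch, not any real deblurring tool:

```python
import numpy as np

N = 64
n = np.arange(N)

# A 4-sample "box blur", applied circularly via the FFT.
kernel = np.zeros(N)
kernel[:4] = 0.25

def blur(x):
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(kernel)))

# Two different "images" (1-D for simplicity): the second adds detail
# at a frequency the blur kernel wipes out completely.
x1 = np.sin(2 * np.pi * 3 * n / N)
x2 = x1 + np.cos(2 * np.pi * 16 * n / N)  # bin 16 is a null of the kernel

# Both blur to numerically identical results, so the extra detail is
# genuinely gone; only outside knowledge could put it back.
print(np.max(np.abs(blur(x1) - blur(x2))))
```

That's the rule a neural network "bends": it can't recover the missing frequencies from the image itself, but it can substitute plausible detail learned from thousands of other sharp images.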
At the 2017 NAB Show, the immersive virtual reality experience Tree from Microdose VR was on display at StudioXperience.
Photo by Al Powers
Real-world examples tend to be more prosaic (the prototypical example is handwriting recognition), although AIs capable of tackling harder problems have begun to emerge. Recently, a team of researchers from Disney showed an application capable of generating photorealistic clouds in a fraction of the time required by existing mathematics. Clouds are difficult. They're made of billions upon billions of water (or ice) droplets that may reflect and refract light in complex ways, bouncing and bending it many times before it exits the cloud. Current approaches simulate that diffusion directly, tracking ever more reflections and refractions until the cloud's appearance is acceptably realistic. That's very slow, which is why an AI that can shortcut the simulation solves a real problem for Disney.
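Why multiple scattering is so slow can be illustrated with a toy 1-D random walk. This is a deliberately crude sketch, not Disney's renderer: each photon takes exponentially distributed hops through a uniform slab and scatters in a random direction until it escapes, and the number of scattering events a simulator must track climbs steeply as the slab thickens.

```python
import random

def bounces_through_slab(thickness, mean_free_path, rng):
    """Walk one photon through a uniform 1-D slab, counting scatters."""
    depth, direction, events = 0.0, 1, 0
    while 0.0 <= depth <= thickness:
        depth += direction * rng.expovariate(1.0 / mean_free_path)
        direction = rng.choice((1, -1))  # scatter in a random direction
        events += 1
    return events

rng = random.Random(1)
for thickness in (1, 4, 16):
    avg = sum(bounces_through_slab(thickness, 1.0, rng)
              for _ in range(2000)) / 2000
    # Average scattering count grows quickly with slab thickness,
    # which is why brute-force cloud rendering is so expensive.
    print(thickness, round(avg, 1))
```

A real renderer does this in three dimensions with wavelength-dependent phase functions, for millions of pixels; a network trained to predict the converged result can skip most of that work.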
While artificial intelligence and neural networks may be buzzwords, they are, unusually, buzzwords with real capability. Whether any of these developments is likely to trigger fundamental changes to working practices is another matter. The key capabilities in film and television work are subjective, artistic and skill-based, and all of these tools are really aimed at making it easier for skilled humans to apply their abilities. The technology most likely to make big changes is AI; since it represents such a huge computational load, it'll likely improve as computers and cloud networks improve. Given the inexorable improvement in computers, then, we may be welcoming our whirring, beeping robot overlords sooner rather than later. Here's hoping it doesn't take all the fun out of filmmaking.