Let’s start off with a product that isn’t going to define the 2017 technology market: cameras.
Everyone loves cameras. You can throw one on your shoulder, put an oversized and unnecessary mattebox on it, and everyone will know you’re really important. The problem is that cameras have become so good and distribution methods so diverse that phrases like “broadcast quality” have become essentially meaningless. Well, that’s not quite fair—there’s always more dynamic range to capture, and we can’t decry the pursuit of sheer imaging capability until every camera in the world can shoot 4K at 1,000 frames per second. Frame rates beyond 60 at resolutions near 4,000 pixels across are still somewhat hard to come by, and it’d be nice to see more 4K high-frame-rate options. In conventional circumstances, though, quality is so high that it barely differentiates products anymore.
One factor that might put a damper on camera work is the price of flash storage. The last two quarters of 2016 saw some alarmed quotes from organizations such as DRAMeXchange, a memory and storage technology market intelligence firm, which had noticed that the supply-and-demand situation for NAND flash was tilting in the direction of increased prices and reduced availability. This trend is most noticeable in the IT sector, and it isn’t clear how much the problem is concentrated in the high-performance part of the market. Nonetheless, it’s a concern given that the fastest flash chips are the very things that are used in camera storage—and that are often alarmingly expensive to begin with. CFast, for instance, seems to be the digital world’s attempt to make 35mm film stock seem affordable, and though it does make 4K raw recording possible on tiny flash cards, I (and every Blackmagic URSA Mini owner) would love to see a significant reduction in price during 2017. In fact, I’d have liked to see it long before now, but any prospect of it happening seems more remote than ever.
On a more positive note, it’s almost inevitable that there will be more and better HDR monitoring options at all levels. Right now, there are a handful of high-end products, and only one affordable approach: Atomos’ Shogun Inferno recorder. Demos of these devices at trade shows can be eye-searingly spectacular. They look good in isolation; we don’t need to perform a side-by-side comparison with conventional pictures in order to make the differences visible. The problem is that the instant, undeniable, ultimately sellable punch only really comes from the top end: the ferociously expensive 4,000-nit Dolby displays or the 1,000-nit, incredibly contrasty Sony BVM-X300 4K OLED monitor. Domestic displays, or, as we see too often, old technology dressed up with nothing but software tweaks, generally look far less impressive.
Mainly this is an issue of sheer practicality. Conventional Rec. 709 displays, a category that notionally includes TVs, are only really supposed to reach 100 or so nits of brightness, though many are actually far brighter for better storefront appeal. Pushing brightness much further still is consequently difficult. Beyond that, the phrase “high dynamic range” implies not only higher brightness but also higher black-to-white contrast. No liquid crystal display can ever achieve full black, and the contrast capabilities of common panel types have not significantly improved for years. Making new models both bright and contrasty enough for a convincing HDR image is therefore a significant engineering challenge, one that might ultimately demand improvements in (and broader deployment of) OLED display panels. It’s hard to make this a prediction, but it would be nice to see better HDR displays at both the domestic and professional levels in 2017.
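To see why the black level matters as much as the peak, it helps to express a display’s usable range in photographic stops—each stop being a doubling of light. The sketch below uses purely illustrative figures, not measured specifications of any particular monitor; the point is that a bright LCD’s elevated black floor caps its contrast, while an OLED’s near-zero blacks are what buy the extra range.

```python
import math

def dynamic_range_stops(peak_nits, black_nits):
    """Display dynamic range in stops: doublings between black and peak."""
    return math.log2(peak_nits / black_nits)

# Illustrative numbers only (not real product specs):
lcd = dynamic_range_stops(peak_nits=1000, black_nits=0.05)     # bright LCD
oled = dynamic_range_stops(peak_nits=1000, black_nits=0.0005)  # OLED blacks

print(f"LCD:  {lcd:.1f} stops")   # ~14.3 stops
print(f"OLED: {oled:.1f} stops")  # ~20.9 stops
```

Same peak brightness in both cases; the six-and-a-half-stop difference comes entirely from the black level, which is why simply driving an LCD harder doesn’t make it an HDR display.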
With the Shogun Inferno, Atomos offers an affordable HDR monitoring option.
Having considered production and exhibition, let’s talk about post. Tools for simple editing—even quite advanced motion graphics, for that matter—are now sufficiently mature that the technology is difficult to get excited about, and the main factor driving change is the ever-expanding role of the network. I’ve deliberately chosen the word “network” here for its broadness—between IP video in studios, the soaring performance of cellphone connectivity, and the internet itself, the ability to move professionally acceptable media around quickly on commodity hardware is recent and growing fast.
Simply sending video down network pipes is perhaps the least interesting application, as it primarily involves outside broadcast and studio installation situations, which are peripheral to most people. It’s worth keeping an eye on, though, because if there’s any genuine tendency for SDI connectors to be supplanted by Ethernet ports, this is where it will emerge. This September’s IBC convention had a particular focus on the field, and the technology should start to edge into the studio-based mainstream in the next year or so.
Another huge convenience for broadcast professionals is the expanding application of cellphone networks to outside broadcast transmission. It’s been nascent for a year or three, with the limitations primarily being those of reliability and bandwidth. Current incarnations require the best possible cellphone service to work well, which implies working in urban areas, in good radio conditions, and with reasonably uncongested providers. Still, as the infrastructure improves (and looking toward 5G around 2020), decent video will start to represent a smaller proportion of maximum performance. That should lead to more reliability, and one day, possibly not in the next year but probably in the next five, satellite providers may well find themselves being blown out of their current financial orbit for at least some types of work by gigantically cheaper cellphone networks. Not needing a space rocket does save a bit of pocket change.
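The idea that “decent video will start to represent a smaller proportion of maximum performance” is easy to put in rough numbers. The figures below are illustrative assumptions, not measurements: a contribution-quality HD stream at a plausible HEVC bitrate, against ballpark real-world uplink rates for successive cellular generations.

```python
# Illustrative assumptions only: one compressed HD contribution feed,
# and rough real-world uplink throughput per cellular generation.
STREAM_MBPS = 10  # e.g. an HEVC-encoded 1080p contribution stream

uplink_mbps = {
    "4G (good conditions)": 20,   # assumed ballpark figure
    "5G (projected)": 100,        # assumed ballpark figure
}

for network, rate in uplink_mbps.items():
    share = STREAM_MBPS / rate
    print(f"{network}: stream occupies {share:.0%} of the uplink")
```

On these assumptions, the same feed drops from consuming half the link to a tenth of it—which is the headroom that turns an unreliable urban-only trick into something you might actually bet a broadcast on.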
Sony’s BVM-X300 4K OLED offering is an incredibly effective HDR monitor.
The biggest next big thing in networking doesn’t really have anything directly to do with film and TV; neither does it really have anything to do with networks: it’s what’s on the other end of the network. The world’s data centers, vast warehouses full of computers, were not created to serve the film industry. Movies aren’t nearly a big enough deal that such a vast infrastructure could ever have been built just to support their production, but it turns out that it doesn’t matter. Data center computing power is available to anyone, for a fee.
It’s a truly joyous development for post houses, where work is largely project-based, with downtime between jobs and sudden spikes in demand. So far, cloud computing has been largely secondary to cloud storage, but actually buying computational horsepower on a per-hour basis is really starting to ramp up. Perhaps it’s a return to the model of mainframes and terminals from the early days of computing, but it works well for a certain part of the business. The idea of being able to buy time on someone else’s computer is far from new, but the degree to which it is now available, and the sheer scale of it, has prompted some people to propose that it may soon be possible to build an advanced postproduction house, capable of high-end visual effects work, without owning more than the most elementary equipment. That could happen in 2017.
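The economics behind this are simple break-even arithmetic. The figures below are entirely hypothetical—neither real hardware costs nor any actual provider’s rate card—but they show why project-based work with spiky demand favors renting compute by the hour.

```python
# Toy break-even sketch with made-up numbers (not real pricing):
OWNED_NODE_COST_PER_YEAR = 6000  # hypothetical: amortized purchase + power
CLOUD_RATE_PER_HOUR = 2.0        # hypothetical per-hour rental rate

break_even_hours = OWNED_NODE_COST_PER_YEAR / CLOUD_RATE_PER_HOUR
utilization = break_even_hours / 8760  # fraction of hours in a year

print(f"Renting wins below {break_even_hours:.0f} busy hours per year "
      f"({utilization:.0%} utilization)")
```

With these invented numbers, a node has to be busy roughly a third of all the hours in the year before owning it beats renting—exactly the threshold a post house lurching between downtime and crunch rarely clears.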
Naturally, nobody should lay bets (or buy stock) on the basis of any of these thoughts. Maybe some brand new camera technology will emerge that revolutionizes film production and I’ll have a meal of hat en croute to force down. This seems unlikely, though. Things have progressed alarmingly quickly ever since the introduction of HD, and it’d be a very welcome development if the current period of more measured advancement were to continue. My final thought is not a prediction, therefore, but a hope: perhaps we can all spend 2017 getting really good at using the gear we’ve got, which is likely to make a lot more difference than another few pixels.