Immersive 360-video is coming fast to viewing audiences everywhere. Facebook recently announced support for 360-degree video in its news feed while YouTube has been bringing 360-degree experiences to users of the Google Chrome browser and Android devices for the better part of a year. But it was Adidas’ move to put a 360-degree camera into a soccer ball at the last World Cup that showed off just how exciting this new viewing experience can be. It completely transforms the way we see content.
Imagine a basketball game or NASCAR race where users watching a live broadcast can pan and zoom around complete 360-degree images in real time. Or a VR display that puts you right in the middle of the action. Everyone can have the best seat in the house.
While digital video professionals have been touting the potential of 360-degree video for some time, until recently two technical factors limited adoption: a camera capable of recording the video, and a standard technology for streaming, sharing, and viewing it. To understand where 360-degree video is going, it's important to look back at its beginnings and the progress that's been made to date.
The grandfather of the modern 360-degree camera is the "fish-eye" lens, first developed in the 1920s. This type of lens captured a wider angle of view but suffered from significant visual distortion. Other early approaches to creating an immersive video experience relied on panoramic photo techniques, multi-camera setups, and cameras fitted with mirrors to increase their angle of view. Newer multi-camera rigs, such as the one introduced by GoPro at the Google I/O conference in May 2015, simultaneously capture video and images on an array of cameras and digitally stitch the images together to create a 360-degree perspective. This takes up significantly more bandwidth than regular video, and the setup is quite expensive.
One of the key developments for modern 360-degree video has been the panomorph lens, pioneered by ImmerVision. This type of lens is designed to use optical distortion to magnify zones of interest within the image and increase the resolution of those areas. The video or image is then run through software that "un-warps" it. This results in a higher-resolution final product (no stitching required) that can be tailored to each application, whether broadcast, security, or mobile. For example, a camera for video conferencing could be designed to increase resolution on people's faces and decrease resolution on areas of the ceiling or floor.
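To make the "un-warping" step concrete, here is a minimal sketch of remapping a fisheye-style image to a flat, rectilinear view. It assumes a simple equidistant fisheye model (source radius proportional to the angle from the optical axis); a real panomorph lens instead uses a calibrated, non-uniform projection curve supplied by the lens maker, which is precisely what lets it magnify zones of interest. The function name and parameters here are hypothetical, for illustration only.

```python
import numpy as np

def unwarp_fisheye(img, fov_deg=180.0, out_size=256):
    """Remap a square fisheye image to a rectilinear (perspective) view.

    Illustrative sketch: assumes an equidistant fisheye model with the
    image circle filling the square input. Real de-warping software
    substitutes the lens's measured distortion profile here.
    """
    h, w = img.shape[:2]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    f_rect = out_size / 2.0                  # focal length for a ~90-degree output view
    max_theta = np.radians(fov_deg) / 2.0    # fisheye half field of view

    ys, xs = np.mgrid[0:out_size, 0:out_size]
    dx = (xs - out_size / 2.0) / f_rect      # normalized ray direction per output pixel
    dy = (ys - out_size / 2.0) / f_rect
    theta = np.arctan(np.sqrt(dx**2 + dy**2))  # angle from the optical axis
    phi = np.arctan2(dy, dx)                   # azimuth around the axis

    # Equidistant model: source radius grows linearly with theta
    r = theta / max_theta * min(cx, cy)
    src_x = np.clip(np.round(cx + r * np.cos(phi)).astype(int), 0, w - 1)
    src_y = np.clip(np.round(cy + r * np.sin(phi)).astype(int), 0, h - 1)
    return img[src_y, src_x]
```

Nearest-neighbor sampling keeps the sketch short; a production de-warper would interpolate between source pixels and apply the calibrated projection curve instead of the linear one assumed here.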
Along with advances in camera technology, new methods of encoding and transcoding video make it possible to stream high-resolution 360-degree video to mobile devices, PC browsers, and VR headsets. This is essential for broad adoption. Mobile viewing of video of all kinds has been steadily increasing for the past few years, and data from ZenithOptimedia predicts that over half of all online video views will be on mobile devices by 2016. If 360-degree video is of poor quality or has severe device limitations, it has little chance of taking off. To succeed, operators must pair 4K or 6K capture devices with high-density transcoding solutions, such as those offered by Vantrix, to stream 360-degree video to a wide variety of devices in high resolution and in real time.
With continued advances across several technologies, from lenses to processors to software, the pieces are all in place to make 360-degree video a reality today. And the future looks even brighter.
Steve Sklepowich has more than 20 years of experience in software product management. Steve spent 14 years at Microsoft, where he was worldwide director of product marketing, responsible for Silverlight, Windows Server Smooth Streaming, Expression Encoder, codecs and related technologies, now available as Azure Media Services. In previous roles, Steve was also VP Marketing at thePlatform (acquired by Comcast), and in media product marketing at Apple in Cupertino.