

Saving Bandwidth Before and After Compression

Pre- and post-processing is gaining favor to complement file size reduction.

When it comes to conserving bandwidth for multichannel distribution of video, much of the discussion has centered on new compression algorithms (H.264 AVC and H.265 HEVC) that reduce bits while maintaining image quality. Less data enables faster transfers, requires less storage space and lowers operating costs, while creating room to add more streams and channels for higher revenue.

Several content providers are working with technology suppliers to develop workflows that process content before (Digital Rapids and Faroudja Enterprises) and after (Cinova) the compression stage. These approaches, they say, realize an even greater bandwidth reduction by identifying redundant bits and making adjustments in software accordingly. Perhaps best of all, the technology works with all existing infrastructures, so wholesale equipment upgrades (to set-top boxes and other decoding devices) are not necessary when migrating to the latest compression tools.

Hardware-Based Pre-Processing

As experts know, the same content, encoded with the same codec at the same data rate, can result in varying output quality depending on the encoding system used. The key to delivering consistently high-quality compressed video is properly preparing the source video before handing it off to the codecs for compression.

Based in Frisco, Texas, Imagine Communications is a media software and video infrastructure solutions provider that serves more than 3,000 broadcast, multichannel video programming distributor, government and enterprise customers. The company offers hardware-based video pre-processing technology with advanced features. In addition to improving the quality of video, this hardware-based pre-processing reduces the amount of work that the software compression engine needs to do, which means that customers can process more video and audio streams simultaneously. According to the company, this pre-processing delivers the highest possible quality video (up to 30 percent more efficient to compress) to the codecs, ensures optimal quality in the output media stream and enables the most efficient use of bandwidth in the compressed result.

Motion Adaptive Deinterlacing

Deinterlacing is a critical function of pre-processing interlaced source video for viewing on progressive displays. Properly converting the video from its native interlaced format to high-quality progressive-scan data is extremely important to the overall quality of the resulting image. Not only are any deinterlacing artifacts visible in themselves, but they also increase the work the codec must do to compress the image, resulting in lower quality at a given data rate. Imagine Communications’ SelenioFlex Ingest encoding systems feature the company’s custom capture and pre-processing (hardware) technology that leverages motion adaptive deinterlacing capabilities.

Bob and Weave

Common forms of deinterlacing include linear temporal (meshing two fields together to create a single frame, also known as “weave”) and linear spatial (“bob,” or discarding one field and interpolating the remainder back to full resolution). The weave method produces heavy motion artifacts (combing) but works well in scenes with little to no motion. The bob method avoids motion artifacts, but at the cost of considerable vertical detail.
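As a rough illustration, the two methods can be sketched on fields represented as NumPy arrays. This is a minimal sketch only; production deinterlacers use better interpolation filters than the simple line doubling shown here.

```python
import numpy as np

def weave(top_field, bottom_field):
    """Linear temporal ("weave"): interleave two fields into one frame.
    Sharp on static scenes, but moving edges show combing artifacts."""
    h, w = top_field.shape
    frame = np.empty((2 * h, w), dtype=top_field.dtype)
    frame[0::2] = top_field       # even lines come from the top field
    frame[1::2] = bottom_field    # odd lines come from the bottom field
    return frame

def bob(field):
    """Linear spatial ("bob"): keep one field and scale it back to full
    height. No combing, but half the vertical detail is discarded."""
    # Nearest-neighbor line doubling; real systems interpolate instead.
    return np.repeat(field, 2, axis=0)
```

For a 1080i source, each field is 540 lines; weave rebuilds a 1080-line frame from two fields, while bob rebuilds it from one.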

Another method of deinterlacing, vertical temporal (VT) filtering, discards one field (like bob), but rather than simply interpolating the remaining field back to full resolution, it uses the discarded high-frequency information to recover missing edge data. VT is content-adaptive, meaning that it will adapt its video processing approach (between bob and weave style methods) based on the content of the entire frame (whether it contains any motion). Vertical temporal filtering may result in artifacts in areas of high motion, with an effect similar to trails (of the high-frequency data) or motion blur (if VT is applied to both fields).

According to Imagine Communications, motion adaptive deinterlacing combines the best aspects of both bob and weave by isolating the deinterlacing compensation to the pixel level. Spatial and temporal comparisons are performed to decide whether an individual pixel has motion. While other methods affect the entire frame of video, motion adaptive deinterlacing (as implemented on the SelenioFlex Ingest pre-processing hardware) processes each pixel independently, resulting in the highest quality image possible. Areas with no motion are statically meshed (weave) and areas where motion is detected are treated with a proprietary filtering technique resulting in high-quality progressive-scan images.
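The pixel-level decision can be illustrated with a simple sketch. Note that both the motion test (thresholded field differencing) and the filter applied to moving pixels (line averaging) below are generic stand-ins, not Imagine Communications’ proprietary algorithm.

```python
import numpy as np

def motion_adaptive_deinterlace(prev_top, top, bottom, threshold=10):
    """Per-pixel motion adaptive deinterlacing sketch.

    Pixels judged static are woven from the two current fields; pixels
    judged moving are spatially interpolated within the top field.
    """
    h, w = top.shape
    frame = np.empty((2 * h, w), dtype=np.float64)
    frame[0::2] = top  # top-field lines are always kept as-is

    # Motion estimate: absolute difference between co-sited top-field
    # pixels in consecutive frames, thresholded per pixel.
    motion = np.abs(top.astype(np.int64) -
                    prev_top.astype(np.int64)) > threshold

    # Spatial candidate for each missing line: average of the top-field
    # lines above and below it (last line reuses the bottom edge).
    below = np.vstack([top[1:], top[-1:]])
    spatial = (top + below) / 2.0

    # Static pixels: weave in the bottom field. Moving pixels: use the
    # spatial interpolation to avoid combing.
    frame[1::2] = np.where(motion, spatial, bottom)
    return frame
```

The key point the sketch captures is that the weave/interpolate choice is made independently for every pixel, not once per frame.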

The company notes that the term “motion adaptive” is sometimes used loosely to describe algorithms (like VT) that adapt their processing based on whether an entire frame contains motion; the SelenioFlex Ingest pre-processing hardware, by contrast, adapts dynamically down to the pixel level, delivering full motion adaptive deinterlacing with individual pixel analysis.

Same Image Quality at Half the Bit Rate

With Faroudja Enterprises’ F1 Video Bitrate Reducer technology, the bit rate savings inherent in pre- and post-processing allow users with video sources up to 4K to deliver lower-resolution parallel feeds at no extra bandwidth cost.

Faroudja Enterprises, located in Los Altos, Calif., and specializing in video processing, has introduced F1 Video Bitrate Reducer technology, which is employed at the front and back end of a video processing and video delivery workflow, before and after compression. According to application notes on Faroudja’s web site, the technology potentially delivers a bit rate reduction of 35 to 50 percent with any existing compression system via the use of novel pre- and post-processing without affecting perceived quality. The process is compression standard-agnostic; it is applicable to all existing standards, from MPEG-2 to HEVC, and reduces the bit rate in all cases.

The Faroudja technology provides concurrent lower-resolution versions without requiring additional bandwidth. For example, additional benefit (before and after compression is applied) is gained with 4K sources by delivering a 1080p (or other formats) parallel feed at no extra bandwidth cost, according to the online notes.

Company founders include image enhancement pioneer Yves Faroudja, his wife Isabell and Dr. Xu Dong. The founders say they have realized that their expertise in video technology is readily applicable to video compression.

The company says it develops pre-processors and post-processors, used before and after compression encoding/decoding, to achieve a lower bit rate and better image quality with existing codecs. It does not perform compression coding or decoding, but instead develops pre- and post-processing schemes to be used before compression at the headend and after decoding at the rendering device.

Faroudja Enterprises’ technology gives content providers either a bit rate reduction with the same perceived image quality as the original, or the ability to increase image quality while retaining the original’s bit rate. In either case, compression artifacts are significantly reduced. A company rep says significant results have already been achieved: five patents have been granted, with more in the works. The processing is suited to a wide range of video, from teleconferencing and videophones to SD, HD and UHD/4K applications.

In an online discussion, Faroudja engineers explain that the efficiency of digital video compression systems often can be improved through the use of Faroudja pre-processing (prior to compression) and post-processing (after-compression decoding). The workflow includes the use of a support layer in parallel with the conventional compression path (see diagram). The Faroudja scheme complements conventional compression standards (MPEG-2, MPEG-4, H.264, VP9 and HEVC) and does not require modification to the standard codec.

Results are accomplished, the engineers continue, through the Faroudja support layer, which helps provide full-resolution video at reduced bit rates. It can further be configured as a transcoder to help convert video between existing compression formats, such as MPEG-2 and H.264, to and from Faroudja’s support layer format to save bandwidth, bit rate, or file size in the cloud without sacrificing image quality.

“Demand for video bandwidth is doubling every three years, yet network compression schemes’ efficiency doubles every 10 years,” says Yves Faroudja. “It is clear that a fundamental change must be made in how networks operate. Our new technology provides a solution for this today, easily integrates into existing systems, and is fully compatible with future compression schemes” such as HEVC, VP9 and others.
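The arithmetic behind that gap is straightforward: if demand doubles every three years while compression efficiency doubles every ten, the shortfall itself compounds over time. A quick sketch of the stated doubling rates:

```python
def growth(years, doubling_period):
    """Multiplier after `years` for a quantity that doubles every
    `doubling_period` years."""
    return 2 ** (years / doubling_period)

for t in (3, 10, 15):
    demand = growth(t, 3)        # bandwidth demand: doubles every 3 years
    efficiency = growth(t, 10)   # compression efficiency: every 10 years
    print(f"after {t} yrs: demand x{demand:.1f}, "
          f"efficiency x{efficiency:.1f}, gap x{demand / efficiency:.1f}")
```

After 15 years, demand has grown 32-fold against a roughly 2.8-fold efficiency gain, leaving an eleven-fold gap to be covered by other means.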

Mimicking the Human Visual System

Cinova’s Crunch software is applied to compressed files, making real-time results possible.

Cinova, located in Mountain View, Calif., also aims to improve current encoding schemes by working with files after they have been compressed. The company was founded three years ago by Dr. Anurag Mendhekar, now CEO, who worked previously at the Xerox PARC technology think tank and Yahoo. Cinova uses a proprietary technique it calls “Perception Optimized Processing” (POP) as the basis for a software product called Crunch.

Mendhekar says his team has devised a set of parameters that describe the human visual system. In POP, these parameters are applied to every macroblock of every frame of an encoded video. From the results, Crunch can compute a visual sensitivity index. Based on that index, the user settings dictate how aggressively (or not) to transform that macroblock. By eliminating only data that the human eye can’t detect, Crunch reduces the size of the video stream without compromising visual quality.
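Cinova’s actual POP parameters are proprietary, but the per-macroblock workflow Mendhekar describes (compute an index for each block, then let a user-set threshold decide which blocks to transform) can be sketched with a hypothetical index based on local variance, since textured blocks tend to mask coding error better than flat ones.

```python
import numpy as np

def sensitivity_index(frame, block=16):
    """Hypothetical per-macroblock visual sensitivity index.

    Stand-in metric: flat blocks (low variance) expose coding error and
    score near 1.0; busy blocks (high variance) mask it and score near 0.
    This is an illustrative proxy, not Cinova's POP model.
    """
    h, w = frame.shape
    idx = np.empty((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            mb = frame[by*block:(by+1)*block, bx*block:(bx+1)*block]
            idx[by, bx] = 1.0 / (1.0 + mb.var())
    return idx

def blocks_to_crunch(index, aggressiveness=0.5):
    """User setting: transform only blocks whose sensitivity falls below
    the aggressiveness threshold, where the eye is least likely to notice."""
    return index < aggressiveness
```

Raising the aggressiveness threshold trades perceptual safety for bandwidth, which mirrors the user-settings dial described above.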

The result is either the same visual quality at a lower bit rate (less bandwidth) or higher quality in the same target bit rate. Mendhekar claims an overall bandwidth saving of 20 to 50 percent.

“We came at it from the angle that videos are viewed by human beings, so let’s take a more explicitly human visual system-based approach,” Mendhekar says. “What we’ve come up with are parameters that we feel accurately describe the human visual system. We reduce bit rates by up to 50 percent and we guarantee not to harm a frame of video. If I wanted to increase the savings even more, I could say, ‘Let’s adjust the video quality to get where you need to be in terms of bit rate.’

“If a content provider wants to set 6 Mb/s as a house format, there’s probably a dozen different parameters on their encoders to get to that 6 Mb/s,” he continues. “If they know that Crunch is coming after the encoder, they may reset the operating parameters on the encoder to a higher bit rate, knowing they’ll get the bandwidth savings at the other end with Crunch. This ensures good picture quality.”

“We think HEVC, and its promised 50 percent bandwidth reduction, will take at least five years to be widely adopted,” Mendhekar says. “We’re delivering that kind of result today. Instead of ripping out your H.264 encoders, Cinova’s technology can give you the 20 to 50 percent promised with HEVC today, without changing out your infrastructure.”