The growth of streaming online video is nothing short of remarkable, with a plurality of American households now subscribing to at least one streaming service. However, with great enthusiasm come great expectations. Viewers increasingly demand an experience that, at the very least, matches what they receive from their television signal, and they'd prefer something better.
This is where Quality of Experience (QoE) comes in and why it has become the great battlefield upon which the war for viewers will be fought. QoE itself is a combination of three primary factors: how long it takes for a video to start (startup time); how good the picture looks (picture quality); and how much the stream maddeningly pauses to catch up (buffering).
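The three factors above can be combined into a single session score. The sketch below is purely illustrative: the weights, penalty thresholds, and the use of average bitrate as a stand-in for picture quality are assumptions, not an industry standard.

```python
from dataclasses import dataclass

@dataclass
class SessionMetrics:
    startup_time_s: float    # time from pressing play to first frame
    rebuffer_ratio: float    # fraction of playback time spent buffering (0..1)
    avg_bitrate_kbps: float  # rough proxy for picture quality

def qoe_score(m: SessionMetrics) -> float:
    """Fold the three QoE factors into a 0-100 score.
    Thresholds (5 s startup, 5% rebuffering, 4 Mbps) are illustrative."""
    startup_credit = (1.0 - min(m.startup_time_s / 5.0, 1.0)) * 30
    buffering_credit = (1.0 - min(m.rebuffer_ratio / 0.05, 1.0)) * 40
    quality_credit = min(m.avg_bitrate_kbps / 4000.0, 1.0) * 30
    return round(startup_credit + buffering_credit + quality_credit, 1)
```

A session that starts in one second, never rebuffers, and sustains a high bitrate scores near 100; a slow-starting, stuttering session scores near zero.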
In order to deliver world-class QoE, streaming services will need to ensure they have:
- Predictive signals, from Real User Measurements (RUM)
- A multi- or hybrid-CDN infrastructure
- Algorithm-driven intelligence that stitches the two together and delivers a smooth and consistent consumer experience
Quality of Experience Is More Than an On/Off Switch
In the early days of online video, the primary measurement of success was simple: does the video play? For services lucky, or good, enough to be able to answer that question in the affirmative, there was an opportunity to leap ahead. MLB Advanced Media, for instance, has successfully delivered a subscription-supported video service since 2002. Getting over that initial hump provided significant separation from the pack.
As time went by, most media outlets realized the need to get into the streaming game, albeit with an initial sense that this was more of a hobby than anything else. The $400B TV market approached streaming video as a technology unlikely to disrupt its high-end offering with grainy pictures on a laptop screen. Like the newspaper industry before it, the TV business was about to find its assumptions disastrously wrong.
Well-documented failures in the provision of streaming have amplified viewers' demands: underwhelming Super Bowl broadcasts, unsatisfactory Academy Awards celebrations, and outages for mainstream services routinely make the news. The question today is not how to respond to diminished QoE, but how to prevent issues before they become crises.
Hybrid CDN: The New Normal of Streaming Video Networking
To say that the Internet is chaotic is something of a cliché. Suffice it to say, however, that traffic between a streaming video service and its viewers will cross several independent networks, stitched together through connections that are often unevenly designed and never fully centrally managed.
Consider that cable television runs through a fully-controlled network: it's as though all the content is delivered on box cars attached to a train that runs the whole route on company-owned tracks. By contrast, video distributed over the Internet is more like cargo sent by a truck that crosses half a dozen different countries, many of which have only barely functioning diplomatic relations. The Internet was designed as a 'best effort' network, one where outages are expected to be regular events, and its whole design accommodates that fact.
From the beginning, knowing for sure that data could make it, predictably and consistently, from origin to consumer was a tricky proposition. And as if there weren’t enough to worry about, because the web doesn’t generally take geography into account, there was the matter of sheer distance. Data may travel quickly across the web, but it is subject to the same laws of physics as anything else, meaning that more distance means more time to deliver.
For all these reasons, Content Distribution Networks (CDNs) came into existence. In 1998, Akamai, still the largest CDN today, was formed. Its premise was simple: it would replicate all the data on a website, or web-connected service, onto servers around the world, and then deliver content to end users from whichever server was the nearest and most likely to get it there quickest. Much as a network of warehouses makes it quicker for large supermarket chains to ensure each individual store is fully stocked, the CDN promise was to have content waiting in a convenient spot for quick delivery.
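The "nearest and most likely to get it there quickest" decision can be as simple as picking the edge server with the lowest measured round-trip time. The server names and RTT figures below are hypothetical; real CDNs weigh many more signals (load, cache state, peering), but the core idea looks like this:

```python
# Hypothetical round-trip times (ms) measured from one client
# to three candidate edge servers.
EDGE_RTT_MS = {
    "edge-nyc": 12.0,
    "edge-lon": 85.0,
    "edge-tyo": 190.0,
}

def pick_edge(rtt_by_edge: dict[str, float]) -> str:
    """Return the edge most likely to deliver content quickest:
    in this simplified sketch, the one with the lowest measured RTT."""
    return min(rtt_by_edge, key=rtt_by_edge.get)
```

For a client near New York, the function would route to the local edge rather than sending the request across an ocean.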
Unsurprisingly, CDNs grew quickly and delivered great value for a while – but were so successful that they became a part of the problem. Their users ran into two fundamental problems:
- Because they pay their CDNs for all the data that crosses their networks, streaming services cannot achieve economies of scale: each incremental byte of data passing over the CDN represents an incremental cost.
- The CDNs grew to such a size that they became, rather than a solution for congestion, a contributing cause. When one CDN is attempting to deliver the data of hundreds to thousands of services, it can itself become congested.
In order to offset this challenge, the leading streaming services opted to enter into contracts with multiple CDNs. This allowed them to arbitrage costs, paying fees appropriate to different geographies and delivery patterns, rather than a single, centrally-negotiated tariff. It also allowed them to split their traffic logically based on geographical coverage, selecting the right provider for the right set of users.
While these were both benefits, they did not solve the other problem: an inability to access economies of scale. For this reason, among others, many services that were starting to see significant traffic opted to build out their own internal CDN infrastructures. Perhaps the most well-known of these is Netflix, which famously places its own hardware within the physical locations of large Internet Service Providers (ISPs) to ensure that its traffic has a dedicated path from origin to consumer. This is a trend that is experiencing rapid acceleration, as companies see the benefits of having full control over their own network, and can finally watch the cost per byte fall as their own internal CDNs deliver more traffic at a fixed cost.
Outside of the very largest organizations, very few companies can invest adequately to fully serve their consumers through a self-administered CDN. As such, the modern infrastructure combines the best of all possible worlds: an internal CDN where possible, handing off to a range of CDN providers to ensure efficient delivery across all geographies.
Faster Than the Eye Can See
Moving to a hybrid-CDN infrastructure has many benefits, but delivers a new challenge: how to direct traffic efficiently across the different CDN options, so that customers receive world-class QoE, at an optimally efficient cost to the service.
Conditions across the Internet change quicker than the eye has any way of seeing. So, while a base set of conditions can be defined for directing traffic across the best CDN routing, these will not take into account real-time changes in congestion and outages. And even the fastest system administrator cannot update routing rules as quickly as conditions change.
A range of load balancing services have burst onto the scene, seeking to solve this problem. Most of them focus primarily on the server side: seeing which machine (or sub-network of machines) is the least busy and routing traffic through it. What they tend not to do, as a result, is make decisions based on the actual experience being delivered to the end user. While this is good for cost management at the operational end, it is an inadequate solution for maintaining superior QoE. Degraded QoE immediately impacts consumers, leading to a reduction in customer preference and, ultimately, to a loss of subscribers.
Returning to the discussion above, what streaming services need is a system that can provide early warnings of potential QoE impacts, and then re-direct traffic according to sophisticated algorithms that balance cost and QoE. In other words, as congestion starts to threaten QoE across one path, the system automatically resets the routing tables and re-directs traffic across smoother routes, so long as the economic benefit is clear. To be clear, this is not a case of delivering wonderful experiences whatever the cost, but rather ensuring that consumers get the best experience possible within the economic model of the streaming video service.
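One way to express "the best experience possible within the economic model" is a floor-then-cost rule: among the paths predicted to clear a QoE threshold, pick the cheapest; if none clears it, fall back to the best available QoE. The CDN names, scores, and costs below are hypothetical, and real decision engines would refresh these predictions continuously from RUM data.

```python
from dataclasses import dataclass

@dataclass
class CdnOption:
    name: str
    predicted_qoe: float  # 0-100, e.g. derived from community RUM measurements
    cost_per_gb: float    # negotiated delivery cost, USD

def choose_cdn(options: list[CdnOption], qoe_floor: float = 80.0) -> CdnOption:
    """Among CDNs predicted to clear the QoE floor, pick the cheapest;
    if none clears it, fall back to the best available QoE."""
    viable = [o for o in options if o.predicted_qoe >= qoe_floor]
    if viable:
        return min(viable, key=lambda o: o.cost_per_gb)
    return max(options, key=lambda o: o.predicted_qoe)
```

With this rule, an internal CDN that clears the floor wins on cost; when congestion drags its predicted QoE below the floor, traffic shifts to a third-party provider automatically, without a human updating routing tables.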
Bringing it All Together
The ideal model, then, for streaming video services, is a hybrid CDN infrastructure, connected to an intelligent algorithm that directs traffic in real-time, based on predictive signals from a broad set of real-time community RUM data.
In this scenario, services are able to manage their costs, while maintaining the highest economically-viable QoE, with routing decisions based on real data, and real science. Combining the baseline data of thousands of community members (properly anonymized and aggregated), and spanning all kinds of web content, ensures that each and every decision is based on the broadest, deepest insight available.
Based on Cedexis data, the potential for QoE improvement using such a solution is significant. In a recent review of one streaming service’s data, the following improvements were observed:
- Eighteen percent reduction in startup time
- Sixty-six percent reduction in video startup failures
- Fifty-six percent reduction in buffering
Moreover, companies using this kind of configuration have significantly greater control over their networks, and flexibility in dealing with partners and service providers. This results in a superior opportunity to build out great services in an economically sound manner.
Pass the RUM
Global audiences are demanding better and better QoE. They reward those who provide it, and punish those who cannot by simply departing and spending their time and money elsewhere.
The chaotic nature of the Internet makes it difficult to deliver a consistent experience. But, by federating traffic over multiple CDNs (including those maintained in-house), and algorithmically routing traffic based on the actual customer experience, leading streaming services are meeting and defeating the challenge.
Those with the best outcomes are using community RUM data, from thousands of participating companies, to confidently generate predictive signals that solve for problems before they happen.
Steve Lyons is Video Product Manager at Cedexis.