Leaked Pricing Details: Licensing Costs Per Channel For Live OTT Services

Recently, one of the major live OTT services shared with me their licensing costs broken out per channel. I agreed not to disclose which platform this comes from, and the pricing listed doesn’t mean this is what every live TV company pays, or that the platform carries all of these specific channels. There are variations based on length of contract and number of subscribers. Most of the pricing listed is for two-year deals, some are for five years, and for this specific unnamed platform, ESPN pricing is for three years. Also, a few of the content providers require an ad split, while others don’t. The costs are per subscriber, per month.

The channel pricing listed gives good insight into one of the major costs of running a live OTT platform. When you add the distribution costs and all the technical pieces of the workflow on top of the content costs, it’s not possible to run a profitable live streaming TV business. Even at scale, I don’t know of any live OTT service that isn’t losing money, which is why all of the major services are owned by MVPDs, ISPs, or others that can afford to lose money on the service, since their offering is part of a bigger product ecosystem.

Some of the live TV platforms have told me they think that, as their services grow, they can push back on TV network licensing costs or drop unwanted networks. But no MVPD to date has been able to do this, so I don’t see the streaming platforms having any better leverage. Rising content licensing costs are the main reason nearly all of the live streaming TV services raised the price of their packages by $5 last year. Like it or not, live TV streaming is only going to get more expensive for consumers.

Sponsored by

The Advantages of Handling Manifest Manipulation at the Network Edge to Personalize Video Experiences

The ability to personalize viewing experiences at a granular level is one area in which online video separates itself from traditional TV. Consumer expectations are rising, be it for the quality and value of the service or the relevance of the ads they’re seeing. Meanwhile, content providers need to manage regional rights restrictions, secure the content and also monetize it with appropriate ads.

As we all know, content providers don’t always apply the most optimal methods and tools to meet these expectations and requirements. Efforts to personalize content today are often inefficient, struggle to scale, and add unnecessary rigidity and cost to video workflows and storage, all while limiting what can be personalized at an individual level.

Alternatively, many content providers are using manifest manipulation to personalize the video experience for each viewer. When a stream is requested, the video and audio segments are accompanied by a manifest file that acts as a playlist and determines the playback order. The ability to change or customize the manifest dynamically, at a per-user level, opens up numerous opportunities to tailor the viewing experience. In the case of live video, a new manifest is delivered with each segment of video requested, allowing adjustments to be applied dynamically as viewing conditions change. Executing this function at the edge allows content providers to offload complexity from early in the content preparation process and dynamically apply advanced logic for an individual viewer just prior to delivery, while enhancing scale to reach larger audiences.
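To make the mechanism concrete, here’s a minimal sketch of the kind of per-request rewrite an edge function might perform on an HLS master playlist. The playlist contents and the mobile bandwidth cap are illustrative assumptions, not any vendor’s actual implementation:

```python
# Hypothetical example: trim an HLS master playlist's bitrate ladder
# per device at the edge. Playlist and cap values are made up.

MASTER_PLAYLIST = """\
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2400000,RESOLUTION=1280x720
720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
1080p.m3u8
"""

def personalize_master(playlist: str, max_bandwidth: int) -> str:
    """Return a copy of the master playlist containing only the
    variants at or below the client's bandwidth cap."""
    out, lines = [], playlist.splitlines()
    i = 0
    while i < len(lines):
        line = lines[i]
        if line.startswith("#EXT-X-STREAM-INF:"):
            bw = int(line.split("BANDWIDTH=")[1].split(",")[0])
            if bw <= max_bandwidth:
                out.append(line)          # keep the variant tag...
                out.append(lines[i + 1])  # ...and its URI line
            i += 2
        else:
            out.append(line)
            i += 1
    return "\n".join(out) + "\n"

# A mobile client on a constrained network gets a trimmed ladder:
mobile = personalize_master(MASTER_PLAYLIST, max_bandwidth=3_000_000)
```

In practice the cap would come from request signals (device type, geography, measured throughput), and the same pattern extends to filtering codecs, audio tracks or subtitle renditions.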

Examples of functions that can be handled by manifest manipulation include:

  • Monetization: CDNs can work in tandem with ad decision systems to enable dynamic ad insertion on a per-user basis.
  • Personalized Streaming: Optimize video playback quality (bitrate selection) based on user, device type, network conditions or geography.
  • Content Security: Protect content against unauthorized viewing using scalable session-level encryption, access control and watermarking.
  • Content Localization: Apply regional variations including audio tracks, closed captioning and subtitles.
  • DVR and Clip Creation: Dynamically create highlight clips, DVR windows and time-shifted viewing.
  • Content Rights Compliance: Adhere to content rights restrictions by replacing programs in restricted regions.
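As a concrete illustration of the last item, program replacement can be as simple as rewriting segment URIs in the media playlist for viewers inside a blackout region. The playlist, segment names and region codes below are illustrative assumptions, not a real provider’s workflow:

```python
# Hypothetical example: swap restricted segments for alternate content
# at the edge when the viewer sits in a blacked-out region.

MEDIA_PLAYLIST = """\
#EXTM3U
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
game_001.ts
#EXTINF:6.0,
game_002.ts
"""

BLACKOUT_REGIONS = {"US-NY"}  # assumed regions where the game may not air

def apply_blackout(playlist: str, region: str) -> str:
    """Point restricted viewers at alternate segments instead of the
    live game feed, leaving timing metadata untouched."""
    if region not in BLACKOUT_REGIONS:
        return playlist
    return "\n".join(
        line.replace("game_", "alternate_") if line.endswith(".ts") else line
        for line in playlist.splitlines()
    ) + "\n"

blacked_out = apply_blackout(MEDIA_PLAYLIST, "US-NY")
```

Because the rewrite happens per request, unrestricted viewers keep the original playlist while restricted ones never see the blacked-out segment URIs.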

Content providers face three main challenges when it comes to personalizing content for their viewers.

Challenge 1: The One-Size-Fits-All Approach to Content Preparation
Delivering personalized content to fragmented audiences is difficult and costly when video is prepared in a cookie-cutter fashion. This often entails creating large sets of manifest files with similar bitrate ladders across a wide range of supported formats, codecs and ad copy, with additional localized variants. This approach not only places unnecessary complexity on the content preparation phase, it also adds cost by expanding the content library’s storage footprint.

When the content is delivered to an end-user, this approach can fail to produce a truly personalized experience, as it’s not actually targeted at an individual. The number of permutations created in the content preparation phase is finite, so the content is pre-built before the end-user ever requests it and rarely feels relevant from the viewer’s perspective.

Challenge 2: Customizing Manifests at Origin
Content providers that host and originate their content in the cloud sometimes attempt to process manifests at origin. Doing so can be extremely compute-intensive and cause cloud costs to rise quickly. Creating personalized manifest files at origin is also inefficient: the manifest is first created, then passed to a CDN for delivery, but because each copy is unique to a viewer, it can’t be cached efficiently by the network. When a content provider is delivering live or on-demand content to large audiences, this compute-heavy method places additional strain on the origin from additional end-user requests, which ultimately drives additional cost. It can also add undue latency and impact delivery performance.

Challenge 3: Reliance on Clients
Those that rely heavily on their client implementation to personalize viewing are limited to the capabilities of their client stack, which are often inconsistent across platforms. Even though many clients can support a range of the desired personalization capabilities, such as localization and targeted ads, scale can be an issue. Client-side dynamic ad insertion implementations can struggle to handle programmatic decisioning quickly and at scale, and content security measures still rely on server-side logic to enforce policies. Meanwhile, managing implementations across a landscape of supported devices that require frequent updates creates a fragmented environment that’s hard to maintain. Depending on the end-user client also places an additional burden on a technology stack that is often heavy and reliant on third parties. Choosing a client-side strategy for personalization pushes added complexity onto an already brittle environment where errors can significantly impact playback.

Because of these challenges, many content owners are realizing the advantages of handling manifest manipulation at the network edge. Manipulating manifests at the edge of the internet enables personalization at greater scale than can be achieved with a client-side approach. A widely distributed network can reach large audiences with customized manifests across the landscape of devices and browsers, without concerns around player updates or heavy reliance on third-party software. For a diagram of how this works, check out the blog post Akamai did on this topic.

It can also be less costly than creating manifests at origin. Executing at a per-session level provides more flexibility than the one-size-fits-all approach by handling requests dynamically at the edge, rather than placing undue complexity and rigidity earlier in the content preparation phase. In addition, manifest manipulation offers a graceful way to handle certain issues that happen upstream, for example within the ad decisioning workflow. In a client-side approach, an error where the ad copy isn’t delivered in time could result in a blank screen, whereas a server-side implementation could replace the absent ad with other content and avoid any impact on the user experience.
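A rough sketch of that server-side fallback, assuming a hypothetical fetch_ad_segments call that stands in for the ad decision service (here simulated as timing out) and an assumed pre-packaged slate segment:

```python
# Hypothetical example: fill an ad break server-side, splicing in a
# house slate when the ad decision response doesn't arrive in time.

SLATE_SEGMENT = "slate_6s.ts"  # assumed pre-packaged filler, 6s long

def fetch_ad_segments(user_id):
    """Stand-in for the ad decision call; None simulates a missed deadline."""
    return None

def fill_ad_break(user_id, break_duration, seg_len=6.0):
    """Build the manifest lines for an ad break, falling back to slate
    when no personalized ad copy is available."""
    segments = fetch_ad_segments(user_id)
    if segments is None:
        # Server-side fallback: repeat the slate to cover the break,
        # so the viewer never sees a blank player.
        segments = [SLATE_SEGMENT] * int(break_duration / seg_len)
    lines = ["#EXT-X-DISCONTINUITY"]  # mark the splice into ad content
    for seg in segments:
        lines += [f"#EXTINF:{seg_len:.1f},", seg]
    lines.append("#EXT-X-DISCONTINUITY")  # mark the splice back to content
    return lines

break_lines = fill_ad_break("viewer-123", break_duration=30.0)
```

The viewer’s player just sees a normal playlist; the substitution decision never leaves the server, which is exactly what a client-side implementation can’t guarantee.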

Handling manifest manipulation at the edge offers many advantages around scale, flexibility and intelligence, and it seems to be the route content owners are moving toward industry-wide. I’d love to hear from others in the comment section on how you are addressing this problem, or helping clients address it.

Video: Delivering Incredible End-User Experiences Using Containerized Microservices Closer To Endpoints

At my recent EdgeNext Summit event, one of the key talks covered how to let developers deliver interactive experiences to end users while ensuring low application response times, leading to higher engagement and happier users. Haseeb Budhani, CEO of Rafay Systems, did a presentation on how the company’s platform enables developers to deliver these highly engaging user experiences by running containerized microservices closer to endpoints. Video link: https://www.youtube.com/watch?v=iNyazklVTQw&t=17s

Right Now, It’s Not About VR and Autonomous Cars: See Which Edge Applications and Edge Platforms Are Ready To Go Today

With all the hype around edge computing and the edge cloud, everyone is now claiming to have products focused on the “Edge”. This branding and edge-washing has created a lot of confusion around “What is the Edge?”, “Where is the Edge?” and “Who owns the Edge?”

Furthermore, the futuristic view around autonomous cars, virtual reality and remote surgery is overused: is this the best we can do for edge use cases? At my recent EdgeNext Summit event, Yves Boudreau, VP of Partnerships and Ecosystem Strategy for Edge Gravity by Ericsson, presented what they have learned over the past 12 months about real edge computing and which applications are likely to exemplify near-term use of the edge. Video link: https://www.youtube.com/watch?v=Vxm9mpltXv8

Understanding Packet Loss and Its Impact On Mobile Content Performance

The transient nature and pervasiveness of packet loss, jitter and other performance problems occurring over a wireless “last mile” are often poorly understood, and hardly ever quantified. At my recent EdgeNext Summit event, Subbu Varadarajan from Zycada illustrated the impact of packet loss on performance over a wireless connection. Based on the analysis of 100+ billion transactions, he demonstrated the scope of packet loss, explained best practices to measure its impact, and showed a live demo of Zycada’s packet loss mitigation over the wireless last mile.

https://zycada.wistia.com/medias/fdrrrxbagn

The “Fortnite Effect” On Networks and How ISPs Are Working To Improve the eSports Experience

Online multiplayer gaming is no longer the preserve of a small number of avid fanatics. It’s gone mainstream and is attracting a mass audience, as manifested by the “Fortnite effect”. At my EdgeNext Summit in October, Haste, a network service that improves network performance and user experience for gaming, discussed how Fortnite and other online multiplayer gaming platforms have become leading edge applications. In this video from the show, Adam Toll, Founder and Board Director at Haste, presents how Haste’s technology, Ericsson’s Edge Gravity platform, and ISPs are working together to deliver a superior gaming experience to players.

Edge Computing Helps Scale Low-Latency Live Streaming, But Challenges Remain

At my recent EdgeNext Summit, there was a lot of discussion about how edge computing can be used to deliver next-generation OTT viewing experiences with low latency. Traditional HTTP streaming formats such as HLS and MPEG-DASH have gained wide acceptance, making it relatively easy to reach viewers on almost any viewing device at global scale. However, these formats have significant limitations when used in traditional live streaming workflows, including slow startup times, high stream latency, and latency drift over time. By utilizing edge compute resources, new methods and formats are emerging to address these limitations and improve the experience for viewers and content distributors.

Live streaming has some inherent challenges that impact infrastructure requirements differently from traditional VOD delivery. Demand for live events can grow quite quickly, requiring instant scalability. For example, the recent online trivia craze has seen demand for online streams grow from zero to over one million viewers in just a matter of minutes, creating massive challenges in ramping to meet this instant demand. Unlike traditional on-demand workflows where popular content can be pre-cached in multiple locations to reduce bottlenecks, live content must be ingested, packaged, and pushed to edge locations as it is created. And with growing viewer frustration around poor experiences with live events, using caching and buffering of live content to gain scalability and ensure reliable playback becomes a greater challenge.

Traditional cloud service providers offer hyper-scale data centers with computing resources, but they are in relatively few locations and often not located in densely populated areas where viewers reside. Sending source video streams from these centralized cloud data centers to viewers in distant locations requires extensive middle-mile capacity and can create peering bottlenecks as viewership grows. To viewers, this often means slow or inconsistent startup of video streams, video quality degradation as more viewers watch the event, or even the inability to join popular live streams. These quality, reliability and scalability challenges keep consumers from viewing live streaming as a true replacement for cable TV.

Edge computing can help alleviate these bottlenecks by sending source streams to edge servers that perform stream splitting and replication. By locating these edge servers in more places around the globe, edge computing provides the ability to scale up to meet the demands of even the largest live events, avoiding traditional bottlenecks and reducing transit costs by serving streams from locations closer to viewers. And by placing this capacity in the metropolitan areas where viewers are located, it lowers latency and improves QoE by reducing the distance the packaged stream must travel, while helping to eliminate peering bottlenecks. The result is greater scalability to provide higher-quality viewing experiences.

One of the live streaming technologies that many think is ideally suited for edge computing is WebRTC. Source video feeds can be ingested at local ingest locations using traditional low-latency streaming formats such as RTMP. The source video can then be pushed to edge servers around the globe, where the RTMP feed is converted to WebRTC by edge compute instances running close to viewers. Unlike traditional HTTP live streaming formats, WebRTC uses UDP instead of TCP, so delivery isn’t held up by TCP retransmissions and congestion-control backoff, which helps sustain higher average bandwidth and picture quality for viewers.

WebRTC also promises to open up new interactive workflows by allowing all viewers to watch a live stream in sync with one another. With the growing popularity of live online gaming such as Fortnite, and the legalization of sports betting in the U.S., the ability for content distributors to deliver synchronized, interactive online live streaming will help drive increased consumption of live content, along with the need for better edge computing functionality and scale.

One of the first CDN companies to implement this approach of using edge compute for scalable live video streaming is Limelight Networks. Their recently announced partnership with Ericsson, which leverages Ericsson’s UDN Edge Cloud Platform to provide edge computing and delivery within carrier networks, should provide even greater scalability and capacity for live video streaming at the edge. This will become increasingly important as 5G begins to roll out, as I noted in my recent blog Here’s Why Today’s Video Infrastructure Is Not Ready For 5G, And How Edge Technologies Can Help.

All CDN vendors are talking about ultra-low latency video delivery, but not a single one will tell you it’s easy to do at scale. It takes a lot of resources to deploy and adds overhead cost to operating their service. Aside from the technical challenges in scaling, CDNs also have to target specific customers who see the value and are willing to pay more for the improved user experience. In a recent survey I did of over 100 broadcast and media customers, 80% of them said they wanted ultra-low latency functionality, but were not willing to pay more for it. Many expect the functionality to be part of a standard CDN delivery service. 

By utilizing edge computing capacity within carrier networks, as well as in globally distributed data centers, to scale and distribute live video streams, content distributors want to reduce costs while ensuring the highest-quality live viewing experiences on both mobile and fixed devices. Edge infrastructure will help make this a reality. It’s going to take time for the business model to be figured out, but it will happen. I’m excited to see what new applications will be enabled in the live video world from the edge and with low latency. They are coming.