Podcast Episode 3: Latest HBO Max Numbers; Debating the Growth of vMVPDs; Fire TV and Google TV Data from CES


Podcast Episode 3 is live! This week Mark Donnigan and I discuss: The Latest HBO Max Numbers; Debating the Growth of vMVPDs; Fire TV and Google TV Data from CES. (www.danrayburnpodcast.com)

This week we break down the latest HBO Max/HBO subscriber numbers just released by AT&T and question how the industry should define subscriber growth. While 73.8M sounds like a big number, the service had 61M at the end of 2020 and won’t have day-and-date releases this year. We also debate some of the latest “estimates” released on vMVPD subscriber growth and highlight the challenges vMVPDs face with packages now averaging $65 a month. We also touch on some of the latest numbers given out at the CES show around Fire TV device sales and Google TV adoption.

Companies and services mentioned: HBO Max, Fire TV, Peacock TV, Sling TV, Hulu, Disney, Netflix, Discovery, WarnerMedia, Apple, Roku, Samsung, TCL, Google TV, YouTube TV, fuboTV, Philo, DIRECTV STREAM, PlayStation Vue, ESPN+, Comcast, AT&T, Fastly, Cloudflare, Brightcove, Limelight Networks, Haivision, Kaltura, Qumu, Vimeo, Twitter, Akamai.

Executives mentioned: Jay Utz, Noah Levine, Robert Coon, Guy Paskar, David Belson.

Sponsored by

Over 58% of OTT Services Surveyed Have “Limited” to “No” Insight Into The Main Reasons for Churn

Across the streaming media industry, we read and talk a lot about the video stack including encoding, metadata, APIs, delivery, playback, and QoE. While all of these are important elements in building out video services at scale, the end result of these technical elements must come down to one simple business outcome – to reduce churn and increase retention. Nothing is more important than having the right content strategy and proper data collection methodology to know why subscribers sign up and more importantly, what streaming services can do to keep users on their platform.

With so many video services in the market to choose from and most services offering only month-to-month billing, with no cancellation fees, it’s easy for consumers to jump among services. This is one of the main reasons why almost no OTT service, even those that are public companies, breaks out any churn numbers in their earnings reports. Some streaming services don’t even disclose how many of their users are paying subscribers versus customers on a free trial. To help answer a lot of these unknown questions around churn and retention of OTT services, I held conversations with and collected data from just over 100 streaming media services globally.

Through a new consulting relationship with Salesforce, I also spent a lot of time talking to and looking at some of the data (anonymized) from Salesforce’s Subscription Lifecycle Management platform, built specifically for publishers, broadcasters and OTT service providers. The results of the findings, which I plan to release shortly for free to the industry, are pretty interesting and include real numbers directly from OTT companies. While I’ve seen other reports released with estimates on churn and retention percentages across the streaming OTT industry, I can tell you from speaking to a lot of publishers directly, including some of the largest OTT platforms in the world, that the numbers released to date are not even close to being accurate.

One of the main reasons for the inaccurate churn data we see being released by third parties is that it’s based on the wrong metrics, like tracking app downloads, which can’t distinguish a new user from a current paying user or a former user returning. Another problem is that many don’t define “churn” using an agreed-upon definition. For instance, some third-party companies define churn as a user whose account was put on hold due to an expired credit card, while others don’t include those users in their numbers at all, since I’m told by streaming services that most update their credit card and stay on the platform. There is no industry standard for churn reporting methodology across services, which makes comparing these numbers from one service to another difficult, especially when it comes to how some video services are bundled with other telecom or mobile services.

A few years back, Hulu stopped reporting the number of subscribers they had to Hulu + Live TV because the company said users were turning the service on and off so many times throughout the year that the numbers given out each quarter wouldn’t give an accurate representation of the business. Since Disney acquired the majority stake in Hulu, Disney does break out the total number of subscribers each quarter, but not with any churn or retention data included. Some OTT services break out the number of hours per month each user streams video, while others don’t share numbers at all. So even trying to measure and compare something like “engagement” as a metric, across all platforms, is very difficult.

There are three main reasons why streaming services don’t share their churn and retention numbers publicly. The primary reason is that OTT services view their churn and retention data as competitive intelligence about their business that they don’t want other services to know. A second reason is that some streaming services don’t know the proper methodology to define churn and can’t compare their formula to any industry standard. If a user cancels a service but comes back three months later, how do you define that behavior? One user churned twice, but it’s the same account. Some services count churn by the month, quarter or calendar year, while others only count churn based on the lifetime value of a customer, over a set period of time that they define.
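To make the definitional gap concrete, here’s a minimal Python sketch using hypothetical subscription records. The account IDs, dates, function names and both definitions are illustrative assumptions, not an industry standard; the point is that the same raw data yields different churn numbers depending on which definition a service picks.

```python
from datetime import date

# Hypothetical subscription records: (account_id, start, end). end=None means still active.
# A cancel-and-return account ("a1") is exactly the ambiguous case described above.
subscriptions = [
    ("a1", date(2021, 1, 1), date(2021, 3, 15)),  # cancelled in March...
    ("a1", date(2021, 6, 1), None),               # ...came back in June
    ("a2", date(2021, 1, 1), None),               # never cancelled
    ("a3", date(2021, 2, 1), date(2021, 3, 20)),  # cancelled in March, never returned
]

def churn_events(subs, year, month):
    """Definition A: every cancellation in the month counts, even for repeat accounts."""
    return sum(1 for _, _, end in subs
               if end and end.year == year and end.month == month)

def churned_accounts(subs, year, month):
    """Definition B: an account only counts if it cancelled that month and never came back."""
    cancelled = {acct for acct, _, end in subs
                 if end and end.year == year and end.month == month}
    returned = {acct for acct, start, _ in subs
                if (start.year, start.month) > (year, month)}
    return cancelled - returned

print(churn_events(subscriptions, 2021, 3))           # -> 2 cancellations in March
print(len(churned_accounts(subscriptions, 2021, 3)))  # -> 1 account truly lost
```

Under Definition A, March 2021 shows two churn events; under Definition B, only one account was actually lost, since "a1" resubscribed. Two vendors reporting on the same service could honestly publish either number.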

The last reason why services don’t share any churn data is that many of them struggle to measure where churn takes place across their platform, and some can’t even detect the root cause of churn. Did a user do ten searches in a row for content they couldn’t find and then cancel? Or did they try to play a few different videos that had playback issues and cancel as a result of a technical problem? I won’t mention services by name, but many I have spoken to privately discuss how little they know about the reasons why consumers churn off their platforms.

Within the streaming media industry, many argue that poor video quality and slow startup times are the main reason users churn, when in fact, based on the data I collected, poor video quality isn’t even in the top three results. Of the 101 OTT services I surveyed, the top three responses were lack of content choices; poor customer service; and pricing, bundling and yearly service discounts. And of those I surveyed, 58.5% said the tool(s) they use provide “limited” or “no” insight into the main reasons for churn, let alone where the consumer churns out of the platform.

The top cause of churn wasn’t platform support, startup times, or 4K options. It all came down to content, discovery and poor customer service as the top three responses from streaming services. Each platform’s content windowing strategy, including how its content is released, the schedule, and the ability to binge watch, was cited by the platforms as the biggest driver. This is why companies are spending so much money on content and continue to spend more each year. No one should be surprised by this, since we know that content is king and is the number one driver of new subscriber sign-ups and, with the right content, keeps users from canceling. But going forward, simply adding new content won’t be a fix by itself. That tactic will need to be used in combination with a host of other improvements that streaming services need to make.

Another problem is that too many vendors are trying to sell churn and retention platforms to OTT providers by looking at churn across only a few data points like QoS, personalization or recommendation engines. But it’s not about one or two data points. It’s about having a complete subscriber lifecycle management platform that properly measures not only the reason for churn but also users’ engagement. They have to go hand-in-hand, like what Salesforce offers with not only managing subscribers but also looking at how engaged they are. And finally, to be successful, these platforms have to be proactive, not reactive. Far too many vendor solutions I have looked at don’t allow customers to make accurate behavior predictions, so they can reduce churn before it happens. Personalization is NOT a churn and retention tool.

I’ll be releasing all the results from my conversations and survey about streaming video churn and retention shortly (anonymized), and the data can be used by anyone as they like. The project was not sponsored by any vendor or company, but rather something I undertook to give the industry some real data, since most of the info given out on churn is not accurate. I hope the data sparks a bigger public conversation within the industry, and it’s a topic I’ll be covering a lot on my new podcast and at the NAB Show Streaming Summit.

Podcast Episode 2: Poor Video Advertising Experiences; Growth of AVOD Services; Potential Content Roll-up in 2022 (Starz, AMC, ViacomCBS)

Podcast Episode 2 is live! This week Mark Donnigan and I discuss: Poor Video Advertising Experiences; Growth of AVOD Services; Potential Content Roll-up in 2022 with Starz, AMC, ViacomCBS. (www.danrayburnpodcast.com)

We cover how the industry gushes about video advertising growth, without really addressing the problems around personalization, measurement, formats and low CPM rates. We also discuss the growth of AVOD services and question how many can exist when they are offering essentially the same thing — a bundled offering of free networks with a lot of old movies and TV shows and syndicated programming. We also debate which content companies might get acquired in 2022 including Starz, AMC and ViacomCBS.

Companies and vendors mentioned: Tubi, Pluto TV, Starz, AMC Networks, ViacomCBS, TikTok, Noggin, Comcast, Sony Pictures, MLB TV, Amazon IMDb TV, Roku, Sinclair Broadcast Group, Samsung TV, LG, Amazon, MGM.

Executives mentioned: Darren Lepke, Stephen Condon, Yueshi Shen, Andy Beach.

Video Advertising Experiences Got More Annoying and More Frequent in 2021

Another year down, another year of many poor video advertising experiences for consumers. As an example, CBS Sports (and many others) auto-play two videos, on the same page, AT THE SAME TIME! How is a viewer supposed to watch and listen to two videos concurrently? Common sense says that’s a bad idea.

We’ve also seen way more pop-up ads in 2021 when it comes to overlays. And many times, the overlay ad on the website covers up part of the pre-roll ad in the video window. Dueling advertisements?! And yet for the most part, the streaming media industry doesn’t discuss these ad problems. Instead, many simply gush about all the growth in AVOD, without really addressing what matters most – personalization, measurement, formats, lack of CPM growth.

In 2007 I wrote a post entitled “The Five Biggest Technical Issues Hurting The Growth Of Online Video Advertising” and sadly, 13 years later, the industry is still struggling with these issues. In 2009, I wrote a post entitled “Is There A Shortage Of Online Video Advertising Inventory?”, highlighting the fact that viewers were getting the same ad delivered to them over and over again. And yet 11 years later, for many of the games I watched this year on MLB.TV I got the same ad inserted into the stream more than 25x during a single game. And that’s when the ad worked, because almost 30% of the time, the ads didn’t trigger properly at all. In 2020, I wrote how I watched 43 videos on YouTube across 3 different channels and got a Microsoft Teams pre-roll commercial 28x, and a State Farm commercial 15x. Video advertising issues from as long as ten years ago are still many of the same issues we are struggling with today.

Ad platforms, ad exchanges, content aggregators, publishers etc. almost never share any actual data around what CPMs are and where they are trending, based on the type of ad, type of content and platform. They also don’t discuss any real details around ad personalization, percentage of ads skipped, best adoption of ad formats on specific platforms, ads that don’t trigger properly, ads formatted wrong based on device/platform or all of the other technical issues we have as an industry. Instead, it seems people just want to talk about the “growth” in AVOD, without even sharing many metrics around how growth is defined.

As an industry, we need real data shared so we can discuss video advertising problems, work towards solutions, set proper expectations and have a way to measure success, based on actual methodology. Without the industry coming together to do that in 2022, the new year is going to continue to bring a lot of poor video advertising experiences for us all.

My New Podcast Is Live! – Episode 1: Metrics, Churn and Retention; Unrealistic Vendor Valuations

I’m excited to announce the launch of my new weekly podcast. Curating the streaming media industry news of the week that matters most, in 30 minutes. Unvarnished, unscripted and providing you with the data and analysis you need, without any hype. With co-host Mark Donnigan. www.danrayburnpodcast.com

This week we discuss the problem with Nielsen’s reporting and how the industry should define churn and retention; the importance of HBO Max and Disney defaulting to a single video stack; how some vendors are overestimating their valuations; and why Roku and Google’s dispute isn’t over.

Companies and vendors mentioned: Roku, Google, Disney Streaming Services, Hulu, HBO Max, NFL, ESPN, Amazon, YouTube, WarnerMedia, Discovery, Nielsen, SiriusXM, Vudu, Hive, Firstlight Media, Vimeo, Mux, Hopin, Brightcove, Fastly, Panopto.

Executives mentioned: Joseph Inzerillo, Jonathan Stock, Rick McConnell, Mike Green, Nathan Veer, Doug Castoldi.

Note: The podcast has been submitted to all the platforms, so it might take some time to show up. There are a few audio blips in Episode 1 that we’ll have worked out for the next show.

With Thursday Night Football Broadcast Exclusively on Amazon Next Season, What’s The Impact to ISPs?

With Thursday Night Football broadcast exclusively on Amazon next season, one has to wonder what the impact could be on ISP networks. I’ve heard Amazon wants to stream in 4K, but will Amazon’s CloudFront CDN, along with the many third-party CDNs they plan to use, be able to handle 4K video at scale? And what volume of traffic is expected? Will the stream max out at 1080p instead?

If I understand the deal correctly, Amazon’s exclusivity doesn’t extend to local markets, so viewers will still be able to get the game OTA in local markets. Some will still opt for that route over streaming, which will impact the volume of streams, but to what degree is unknown. I know some companies are already working with Amazon on traffic studies and estimates of what it’s going to take from a network standpoint, but it’s something the broader industry should start looking at and discussing, as we’ve never seen this type of distribution deal for the NFL’s content.

Some may point to the Super Bowl as a guide for what we can expect, but that’s not a good comparison since the majority of people watch the game on TV. The 2021 Super Bowl peaked at 5.7 million “viewers per minute,” which puts the actual count of concurrent streams lower than that. It’s going to be interesting to watch how Amazon approaches the season from a capacity standpoint across CDNs and ISPs, and while I can’t share some of the numbers they are already discussing with CDNs, they are quite large.

With API Growth, Customers Demanding CDNs Offer API Acceleration at the Edge

Once considered just part of the “nuts and bolts” of application infrastructure, APIs have moved swiftly into a leading role in driving digital experiences. For the CDNs that handle this API traffic, this is creating high expectations for performance and reliability, as well as expanding security challenges. The worldwide focus on digital transformation is driving increased adoption of microservices architectures, and APIs have quickly emerged as a standard way of building and connecting modern applications and enabling digital experiences where connection speeds are measured in milliseconds.

We use these services—and the APIs that enable them—every day, within all applications. Things like interactive apps for weather, news and social media; transactional apps like those for commerce and banking; location services; online gaming; videoconferencing; chat bots…all rely on APIs. With new microservices coming online daily, expect the proliferation of APIs to continue. Indeed, recent surveys revealed that 77% of organizations are developing and consuming APIs and 85% indicated that APIs are critical to their digital transformation initiatives.

API traffic has some specific characteristics that can make it tricky to manage. Transactions are small and highly dynamic, yet they can also be quite compute-intensive. They are sensitive to latency, often measured in milliseconds, and prone to spikes. These realities, together with the proliferation of APIs, create some significant challenges for delivering content and services. APIs also represent the most common attack vector for cyber criminals. It has been reported that 90% of web application attacks target APIs, yet API endpoints are often left unprotected due to the sheer number of APIs and the limited resources available to police them. The challenge of policy enforcement is especially complex in organizations with several autonomous development teams building and deploying across hybrid cloud environments.

Organizations expect API response times in the tens of milliseconds, particularly for public-facing APIs that are critical to user experience. This can be difficult to manage given the highly dynamic nature of API traffic, which is often compute-intensive and difficult to cache. Many APIs are in the critical path for applications and if they are not delivered, it can render the application unusable. That explains why 77% of respondents in a recent survey pointed to API availability as their top deployment concern. Ensuring that availability can be challenging because API traffic volumes tend to come in waves or spike quickly when an end-user request triggers a series of requests to third-party services. Large online events can also drive up request volumes, creating even greater availability challenges.
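One way to make those latency expectations operational is to track response-time percentiles against a budget. Here’s a minimal Python sketch assuming a list of per-request response times in milliseconds; the function names, the nearest-rank percentile method and the p99-against-50ms check are illustrative assumptions (the tens-of-milliseconds expectation comes from the discussion above, not from any specific monitoring product).

```python
import math

def percentile(samples_ms, p):
    """Nearest-rank percentile of response-time samples in milliseconds."""
    ordered = sorted(samples_ms)
    rank = max(1, math.ceil(p / 100 * len(ordered)))  # 1-indexed nearest rank
    return ordered[rank - 1]

def within_budget(samples_ms, budget_ms=50, p=99):
    """True if the p-th percentile response time fits the latency budget."""
    return percentile(samples_ms, p) <= budget_ms

# A spiky endpoint: median latency looks fine, but one slow request blows the tail.
samples = [12, 14, 15, 16, 18, 20, 22, 25, 30, 140]
print(percentile(samples, 50))  # -> 18
print(percentile(samples, 99))  # -> 140
print(within_budget(samples))   # -> False
```

The example illustrates why median numbers alone are misleading for APIs in the critical path: a p50 of 18ms looks healthy, yet the tail spike means some fraction of application loads will stall.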

Any or all of these issues can significantly impact applications. When public-facing APIs aren’t delivered with fast response times or have poor reliability, the user experience suffers. And if APIs are not secure, they represent serious cyberattack vulnerabilities. Each of these outcomes equals a poor user experience, leading to lost revenue and damage to brands. To minimize that risk, companies should start with the fundamental step of API discovery. After all, you can’t manage, secure and protect what you can’t see. With developers launching new APIs left and right, it is likely that there are many undiscovered API endpoints out there. So it’s critical to discover and protect unregistered APIs and identify errors and changes to existing APIs.

Content owners also need to think about where application functionality is taking place. While public clouds have emerged as the “go-to” for all kinds of application workloads, they do present some limitations when it comes to handling API transactions. One leading cloud provider achieves response times around 130ms (measured using Cedexis Radar – 50th percentile – Global community), yet many microservices require API response times of less than 50ms. Edge computing offers an attractive alternative to the cloud. Moving application functionality to the edge puts it closer to end users, minimizing latency and maximizing performance. Making the edge an extension of your API infrastructure can also help unify your security posture, improving operational efficiency. Load balancing traffic at the edge can improve availability while simplifying management. And moving computing to the edge can improve scalability, allowing customers to serve users globally with more network capacity. Additionally, executing code at the edge gives developers the ability to move business logic closer to users, accelerating time to value.
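The core mechanic behind serving API traffic from the edge can be sketched in a few lines: keep a short-lived copy of each response near the user so repeat requests never make the long round trip back to origin. This is a generic illustration, not any particular edge platform’s API; the class name, TTL value and endpoint path are all hypothetical.

```python
import time

class EdgeCache:
    """Minimal TTL cache standing in for a hypothetical edge node.
    Even 'dynamic' API responses are often cacheable for a few seconds,
    which absorbs request spikes and cuts origin round trips."""

    def __init__(self, ttl_seconds=1.0, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock   # injectable clock, handy for testing expiry
        self.store = {}      # key -> (expires_at, response)

    def get(self, key, fetch_origin):
        now = self.clock()
        hit = self.store.get(key)
        if hit and hit[0] > now:
            return hit[1]                 # served at the edge, no origin trip
        response = fetch_origin(key)      # cache miss or expired: go to origin
        self.store[key] = (now + self.ttl, response)
        return response

# Count how many requests actually reach the (simulated) origin.
calls = []
def origin(key):
    calls.append(key)
    return {"key": key, "data": "payload"}

cache = EdgeCache(ttl_seconds=60)
cache.get("/v1/scores", origin)
cache.get("/v1/scores", origin)  # second request never leaves the edge
print(len(calls))                # -> 1
```

A real edge platform layers load balancing, security policy and code execution on top of this idea, but the latency win comes from the same place: the second request is answered without the 100ms-plus trip back to a centralized cloud region.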

Of course, as with cloud providers and CDNs, not all edge compute platforms are created equal. It’s important to look at how many points of presence there are, how globally distributed they are and their proximity to users. Does the network allow you to easily deploy microservices in multiple edge locations? These factors have a direct impact on latency. You also want to make sure the network is robust enough to handle the spikes that are common with APIs. Finally, is the network secure enough to mitigate the risk posed by bad actors targeting API endpoints? The API explosion is far from over. That reality presents a compelling case to view the edge as the logical extension of your organization’s API infrastructure, ensuring your users get the experience they expect, whenever and wherever they want it.