Podcast Episode 4: Netflix’s Price Hike; New Churn and Retention Data; Latest Sports Licensing Deals (Sinclair, ViacomCBS)


Podcast Episode 4 is live! This week Mark Donnigan and I discuss: Netflix’s Price Hike; New Churn and Retention Data; Latest Sports Licensing Deals (Sinclair, ViacomCBS) https://www.danrayburnpodcast.com

We highlight industry challenges with churn and retention and some of the major problems with the methodology being used to compare services. We also discuss Netflix’s recent price increase and the potential impact it could have on their subscriber growth. We cover the recent sports licensing deals, including ViacomCBS securing the exclusive rights to stream English Premier League soccer, Sinclair announcing a new distribution rights agreement with the NBA, and rumors we’ve read of Apple talking to MLB. We also highlight new subscriber numbers from fuboTV, the growth of IMAX’s revenue in 2021 and our take on the attendance numbers from the CES Show.

Companies and services mentioned: Netflix, ViacomCBS, Paramount+, Discovery, Sinclair Broadcast Group, fuboTV, IMAX, Vimeo, Roku, NBA, Apple, MLB, V-Nova, Disney, Salesforce.

Podcast Episode 3: Latest HBO Max Numbers; Debating the Growth of vMVPDs; Fire TV and Google TV Data from CES


Podcast Episode 3 is live! This week Mark Donnigan and I discuss: The Latest HBO Max Numbers; Debating the Growth of vMVPDs; Fire TV and Google TV Data from CES. (www.danrayburnpodcast.com)

This week we break down the latest HBO Max/HBO subscriber numbers just released by AT&T and question how the industry should define subscriber growth. While 73.8M sounds like a big number, the service had 61M at the end of 2020 and won’t have day-and-date releases this year. We also debate some of the latest “estimates” released on vMVPD subscriber growth and highlight the challenges faced with vMVPD packages now averaging $65 a month. We also touch on some of the latest numbers given out at the CES show around Fire TV device sales and Google TV adoption.

Companies and services mentioned: HBO Max, Fire TV, Peacock TV, Sling TV, Hulu, Disney, Netflix, Discovery, WarnerMedia, Apple, Roku, Samsung, TCL, Google TV, YouTube TV, fuboTV, Philo, DIRECTV STREAM, PlayStation Vue, ESPN+, Comcast, AT&T, Fastly, Cloudflare, Brightcove, Limelight Networks, Haivision, Kaltura, Qumu, Vimeo, Twitter, Akamai.

Executives mentioned: Jay Utz, Noah Levine, Robert Coon, Guy Paskar, David Belson.

Over 58% of OTT Services Surveyed Have “Limited” to “No” Insight Into The Main Reasons for Churn

Across the streaming media industry, we read and talk a lot about the video stack including encoding, metadata, APIs, delivery, playback, and QoE. While all of these are important elements in building out video services at scale, the end result of these technical elements must come down to one simple business outcome – to reduce churn and increase retention. Nothing is more important than having the right content strategy and proper data collection methodology to know why subscribers sign up and more importantly, what streaming services can do to keep users on their platform.

With so many video services in the market to choose from and most services offering only month-to-month billing, with no cancellation fees, it’s easy for consumers to jump amongst services. This is one of the main reasons why almost no OTT service, even those that are public companies, break out any churn numbers in their earnings reports. Some streaming services don’t even disclose how many of their users are paying subscribers versus customers on a free trial. To help answer a lot of these unknown questions around churn and retention of OTT services, I held conversations with and collected data from just over 100 streaming media services globally.

Through a new consulting relationship with Salesforce, I also spent a lot of time talking to and looking at some of the data (anonymized) from Salesforce’s Subscription Lifecycle Management platform, built specifically for publishers, broadcasters and OTT service providers. The results of the findings, which I plan to release shortly for free to the industry, are pretty interesting and include real numbers directly from OTT companies. While I’ve seen other reports released with estimates on churn and retention percentages across the streaming OTT industry, I can tell you from speaking to a lot of publishers directly, including some of the largest OTT platforms in the world, that the numbers released to date are not even close to being accurate.

One of the main reasons for the inaccurate churn data we see being released by third parties is that it’s based on the wrong metrics, like tracking app downloads, which can’t distinguish between a new user, a current paying user, or a former user returning. Another problem is that many don’t define “churn” using an agreed-upon definition. For instance, some third-party companies define churn as a user whose account was put on hold due to an expired credit card, while others don’t include them in their numbers at all, since I’m told by streaming services that most update their credit card and stay on the platform. There is no industry standard for reporting churn methodology across services, which makes comparing these numbers from one service to another difficult, especially when it comes to how some video services are bundled with other telecom or mobile services.

A few years back, Hulu stopped reporting the number of subscribers they had to Hulu + Live TV because the company said users were turning the service on and off so many times throughout the year that the numbers given out each quarter wouldn’t give an accurate representation of the business. Since Disney acquired the majority stake in Hulu, Disney does break out the total number of subscribers each quarter, but not with any churn or retention data included. Some OTT services break out the number of hours per month each user streams video, while others don’t share numbers at all. So even trying to measure and compare something like “engagement” as a metric, across all platforms, is very difficult.

There are three main reasons why streaming services don’t share their churn and retention numbers publicly. The primary reason is that OTT services view their churn and retention data as competitive intelligence about their business that they don’t want other services to know. A second reason is that some streaming services don’t know the proper methodology to define churn and can’t compare their formula to any industry standard. If a user cancels a service but comes back three months later, how do you define that behavior? One user churned twice, but it’s the same account. Some services count churn based on the month, quarter or calendar year, while others only count churn based on the lifetime value of a customer, over a set period of time that they define.
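
To make the definitional problem concrete, here is a minimal sketch in TypeScript, using hypothetical field and function names, of how the same cancellation history yields different churn numbers depending on whether you count cancellation events or unique accounts, and on the window you choose.

```typescript
// Hypothetical subscription records; field and function names are
// illustrative only, not any service's actual schema.
interface Cancellation {
  accountId: string;
  canceledAt: Date;
}

// Definition A: every cancellation event inside the window counts as churn.
function churnEvents(cancellations: Cancellation[], start: Date, end: Date): number {
  return cancellations.filter(c => c.canceledAt >= start && c.canceledAt <= end).length;
}

// Definition B: a unique account counts at most once, even if it canceled,
// rejoined, and canceled again inside the same window.
function churnedAccounts(cancellations: Cancellation[], start: Date, end: Date): number {
  const ids = new Set(
    cancellations
      .filter(c => c.canceledAt >= start && c.canceledAt <= end)
      .map(c => c.accountId)
  );
  return ids.size;
}

// A subscriber who cancels in February, rejoins in May and cancels again in
// June is two churn events under Definition A but one churned account under
// Definition B for a calendar-year window, and a monthly window hides the
// repeat entirely. Divide either figure by a different subscriber base and
// the published "churn rate" changes again.
```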

The last reason why services don’t share any churn data is that many of them struggle to measure where the churn took place across their platform and for some, they can’t even detect the root cause of churn. Did a user do ten searches in a row for content they couldn’t find and then cancel? Or did they try and play a few different videos that had playback issues and cancel as a result of a technical problem? I won’t mention services by name, but many I have spoken to privately discuss how little they know about the reasons why consumers churn off their platforms.

Within the streaming media industry, many argue that poor video quality and slow startup times are the main reasons users churn, when in fact, based on the data I collected, poor video quality isn’t even in the top three results. Of the 101 OTT services I surveyed, the top three responses were lack of content choices; poor customer service; and pricing, bundling and yearly service discounts. And of those I surveyed, 58.5% said the tool(s) they use provide “limited” or “no” insight into the main reasons for churn, let alone where the consumer churns out of the platform.

The top cause of churn wasn’t platform support, startup times, or 4K options. It all came down to content, discovery and poor customer service as the top three responses by streaming services. The content windowing strategy of each platform and how its content is released, the schedule, the ability to binge watch, etc. were all cited by the platforms as the biggest drivers. This is why companies are spending so much money on content and continue to spend more each year. No one should be surprised by this, since we know that content is king and is the number one driver of new subscriber sign-ups and, with the right content, keeps users from canceling. But going forward, simply adding new content won’t be a fix by itself. That tactic will need to be used in combination with a host of other improvements that streaming services need to make.

Another problem is that too many vendors are trying to sell churn and retention platforms to OTT providers by looking at churn across only a few data points like QoS, personalization or recommendation engines. But it’s not about one or two data points. It’s about having a complete subscriber lifecycle management platform that properly measures not only the reasons for churn but also users’ engagement. They have to go hand-in-hand, like what Salesforce offers by not only managing subscribers but also looking at how engaged they are. And finally, to be successful, these platforms have to be proactive, not reactive. Far too many vendor solutions I have looked at don’t allow customers to make accurate behavior predictions so they can reduce churn before it happens. Personalization is NOT a churn and retention tool.

I’ll be releasing all the results from my conversations and survey about streaming video churn and retention shortly (anonymized), and the data can be used by anyone as they like. It was not sponsored by any vendor or company, but rather was a project I undertook to give the industry some real data, since most of the info given out on churn is not accurate. I hope the data sparks a bigger public conversation within the industry and it’s a topic I’ll be covering a lot on my new podcast and at the NAB Show Streaming Summit.

Podcast Episode 2: Poor Video Advertising Experiences; Growth of AVOD Services; Potential Content Roll-up in 2022 (Starz, AMC, ViacomCBS)

Podcast Episode 2 is live! This week Mark Donnigan and I discuss: Poor Video Advertising Experiences; Growth of AVOD Services; Potential Content Roll-up in 2022 with Starz, AMC, ViacomCBS. (www.danrayburnpodcast.com)

We cover how the industry gushes about video advertising growth, without really addressing the problems around personalization, measurement, formats and low CPM rates. We also discuss the growth of AVOD services and question how many can exist when they are offering essentially the same thing — a bundled offering of free networks with a lot of old movies and TV shows and syndicated programming. We also debate which content companies might get acquired in 2022 including Starz, AMC and ViacomCBS.

Companies and vendors mentioned: Tubi, Pluto TV, Starz, AMC Networks, ViacomCBS, TikTok, Noggin, Comcast, Sony Pictures, MLB TV, Amazon IMDb TV, Roku, Sinclair Broadcast Group, Samsung TV, LG, Amazon, MGM.

Executives mentioned: Darren Lepke, Stephen Condon, Yueshi Shen, Andy Beach.

Video Advertising Experiences Got More Annoying and More Frequent in 2021

Another year down, another year of many poor video advertising experiences for consumers. As an example, CBS Sports (and many others) auto-play two videos, on the same page, AT THE SAME TIME! How is a viewer supposed to watch and listen to two videos concurrently? Common sense says that’s a bad idea.

We’ve also seen way more pop-up ads in 2021 when it comes to overlays. And many times, the overlay ad on the website covers up part of the pre-roll ad in the video window. Dueling advertisements?! And yet for the most part, the streaming media industry doesn’t discuss these ad problems. Instead, many simply gush about all the growth in AVOD, without really addressing what matters most – personalization, measurement, formats, lack of CPM growth.

In 2007 I wrote a post entitled “The Five Biggest Technical Issues Hurting The Growth Of Online Video Advertising” and sadly, 13 years later, the industry is still struggling with these issues. In 2009, I wrote a post entitled “Is There A Shortage Of Online Video Advertising Inventory?”, highlighting the fact that viewers were getting the same ad delivered to them over and over again. And yet 11 years later, for many of the games I watched this year on MLB.TV I got the same ad inserted into the stream more than 25x during a single game. And that’s when the ad worked, because almost 30% of the time the ads didn’t trigger properly at all. In 2020, I wrote how I watched 43 videos on YouTube across 3 different channels and got a Microsoft Teams pre-roll commercial 28x, and a State Farm commercial 15x. Many of the video advertising issues from as long as ten years ago are the same issues we are struggling with today.

Ad platforms, ad exchanges, content aggregators, publishers etc. almost never share any actual data around what CPMs are and where they are trending, based on the type of ad, type of content and platform. They also don’t discuss any real details around ad personalization, percentage of ads skipped, best adoption of ad formats on specific platforms, ads that don’t trigger properly, ads formatted wrong based on device/platform or all of the other technical issues we have as an industry. Instead, it seems people just want to talk about the “growth” in AVOD, without even sharing many metrics around how growth is defined.

As an industry, we need real data shared so we can discuss video advertising problems, work towards solutions, set proper expectations and have a way to measure success, based on actual methodology. Without the industry coming together to do that in 2022, the new year is going to continue to bring a lot of poor video advertising experiences for us all.

My New Podcast Is Live! – Episode 1: Metrics, Churn and Retention; Unrealistic Vendor Valuations

I’m excited to announce the launch of my new weekly podcast. Curating the streaming media industry news of the week that matters most, in 30 minutes. Unvarnished, unscripted and providing you with the data and analysis you need, without any hype. With co-host Mark Donnigan. www.danrayburnpodcast.com

This week we discuss the problem with Nielsen’s reporting and how the industry should define churn and retention; the importance of HBO Max and Disney defaulting to a single video stack; how some vendors are overestimating their valuations; and why Roku and Google’s dispute isn’t over.

Companies and vendors mentioned: Roku, Google, Disney Streaming Services, Hulu, HBO Max, NFL, ESPN, Amazon, YouTube, WarnerMedia, Discovery, Nielsen, SiriusXM, Vudu, Hive, Firstlight Media, Vimeo, Mux, Hopin, Brightcove, Fastly, Panopto. Executives mentioned: Joseph Inzerillo, Jonathan Stock, Rick McConnell, Mike Green, Nathan Veer, Doug Castoldi.

Note: The podcast has been submitted to all the platforms but might take time to show up. There are a few audio blips in Episode 1 that we’ll have worked out for the next show.

With Thursday Night Football Broadcast Exclusively to Amazon Next Season, What’s The Impact to ISPs?

With Thursday Night Football broadcast exclusively on Amazon next season, one has to wonder what the impact could be on ISP networks. I’ve heard Amazon has the desire to stream in 4K, but will Amazon’s CloudFront CDN, along with the many third-party CDNs they plan to use, be able to handle 4K video at scale? And what volume of traffic is expected? Will the stream max out at 1080p instead?

If I understand the deal correctly, Amazon’s exclusivity doesn’t extend to local markets, so viewers will still be able to get the game OTA in local markets. Some will still opt for that route over streaming, which will impact the volume of streams, but to what degree is unknown. I know some companies are already working with Amazon doing traffic studies and estimates of what it’s going to take from a network standpoint, but it’s something the industry should start looking at and discussing as we’ve never seen this type of distribution deal for the NFL’s content.

Some may point to the Super Bowl as a guide for what we can expect, but that’s not a good comparison since the majority of people watch the game on TV. The 2021 Super Bowl peaked at 5.7 million “viewers per minute”, which puts the actual count of concurrent streams lower than that. It’s going to be interesting to watch how Amazon approaches the season from a capacity standpoint across CDNs and ISPs, and while I can’t share some of the numbers they are already discussing with CDNs, they are quite large from a capacity standpoint.

With API Growth, Customers Demanding CDNs Offer API Acceleration at the Edge

Once considered just part of the “nuts and bolts” of application infrastructure, APIs have moved swiftly into a leading role in driving digital experiences. For the CDNs that handle this API traffic, this is creating high expectations for performance and reliability, as well as expanding security challenges. The worldwide focus on digital transformation is driving increased adoption of microservices architectures, and APIs have quickly emerged as a standard way of building and connecting modern applications and enabling digital experiences where connection speeds are measured in milliseconds.

We use these services—and the APIs that enable them—every day, within all applications. Things like interactive apps for weather, news and social media; transactional apps like those for commerce and banking; location services; online gaming; videoconferencing; chat bots…all rely on APIs. With new microservices coming online daily, expect the proliferation of APIs to continue. Indeed, recent surveys revealed that 77% of organizations are developing and consuming APIs and 85% indicated that APIs are critical to their digital transformation initiatives.

API traffic has some specific characteristics that can make it tricky to manage. Transactions are small and highly dynamic, yet they can also be quite compute-intensive. They are sensitive to latency, often measured in milliseconds, and prone to spikes. These realities, together with the proliferation of APIs, create some significant challenges for delivering content and services. APIs also represent the most common attack vector for cyber criminals. It has been reported that 90% of web application attacks target APIs, yet API endpoints are often left unprotected due to the sheer number of APIs and the limited resources available to police them. The challenge of policy enforcement is especially complex in organizations with several autonomous development teams building and deploying across hybrid cloud environments.

Organizations expect API response times in the tens of milliseconds, particularly for public-facing APIs that are critical to user experience. This can be difficult to manage given the highly dynamic nature of API traffic, which is often compute-intensive and difficult to cache. Many APIs are in the critical path for applications and if they are not delivered, it can render the application unusable. That explains why 77% of respondents in a recent survey pointed to API availability as their top deployment concern. Ensuring that availability can be challenging because API traffic volumes tend to come in waves or spike quickly when an end-user request triggers a series of requests to third-party services. Large online events can also drive up request volumes, creating even greater availability challenges.

Any or all of these issues can significantly impact applications. When public-facing APIs aren’t delivered within fast response times or have poor reliability, the user experience suffers. And if APIs are not secure, they represent serious cyberattack vulnerabilities. All of these potential outcomes add up to a poor user experience, leading to a loss of revenue and damage to brands. To minimize that risk, companies should start with the fundamental step of API discovery. After all, you can’t manage, secure and protect what you can’t see. With developers launching new APIs left and right, it is likely that there are many undiscovered API endpoints out there. So it’s critical to discover and protect unregistered APIs and identify errors and changes to existing APIs.

Content owners also need to think about where application functionality is taking place. While public clouds have emerged as the “go-to” for all kinds of application workloads, they do present some limitations when it comes to handling API transactions. One leading cloud provider achieves response times around 130ms (measured using Cedexis Radar – 50th percentile – Global community), yet many microservices require API response times of less than 50ms. Edge computing offers an attractive alternative to the cloud. Moving application functionality to the edge benefits from closer proximity to end users, minimizing latency and maximizing performance. Making the edge an extension of your API infrastructure can also help unify your security posture, improving operational efficiency. Load balancing traffic at the edge can improve availability while simplifying management. And moving computing to the edge can improve scalability, allowing customers to serve users globally with more network capacity. Additionally, executing code at the edge gives developers the ability to move business logic closer to users, accelerating time to value.

Of course, like cloud providers and CDNs, not all edge compute platforms are equal. It’s important to look at how many points of presence there are, how globally distributed they are and the proximity to users. Does their network allow you to easily deploy microservices in multiple edge locations? These factors have a direct impact on latency. You also want to make sure the network is robust enough to handle the spikes that are common with APIs. Finally, is the network secure enough to mitigate the risk posed by bad actors targeting API endpoints? The API explosion is far from over. That reality presents a compelling case to view the edge as the logical extension of your organization’s API infrastructure, ensuring your users get the experience they expect, whenever and wherever they want it.

Data Shows 82% of Traffic to The Top 250 Piracy Sites is Routed Through Cloudflare

In September, MUSO, a London-based company that collects data from billions of piracy infringements every day, scanned the 250 highest-trafficked global piracy sites and found that 82% of the piracy traffic is being proxied through Cloudflare. This is an astonishing number; by comparison, no other network provider had over a 2% share.

For those that track the network provider space, these findings probably aren’t too surprising since Cloudflare has a history of allowing piracy sites, along with terrorist organizations, to use their services. The disappointing news is that MUSO routinely informs Cloudflare about copyright-infringing content on these sites but doesn’t see them shutting off the websites. This is a clear violation of the first clause in Cloudflare’s AUP which states, “By agreeing to these Terms, you represent and warrant to us… (iii) that your use of the Websites and Online Services is in compliance with any and all applicable laws and regulations.”

MUSO says other network providers like AWS and Microsoft act on notifications and takedown requests and remove sites from their service, resulting in MUSO only seeing a small number of piracy sites on these other providers. In fact, the numbers are so small that AWS and Microsoft fall under the “other” bucket on the list, below 0.6% of all piracy traffic. If you want to see a snapshot of the top 25 piracy sites from MUSO, you can see them here.

For clarity on the methodology, MUSO took the top 250 highest trafficked piracy sites in September 2021, which represent 40% of global piracy traffic. They look up the network provider for each site, group them by traffic and then look at the volume. These figures don’t measure the piracy streams and downloads of the content within these sites, they reflect the traffic to the sites themselves. MUSO’s digital content database covers 196 countries, millions of measured devices and billions of piracy pages continuously tracked and is used by clients including Sony, AMC, Bloomberg and many others in the broadcast, publishing, film and software industries.

Based on the data, there’s a very clear pattern here. Cloudflare has a huge share of traffic from the top 250 global piracy sites running through its network and benefits financially as a result. There is no way to know how much revenue they make from these piracy sites, but with such a large share of traffic, it’s substantial.

The Importance and Role of APIs That Make Streaming Media Possible

As content providers strive to make the streaming viewer experience more personalized and friction-free, the role of API-enabled microservices is growing rapidly. But, just like the complex interactions that occur under the hood of your car, many probably don’t think much about the APIs that make feature-rich services possible. After all, they just work, until they don’t. And it doesn’t take much to frustrate subscribers who can switch to another provider with the click of their remote or a command to Alexa or Siri. Without APIs, streaming media services would not exist. There are three broad phases of a typical streaming media experience where APIs are essential (a simplified sketch of the call chain follows the list):

  • Phase 1 – Login and selection: When the viewer opens the app, their saved credentials are checked against a subscriber database and they are automatically authenticated. Information stored in their user profile is then fetched, allowing the app to present a personalized carousel of content based on the subscriber’s viewing history and preferences, including new releases and series the viewer is watching or has watched recently. This involves a recommendation engine responsible for delivering the thumbnail images of recommended content for the subscriber. APIs must accomplish all of this in seconds.
  • Phase 2 – Initiating playback: Once the subscriber has selected their content and pressed “play,” a manifest request is generated, providing a playlist for accessing the content. This may involve interactions with licensing databases and/or advertising platforms for freemium content, all handled by APIs.
  • Phase 3 – Viewing experience: During playback, APIs manage multiple aspects of the experience, including serving up ads. In some cases, these ads may be personalized for the subscriber, requiring a profile lookup. If the subscriber decides to switch platforms midstream—say from their big-screen TV to their tablet—APIs come into play to make that transition seamless, so the content picks up right where it paused on the other platform.
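
No streaming service publishes its internal endpoints, so the sketch below is purely illustrative: a hypothetical TypeScript client walking through Phase 1 (authentication, profile and recommendations) and Phase 2 (the manifest request). Every URL, type and field name here is an assumption, and a Web-standard fetch API is assumed to be available.

```typescript
// Hypothetical endpoints and types; real services differ in naming and flow.
interface Profile { userId: string; watchHistory: string[]; }
interface Manifest { playlistUrl: string; drmLicenseUrl?: string; }

async function startPlayback(token: string, titleId: string): Promise<Manifest> {
  const auth = { headers: { Authorization: `Bearer ${token}` } };

  // Phase 1: validate the saved credentials and load the user profile.
  const profileRes = await fetch("https://api.example.tv/v1/profile", auth);
  if (!profileRes.ok) throw new Error("authentication failed");
  const profile: Profile = await profileRes.json();

  // Phase 1: fetch the personalized carousel (thumbnails of recommended titles).
  await fetch(`https://api.example.tv/v1/recommendations?user=${profile.userId}`, auth);

  // Phase 2: once the viewer presses play, request the manifest; server-side
  // this may fan out to licensing databases and ad-decision services.
  const manifestRes = await fetch(`https://api.example.tv/v1/manifest/${titleId}`, auth);
  if (!manifestRes.ok) throw new Error("manifest request failed");
  return (await manifestRes.json()) as Manifest;
}
```

Each of these calls has to complete in milliseconds for the experience described above to feel instantaneous.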

API calls are occurring throughout playback, tracking streaming QoS to optimize performance given the subscriber’s bandwidth, network traffic and other conditions. Additional information, such as whether the subscriber watched the entire movie or bailed out partway through, may be delivered back to their profile to further refine their preferences (Most content providers only count a program “watched” once the subscriber has reached some percentage of completion). That’s just a high-level view—in practice, there may be hundreds or thousands of API interactions, each of which must occur in milliseconds to make this complex chain of events happen seamlessly. If just one of these interactions breaks down, the user experience can quickly go from streaming to screaming.

With APIs being so critical to the user experience there are many potential points of failure. The wrong profile may be loaded. The recommendation engine may not serve up the suggested content thumbnails. The manifest request could fail, triggering the dreaded “spinning wheel of death,” preventing content from loading. Ads may hang up, disrupting or even blocking playback completely. Some of these API failures are merely annoying—but others are in the critical path, preventing content delivery.

In addition to APIs essential for playback functions—manifest calls, beaconing for QoS, personalization, etc.—other APIs play crucial roles for business intelligence. APIs that gather data on user engagement, such as percentage of completion of content, are important for analytics that inform critical decisions impacting content development budgets and advertiser negotiations. This data is also used for subscriber interactions outside the platform, such as email touches to encourage subscribers to continue watching a series or alert them to new content matching their preferences. Continually keeping a finger on the pulse of subscriber behavior is crucial for business decisions that create a sticky service that builds subscriber loyalty. APIs make it possible.

Given the role APIs play, ensuring their performance, reliability and security should be a priority for streaming media providers. The first step is recognizing their importance and understanding which APIs are in the critical path. These may need a higher degree of protection, such as having backups for them. Delivering critical APIs from the edge, from endpoints closer to the subscriber, can also help maximize both their reliability and their performance. Another good practice is keeping your APIs on separate hostnames from your content (api.domain vs. domain/api), allowing each to be tuned for optimum performance and reliability. Making sure your content delivery network has robust security features is also important, as APIs are an increasingly common vector for cyberattacks, which is why you are seeing more CDN vendors offering API protection services.

With consumers demanding a continuous supply of new streaming features and capabilities, APIs will only become more numerous and more critical to business success. So it only makes sense to pay as close attention to how your APIs are architected and delivered as you do to your content.

HDR Streaming Across OTT Services Grows, Setting Up a Battle for the Best User Experience

With AT&T closing its purchase of Time Warner, Viacom merging with CBS, Disney acquiring Fox’s studio and key cable networks, Discovery taking over Scripps Networks and Amazon looking to acquire MGM, content consolidation has been the main focus in the industry. With so many OTT services for consumers to pick from, alongside multiple monetization models (AVOD, SVOD, Free, Hybrid), fragmentation in the market will only continue to grow. We all know that content is king and is the most important element in a streaming media service. But with so many OTT services all having such a good selection of content, the next phase of the OTT industry will be all about the differentiation of quality and experience amongst the services and the direct impact this has on churn and retention.

The fight for eyeballs will be primarily fought over the quality of the user experience, and each of these consolidated players will now own a rich and diversified content portfolio. This can suit all sorts of business models, from everything-to-everyone to more niche, targeted content for specific audiences. CEOs of these OTT services, and most of the media, tend to focus on the volume of subscribers. For example, Discovery CEO David Zaslav told CNBC recently, “There’s billions of people out there that we could reach in the market.” Most of the attention has focused on how the Discovery-WarnerMedia deal could reach 400M subscribers, Netflix growing beyond its 200M+ global subscribers or Amazon Prime beyond its 175M. Many are multiplying the number of subs per month by ARPU by total households in a region, and huge numbers start popping out and rousing the financial markets.

However, these numbers being quoted aren’t truly representative of the real opportunity in the market. I would argue that combining companies, brands and content on excel sheets is a far cry from effectively reaching out to billions of potential customers with the high QoE needed to keep them from jumping to a competitive service. Whilst secondary to content, streaming services need to keep a keen eye on their technology and what competitive advantages they can offer to reduce churn. One such differentiation for some is the volume of content they are offering with support for HDR. By deploying HDR capabilities, media companies could impress their audiences with a richer breadth of color and deeper contrast within HD content, which is still viewed far more than 4K content. Amongst third-party content delivery networks, many tell me that of all the video bits they deliver, less than 5% are in 4K. This makes HDR even more important to support within HD content.

HDR has been on the roadmap for a lot of content owners but historically has been held back largely due to device support. Rendering performance of HDR formats is different on devices, backward compatibility is a mess, APIs are pretty bad and content licensing agreements sometimes allow only for a specific HDR format. While HDR has been a struggle over the last few years, we are finally starting to see some faster adoption amongst streaming media services. For example, last month, Hulu added HDR support to certain shows and movies within its streaming catalog. Most streaming platforms now offer a portion of their video catalog with HDR support and thankfully most TV sets sold are now HDR ready thanks to support for the HEVC codec. But whilst consumption on smart TVs is significant, HDR is largely excluded from consumption on laptops, tablets, and mobile phones. And that is where you will find the ‘billions of people’ mentioned before. For many of us, our phones and tablets have the most advanced and capable displays that we own, so why restrict HDR delivery to the Smart TV?

Contrary to popular belief, HDR is achievable at 1080p and independently of HEVC with MPEG-5 LCEVC, as I previously detailed in a blog post here. LCEVC can add a 10-bit enhancement layer to any underlying codec, including AVC/H.264, thus providing the possibility to upgrade all AVC/H.264 streaming and make it HDR capable, without the need for HEVC. LCEVC is deployable via a software upgrade and therefore quick rollouts could take place, rather than the usual years needed to get hardware updates into a meaningful number of devices. The opportunity to drive up user engagement and ARPU with premium HDR delivery to more of our devices could be a key advantage for one or more of the OTT services in our space. I predict that over the next two years we’re going to see some of the fastest rates of HDR adoption across all streaming media services, and we should keep an eye on what measurable impact this can have on the user experience.

Former CEO of KIT Digital, Found Guilty on All Charges, Gets 3 Years Probation

Kaleil Isaza Tuzman, the former CEO of KIT Digital who was found guilty of market manipulation, wire fraud, defrauding shareholders, and accounting fraud, was sentenced on September 10th to three years probation by the judge in the case. This is astonishing as the prosecutors had sought a sentence of 17-1/2 to 22 years in prison. The judge also ordered three years of supervised release.

U.S. District Judge Paul Gardephe sentenced Kaleil to only probation, saying the 10 months Kaleil spent in Colombian prisons was so horrible it would put him at little risk of committing further crimes. “The risk associated with sending Mr. Tuzman back to prison, the risk to his mental health, is just too great,” Gardephe said. “While in many other cases it has been my practice to sentence white-collar defendants for these sorts of crimes to a substantial sentence, in good conscience I can’t do that here.”

It’s a sad state of affairs in our legal system when someone who defrauded investors, lied, caused many employees to lose their jobs and was found guilty on every charge gets probation. Reading through some of the legal filings, there appears to be more to the story, though, on what might have impacted his sentence. In a Supplemental Sentencing Memorandum filed on July 7th of this year, it says:

“While incarcerated and during the five years since his release from prison, Kaleil has repeatedly provided material and substantial assistance to the Anti-Corruption Unit of the Colombian Attorney General’s Office, which ultimately resulted in the indictment of a number of government officials in the Colombian National Prison Institute known as “INPEC” (Instituto Nacional Penitenciario y Carcelario)—including the prior warden of La Picota prison, César Augusto Ceballos—on dozens of charges of extortion, assault and murder.” So one wonders if he got a lighter sentence due to information he was providing to the Colombian government.

On the civil side, Kaleil is still being sued by investors in a hotel project who accuse him of stealing $5.4 million and in May 2021, a similar suit was filed in U.S. federal court, asking for $6 million.

For a history of what went on at KIT Digital, you can read my post here from 2013, “Insiders Detail Accounting Irregularities At KIT Digital, Rumors Of A Possible SEC Fraud Investigation”.

Streaming Services Evaluating Their Carbon Footprint, as Consumers Demand Net-Zero Targets

Right now, almost anyone has access to some sort of video streaming platform that offers the content they value at a satisfactory video quality level, most of the time. But the novelty factor has long worn off and most of the technical improvements are now taken for granted. Of course viewers are increasingly demanding in terms of video quality and absence of buffering, and losing a percentage of viewership due to poor quality means more lost profits than before, but consumers are starting to care about more than just the basics.

Just like in many other industries (think of the car or fashion industries) consumer demands – especially for Generation Z – are now moving beyond “directly observable” features and sustainability is steadily climbing the pecking order of their concerns. To date, this importance has mostly been for physical goods, not digital, but I wonder whether this may be a blind-spot for many in our industry? Remember how many people thought consumers would not care much about where and how their shoes were made? Some large footwear companies sustained heavy losses due to that wrong assumption.

Video streaming businesses should be quick to acknowledge that, whether they like it or not, and whether they believe in global warming or not, they have to have a plan to reach the goal of net-zero emissions. The relevance of this to financial markets, and customer concerns about sustainable practices, are here to stay and growing. About half of new capital issues in financial markets are being linked to ESG targets (Environmental, Social and Governance). Sustainability consistently ranks among the top 5 concerns in every survey of Generation Z consumers when it comes to physical goods, and one could argue it’s only a matter of time before this applies to digital services as well.

Back in 2018 I posted that the growth in demand for video streaming had created a capacity gap and that building more and more data centers, plus stacking them with servers was not a sustainable solution. Likewise, encoding and compression technology has been plagued with diminishing gains for some time, where for each new generation of codec, the increase in compute power they require is far greater than the compression efficiency benefits they deliver. Combine that with the exponential growth of video services, the move from SD to HD to 4K, the increase in bit depth for HDR, the dawn of immersive media, and you have a recipe for everything-but-net-zero.

So, what can be done to mitigate the carbon footprint of an activity that is growing exponentially by 40% per year and promises to transmit more pixels at higher bitrates crunched with more power-hungry codecs? Recently Netflix has pledged to be carbon neutral by 2022, while media companies like Sky committed to become net zero carbon by 2030. A commonly adopted framework is the “Reduce – Retain – Remove”. While many companies accept that they have a duty to “clean up the mess” after polluting, I believe the biggest impact lies in reducing emissions in the first place.

Netflix, on the “Reduce” part of their pledge, aims to reduce emissions by 45% by 2030, and others will surely follow with similar targets. The question is, how can they get there? Digital services are starting to review their technology choices to factor in what can be done to reduce emissions. At the forefront of this should be video compression, which typically drives the two most energy-intensive processes in a video delivery workflow: transcoding and delivery.

The trade-off with the latest video compression codecs is that while they increase compression efficiency and reduce energy costs in data transmission (sadly, only for the small fraction of new devices compatible with them), their much-higher compute results in increased energy usage for encoding. So the net balance in terms of sustainability is not a slam dunk, especially for operators that deliver video to consumer-owned-and-managed devices such as mobile devices.

One notable option able to improve both video quality and sustainability is MPEG-5 LCEVC, the low-complexity, codec-agnostic enhancement recently standardized by MPEG. LCEVC increases the speed at which encoding is done by up to 3x, therefore decreasing electricity consumption in data centers. At the same time, it reduces transmission requirements, and immediately does so for a broad portion of the audience, thanks to the possibility of deploying LCEVC to a large number of existing devices, notably all mobile devices. With some help from the main ecosystem players, LCEVC device coverage may become nearly universal very rapidly.

LCEVC is just one of the available technologies with so-called “negative green premium”, good for the business and good for the environment. Sustainability-enhancing technologies, which earlier may have been fighting for attention among a long list of second-priority profit-optimization interventions, may soon bubble up in priority. The need for sustainability intervention is real, and will only become greater in the next few years, so all available solutions should be brought into play. Netflix says it best from their 2020 ESG report, “If we are to succeed in entertaining the world, we need a habitable, stable world to entertain.”

Real-World Use Cases for Edge Computing Explained: A/B Testing, Personalization and Privacy

In a previous blog post, “Unpacking the Edge Compute Hype: What It Really Is and Why It’s Important,” I discussed what edge computing is—and what it is not. Edge computing offers the ability to run applications closer to users, dramatically reducing latency and network congestion and providing a better, more consistent user experience. Growing consumer demand for personalized, high-touch experiences is driving the need to move application functionality to the edge. But that doesn’t mean edge compute is right for every use case.

There are some notable limitations and challenges to be aware of. Many industry analysts are predicting every type of workload will move to the edge, which is not accurate. Edge compute requires a microservices architecture that doesn’t rely on monolithic code. The edge is a new destination for code, so best practices and operational standards are not yet well defined or well understood.

Edge compute also presents some unique challenges around performance, security and reliability. Many microservices require response times in the tens of milliseconds, requiring extremely low latency. Yet providing sophisticated user personalization consumes compute cycles, potentially impacting performance. With edge computing services, there is a trade-off between performance and personalization.

Microservices also rely heavily on APIs, which are a common attack vector for cybercriminals, so protecting API endpoints is critical and is easier said than done, given the vast number of APIs. Reliability can be a challenge, given the “spiky” nature of edge applications due to variations in user traffic, especially during large online events that drive up the volume of traffic. Given these realities, which functions are the most likely candidates for edge compute in the near term? I think the best use cases fall into four categories.

A/B Testing
This use case involves implementing logic to support marketing campaigns by routing traffic based on request characteristics and collecting data on the results. This enables companies to perform multivariate testing of offers and other elements of the user experience, refining their appeal. This type of experimental decision logic is typically implemented at the origin, requiring a trip to the origin in order to make the A/B decisions on which content to serve to each user. This round-trip adds latency that decreases page performance for the request. It also adds traffic to the origin, increasing congestion and requiring additional infrastructure to handle the traffic.

Placing the logic that governs A/B testing at the edge results in faster page performance and decreased traffic to origin. Serverless compute resources at the edge determine which content to deliver based on the inbound request. Segment information can be stored in a JavaScript bundle or in a key-value store, with content served from the cache. This decreases page load time and reduces the load on the origin infrastructure, yielding a better user experience.
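
As a rough illustration of that pattern, not tied to any particular edge platform, the handler below buckets each visitor deterministically and serves the matching variant from an edge key-value store. The KV interface, key names and the Web-standard Response object are all assumptions about the runtime, not a vendor API.

```typescript
// Hypothetical edge A/B handler; the key-value interface is an assumption,
// not a specific vendor's API.
interface EdgeKV { get(key: string): Promise<string | null>; }

// Deterministic bucketing: the same visitor always lands in the same variant,
// with no round-trip to origin to make the decision.
function assignVariant(visitorId: string): "A" | "B" {
  let hash = 0;
  for (const ch of visitorId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % 2 === 0 ? "A" : "B";
}

async function handleABRequest(visitorId: string, kv: EdgeKV): Promise<Response> {
  const variant = assignVariant(visitorId);
  // Variant content lives at the edge, so both the decision and the response are local.
  const html = await kv.get(`homepage:variant:${variant}`);
  return new Response(html ?? "", {
    headers: { "content-type": "text/html", "x-experiment-variant": variant },
  });
}
```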

Personalization
Companies are continually seeking to deliver more personalized user experiences to increase customer engagement and loyalty in order to drive profitability. Again, the functions of identifying the user and determining which content to present typically reside at the origin. This usually means personalized content is uncacheable, resulting in low offload and a negative impact on performance. Instead, a serverless edge compute service can be used to detect the characteristics of inbound requests, rapidly identifying unique users and retrieving personalized content. This logic can be written in JavaScript at the edge, and personalized content can be stored in a JavaScript bundle or in a key-value store at the edge. Performing this logic at the edge provides highly personalized user experiences while increasing offload, enabling a faster, more consistent experience.
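
A sketch of the same edge pattern applied to personalization, again with hypothetical names: the function reads an identifier from the request cookie, looks up the viewer’s segment in a key-value store, and serves content precomputed for that segment. The Request/Response objects and KV interface are assumed, not any specific platform’s API.

```typescript
// Hypothetical edge personalization handler; names and keys are illustrative.
interface EdgeKV { get(key: string): Promise<string | null>; }

function readCookie(cookieHeader: string | null, name: string): string | null {
  if (!cookieHeader) return null;
  const match = cookieHeader.match(new RegExp(`(?:^|; )${name}=([^;]+)`));
  return match ? decodeURIComponent(match[1]) : null;
}

async function personalize(request: Request, kv: EdgeKV): Promise<Response> {
  // Identify the user from the inbound request without calling origin.
  const userId = readCookie(request.headers.get("cookie"), "uid");
  const segment = userId ? await kv.get(`segment:${userId}`) : null;

  // Serve content precomputed for that segment, falling back to a generic page.
  const body = await kv.get(`content:${segment ?? "default"}`);
  return new Response(body ?? "", { headers: { "cache-control": "private, max-age=60" } });
}
```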

Privacy Compliance
Businesses are under growing pressures to safeguard their customers’ privacy and comply with an array of regulations, including GDPR, CCPA, APPI, and others, to avoid penalties. Compliance is particularly challenging for data over which companies may have no control. One important aspect of compliance is tracking consent data. Many organizations have turned to the Transparency and Consent Framework (TCF 2.0) developed by the Interactive Advertising Bureau (IAB) as an industry standard for sending and verifying user consent.

Deploying this functionality as a microservice at the edge makes a lot of sense. When the user consents to tracking, state-tracking cookies are added to the session that enable a personalized user experience. If the user does not consent, the cookie is discarded and the user has a more generic experience that does not involve personal information. Performing these functions at the edge improves offload and enables cacheability, allowing extremely rapid lookups. This improves the user experience while helping ensure privacy compliance.
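
A minimal sketch of that consent gate at the edge, with hypothetical cookie names; this is only the branching logic described above, not an implementation of the IAB TCF 2.0 framework, and it assumes a Web-standard fetch/Request/Response runtime.

```typescript
// Hypothetical consent gate at the edge; cookie and header names are illustrative.
async function consentGate(request: Request): Promise<Response> {
  const cookies = request.headers.get("cookie") ?? "";
  const consented = cookies.includes("consent=granted");

  // Without consent, forward the request with cookies removed so no personal
  // state reaches downstream services; with consent, forward it as-is.
  const headers = new Headers(request.headers);
  if (!consented) headers.delete("cookie");
  const upstream = await fetch(request.url, { method: request.method, headers });

  const response = new Response(upstream.body, {
    status: upstream.status,
    headers: new Headers(upstream.headers),
  });

  if (consented) {
    // Consent given: keep a state-tracking cookie so the session can be personalized.
    response.headers.append("set-cookie", "session_state=1; Path=/; Secure; HttpOnly");
  } else {
    // No consent: make sure no tracking cookie is set on the generic response.
    response.headers.delete("set-cookie");
  }
  return response;
}
```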

Third-Party Services
Many companies offer “productized” services designed to address specific, high-value needs. For example, the A/B testing discussed earlier is often implemented using such a third-party service in conjunction with core marketing campaign management applications. These third-party services are often tangential to the user’s request flow. When implemented in the critical path of the request flow, they add latency that can affect performance. Moreover, scale and reliability are beyond your control, which means the user experience is too. Now imagine this third-party code is running natively on the same serverless edge platform handling the user’s originating request. Because the code is local, latency is reduced. And the code is now able to scale to meet changing traffic volumes, improving reliability.

One recent example of this was the partnership between Akamai and the Queue-It virtual waiting room service. The service allows online customers to retain their place in line, while providing a positive waiting experience and reducing the risk of a website crash due to sudden spikes in volume. The partnership was focused specifically on providing an edge-based virtual waiting room solution to handle traffic during the rush to sign up for COVID vaccinations. The same approach could be used for any online event where traffic spikes are expected, such as ticket reservations for a sought-after concert or theater event, now that these venues are poised to open back up.

Conclusion
These examples highlight how important it is to understand and think carefully about what functions make sense to run at the edge. It’s true that some of these use cases may be met by traditional centralized infrastructures. But consider the reduction in overhead, the speed and efficiency of updating functionality, and the performance advantages gained by executing them at the edge. These benefit service providers and users alike. Just as selecting the right applications for edge compute is critical, so is working with the right edge provider. In this regard, proximity matters.

Generally speaking, the closer edge compute resources are to the user, the better. Beware of service providers running code in more centralized nodes that they call “the edge.” And be sure they can deliver the performance, reliability and security needed to meet your service objectives, based on the methodology you choose, while effectively managing risk.

The edge compute industry and market for these services is an evolving landscape that’s only just starting off. But there is a growing list of use cases that can benefit now from edge compute deployed in a thoughtful way. We should expect to see more use cases in the next 18 months as edge computing adoption continues and companies look at ways to move logic and intelligence to the edge.

Streaming Summit at NAB Show Returns, Call For Speakers Now Open

It’s back! I am happy to announce the return of the NAB Show Streaming Summit, taking place October 11-12 in Las Vegas. The call for speakers is now open and lead gen opportunities are available. The show will be a hybrid event this year, with both in-person and remote presentations. See the website for all the details or contact me with your ideas on how you want to be involved.

The topics covered will be created based on the submissions sent in, but the show covers both business and technology topics including: bundling of content; codecs; transcoding; live streaming; video advertising; packaging and playback; monetization of video; cloud-based workflows; direct-to-consumer models; the video ad stack and other related topics. The Summit does not cover topics pertaining to video editing, pre/post production, audio-only applications, content scripts and talent, content rights and contracts, or video production hardware.

Please reach out to me at (917) 523-4562 or via email at any time if you have questions on the submission process or want to discuss an idea before you submit. I always prefer speaking directly to people about their ideas so I can help tailor your submission to what works best. Interested in moderating a session? Please contact me ASAP!

Apple Using Akamai, Fastly, Cloudflare For Their New iCloud Private Relay Feature

[Updated October 18, 2021: Apple’s iCloud Private Relay feature, which was being powered during the beta by Akamai, Fastly and Cloudflare, will officially launch with iOS 15 in “beta”. It will no longer be enabled by default, due to incompatibility issues with some websites.]

On Monday, Apple announced some new privacy features in iCloud, one of which they are calling Private Relay. The way it works is that when you go to a website using Safari, iCloud Private Relay takes your IP address to connect you to the website and then encrypts the URL so that app developers, and even Apple, don’t know what website you are visiting. The IP and encrypted URL then travel to an intermediary relay station run by what Apple calls a “trusted partner”. In a media interview published yesterday, Apple would not say who the trusted partners are but I can confirm, based on public details (as shown below; Akamai on the left, Fastly on the right), that Akamai, Fastly and Cloudflare are being used.

On Fastly’s Q1 earnings call, the company said they expect revenue growth to be flat quarter-over-quarter going into Q2, but that revenue growth would accelerate in the second half of this year. The company also increased their revenue guidance range to $380 million to $390 million, up from $375 million to $380 million. Based on the guidance numbers, Fastly would be looking at a pretty large ramp of around 15% sequential growth in the third and fourth quarters. Fastly didn’t give any indication of why they thought revenue might ramp so quickly, but did say that there are “a lot of really important opportunities that are coming our way.” By itself, this new traffic generated from Apple isn’t that large when it comes to overall revenue and is being shared amongst three providers. This news comes out at an interesting time as this morning, Fastly had a major outage on their network that lasted about an hour.

Rebuttal to FCC Commissioner: OTT, Cloud and Gaming Services Should Not Pay for Broadband Buildout

Brendan Carr, a commissioner of the Federal Communications Commission (FCC), published an op-ed in Newsweek entitled “Ending Big Tech’s Free Ride.” In it, he suggests that companies such as Facebook, Apple, Amazon, Netflix, Microsoft, Google and others should pay a tax for the build-out of broadband networks to reach every American. In his post he blames streaming OTT services, as well as gaming services like Xbox and cloud services like AWS, for the volume of traffic on the Internet. There are a lot of factual problems with his post from both a business and technical standpoint, which is always one of the main problems when regulators get involved in topics like this. They don’t focus on the facts of the case but rather on their “opinions” disguised as facts. The Commissioner references a third-party post-doctoral paper as his argument, which contains many factual errors when it comes to numbers disclosed by public companies, some of which I highlight below.

The federal government currently collects roughly $9 billion a year through a tax on traditional telephone services—both wireless and wireline. That pot of money, known as the Universal Service Fund, is used to support internet builds in rural areas. The Commissioner suggests that consumers should not have to pay that tax on their phone bill for the buildout of broadband and that the tax should be paid by large tech companies instead. He says that tech companies have been getting a “free ride” and have “avoided” paying their fair share. He writes that “Facebook, Apple, Amazon, Netflix and Google generated nearly $1 trillion in revenues in 2020 alone,” saying it “would take just 0.009 percent of those revenues” to pay for the tax. The Commissioner is ignoring the fact that in 2020, Apple made 60% of their revenue overseas and that much of it comes from hardware, not online services. Apple’s “services” revenue, as the company defines it, made up 19% of their total 2020 revenue. So that $1 trillion number is much, much lower if you’re counting revenue from actual online services that use broadband to deliver the content.

If the Commissioner wants to tax a company that makes hardware, why isn’t Ford Motor Company on the list? They make physical products but also have a “mobility” division that relies on broadband infrastructure for their range of smart city services. Without that broadband infrastructure, Ford would not be able to sell mobility services to cities or generate any revenue for their mobility division. You could extend this notion to all kinds of companies that make revenue from physical goods, or to commerce companies like eBay, Etsy, Target and others. Yet the Commissioner specifically calls out video streaming as the problem and references a paper written in March of this year as his evidence. The problem is that the paper is full of factually wrong numbers and definitions and can’t even get the pricing of streaming services right, something the Commissioner clearly hasn’t noticed.

The paper says YouTube is “on track to earn more than $6 billion in advertising revenue for 2020.” No, YouTube generated $19.7 billion in advertising revenue in 2020. The paper also says that Hulu’s live service costs $4.99 a month, when in actuality it costs $65 a month, and that the on-demand version of Hulu costs $11.99 a month, when it costs $5.99 a month. There are many instances of wrong numbers like this in the report that can’t be debated; they are simply wrong. Full stop. The authors say the goal of the paper is to look at the “challenge of four rural broadband providers operating fiber to the home networks to recover the middle mile network costs of streaming video entertainment.” They say that “subscribers pay about $25 per month subscriber to video streaming services to Netflix, YouTube, Amazon Prime, Disney+, and Microsoft.” That’s not accurate. YouTube is free, and if they mean YouTube TV, that costs $65 a month. The paper also leans on words like “presumed” and “assumptions” when making its arguments, rather than basing them on facts.

The paper also points out that the data and methodology used to reach its conclusions “has limitations,” since traffic is measured “differently” amongst broadband providers. So only a slice of the overall data is being used, and the collection methodology isn’t consistent across all the providers. We’re only seeing a small window into the data being used in the report, yet the Commissioner is referencing this paper as his “evidence.” The paper also references industry terms from as far back as 2012, saying it has “adapted” them to today, which is always a red flag for accuracy.

The paper also incorrectly states that “The video streaming entertainment providers do not contribute to middle or last mile network costs. The caching services provided by Netflix and YouTube are exclusionary to the proprietary services of these platforms and entail additional costs for rural broadband providers to participate.” In reality, Netflix and others have been putting caches inside ISP networks for FREE, which saves the ISP money on transit, and Apple will also work with ISPs via its Apple Edge Cache program. For anyone to suggest that big tech companies don’t spend money to build out infrastructure for the consumers’ benefit is simply false. Some ISPs choose not to work with content companies offering physical or virtual caches, but that’s a business decision they have made on their own. In addition, when a consumer signs up for a connection to the Internet from an ISP, the ISP is in the business of adding capacity to support whatever content the consumer wants to stream. That is the ISP’s business, and there is no valid argument that an ISP should not have to spend money to support the user.

The paper states that “Rural broadband providers generally operate at close to breakeven with little to no profit margin. This contrasts with the double-digit profit margins of the Big Streamers.” Disney’s direct-to-consumer streaming division, which includes Disney+, Hulu, ESPN+ and Hotstar, lost $466 million in Q1 of this year. What “double-digit profit margins” is the paper referencing? Again, the authors don’t know the numbers. The paper also gets wrong many of its explanations of what a CDN is, how it works, and how companies like Netflix connect their network to an ISP like Comcast. It shows the logos of Hulu and Disney+ on a chart listing them under the Internet “backbone” category, when of course the parent owner of those services, The Walt Disney Company, doesn’t own or operate a backbone of any kind. The paper argues that since rural ISPs have no scale, they can’t launch “streaming services of their own” like AT&T has. This is 100% false; there are many third-party companies in the market that have packaged together content, ready to go, that any ISP can re-sell as a bundle to their subscribers, no matter how many subscribers they have.

According to a 2020 report from the Government Accountability Office (GAO), the FCC’s number one challenge in targeting and identifying unserved areas for broadband deployment was the accuracy of the FCC’s own broadband deployment data. Congress recently provided the FCC with $98 million to fund more precise and granular maps. You read that right: the FCC was given $98 million to create maps. In March of 2020, Acting FCC Chairwoman Jessica Rosenworcel said these maps could be produced in “a few months,” but that estimate has since slipped to 2022. Some Senators have taken notice of the delay and have demanded answers from the FCC.

It’s easy to suggest that someone else should pay a tax without offering any details on who exactly it would apply to, how much it would be, what classifications would be included or omitted, which services would or would not fall under the rule, how much would need to be collected and over what period of time. But this is exactly what Commissioner Carr has done, calling out companies by name that he thinks should pay a tax, all while providing no details or proposal and referencing a paper filled with factual errors. I have contacted the Commissioner’s office and offered him an opportunity to come to the next Streaming Summit at NAB Show, October 11-12, and debate this topic with me in person. If he accepts, I will only focus on the facts, not opinions.

Unpacking the Edge Compute Hype: What It Really Is and Why It’s Important


The tech industry has always been a prolific producer of hype and right now, no topic is discussed more generically than “edge” and “edge compute”. Everywhere you turn these days, vendors are promoting their “edge solution” with almost no definition, no real-world use cases, no metrics around deployments at scale and no details on how much revenue is being generated. Making matters worse, some industry analysts are publishing reports saying the size of the “edge” market is already in the billions. These reports have no real methodology behind the numbers, don’t define what services they cover and point to unrealistic growth and use cases. It’s why the moment edge compute use cases come up, people fall back on the same examples of IoT, connected cars and augmented reality.

Part of the confusion in the market is due to rampant “edge-washing”, which is vendors seeking to rebrand their existing platforms as edge solutions. Some cloud service providers call their points of presence the edge, and CDN platforms are marketed as edge platforms when, in reality, traditional CDN use cases are not taking advantage of any edge compute functions. Some mobile communications providers even refer to their cell towers as the edge, and a few cloud-based encoding vendors are now using the word “edge” in their services.

Growing interest among the financial community in anything edge-related is helping fuel this phenomenon, often with very little understanding of what it all means or, more importantly, doesn’t mean. Look at the valuation trends for “edge” or “edge computing” vendors and you’ll see there is plenty of incentive for companies to brand themselves as edge solution providers. This confusion makes it difficult to separate functional fact from marketing fiction, which is why I’m going to be writing a lot of blog posts this year around edge and edge compute topics, with the goal of separating facts from fiction.

The biggest problem is that many vendors use the phrases “edge” and “edge compute” interchangeably, and they are not the same thing. Put simply, the edge is a location: the place in the network that is closest to the end user or device. We all know this term, and Akamai has been using it for a long time to reference a physical location in their network. Edge computing refers to a compute model where application workloads run at an edge location, where logic and intelligence are needed. It’s a distributed approach that shifts computing closer to the user or the device being used. This contrasts with the more common scenario where applications run in a centralized data center or in the cloud, which is really just a remote data center usually run by a third party. Edge compute is a service; the “edge” isn’t. You can’t buy “edge”. You are buying CDN services that leverage an edge-based network architecture and perform work at the distributed points of presence closest to where the digital and physical intersect, which excludes basic caching and forwarding CDN workflows.
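To make the distinction concrete, here is a minimal, hypothetical sketch written in the style of the service-worker “fetch” API used by several JavaScript edge runtimes (Cloudflare Workers, for example). The header names and logic are assumptions for illustration, not any vendor’s documented configuration; the point is simply that an edge compute function runs per-request logic at the PoP, while plain edge delivery just caches and forwards bytes.

```typescript
// Hypothetical edge function using the service-worker style "fetch" event
// API found in several JavaScript edge runtimes. All names below are
// illustrative assumptions, not a specific vendor's configuration.

addEventListener("fetch", (event: any) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request: Request): Promise<Response> {
  // Plain "edge delivery" would stop here: pull from cache or origin and
  // forward the bytes unchanged.
  const originResponse = await fetch(request);

  // "Edge compute" adds per-request logic at the PoP itself, for example
  // tagging the response and adjusting caching without a trip to origin.
  const response = new Response(originResponse.body, originResponse);
  response.headers.set("x-served-from", "edge-pop"); // assumed header name
  response.headers.set("cache-control", "public, max-age=60");
  return response;
}
```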

When you are deploying an application, the traditional approach would be to host it on servers in your own data center. More recently, it is likely you would instead choose to host the application in the cloud with a cloud service provider like Amazon Web Services, Microsoft Azure or Google Cloud Platform. While cloud service providers do offer regional PoPs, most organizations still centralize their deployments in a single region or a small number of regions.

But what if your application serves users in New York, Rome, Tokyo, Guangzhou, Rio de Janeiro and points in between? The end-user journey to your content begins on the network of their ISP or mobile service provider, then continues over the Internet to whichever cloud PoP or data center the application is running in, which may be half a world away. From an architectural viewpoint, you have to think of all of this as your application infrastructure, and many times the application itself is running far, far away from those users. The idea and value of edge computing turns this around: it pushes the application closer to the users, offering the potential to reduce latency and network congestion and to deliver a better user experience.
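The physics alone makes the point. As a rough back-of-the-envelope sketch (assuming light propagates through fiber at roughly 200,000 km/s and ignoring routing, queuing and TLS overhead, all of which only add to the total), the distance between the user and the application sets a hard floor on round-trip time:

```typescript
// Back-of-the-envelope propagation delay: distance puts a hard floor on
// round-trip time. Light in fiber travels roughly 200,000 km/s, i.e. about
// 200 km per millisecond; real paths are longer and add further overhead.

const FIBER_KM_PER_MS = 200;

function minRoundTripMs(distanceKm: number): number {
  // Out and back, propagation only.
  return (2 * distanceKm) / FIBER_KM_PER_MS;
}

// Approximate great-circle distances, rounded for illustration:
console.log(minRoundTripMs(10900)); // New York to Tokyo: ~109 ms floor
console.log(minRoundTripMs(100));   // user to a metro-area edge PoP: ~1 ms floor
```

Real-world RTTs sit well above these floors, which is exactly the gap edge computing tries to close by moving the work to a PoP a few milliseconds away from the user.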

Computing infrastructure has really evolved over the years. It began with “bare metal,” physical servers running a single application. Then virtualization came into play, using software to emulate multiple virtual machines hosting multiple operating systems and applications on a single physical server. Next came containers, introducing a layer that isolates the application from the operating system, allowing applications to be easily portable across different environments while ensuring uniform operation. All of these computing approaches can be employed in a data center or in the cloud.

In recent years, a new computing alternative has emerged called serverless. This is a zero-management computing environment where an organization can run applications without up-front capital expense and without having to manage the infrastructure. While it is used in the cloud (and could be used in a data center, though this would defeat the “zero management” benefit), serverless computing is ideal for running applications at the edge. Of course, where this computing occurs matters when delivering streaming media. Each computing location, whether on-premises, in the cloud or at the edge, has its pros and cons.

  • On-premises computing, such as in an enterprise data center, offers full control over the infrastructure, including the storage and security of data. But it requires substantial capital expense and costly management. It also means you may need to keep reserve server capacity to handle spikes in demand, capacity that sits idle most of the time, which is an inefficient use of resources. And an on-premises infrastructure will struggle to deliver low-latency access to users who may be halfway around the world.
  • Centralized cloud-based computing eliminates the capital expense and reduces the management overhead, because there are no physical servers to maintain. Plus, it offers the flexibility to scale capacity quickly and efficiently to meet changing workload demands. However, since most organizations centralize their cloud deployments in a single region, this can limit performance and create latency issues.
  • Edge computing offers all the advantages of cloud-based computing plus additional benefits. Application logic executes closer to the end user or device via a globally distributed infrastructure. This dramatically reduces latency and avoids network congestion, with the goal of providing an enhanced and consistent experience for all users.

There is a trade-off with edge computing, however. The distributed nature of the edge translates into a lower concentration of computing capacity in any one location, which limits the types of workloads that can run effectively at the edge. You’re not going to run your enterprise ERP or CRM application in a cell tower, since there is no business or performance benefit. And this leads to the biggest unknown in the market today: which application use cases will best leverage edge compute resources? As an industry, we’re still finding that out.
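To ground this in a streaming context, here is one hypothetical example of the kind of lightweight, per-request workload that does fit the edge model: validating a signed playback token before a video segment request is allowed to travel on to cache or origin. The token format, secret and handler names below are invented for this sketch, under the same service-worker-style runtime assumption as the earlier example; it is not any vendor’s implementation.

```typescript
// Hypothetical edge workload: check a signed playback token at the PoP
// before serving (or fetching) a video segment. The token format, secret
// and URL layout are assumptions made up for this sketch.

async function verifyToken(token: string, path: string, secret: string): Promise<boolean> {
  // Assumed token format: "<expiryEpochSeconds>.<hexHmacOfPathAndExpiry>"
  const [expiry, signature] = token.split(".");
  if (!expiry || !signature) return false;
  if (Number(expiry) < Date.now() / 1000) return false; // token expired

  const encoder = new TextEncoder();
  const key = await crypto.subtle.importKey(
    "raw", encoder.encode(secret), { name: "HMAC", hash: "SHA-256" }, false, ["sign"]
  );
  const mac = await crypto.subtle.sign("HMAC", key, encoder.encode(`${path}:${expiry}`));
  const expected = Array.from(new Uint8Array(mac))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
  return expected === signature;
}

async function handleSegmentRequest(request: Request): Promise<Response> {
  const url = new URL(request.url);
  const token = url.searchParams.get("token") ?? "";

  // Reject bad requests at the edge; only valid ones continue to cache/origin.
  if (!(await verifyToken(token, url.pathname, "demo-secret"))) {
    return new Response("Forbidden", { status: 403 });
  }
  return fetch(request);
}
```

A production version would at minimum use a constant-time comparison for the signature check and pull the secret from the runtime’s configuration rather than hard-coding it, but the shape of the workload is the point: small, per-request logic that benefits from running as close to the user as possible.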

From a customer use case and deployment standpoint, the edge computing market is so small today that both Akamai and Fastly have told Wall Street that their edge compute services won’t generate significant revenue in the near term. With regards to their edge compute services, during their Q1 earnings call, Fastly’s CFO said, “2021 is much more just learning what are the use cases, what are the verticals that we can use to land as we lean into 2022 and beyond.” Akamai, which gave out a lot of CAGR numbers at their investor day in March, said they are targeting revenue from “edge applications” to grow 30% over 3-5 years, inside their larger “edge” business, which they expect to grow 2-5% over the same 3-5 year period.

Analysts calling CDN vendors “edge computing-based CDNs” don’t understand that most CDN services being offered are not leveraging any “edge compute” functions inside the network. You don’t need “edge compute” to deliver video streaming, which, as an example, made up 57% of all the bits Akamai delivered in 2020 for their CDN services, or what they call edge delivery. Akamai accurately defines the video streaming they deliver as “edge delivery”, not “edge compute”. Yet some analysts are taking the proper terminology vendors use and swapping it out with their own incorrect terms, which only further adds to the confusion in the market.

In simple terms, edge compute is all about moving logic and intelligence to the edge. Not all services or content need an edge compute component, or need to be stored at or delivered from the edge, so we’ll have to wait to see which applications customers use it for. The goal with edge compute isn’t just to improve the user experience but also to have a way to measure the impact on the business, with defined KPIs. This isn’t well defined today, but it’s coming over the next few years as we see more use cases and adoption.

CDN Limelight Networks Gives Yearly Revenue Guidance, Update on Turnaround

In my blog post from March of this year, I detailed some of the changes Limelight Networks’ new management team is making to set the company back on a path to profitability and accelerated growth. Absent from my post were full-year revenue guidance numbers, as Limelight’s management team was too new at the time to be able to share them with Wall Street. Now, with Limelight having reported Q1 2021 earnings on April 29th, we have better insight into what they expect for the year.

Limelight had revenue of $51.2 million in Q1, down 10% compared to $57.0 million in the first quarter of 2020. This wasn’t surprising since Limelight’s previous management team didn’t address some network performance issues, which resulted in a loss of some traffic. The good news is that Limelight stated during their earnings call that they have since “reduced rebuffer rates by approximately 30%” and “increased network throughput by up to 20% through performance tuning”, and they believe that over the next 90 days they can make additional performance improvements that will “drive increased market share of traffic from our clients.” For the full year, Limelight expects revenue to be in the range of $220M-$230M, with $20M-$25M in capex spend. Limelight had total revenue of $230.2M in 2020, so at the high end of Limelight’s 2021 projection, the business would be roughly flat year-over-year.

New management has made some measurable progress addressing short-term headwinds and identifying what they need to work on going forward. Based on the changes they have already made, the company expects to benefit from annual cash cost savings of approximately $15M. It’s a good start, but turnarounds don’t happen overnight and the new management team has only been inside the organization for 90 days. They need to be given more time, at least two quarters of operating the business, before we can expect to see measurable results and get a sense of what growth could look like in Q4 and going into Q1 of 2022. Limelight also announced during their earnings call that they will hold a strategy update session in early summer to discuss their broader plans to evolve their offerings beyond video, with the goal of taking advantage of their network during off-peak times.

Earnings Recap: Brightcove, Google, FB, Verizon, AT&T, Microsoft, Discovery, Comcast, Dish

Here’s a quick recap that highlights the most important numbers you need to know from last week’s Q1 2021 earnings from Brightcove, Google, FB, Verizon, AT&T (HBO Max), Microsoft, Discovery, Comcast (Peacock TV) and Dish (Sling TV). Later this week I’ll cover earnings from Akamai, Fastly, T-Mobile, Fox, Vimeo, Cloudflare, Roku, ViacomCBS and AMC Networks. Disney and fuboTV report the week of May 10th.

  • Brightcove Q1 2021 Earnings: Revenue of $54.8M, up 18% y/o/y, but nearly flat from Q4 revenue of $53.7M; Expects revenue to decline in Q2 to $49.5M-$50.5M. Full year guidance of $211M-$217M. More details: https://bit.ly/2RgZLVE
  • Alphabet Q1 2021 Earnings: Revenue of $55.31B, up 34% y/o/y ($44.6B in advertising); YouTube ad revenue of $6.01B, up 49% y/o/y; cloud revenue of $4.05B, up 46% y/o/y (lost $974M). No details on YouTube TV subs. Added almost 17,000 employees in the quarter. More details: https://lnkd.in/dNrwSVD
  • Facebook Q1 2021 Earnings: Total revenue of $26.1B, up 22% y/o/y; Monthly active users of 2.85B, up 10% y/o/y; Daily active users of 1.88B, up 8% y/o/y; Expects y/o/y total revenue growth rates in Q3/Q4 to significantly decelerate sequentially as they lap periods of increasingly strong growth. More details: https://bit.ly/3tf2o7J
  • Verizon Q1 2021 Earnings: Lost 82,000 pay TV subscribers; added 98,000 Fios internet customers. Has 3.77M pay TV subs. More details: https://lnkd.in/dWrZ9iP
  • AT&T Q1 2021 Earnings: Added 2.7M domestic HBO Max and HBO subscriber net adds; total domestic subscribers of 44.2M and nearly 64M globally. Domestic HBO Max and HBO ARPU of $11.72. WarnerMedia revenue of $8.5B, up 9.8% y/o/y. More details: https://lnkd.in/eZeFy_8
  • Microsoft Q1 2021 Earnings: Total revenue of $41.7B, up 19% y/o/y; Intelligent Cloud revenue of $17.7B, up 33% y/o/y; Productivity and Business Processes revenue of $13.6B, up 15% y/o/y. More details: https://bit.ly/3tdv5BV
  • Discovery Q1 2021 Earnings: Has 15M paying D2C subs, but won’t say how many are Discovery+; Ad-supported Discovery+ had over $10 in ARPU; Average viewing time of 3 hours per day, per user. More details: https://lnkd.in/dp8DZG4
  • Comcast Q1 2021 Earnings: Peacock has 42M sign-ups to date; Lost 491,000 pay TV subscribers; Peacock TV had $91M in revenue on EBITDA loss of $277M. More details: https://bit.ly/3vGnJZw
  • Dish Q1 2021 Earnings: Lost 230,000 pay TV subs; Lost 100,000 Sling TV subs (has 2.37M in total). More details: https://lnkd.in/dD9za-e
  • Netflix Q1 2021 Earnings: Added 3.98M subs (estimate was for 6M); finished the quarter with 208M subs; operating income of $2B more than doubled y/o/y; will spend over $17B on content this year; Q2 2021 guidance of only 1M net new subs. More details: https://bit.ly/33bsdei
  • Twitter Q1 2021 Earnings: Revenue $1.04B, up 28% y/o/y; Average mDAU was 199M, up 20% y/o/y; mDAU growth to slow in coming quarters, when compared to rates during pandemic. More details: https://lnkd.in/dK9PiJf