Former CEO of KIT Digital, Found Guilty on All Charges, Gets Three Years Probation

Kaleil Isaza Tuzman, the former CEO of KIT Digital who was found guilty of market manipulation, wire fraud, defrauding shareholders, and accounting fraud, was sentenced on September 10th to three years of probation by the judge in the case. This is astonishing, as prosecutors had sought a sentence of 17-1/2 to 22 years in prison. The judge also ordered three years of supervised release.

U.S. District Judge Paul Gardephe sentenced Kaleil to only probation, saying the 10 months Kaleil spent in Colombian prisons was so horrible it would put him at little risk of committing further crimes. “The risk associated with sending Mr. Tuzman back to prison, the risk to his mental health, is just too great,” Gardephe said. “While in many other cases it has been my practice to sentence white-collar defendants for these sorts of crimes to a substantial sentence, in good conscience I can’t do that here.”

It’s a sad state of affairs in our legal system when someone who defrauded investors, lied, caused many employees to lose their jobs and was found guilty on every charge gets probation. Reading through some of the legal filings, though, there appears to be more to the story on what might have impacted his sentence. A Supplemental Sentencing Memorandum filed on July 7th of this year says:

“While incarcerated and during the five years since his release from prison, Kaleil has repeatedly provided material and substantial assistance to the Anti-Corruption Unit of the Colombian Attorney General’s Office, which ultimately resulted in the indictment of a number of government officials in the Colombian National Prison Institute known as “INPEC” (Instituto Nacional Penitenciario y Carcelario)—including the prior warden of La Picota prison, César Augusto Ceballos—on dozens of charges of extortion, assault and murder.” So one wonders if he got a lighter sentence due to the information he was providing to the Colombian government.

On the civil side, Kaleil is still being sued by investors in a hotel project who accuse him of stealing $5.4 million, and in May 2021 a similar suit was filed in U.S. federal court asking for $6 million.

For a history of what went on at KIT Digital, you can read my post here from 2013, “Insiders Detail Accounting Irregularities At KIT Digital, Rumors Of A Possible SEC Fraud Investigation”.

Streaming Services Evaluating Their Carbon Footprint, as Consumers Demand Net-Zero Targets

Right now, almost anyone has access to some sort of video streaming platform that offers the content they value at a satisfactory video quality level, most of the time. But the novelty factor has long worn off and most of the technical improvements are now taken for granted. Of course viewers are increasingly demanding in terms of video quality and absence of buffering, and losing a percentage of viewership due to poor quality means more lost profits than before, but consumers are starting to care about more than just the basics.

Just like in many other industries (think of the car or fashion industries) consumer demands – especially for Generation Z – are now moving beyond “directly observable” features, and sustainability is steadily climbing the pecking order of their concerns. To date, this concern has mostly applied to physical goods, not digital ones, but I wonder whether this may be a blind spot for many in our industry. Remember how many people thought consumers would not care much about where and how their shoes were made? Some large footwear companies sustained heavy losses due to that wrong assumption.

Video streaming businesses should be quick to acknowledge that, whether they like it or not, and whether they believe in global warming or not, they have to have a plan to reach the goal of net-zero emissions. The relevance of sustainability to financial markets, and customer concern about sustainable practices, are here to stay and growing. About half of new capital issues in financial markets are being linked to ESG (Environmental, Social and Governance) targets. Sustainability consistently ranks among the top 5 concerns in every survey of Generation Z consumers when it comes to physical goods, and one could argue it’s only a matter of time before this applies to digital services as well.

Back in 2018 I posted that the growth in demand for video streaming had created a capacity gap and that building more and more data centers, plus stacking them with servers, was not a sustainable solution. Likewise, encoding and compression technology has been plagued with diminishing gains for some time, with each new generation of codec requiring an increase in compute power far greater than the compression efficiency benefit it delivers. Combine that with the exponential growth of video services, the move from SD to HD to 4K, the increase in bit depth for HDR, the dawn of immersive media, and you have a recipe for everything-but-net-zero.

So, what can be done to mitigate the carbon footprint of an activity that is growing by roughly 40% per year and promises to transmit more pixels at higher bitrates crunched with more power-hungry codecs? Recently Netflix pledged to be carbon neutral by 2022, while media companies like Sky have committed to become net zero carbon by 2030. A commonly adopted framework is “Reduce – Retain – Remove”. While many companies accept that they have a duty to “clean up the mess” after polluting, I believe the biggest impact lies in reducing emissions in the first place.

Netflix, on the “Reduce” part of their pledge, aims to reduce emissions by 45% by 2030, and others will surely follow with similar targets. The question is how they can get there. Digital services are starting to review their technology choices to factor in what can be done to reduce emissions. At the forefront of this should be video compression, which typically drives the two most energy-intensive processes in a video delivery workflow: transcoding and delivery.

The trade-off with the latest video compression codecs is that while they increase compression efficiency and reduce energy costs in data transmission (sadly, only for the small fraction of new devices compatible with them), their much higher compute requirements result in increased energy usage for encoding. So the net balance in terms of sustainability is not a slam dunk, especially for operators that deliver video to consumer-owned-and-managed devices such as mobile devices.

One notable option able to improve both video quality and sustainability is MPEG-5 LCEVC, the low-complexity codec-agnostic enhancement recently standardized by MPEG. LCEVC increases the speed at which encoding is done by up to 3x, therefore decreasing electricity consumption in the data center. At the same time, it reduces transmission requirements, and immediately does so for a broad portion of the audience, thanks to the possibility of deploying LCEVC to a large number of existing devices, notably all mobile devices. With some help from the main ecosystem players, LCEVC device coverage may become nearly universal very rapidly.

LCEVC is just one of the available technologies with so-called “negative green premium”, good for the business and good for the environment. Sustainability-enhancing technologies, which earlier may have been fighting for attention among a long list of second-priority profit-optimization interventions, may soon bubble up in priority. The need for sustainability intervention is real, and will only become greater in the next few years, so all available solutions should be brought into play. Netflix says it best from their 2020 ESG report, “If we are to succeed in entertaining the world, we need a habitable, stable world to entertain.”

Real-World Use Cases for Edge Computing Explained: A/B Testing, Personalization and Privacy

In a previous blog post, [Unpacking the Edge Compute Hype: What It Really Is and Why It’s Important] I discussed what edge computing is—and what it is not. Edge computing offers the ability to run applications closer to users, dramatically reducing latency and network congestion, providing a better, more consistent user experience. Growing consumer demand for personalized, high-touch experiences is driving the need to run application functionality at the edge. But that doesn’t mean edge compute is right for every use case.

There are some notable limitations and challenges to be aware of. Many industry analysts are predicting that every type of workload will move to the edge, which is not accurate. Edge compute requires a microservices architecture that doesn’t rely on monolithic code. And the edge is a new destination for code, so best practices and operational standards are not yet well defined or well understood.

Edge compute also presents some unique challenges around performance, security and reliability. Many microservices require response times in the tens of milliseconds, requiring extremely low latency. Yet providing sophisticated user personalization consumes compute cycles, potentially impacting performance. With edge computing services, there is a trade-off between performance and personalization.

Microservices also rely heavily on APIs, which are a common attack vector for cybercriminals, so protecting API endpoints is critical and is easier said than done, given the vast number of APIs. Reliability can be a challenge, given the “spiky” nature of edge applications due to variations in user traffic, especially during large online events that drive up the volume of traffic. Given these realities, which functions are the most likely candidates for edge compute in the near term? I think the best use cases fall into four categories.

A/B Testing
This use case involves implementing logic to support marketing campaigns by routing traffic based on request characteristics and collecting data on the results. This enables companies to perform multivariate testing of offers and other elements of the user experience, refining their appeal. This type of experimental decision logic is typically implemented at the origin, requiring a trip to the origin in order to make the A/B decisions on which content to serve to each user. This round-trip adds latency that decreases page performance for the request. It also adds traffic to the origin, increasing congestion and requiring additional infrastructure to handle the traffic.

Placing the logic that governs A/B testing at the edge results in faster page performance and decreased traffic to origin. Serverless compute resources at the edge determine which content to deliver based on the inbound request. Segment information can be stored in a JavaScript bundle or in a key-value store, with content served from the cache. This decreases page load time and reduces the load on the origin infrastructure, yielding a better user experience.
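To make this concrete, here is a minimal sketch (in TypeScript, using a generic service-worker-style API rather than any specific vendor's edge platform) of what sticky A/B assignment at the edge could look like. The EXPERIMENTS key-value binding, the cookie name and the 50/50 split are hypothetical.

```typescript
// Hypothetical service-worker-style edge handler for A/B assignment.
// The `EXPERIMENTS` key-value binding maps a variant name to pre-rendered
// content; the binding, cookie name and split are illustrative only.

interface KVNamespace {
  get(key: string): Promise<string | null>;
}

declare const EXPERIMENTS: KVNamespace; // assumed edge key-value store

const COOKIE_NAME = "ab_variant";

function pickVariant(request: Request): string {
  // Sticky assignment: reuse the cookie if present, otherwise hash a client
  // identifier so the same user keeps seeing the same variant.
  const cookies = request.headers.get("Cookie") ?? "";
  const match = cookies.match(new RegExp(`${COOKIE_NAME}=(control|treatment)`));
  if (match) return match[1];

  // Assumes the runtime exposes Web Crypto's randomUUID().
  const id = request.headers.get("X-Client-Id") ?? crypto.randomUUID();
  let hash = 0;
  for (const ch of id) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % 2 === 0 ? "control" : "treatment";
}

export async function handleRequest(request: Request): Promise<Response> {
  const variant = pickVariant(request);

  // Serve the variant's content straight from the edge key-value store,
  // avoiding a round trip to origin just to make the A/B decision.
  const body = (await EXPERIMENTS.get(`homepage:${variant}`)) ?? "fallback content";

  return new Response(body, {
    headers: {
      "Content-Type": "text/html",
      "Set-Cookie": `${COOKIE_NAME}=${variant}; Path=/; Max-Age=86400`,
    },
  });
}
```

The key point of the pattern is that the variant decision and the content lookup both happen at the edge, so the origin only sees the aggregated results data.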

Personalization
Companies are continually seeking to deliver more personalized user experiences to increase customer engagement and loyalty in order to drive profitability. Again, the functions of identifying the user and determining which content to present typically reside at the origin. This usually means personalized content is uncacheable, resulting in low offload and a negative impact on performance. Instead, a serverless edge compute service can be used to detect the characteristics of inbound requests, rapidly identifying unique users and retrieving personalized content. This logic can be written in JavaScript at the edge, and personalized content can be stored in a JavaScript bundle or in a key-value store at the edge. Performing this logic at the edge provides highly personalized user experiences while increasing offload, enabling a faster, more consistent experience.
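As a rough sketch (not any specific platform's API), edge personalization along these lines might look like the following; the CONTENT key-value binding, the geo header name and the loyalty cookie are assumptions for illustration.

```typescript
// Hypothetical edge personalization handler: detect request characteristics,
// look up a personalized fragment in an edge key-value store, and fall back
// to a generic cached page. Binding and header names are illustrative.

interface KVNamespace {
  get(key: string): Promise<string | null>;
}

declare const CONTENT: KVNamespace; // assumed edge key-value store

export async function handlePersonalized(request: Request): Promise<Response> {
  // Characteristics available at the edge without touching origin.
  const country = request.headers.get("X-Geo-Country") ?? "US"; // assumed geo header
  const language = (request.headers.get("Accept-Language") ?? "en").slice(0, 2);
  const segment = request.headers.get("Cookie")?.includes("loyalty=gold")
    ? "gold"
    : "default";

  // Fragments are keyed by segment/locale and stored at the edge, so the
  // response stays cacheable per segment instead of per individual user.
  const fragment =
    (await CONTENT.get(`promo:${segment}:${country}:${language}`)) ??
    (await CONTENT.get("promo:default"));

  return new Response(fragment ?? "<div>Welcome!</div>", {
    headers: { "Content-Type": "text/html", "Cache-Control": "private, max-age=60" },
  });
}
```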

Privacy Compliance
Businesses are under growing pressures to safeguard their customers’ privacy and comply with an array of regulations, including GDPR, CCPA, APPI, and others, to avoid penalties. Compliance is particularly challenging for data over which companies may have no control. One important aspect of compliance is tracking consent data. Many organizations have turned to the Transparency and Consent Framework (TCF 2.0) developed by the Interactive Advertising Bureau (IAB) as an industry standard for sending and verifying user consent.

Deploying this functionality as a microservice at the edge makes a lot of sense. When the user consents to tracking, state-tracking cookies are added to the session that enable a personalized user experience. If the user does not consent, the cookie is discarded and the user has a more generic experience that does not involve personal information. Performing these functions at the edge improves offload and enables cacheability, allowing extremely rapid lookups. This improves the user experience while helping ensure privacy compliance.
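A simplified sketch of that consent gate at the edge could look like this; the cookie names and the consent check are placeholders standing in for a full TCF 2.0 consent-string decode.

```typescript
// Hypothetical consent check at the edge: read a TCF-style consent cookie,
// and only attach tracking/personalization cookies when consent is present.
// Cookie names and the consent-parsing shortcut are illustrative assumptions.

function hasTrackingConsent(request: Request): boolean {
  // A real deployment would decode the TCF 2.0 consent string and check the
  // relevant purposes; here a simple flag cookie stands in for that logic.
  const cookies = request.headers.get("Cookie") ?? "";
  return /euconsent-v2=[^;]+/.test(cookies) && !/consent_denied=1/.test(cookies);
}

export async function handleWithConsent(request: Request): Promise<Response> {
  const upstream = await fetch(request); // page fetch, served from edge cache when possible
  const response = new Response(upstream.body, upstream); // copy so headers are mutable

  if (hasTrackingConsent(request)) {
    // Consented: a state-tracking cookie enables the personalized experience.
    response.headers.append(
      "Set-Cookie",
      "session_state=abc123; Path=/; Secure; HttpOnly; Max-Age=1800",
    );
  } else {
    // Not consented: expire any tracking cookie and serve the generic page.
    response.headers.append("Set-Cookie", "session_state=; Path=/; Max-Age=0");
  }
  return response;
}
```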

Third-Party Services
Many companies offer “productized” services designed to address specific, high-value needs. For example, the A/B testing discussed earlier is often implemented using such a third-party service in conjunction with core marketing campaign management applications. These third-party services are often tangential to the user’s request flow. When implemented in the critical path of the request flow, they add latency that can affect performance. Moreover, scale and reliability are beyond your control, which means the user experience is too. Now imagine this third-party code running natively on the same serverless edge platform handling the user’s originating request. Because the code is local, latency is reduced. And the code is now able to scale to meet changing traffic volumes, improving reliability.

One recent example of this was the partnership between Akamai and the Queue-It virtual waiting room service. The service allows online customers to retain their place in line, while providing a positive waiting experience and reducing the risk of a website crash due to sudden spikes in volume. The partnership was focused specifically on providing an edge-based virtual waiting room solution to handle traffic during the rush to sign up for COVID vaccinations. The same approach could be used for any online event where traffic spikes are expected, such as ticket reservations for a sought-after concert or theater event, now that these venues are poised to open back up.

Conclusion
These examples highlight how important it is to understand and think carefully about what functions make sense to run at the edge. It’s true that some of these use cases may be met by traditional centralized infrastructures. But consider the reduction in overhead, the speed and efficiency of updating functionality, and the performance advantages gained by executing them at the edge. These benefit service providers and users alike. Just as selecting the right applications for edge compute is critical, so is working with the right edge provider. In this regard, proximity matters.

Generally speaking, the closer edge compute resources are to the user, the better. Beware of service providers running code in more centralized nodes that they call “the edge.” And be sure they can deliver the performance, reliability and security needed to meet your service objectives, based on the methodology you choose, while effectively managing risk.

The edge compute industry and market for these services is an evolving landscape that’s only just starting out. But there is a growing list of use cases that can benefit now from edge compute deployed in a thoughtful way. We should expect to see more use cases in the next 18 months as edge computing adoption continues and companies look at ways to move logic and intelligence to the edge.

Streaming Summit at NAB Show Returns, Call For Speakers Now Open

It’s back! I am happy to announce the return of the NAB Show Streaming Summit, taking place October 11-12 in Las Vegas. The call for speakers is now open and lead gen opportunities are available. The show will be a hybrid event this year, with both in-person and remote presentations. See the website for all the details or contact me with your ideas on how you want to be involved.

The topics covered will be created based on the submissions sent in, but the show covers both business and technology topics including: bundling of content; codecs; transcoding; live streaming; video advertising; packaging and playback; monetization of video; cloud-based workflows; direct-to-consumer models; the video ad stack and other related topics. The Summit does not cover topics pertaining to video editing, pre/post production, audio-only applications, content scripts and talent, content rights and contracts, or video production hardware.

Please reach out to me at (917) 523-4562 or via email at any time if you have questions about the submission process or want to discuss an idea before you submit. I always prefer speaking directly to people about their ideas so I can help tailor your submission to what works best. Interested in moderating a session? Please contact me ASAP!

Apple Using Akamai, Fastly, Cloudflare For Their New iCloud Private Relay Feature

On Monday, Apple announced some new privacy features in iCloud, one of which they are calling Private Relay. The way it works is that when you go to a website using Safari, iCloud Private Relay takes your IP address to connect you to the website and then encrypts the URL so that app developers, and even Apple, don’t know what website you are visiting. The IP and encrypted URL then travel to an intermediary relay station run by what Apple calls a “trusted partner”. In a media interview published yesterday, Apple would not say who the trusted partners are, but I can confirm, based on public details (as shown below; Akamai on the left, Fastly on the right), that Akamai, Fastly and Cloudflare are being used.

On Fastly’s Q1 earnings call, the company said they expect revenue growth to be flat quarter-over-quarter going into Q2, but that revenue growth would accelerate in the second half of this year. The company also increased their revenue guidance range to $380 million to $390 million, up from $375 million to $380 million. Based on the guidance numbers, Fastly would be looking at a pretty large ramp of around 15% sequential growth in the third and fourth quarters. Fastly didn’t give any indication of why they thought revenue might ramp so quickly, but did say there are “a lot of really important opportunities that are coming our way.” By itself, this new traffic generated from Apple isn’t that large when it comes to overall revenue and is being shared amongst three providers. The news comes at an interesting time, as this morning Fastly had a major outage on their network that lasted about an hour.

Rebuttal to FCC Commissioner: OTT, Cloud and Gaming Services Should Not Pay for Broadband Buildout

Brendan Carr, commissioner of the Federal Communications Commission (FCC), published an op-ed on Newsweek entitled “Ending Big Tech’s Free Ride.” In it, he suggests that companies such as Facebook, Apple, Amazon, Netflix, Microsoft, Google and others should pay a tax for the build-out of broadband networks to reach every American. In his post he blames streaming OTT services, as well as gaming services like Xbox and cloud services like AWS, for the volume of traffic on the Internet. There are a lot of factual problems with his post from both a business and technical standpoint, which is always one of the main problems when regulators get involved in topics like this: they don’t focus on the facts but rather on their “opinions” disguised as facts. The Commissioner references a third-party post-doctoral paper as the basis for his argument, a paper that contains many factual errors when it comes to numbers disclosed by public companies, some of which I highlight below.

The federal government currently collects roughly $9 billion a year through a tax on traditional telephone services—both wireless and wireline. That pot of money, known as the Universal Service Fund, is used to support internet builds in rural areas. The Commissioner suggests that consumers should not have to pay that tax on their phone bill for the buildout of broadband and that the tax should be paid by large tech companies instead. He says that tech companies have been getting a “free ride” and have “avoided” paying their fair share. He writes that “Facebook, Apple, Amazon, Netflix and Google generated nearly $1 trillion in revenues in 2020 alone,” saying it “would take just 0.009 percent of those revenues” to pay for the tax. The Commissioner is ignoring the fact that in 2020, Apple made 60% of their revenue overseas and that much of it comes from hardware, not online services. Apple’s “services” revenue, as the company defines it, made up 19% of their total 2020 revenue. So that $1 trillion number is much, much lower if you’re counting revenue from actual online services that use broadband to deliver content.

If the Commissioner wants to tax a company that makes hardware, why isn’t Ford Motor Company on the list? They make physical products but also have a “mobility” division that relies on broadband infrastructure for their range of smart city services. Without that broadband infrastructure, Ford would not be able to sell mobility services to cities or generate any revenue for their mobility division. You could extend this notion to all kinds of companies that make revenue from physical goods, or to commerce companies like eBay, Etsy, Target and others. Yet the Commissioner specifically calls out video streaming as the problem and references a paper written in March of this year as his evidence. The problem is that the paper is full of factually wrong numbers and definitions, and can’t even get the pricing of streaming services right, something the Commissioner clearly hasn’t noticed.

The paper says YouTube is “on track to earn more than $6 billion in advertising revenue for 2020”. No, YouTube generated $19.7 billion in advertising revenue in 2020. The paper also says that Hulu’s live service costs $4.99 a month, when in actuality it costs $65 a month. It also says that the on-demand version of Hulu costs $11.99 a month when it costs $5.99 a month. There are many instances of wrong numbers like this in the report that can’t be debated; they are simply wrong. Full stop. The authors say the goal of the paper is to look at the “challenge of four rural broadband providers operating fiber to the home networks to recover the middle mile network costs of streaming video entertainment.” They say that “subscribers pay about $25 per month subscriber to video streaming services to Netflix, YouTube, Amazon Prime, Disney+, and Microsoft.” That’s not accurate. YouTube is free. If they mean YouTube TV, that costs $65 a month. The paper also uses words like “presumed” and “assumptions” when making its arguments, which aren’t based on any facts.

The paper also points out that the data and methodology used to reach its conclusions “has limitations” since traffic is measured “differently” amongst broadband providers. So only a slice of the overall data is being used in the report, and the collection methodology isn’t consistent amongst all the providers. We’re only seeing a small window into the data being used in the report, yet the Commissioner is referencing this paper as his “evidence”. The paper also references industry terms from as far back as 2012, saying they have been “adapted” to today, which is always a red flag for accuracy.

The paper also incorrectly states that “The video streaming entertainment providers do not contribute to middle or last mile network costs. The caching services provided by Netflix and YouTube are exclusionary to the proprietary services of these platforms and entail additional costs for rural broadband providers to participate.” Netflix and others have been putting caches inside ISP networks for FREE, which saves the ISP money on transit. Apple will also work with ISPs via their Apple Edge Cache program. For anyone to suggest that big tech companies don’t spend money to build out infrastructure for consumers’ benefit is simply false. Some ISPs choose not to work with content companies offering physical or virtual caches, but that’s based on a business decision they have made on their own. In addition, when a consumer signs up for a connection to the Internet from an ISP, the ISP is in the business of adding capacity to support whatever content the consumer wants to stream. That is the ISP’s business, and there is no valid argument that an ISP should not have to spend money to support the user.

The paper states that “Rural broadband providers generally operate at close to breakeven with little to no profit margin. This contrasts with the double-digit profit margins of the Big Streamers.” Disney’s direct-to-consumer streaming division, which includes Disney+, Hulu, ESPN+ and Hotstar, lost $466 million in Q1 of this year. What “double-digit profit margins” is the paper referencing? Again, they don’t know the numbers. The paper also gets wrong many of its explanations of what a CDN is, how it works, and how companies like Netflix connect their network to an ISP like Comcast. The paper also shows the logos of Hulu and Disney+ on a chart, listing them under the Internet “backbone” category, when of course the parent owner of those services, The Walt Disney Company, doesn’t own or operate a backbone of any kind. The paper argues that since rural ISPs have no scale, they can’t launch “streaming services of their own” like AT&T has. Of course, this is 100% false; there are many third-party companies in the market that have packaged together content, ready to go, that any ISP can re-sell as a bundle to their subscribers, no matter how many subscribers they have.

According to a 2020 report from the Government Accountability Office (GAO), the FCC’s number one challenge in targeting and identifying unserved areas for broadband deployment was the accuracy of the FCC’s own broadband deployment data. Congress recently provided the FCC with $98 million to fund more precise and granular maps. You read that right, the FCC was given $98 million to create maps. Acting FCC Chairwoman Jessica Rosenworcel said these maps could be produced in “a few months,” but that estimate has now been changed to 2022. Some Senators have taken notice of the delay and have demanded answers from the FCC.

It’s easy to suggest that someone else should pay a tax without offering any details on who exactly it would apply to, how much it would be, what the classifications are to be included or omitted, which services would or would not fall under the rule, how much would need to be collected and over what period of time. But this is exactly what Commissioner Carr has done by calling out companies, by name, that he thinks should pay a tax. All while providing no details or proposal and referencing a paper filled with factual errors. I have contacted the Commissioner’s office and offered him an opportunity to come to the next Streaming Summit at NAB Show, October 11-12, and debate this topic with me in-person. If accepted, I will only focus on the facts, not opinions.

Unpacking the Edge Compute Hype: What It Really Is and Why It’s Important


The tech industry has always been a prolific producer of hype, and right now no topic is mentioned more generically than “edge” and “edge compute”. Everywhere you turn these days, vendors are promoting their “edge solution”, with almost no definition, no real-world use cases, no metrics around deployments at scale and a lack of details on how much revenue is being generated. Making matters worse, some industry analysts are publishing reports saying the size of the “edge” market is already in the billions. These reports have no real methodology behind the numbers, don’t define what services they are covering and tout unrealistic growth and use cases. It’s why the moment edge compute use cases are mentioned, people always use the examples of IoT, connected cars and augmented reality.

Part of the confusion in the market is due to rampant “edge-washing”, which is vendors seeking to rebrand their existing platforms as edge solutions. Similar to how some cloud service providers call their points of presence the edge, or how CDN platforms are marketed as edge platforms when in reality the traditional CDN use cases are not taking advantage of any edge compute functions. You also see some mobile communications providers referring to their cell towers as the edge, and even a few cloud-based encoding vendors are now using the word “edge” in their services.

Growing interest among the financial community in anything edge-related is helping fuel this phenomenon, with very little understanding of what it all means, or more importantly, doesn’t mean. Look at the valuation trends for “edge” or “edge computing” vendors and you’ll see there is plenty of incentive for companies to brand themselves as an edge solution provider. This confusion makes it difficult to separate functional fact from marketing fiction. To help dispel the confusion, I’m going to be writing a lot of blog posts this year around edge and edge compute topics with the goal of separating facts from fiction.

The biggest problem is that many vendors are using the phrases “edge” and “edge compute” interchangeably, and they are not the same thing. Put simply, the edge is a location, the place in the network that is closest to where the end user or device is. We all know this term, and Akamai has been using it for a long time to reference a physical location in their network. Edge computing refers to a compute model where application workloads occur at an edge location, where logic and intelligence is needed. It’s a distributed approach that shifts the computing closer to the user or device being used. This contrasts with the more common scenario where applications run in a centralized data center or in the cloud, which is really just a remote data center usually run by a third party. Edge compute is a service; the “edge” isn’t. You can’t buy “edge”; you are buying CDN services that simply leverage an edge-based network architecture to perform work at the distributed points of presence closest to where the digital and physical intersect. This excludes basic caching and forwarding CDN workflows.

When you are deploying an application, the traditional approach would be to host that application on servers in your own data center. More recently, it is likely you would instead choose to host the application in the cloud, with a cloud service provider like Amazon Web Services, Microsoft Azure or the Google Cloud Platform. While cloud service providers do offer regional PoPs, most organizations typically still centralize in a single or small number of regions.

But what if your application serves users in New York, Rome, Tokyo, Guangzhou, Rio de Janeiro, and points in between? The end-user journey to your content begins on the network of their ISP or mobile service provider, then continues over the Internet to whichever cloud PoP or data center the application is running on, which may be half a world away. From an architectural viewpoint, you have to think of all of this as your application infrastructure, and many times the application itself is running far, far away from those users. The idea and value of edge computing turns this around. It pushes the application closer to the users, offering the potential to reduce latency and network congestion, and to deliver a better user experience.

Computing infrastructure has really evolved over the years. It began with “bare metal,” physical servers running a single application. Then virtualization came into play, using software to emulate multiple virtual machines hosting multiple operating systems and applications on a single physical server. Next came containers, introducing a layer that isolates the application from the operating system, allowing applications to be easily portable across different environments while ensuring uniform operation. All of these computing approaches can be employed in a data center or in the cloud.

In recent years, a new computing alternative has emerged called serverless. This is a zero-management computing environment where an organization can run applications without up-front capital expense and without having to manage the infrastructure. While it is used in the cloud (and could be in a data center—though this would defeat the “zero management” benefit), serverless computing is ideal for running applications at the edge; a minimal sketch of what such an edge function can look like follows the list below. Of course, where this computing occurs matters when delivering streaming media. Each computing location, on-premises, in the cloud and at the edge, has its pros and cons.

  • On-premises computing, such as in an enterprise data center, offers full control over the infrastructure, including the storage and security of data. But it requires substantial capital expense and costly management. It also means you may need reserve server capacity to handle spikes in demand, capacity that sits idle most of the time, which is an inefficient use of resources. And an on-premises infrastructure will struggle to deliver low-latency access to users who may be halfway around the world.
  • Centralized cloud-based computing eliminates the capital expense and reduces the management overhead, because there are no physical servers to maintain. Plus, it offers the flexibility to scale capacity quickly and efficiently to meet changing workload demands. However, since most organizations centralize their cloud deployments to a single region, this can limit performance and create latency issues.
  • Edge computing offers all the advantages of cloud-based computing plus additional benefits. Application logic executes closer to the end user or device via a globally distributed infrastructure. This dramatically reduces latency and avoids network congestion, with the goal of providing an enhanced and consistent experience for all users.
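Below is the minimal serverless edge function sketch mentioned above. It is hypothetical and vendor-neutral, showing the basic pattern of answering lightweight requests locally and proxying the rest to a centralized origin; the origin URL and routes are made up.

```typescript
// Minimal serverless-style edge function: answer what it can locally and
// forward the rest to origin. The origin URL, routes and cacheability
// assumptions are illustrative, not a specific provider's implementation.

const ORIGIN = "https://origin.example.com"; // hypothetical centralized deployment

export async function handle(request: Request): Promise<Response> {
  const url = new URL(request.url);

  // Lightweight logic that never needs the origin: health checks, redirects,
  // header-based routing and similar decisions run entirely at the edge.
  if (url.pathname === "/healthz") {
    return new Response("ok", { status: 200 });
  }
  if (url.pathname.startsWith("/legacy/")) {
    return Response.redirect(`${ORIGIN}${url.pathname.replace("/legacy", "")}`, 301);
  }

  // Everything else is proxied (simplified to GET-style requests here), so
  // only these requests pay the long round trip to the centralized origin.
  return fetch(
    new Request(`${ORIGIN}${url.pathname}${url.search}`, {
      method: request.method,
      headers: request.headers,
    }),
  );
}
```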

There is a trade-off with edge computing, however. The distributed nature of the edge translates into a lower concentration of computing capacity in any one location. This presents limitations for what types of workloads can run effectively at the edge. You’re not going to be running your enterprise ERP or CRM application in a cell tower, since there is no business or performance benefit. And this leads to the biggest unknown in the market today, that being, which application use cases will best leverage edge compute resources? As an industry, we’re still finding that out.

From a customer use case and deployment standpoint, the edge computing market is so small today that both Akamai and Fastly have told Wall Street that their edge compute services won’t generate significant revenue in the near-term. With regards to their edge compute services, during their Q1 earnings call, Fastly’s CFO said, “2021 is much more just learning what are the use cases, what are the verticals that we can use to land as we lean into 2022 and beyond.” Akamai, which gave out a lot of CAGR numbers at their investor day in March, said they are targeting revenue from “edge applications” to grow 30% over 3-5 years, inside their larger “edge” business, which has expected overall growth of 2-5% over the same period.

Analysts that are calling CDN vendors “edge computing-based CDNs” don’t understand that most CDN services being offered are not leveraging any “edge compute” services inside the network. You don’t need “edge compute” to deliver video streaming, which, as an example, made up 57% of all the bits Akamai delivered in 2020 for their CDN services, or what they call edge delivery. Akamai accurately defines the video streaming they deliver as “edge delivery”, not “edge compute”. Yet some analysts are taking the proper terminology vendors are using and swapping it out with their own incorrect terms, which only further adds to the confusion in the market.

In simple terms, edge compute is all about moving logic and intelligence to the edge. Not all services or content need to have an edge compute component, or be stored at or delivered from the edge, so we’ll have to wait and see which applications customers use it for. The goal with edge compute isn’t just about improving the user experience but also having a way to measure the impact on the business, with defined KPIs. This isn’t well defined today, but it’s coming over the next few years as we see more use cases and adoption.

CDN Limelight Networks Gives Yearly Revenue Guidance, Update on Turnaround

In my blog post from March of this year I detailed some of the changes Limelight Networks’ new management team is making to set the company back on a path to profitability and accelerated growth. Absent from my post were full-year revenue guidance numbers, as Limelight’s management team was too new at the time to be able to share them with Wall Street. Now, with Limelight having reported Q1 2021 earnings on April 29th, we have better insight into what they expect for the year.

Limelight had revenue of $51.2 million in Q1, down 10%, compared to $57.0 million in the first quarter of 2020. This wasn’t surprising since Limelight’s previous management team didn’t address some network performance issues that resulted in a loss of some traffic. The good news is that Limelight stated during their earnings that they have since “reduced rebuffer rates by approximately 30%”, “increased network throughput by up to 20% through performance tuning” and believe that over the next 90-days they can create additional performance improvements that will “drive increased market share of traffic from our clients.” For the full year, Limelight expects revenue to be in the range of $220M-$230M, while having a $20M-$25M Capex spend. Limelight had total revenue of $230.2M in 2020, so at the high-end of Limelight’s 2021 projection, the growth of the business would be flat year-over-year.

New management has made some measurable progress addressing some of their short-term headwinds and identifying what they need to work on going forward. Based on some of the changes they have already made, the company expects to benefit from annual cash cost savings of approximately $15M. It’s a good start, but turnarounds don’t happen overnight and the new management team has only been inside the organization for 90 days. They need to be given more time, at least two quarters of operating the business, before we can expect to see some measurable results and see what growth could look like in Q4 and going into Q1 of 2022. Limelight also announced during their earnings that they will be holding a strategy update session in early summer to discuss their broader plans to evolve their offerings beyond video, with the goal of taking advantage of their network during low-peak times.

Earnings Recap: Brightcove, Google, FB, Verizon, AT&T, Microsoft, Discovery, Comcast, Dish

Here’s a quick recap that highlights the most important numbers you need to know from last week’s Q1 2021 earnings from Brightcove, Google, FB, Verizon, AT&T (HBO Max), Microsoft, Discovery, Comcast (Peacock TV) and Dish (Sling TV). Later this week I’ll cover earnings from Akamai, Fastly, T-Mobile, Fox, Vimeo, Cloudflare, Roku, ViacomCBS and AMC Networks. Disney and fuboTV report the week of May 10th.

  • Brightcove Q1 2021 Earnings: Revenue of $54.8M, up 18% y/o/y, but nearly flat from Q4 revenue of $53.7M; Expects revenue to decline in Q2 to $49.5M-$50.5M. Full year guidance of $211M-$217M. More details: https://bit.ly/2RgZLVE
  • Alphabet Q1 2021 Earnings: Revenue of $55.31B, up 34% y/o/y ($44.6B in advertising); YouTube ad revenue of $6.01B, up 49% y/o/y; cloud revenue of $4.05B, up 46% y/o/y (lost $974M). No details on YouTube TV subs. Added almost 17,000 employees in the quarter. More details: https://lnkd.in/dNrwSVD
  • Facebook Q1 2021 Earnings: Total revenue of $26.1B, up 22% y/o/y; Monthly active users of 2.85B, up 10% y/o/y; Daily active users of 1.88B, up 8% y/o/y; Expects y/o/y total revenue growth rates in Q3/Q4 to significantly decelerate sequentially as they lap periods of increasingly strong growth. More details: https://bit.ly/3tf2o7J
  • Verizon Q1 2021 Earnings: Lost 82,000 pay TV subscribers; added 98,000 Fios internet customers. Has 3.77M pay TV subs. More details: https://lnkd.in/dWrZ9iP
  • AT&T Q1 2021 Earnings: Added 2.7M domestic HBO Max and HBO subscriber net adds; total domestic subscribers of 44.2M and nearly 64M globally. Domestic HBO Max and HBO ARPU of $11.72. WarnerMedia revenue of $8.5B, up 9.8% y/o/y. More details: https://lnkd.in/eZeFy_8
  • Microsoft Q1 2021 Earnings: Total revenue of $41.7B, up 19% y/o/y; Intelligent Cloud revenue of $17.7B, up 33% y/o/y; Productivity and Business Processes revenue of $13.6B, up 15% y/o/y. More details: https://bit.ly/3tdv5BV
  • Discovery Q1 2021 Earnings: Has 15M paying D2C subs, but won’t say how many are Discovery+; Ad supported Discovery+ had over $10 in ARPU; Average viewing time of 3 hours per day, per user. More details: https://lnkd.in/dp8DZG4
  • Comcast Q1 2021 Earnings: Peacock Has 42M Sign-Ups to Date; Lost 491,000 pay TV subscribers; Peacock TV had $91M in revenue on EBITDA loss of $277M. More details: https://bit.ly/3vGnJZw
  • Dish Q1 2021 Earnings: Lost 230,000 pay TV subs; Lost 100,000 Sling TV subs (has 2.37M in total). More details: https://lnkd.in/dD9za-e
  • Netflix Q1 2021 Earnings: Added 3.98M subs (estimate was for 6M); finished the quarter with 208M subs; operating income of $2B more than doubled y/o/y; will spend over $17B on content this year; Q2 2021 guidance of only 1M net new subs. More details: https://bit.ly/33bsdei
  • Twitter Q1 2021 Earnings: Revenue $1.04B, up 28% y/o/y; Average mDAU was 199M, up 20% y/o/y; mDAU growth to slow in coming quarters, when compared to rates during pandemic. More details: https://lnkd.in/dK9PiJf

Too Early To Speculate on The Impact To The CDN Market, With the Sale of Verizon’s Media Platform Business

This morning it was announced that private equity firm Apollo Global Management has agreed to acquire Verizon’s Media assets for $5 billion, in a deal expected to close in the second half of this year. Apollo will pay Verizon $4.25 billion in cash, along with preferred interests of $750 million, and Verizon will keep 10% of the new company, which will be named Yahoo. I’m getting many inquiries as to what this means for the CDN market as a whole, since the Verizon Media Platform business (formerly called Verizon Digital Media Services) is part of the sale.

While part of Verizon’s Media Platform business involves content delivery, due in large part to Verizon’s acquisition of CDN EdgeCast in 2013, it’s far too early to speculate what this means for the larger overall CDN market. The Verizon Media Platform business includes a lot of video functionality outside of just video delivery, with ingestion, packaging, data analytics and a deep ad stack for publishers as part of their offering. What pieces of the overall Verizon Media business Apollo will keep, sell, consolidate or double-down on with further investment is unknown. For now, it’s business as usual for Verizon’s Media Platform business.

Anyone suggesting that this is good for other CDNs as maybe there will be less competition in the long-run, or bad for other CDNs as Apollo could double-down on their investment in CDN and make it a more competitive market, is pure speculation. It’s too early to know what impact this deal may or may not have on the CDN market.

Netflix Misses Subs Estimate: Added 3.98M Subs In Q1, Will Spend Over $17B on Content This Year

Netflix reported their Q1 2021 earnings, adding 3.98M subscribers in the quarter (estimate was for 6M) and finished the quarter with 208M total subscribers. On the positive side, Netflix reported operating income of $2B which more than doubled year-over-year. The company said they will spend over $17B on content this year and anticipates a strong second half with the return of new seasons of some of their biggest hits and film lineup. More details:

  • Q2 guidance of only 1M net new subs
  • Finished Q1 2021 with 208M paid memberships, up 14% year over year, but below guidance forecast of 210M paid memberships
  • Average revenue per membership in Q1 rose by 6% year-over-year
  • Q1 operating income of $2B, vs. $958M in Q1 2020, more than doubled year-over-year. The company exceeded their guidance forecast primarily due to the timing of content spend.
  • Netflix doesn’t believe competitive intensity materially changed in the quarter or was a material factor in the variance as their over-forecast was across all of their regions
  • Netflix believes paid membership growth slowed due to the big Covid-19 pull forward in 2020 and a lighter content slate in the first half of this year, due to Covid-19 production delays

Netflix’s stock is down almost 11% as of 5:12pm ET. Roku is also down 5%, probably seeing an impact from Netflix’s earnings.

The Current State of Ultra-Low Latency Streaming: Little Adoption Outside of Niche Applications, Cost and Scaling Issues Remain

For the past two years we’ve been hearing a lot about low/ultra-low latency streaming, with most of the excitement coming from encoding and CDN vendors looking to up-sell customers on the functionality. But outside of some niche applications, there is very little adoption or demand from customers and that won’t change anytime soon. This is due to multiple factors including a lack of agreed upon definition of what low and ultra-low latency means, the additional cost it adds to the entire streaming stack, scalability issues, and a lack of business value for many video applications.

All the CDNs I have spoken to said that on average, 3% or less of all the video bits they deliver today are using DASH-LL with chunked transfer and chunked encoding, with a few CDNs saying it was as low as 1% or less. While Apple LL-HLS is also an option, there is no real adoption of it as of yet, even though CDNs are building out for it. The numbers are higher when you go to low latency, which some CDNs define as 10 seconds or less, using 1 or 2 second segments, with CDNs saying that on average, it makes up 20% of the total video bits they deliver.

Low latency and ultra-low latency streaming are hard technical problems to solve on the delivery side of the equation. The established protocols (i.e. CMAF and LL-HLS) call for very small segment sizes, which translate into many more requests to the cache servers than a non-low latency stream. This can be much more expensive for legacy edge providers to support, given the age of their deployed hardware. It’s why some CDNs have to run a separate network to support it, since low-latency delivery is very I/O intensive and older hardware doesn’t support it well. As a result, some CDNs don’t have a lot of capacity for ultra-low latency delivery, which means they have to charge customers more to support it. Based on recent pricing I have seen in RFPs, many CDNs charge an extra 15-20% on average, per GB delivered.
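As a back-of-the-envelope illustration of why this is so I/O intensive, consider how segment duration drives request volume; the viewer count and requests-per-segment figures below are assumptions for illustration, not measurements.

```typescript
// Rough illustration of how segment duration drives request volume at the
// edge. All numbers are illustrative assumptions, not vendor data.

function requestsPerSecond(
  concurrentViewers: number,
  segmentSeconds: number,
  requestsPerSegment: number, // e.g. video + audio + manifest refresh
): number {
  return (concurrentViewers * requestsPerSegment) / segmentSeconds;
}

const viewers = 1_000_000;

// Conventional live stream with 6-second segments:
console.log(requestsPerSecond(viewers, 6, 3).toLocaleString()); // 500,000 req/s

// Low-latency stream with 1-second segments (or 2-second segments requested
// in smaller chunks): the cache tier absorbs several times the request load.
console.log(requestsPerSecond(viewers, 1, 3).toLocaleString()); // 3,000,000 req/s
```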

Adding to the confusion is the fact that many vendors don’t define what exactly they mean by low or ultra-low latency. Some CDNs have said that low-latency is under 10 seconds and ultra-low latency is 2 seconds or less. But many customers don’t define it that way. As an example, FOX recently published a nice blog post of their streaming workflow for the Super Bowl calling their low-latency stream “8–12 secs behind” the master feed. They aren’t right or wrong, it’s simply a matter of how each company defines these terms.

In Q1 of this year I surveyed just over 100 broadcasters, OTT platforms and publishers, asking them how they define ultra-low latency and the applications they want to deploy it for. (see notes at bottom for the methodology) The results don’t line up with what some vendors are promoting and that’s part of the problem: no agreed-upon expectations. Of those surveyed, 100% said they define ultra-low latency as “2 seconds”, “1 second” or “sub 1 second”. No respondent picked any number higher than 2 seconds. 90% said they were “not willing to pay a third-party CDN more for ultra-low latency live video delivery” and that it “should be part of their standard service”. The biggest downside they noted in ultra-low latency adoption was “cost”, followed by “scalability” and the “impact on ABR implementation.” Of the 10% that were willing to pay more for ultra-low latency delivery, all responded that they would not pay more than a 10% premium per GB delivered.

The cost and scaling problems are part of why, to date, most of the ultra-low latency delivery we see comes from companies that build their own delivery infrastructure using WebRTC, as I noted in a recent post. Agora has been successful selling their ultra-low latency delivery and I consider them to be the best in the market. They were one of the first to offer an ultra-low latency solution at scale, but note that a large percentage of what they are delivering is audio only, has no video, is mostly in APAC and is being used for two-way communications. Agora defines their ultra-low latency solution as “400 – 800 ms” of latency and their low-latency solution as “1,500 – 2,000 ms” of latency. That’s a lot lower than other solutions I have seen on the market, based on how vendors define these terms.

Aside from the technical issues, and more importantly, many customers don’t see a business benefit from deploying ultra-low latency, except for niche applications. It doesn’t allow them to sell more ads, get higher CPMs or extend users’ viewing times. Of the streaming customers I recently surveyed, the most common video use cases they said ultra-low latency would be best suited for were “betting”, “two-way experiences (ie: quiz show, chat)”, “surveillance” and “sports”. These are use cases where ultra-low latency can make the experience better and might provide a business ROI to the customer, but they are very specific video use cases. The idea that every sports event will go to ultra-low latency streaming in the near-term simply isn’t reality. Right now, no live linear streaming service has deployed ultra-low latency, but with fuboTV having disclosed how they want to add betting to their service down the line, an ultra-low latency solution will be needed. That makes sense, but it’s not the live streaming that’s driving the need; rather, the gambling functionality is the business driver for adopting it.

Live sports streaming is one instance where most consumers would probably say they would like to see ultra-low latency implemented, but it’s not up to the viewer. There is ALWAYS a tradeoff for streaming services between cost, QoE and reliability and customers don’t deploy technology just because they can, it has to provide a tangible ROI. The bottom line is that broadcasters streaming live events to millions at the same time have to make business decisions of what the end-user experience will look like. No one should fault any live streaming service for not implementing ultra-low latency, 4K or any other feature, unless they know what the company’s workflow is, what the limitations are by vendors, what the costs are to enable it, and what KPIs are being used to judge the success of their deployment.

Note on survey data: My survey was conducted in Q1 of 2021 and 104 broadcasters, OTT platforms and publishers responded to the survey, who were primarily based in North America and Europe. They were asked the following questions: how they define ultra-low latency; which applications they would use it for; if they would be willing to pay more for ultra-low latency delivery; the biggest challenges to ultra-low latency video delivery; how much latency is considered too much for sporting events and which delivery vendors they would consider if they were implementing an ultra-low latency solution for live. If you would like more details on the survey, I am happy to provide it, free of charge.

WebRTC is Gaining Deployments for Events With Two-Way Interactivity

While traditional broadcast networks have been able to rely on live content to draw viewers, we all know that younger audiences are spending more time in apps with social experiences. To better connect with young viewers, companies are testing new social streaming experiences that combine Hollywood production, a highly engaging design and in many cases WebRTC technology. (See a previous post I wrote on this topic here: “The Challenges With Ultra-Low Latency Delivery For Real-Time Applications“)

Within the streaming media industry, there is a lot of discussion right now about different low/ultra-low latency technologies for applications requiring two-way interactivity. Many are looking to the WebRTC specification that allows for real-time communication capabilities that work on top of an open standard and use point-to-point communication to take video from capture to playback. WebRTC was developed as a standard way to deliver two-way video and provides users with the ability to communicate from within their primary web browser without the need for complicated plug-ins or additional hardware.
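For context, here is roughly what the in-browser side of a WebRTC session looks like using only standard browser APIs; the signaling helpers and STUN server are placeholders, since WebRTC deliberately leaves the signaling transport up to the application.

```typescript
// Minimal browser-side WebRTC setup using only standard APIs. The signaling
// channel (sendToPeer / onRemoteAnswer) is a placeholder: WebRTC does not
// define how offers, answers and ICE candidates reach the other peer.

declare function sendToPeer(message: unknown): void; // hypothetical signaling send
declare function onRemoteAnswer(cb: (sdp: RTCSessionDescriptionInit) => void): void;

async function startCall(videoElement: HTMLVideoElement): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.example.com:3478" }], // assumed STUN server
  });

  // Capture camera and microphone directly in the browser, no plug-ins needed.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Render the remote side's media when it arrives.
  pc.ontrack = (event) => {
    videoElement.srcObject = event.streams[0];
  };

  // Exchange ICE candidates and session descriptions over the app's signaling channel.
  pc.onicecandidate = (event) => {
    if (event.candidate) sendToPeer({ candidate: event.candidate });
  };
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer({ sdp: pc.localDescription });

  onRemoteAnswer(async (answer) => {
    await pc.setRemoteDescription(answer);
  });

  return pc;
}
```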

WebRTC does pose significant scaling challenges, as few content delivery networks support it natively today. As a result, many companies utilizing WebRTC in their video stack have built out their own delivery infrastructure for their specific application. An example of a social platform doing this would be Caffeine, which built out their own CDN with a few IaaS partners to facilitate the custom stack necessary for them to deliver ultra-low latency relays. Caffeine also built out custom ingest applications to keep latency low glass-to-glass.

Another hurdle to WebRTC streaming is that it incurs a higher cost than traditional HTTP streaming. The low latency space is a rapidly evolving field in terms of traditional CDNs’ support for ultra-low latency WebRTC relays and HTTP-based low latency standards (LL-HLS and LL-DASH). So cost, sometimes as high as three times regular video delivery, and the ability to scale are still big hurdles for many. You can see what the CDNs are up to with regards to low/ultra-low latency video delivery by reading their posts about it here: Agora, Akamai, Amazon, Limelight, Lumen, Fastly, Verizon, Wowza/Azure.

One problem we have as an industry is that very few companies have put out any real data on how well they have been able to scale their WebRTC-based, real-time interactive consumer experiences. One example I know of is that Caffeine disclosed that in 2020, they had 350,000 people tuned in to the biggest event they have done to date, a collaboration with Drake and the Ultimate Rap League (URL). While getting to scale with WebRTC-based video applications is good to see, we can’t really talk about scale unless we also talk about measuring QoE. Most companies are using an ABR implementation within WebRTC, doing content bitrate adaptation based on the user’s network connection, similar to HTTP multi-variant streaming but adapting faster thanks to what’s afforded by the WebRTC protocol. This is the approach Caffeine has taken, telling me they measure QoE across several dimensions around startup, buffering, dropped frames, network-level issues and video bitrate.
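Measuring QoE along those dimensions is possible with the standard RTCPeerConnection.getStats() API; the sketch below samples a few of the relevant counters and is a simplified illustration, not Caffeine's or any other vendor's actual telemetry.

```typescript
// Sample a few standard WebRTC receiver stats that feed typical QoE metrics.
// Simplified illustration only; a production pipeline would track per-track
// state and ship samples to an analytics backend.

interface QoeSample {
  framesDecoded: number;
  framesDropped: number;
  packetsLost: number;
  jitterSeconds: number;
  bitrateKbps: number;
}

let lastBytes = 0;
let lastTimestampMs = 0;

async function sampleQoe(pc: RTCPeerConnection): Promise<QoeSample | null> {
  const report = await pc.getStats();
  let sample: QoeSample | null = null;

  report.forEach((stats: any) => {
    if (stats.type === "inbound-rtp" && stats.kind === "video") {
      // Estimate received bitrate from the byte-counter delta between samples
      // (bits per millisecond equals kilobits per second).
      const deltaBytes = (stats.bytesReceived ?? 0) - lastBytes;
      const deltaMs = stats.timestamp - lastTimestampMs;
      lastBytes = stats.bytesReceived ?? 0;
      lastTimestampMs = stats.timestamp;

      sample = {
        framesDecoded: stats.framesDecoded ?? 0,
        framesDropped: stats.framesDropped ?? 0,
        packetsLost: stats.packetsLost ?? 0,
        jitterSeconds: stats.jitter ?? 0,
        bitrateKbps: deltaMs > 0 ? Math.round((deltaBytes * 8) / deltaMs) : 0,
      };
    }
  });
  return sample;
}

// Usage: poll every few seconds, e.g.
// setInterval(async () => console.log(await sampleQoe(peerConnection)), 5000);
```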

Some want to suggest that low-latency streaming is needed for all streaming events or video applications, but that’s not the case. There are business models where it makes sense, but many others where the stream experience is passive and doesn’t require two-way interactivity. For platforms that do need it, like Caffeine, people are reacting to one another because of exchanges happening in real time. Chat brings out immediacy amongst participants: whether being called out by a creator or sending digital items to them, fans can change the course of a broadcast in real-time, driven by extremely low latency at scale. In these cases, culture, community, tech and production come together to elevate the entertainment to a whole new level. For Caffeine, it works so well that average watch times were over 100 minutes per user in 2020 for their largest live events.

Streaming media technology has transformed traditional forms of media consumption from packaging to distribution. Now with lots of social media streaming taking place, we are seeing interactive experiences continue to evolve, shaping opportunities in content creation, entertainment, monetization and advertising, with live streaming events being the latest. WebRTC is now the go-to technology being used in the video stack for the applications and experiences that need it, but the future of WebRTC won’t be as mainstream as some suggest or for all video services. WebRTC will be a valuable point-solution providing the functionality needed in specific use cases going forward and should see more improvements with regards to scale and distribution in the coming years.

Bitmovin’s Flexible API Based Approach for Video Developers Has Investors Interested

For many of the largest streaming media companies, picking best-in-breed video components in different areas of the streaming stack and integrating them into a customized OTT streaming service has become the new norm. API-based streaming services let customers pick the best components from commercial vendors or open source, or build pieces completely from scratch in-house. A good example is Crunchyroll, which shares a lot on their blog about how they combine their own video development with the best commercial and open source components for their service. Many OTT platforms, broadcasters and publishers take their streaming infrastructure stack very seriously and have invested in engineering expertise, which is a key component in successfully offering a great quality of experience.

Vendors that offer flexible API-based video solutions are seeing a lot of growth in the market, which is attracting the attention of investors. Bitmovin, which has been in the industry since 2013 and has raised $43M to date, is currently raising another round of funding, rumored to be in the $15M-$20M range. [Updated April 20: Bitmovin announced it has raised $25M in a C round] (Mux is raising another round as well.) Bitmovin has seen good growth over the last few years, choosing not to focus on a complete end-to-end video stack, but rather specializing in API-based encoding, packaging, player and analytics. By my estimate the company will do $35M-$45M in 2021 revenue.

Bitmovin has benefited from large media companies getting more mature in their streaming stack and expertise, and switching away from less flexible end-to-end platforms to more highly configurable and modular best-in-breed components. While encoding, packaging and playback are not everything you need to build a video streaming service, Bitmovin and others are trying to be the best at what they do, with a focused core set of offerings. Engineers then combine these components with others that are best in their respective categories of CMS, CDN, storage, DRM, ad insertion, etc., to put together a streaming system where they retain full control of every component and gain a large degree of flexibility.

When it comes to encoding, Bitmovin takes an interesting approach by not only providing hosted encoding in the cloud, but also a software-only option where customers can deploy the encoding software within their own cloud account. All the cloud providers are being extremely aggressive right now to win media companies as clients and gain market share by offering attractive deals on compute and storage, with Oracle being the latest. Leveraging these cloud options together with a software-only approach like the one Bitmovin offers for encoding is a great way for some customers to reduce costs and choose the best cloud vendor as part of their best-in-breed strategy. It also gives companies flexibility by not being locked into one cloud provider, as many hosted encoding providers only run on AWS.
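As a purely hypothetical sketch of the concept, and not Bitmovin's actual API, the point is that the target cloud account becomes just another parameter of the encoding job, so the same job definition can run wherever the best compute deal is. The interface and field names below are illustrative only:

```typescript
// Hypothetical illustration only, not any vendor's real API: a portable encoding
// job where the cloud provider and region are deployment parameters, letting the
// same renditions run inside whichever account the customer chooses.
interface EncodingJob {
  input: string;                                    // source object in the customer's storage
  cloud: "aws" | "gcp" | "azure" | "oracle";        // provider running the encoder software
  region: string;                                   // region inside the customer's own account
  renditions: { height: number; bitrateKbps: number; codec: "h264" | "h265" | "av1" }[];
}

const job: EncodingJob = {
  input: "s3://my-bucket/mezzanine/episode-01.mov", // hypothetical source path
  cloud: "oracle",
  region: "us-ashburn-1",
  renditions: [
    { height: 1080, bitrateKbps: 4500, codec: "h264" },
    { height: 720,  bitrateKbps: 2500, codec: "h264" },
    { height: 432,  bitrateKbps: 1100, codec: "h264" },
  ],
};

console.log(job);
```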

For vendors offering API-based video solutions, focus is important. You can't be everything to everybody, trying to solve every problem within every vertical. The approach that Bitmovin and some other vendors take creates focus by providing specialized and differentiated point solutions for encoding, player and analytics. Bitmovin can go a mile deep in these services, rather than build an end-to-end service that needs to be a mile wide because of the diversity of customer use cases and needs. This is especially important as the complexity of online video increases, with new codecs coming to market, competing HDR formats, more devices to support, more complex DRM and ad use cases, and low latency requirements. Vendors that try to do everything in the video stack have a good chance of doing nothing really well. Of course, a vendor like AWS does a good job across the entire video stack due to their size and some of the acquisitions they have made, but they are the exception.

Customers who revamped their encoding stack and bitrate ladder with Bitmovin have told me they saw great improvements in viewer experience, a decrease in customer support tickets around buffering and bad quality, and significantly reduced CDN and storage costs. Two customers in particular that I spoke to saw a 60% reduction in CDN bits delivered, compared to what they used before. While Bitmovin can't disclose all of their customers by name, looking at video offerings across the web or via apps, it's not too hard to see some of the brands relying on the company. I see organizations including Discovery, DAZN, Sling TV, fuboTV, BBC, Red Bull and The New York Times that all rely on Bitmovin for core pieces of their streaming video workflow.
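To put that kind of CDN savings in perspective, here is a back-of-the-envelope sketch with made-up numbers (not any customer's actual ladder): delivered bytes scale directly with the bitrate viewers actually receive, so a leaner, better-placed ladder translates straight into fewer bits on the CDN. The actual savings depend entirely on the content and on how viewing hours spread across the rungs.

```typescript
// Back-of-the-envelope sketch with made-up numbers: estimate CDN gigabytes delivered
// for a bitrate ladder, given how viewing hours spread across the rungs.
interface Rung { bitrateKbps: number; shareOfViewHours: number } // shares sum to 1

function gbDelivered(ladder: Rung[], totalViewHours: number): number {
  return ladder.reduce((gb, rung) => {
    const gbPerHour = (rung.bitrateKbps * 3600) / 8 / 1e6; // kbps -> GB per viewing hour
    return gb + gbPerHour * rung.shareOfViewHours * totalViewHours;
  }, 0);
}

// Hypothetical "before": a top-heavy ladder where most hours land on a 6 Mbps rung.
const before: Rung[] = [
  { bitrateKbps: 6000, shareOfViewHours: 0.6 },
  { bitrateKbps: 3500, shareOfViewHours: 0.3 },
  { bitrateKbps: 1500, shareOfViewHours: 0.1 },
];
// Hypothetical "after": per-title optimized rungs hitting similar quality at lower bitrates.
const after: Rung[] = [
  { bitrateKbps: 3200, shareOfViewHours: 0.6 },
  { bitrateKbps: 2000, shareOfViewHours: 0.3 },
  { bitrateKbps: 900,  shareOfViewHours: 0.1 },
];

console.log(gbDelivered(before, 1_000_000).toFixed(0), "GB before"); // ~2,160,000 GB
console.log(gbDelivered(after, 1_000_000).toFixed(0), "GB after");   // ~1,174,500 GB
```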

In addition to encoding, another video component that's getting more and more attention is the video player. Streaming services need to be available on every platform, with the best user experience, which brings a lot of challenges. While it's fairly easy to make video play in a web browser, playback on mobile devices, multiple generations and brands of Smart TVs, game consoles and casting devices adds a lot more complexity. Despite the availability of a range of open source players (e.g. dash.js, Shaka Player, video.js, hls.js, ExoPlayer), making sure video works properly across every platform has become one of the main pain points for media companies.
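To give a sense of what "works properly across every platform" involves even before any device-specific quirks, here is a hedged sketch of the kind of capability probing a browser-based player typically has to do, using only the standard MSE and EME web APIs. Older Smart TVs, consoles and casting devices add layers of complexity well beyond this:

```typescript
// Sketch of the capability probing a browser-based player does before picking a
// codec and DRM path; standard MSE and EME APIs only, no vendor SDKs.
async function probePlayback(): Promise<void> {
  // Media Source Extensions: can this device decode these codecs in MSE?
  const h264 = MediaSource.isTypeSupported('video/mp4; codecs="avc1.64001f"');
  const hevc = MediaSource.isTypeSupported('video/mp4; codecs="hvc1.1.6.L93.90"');

  // Encrypted Media Extensions: is the Widevine key system (DRM) available?
  const config: MediaKeySystemConfiguration[] = [{
    initDataTypes: ["cenc"],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.64001f"' }],
  }];
  let widevine = false;
  try {
    await navigator.requestMediaKeySystemAccess("com.widevine.alpha", config);
    widevine = true;
  } catch {
    widevine = false;
  }

  console.log({ h264, hevc, widevine });
}
```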

Every time I do an industry survey on QoE, problems with the video player or app are the number one complaint OTT providers say they hear about from users. Dealing with DRM and video advertising on a 2016 Samsung Tizen or 2016 LG WebOS set is among the common challenges, as is playback on PlayStation and Xbox, platforms that companies like Brightcove, JW Player or THEOplayer do not support for playback. Thus, going deep on players has become important for Bitmovin over the past years and something the company has focused heavily on. From what I hear, they have aggregated hundreds of large customers in the player space, and I noticed during March Madness that WarnerMedia was using Bitmovin's player as well.

With the additional funding Bitmovin is securing, they will be an interesting company to watch as they have a deep understanding of the video stack and the way engineering teams build streaming services. They aren’t the only vendor offering some of these services in the market, but many of the other names people know have less than $10M in total revenue and are much, much smaller. Scale matters in the market and I expect we will see more video developers and engineering teams deploying Bitmovin’s APIs across their video offerings.

We Shouldn’t Have to Wait Until 2027 to Benefit From AV1

[Updated April 9: I like seeing this; V-Nova has replied to my post with their thoughts here: https://lnkd.in/d46pQSB]

For companies that supported the Alliance for Open Media (AOM) over the past few years, a lot is at stake in being able to deploy AV1 as soon as possible. There are signs that things are starting to move, and the biggest ones are probably the upcoming WebRTC support, AV1 support in Chrome, and Android 10 support. Notably, these are all Google projects.

That is great, but while marketing is giving visibility to what has obviously been a breakthrough project, broadcasters and operators alike are still shying away from the AOM technology. More than half of those surveyed in one of my recent industry surveys would like to deploy AV1 in their video services within the next two years but are concerned about hardware support. And, in fairness, hardware support has been mixed. Broadcom recently announced support for the codec, while Qualcomm support still seems far off.

As of now, it seems as if it will take a while to get the critical mass of device hardware support to make AV1 a success. Many encoding experts who track the topic in greater detail than I do suggest a timeline for adoption of at least another five years. In the meantime, AV1 may be confined to niche services and trials delivering low-resolution video, such as Google Duo in Indonesia. But I'd argue it doesn't have to be this way, since the processing power in our hands offers up a lot of opportunities for better video experiences. Software decoding is possible, and there are solutions out there that can enhance the potential of AV1 to the point of making it viable for high-quality premium services this year, not years down the road. AV1 with LCEVC is the solution that has been staring us in the face all along, and it is starting to get some adoption.
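A service that wants to lean on software decoding can at least ask each device whether AV1 playback would be smooth before serving it. A minimal sketch using the standard Media Capabilities API, with an illustrative codec string and resolution:

```typescript
// Sketch: ask the device whether it can decode an AV1 stream smoothly before
// serving it, via the standard Media Capabilities API. Useful when relying on
// software decode in the absence of AV1 hardware support.
async function canPlayAV1(): Promise<boolean> {
  const info = await navigator.mediaCapabilities.decodingInfo({
    type: "media-source",
    video: {
      contentType: 'video/mp4; codecs="av01.0.08M.08"', // example AV1 Main profile string
      width: 1920,
      height: 1080,
      bitrate: 3_000_000, // bits per second
      framerate: 30,
    },
  });
  // "supported" alone may still mean a CPU-hungry software decode;
  // "smooth" (and "powerEfficient") are the more useful signals here.
  return info.supported && info.smooth;
}
```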

One thought that intrigues me is that a standard from MPEG, the very organization AOM tried to disassociate itself from, could form an amazing technical combination. MPEG-5 LCEVC is the first enhancement standard that can improve the computational and compression performance of any "base" compression technology. That "any" is the radical part. This new standard could actually accelerate AOM and push AV1 out faster and more efficiently, by making it run on mobile handsets at full HD resolutions.

As AV1 (from whichever source you choose) is mostly a software encoding and decoding option and LCEVC (from V-Nova) is a software encoding and decoding option as well, why don't Google and V-Nova combine the two? In a single software upgrade swoop, we could have full HD WebRTC, web conferencing, and entertainment of all kinds playing back on our phones.

We have plenty of new compression technologies in HEVC, VP9, AV1, VVC and one enhancement standard in LCEVC. Yet, most of us are still consuming H.264/AVC video every day, missing out on the better experiences that the new formats could deliver. Let software be king, collaboration prevail, and have “MPEG-5 LCEVC enhanced AV1” deliver better, higher-quality services, sooner.

Kaltura Postpones IPO: How They Stack Up to Brightcove, Panopto, Qumu, Vimeo and Others

Last week Kaltura amended their Form S-1 filing, detailing how many shares they were offering to the market and their expected IPO price of $14-$16, in an effort to raise about $275M this month. But like some other tech companies, Kaltura has now postponed their IPO. The market for tech IPOs isn't great right now, and other companies, like Intermedia, have also announced they are putting their IPOs on hold while they wait for "favorable IPO conditions". In the last few months, tech stocks have taken a beating on Wall Street, and while we don't know how long it will last, a perfect storm took place that is preventing some IPOs from going forward.

Some have asked what this all means for Vimeo's upcoming IPO and whether they are in jeopardy of postponing as well. As of the writing of this post, I don't have any other details on Vimeo's IPO except that they were originally targeting an April time frame, which I'm hearing has now been pushed to May. Vimeo has not yet filed an amended Form S-1 with the pricing of their shares, so that's the next step we would expect to see in the process and something to keep an eye out for. [Updated April 1: Vimeo has announced their Board of Directors as they prepare to spin off from IAC and said their IPO is scheduled for the end of Q2.]

Even with some IPOs not taking place right now and the potential for others to be impacted, it's important to look at the differences between all the video companies in the space, especially around the term "enterprise video platform". The goal of this post is to lay out some of the numbers and differences between vendors, but it is not a complete side-by-side product comparison of all vendors with regards to functionality. As always, customers have to look at the strengths and weaknesses of vendors based on what they are trying to accomplish for their specific needs. All that aside, there are a lot of differences from a numbers standpoint when it comes to vendors in the market, with some overlap of services.

Kaltura had $120M in 2020 revenue, with year-over-year revenue growth of 17%, 21%, 27% and 30%, for each quarter last year. 2019 total revenue was $97.3M. Year-over-year revenue growth was 12% in 2018, 18% in 2019 and 24% in 2020. The company had net losses of $15.6M in 2019 and $58.8M in 2020. Kaltura’s top ten customers accounted for approximately 29% of their revenue in 2020, with Vodafone accounting for approximately 12% of that. The company grew revenue by 24% in 2020, while only increasing sales and marketing costs by $3.9M, compared to 2019.

While I hear Kaltura compared to a lot of "enterprise" video platforms in the market, many of the vendors mentioned are not truly comparable from a revenue, scale or product functionality standpoint. Qumu and MediaPlatform are mentioned most often and do have some crossover into a few of the same vertical markets as Kaltura, but they are much smaller companies. MediaPlatform (private) had sub-$10M revenue for 2020 and not a lot of cash. Qumu (public) did $29.1M in total 2020 revenue and projects revenue to grow to about $35M in 2021. Qumu was running low on capital, ending 2020 with $11.9M in cash and cash equivalents. The company did a raise of approximately $23.1M in January of this year to get more operating capital. While Qumu grew revenue 15% from 2019 to 2020, it's off of a very small base and the company disclosed that the increase in revenue was "primarily due to a large customer order received at the end of Q1 2020." If you strip out that single customer, Qumu's year-over-year revenue would have been pretty flat. In comparison, Kaltura had $27.7M in cash and cash equivalents at the end of 2020 and roughly 4X Qumu's revenue.

Some are comparing Vimeo to Kaltura because Vimeo is using the term "enterprise" very heavily, but don't believe the hype. Vimeo isn't really an "enterprise" grade video platform from a product functionality standpoint. Vimeo's "enterprise" offering is almost entirely Vimeo's core product, simply with different levels of usage-based pricing, and is not tied into other pieces of the enterprise video stack. Vimeo doesn't have deep integration with the vendors that enterprise organizations use for ingestion, LMS, CMS, analytics or a large number of third-party on-prem encoders. Kaltura is the opposite, with integrations into just about every aspect of the enterprise video workflow, from ingestion to delivery. Ramp, SharePoint, Google Analytics, Blackboard, Matrix, dotsub and others are all part of the Kaltura video ecosystem.

Vimeo has the largest percentage of their customer base paying an average of $17.83 a month in revenue, which isn't the business Kaltura is going after. Vimeo said that at the end of 2020 the company had 3,300 "enterprise" customers paying an ARPU of $1,833 a month, but the majority of that revenue is from their OTT related product, not "enterprise", when defined by use case and application. Vimeo's definition of an enterprise customer is simply the requirement that the customer purchase their plan through direct contact with their sales force. That's it. This might be why Vimeo doesn't break out verticals or use cases in their S-4 filing when it comes to what their "enterprise" revenue really consists of. I also expect a large portion of Vimeo's revenue specifically tied to their OTT product not to renew throughout 2021. At the end of 2020, less than 1% of Vimeo's subscribers paid more than $10,000 per year. By comparison, almost all of Kaltura's customers paid at least $10,000 PER MONTH over the same time.

If you look at Vimeo’s ARPU growth, the largest portion of it has come from their top ten customers. ARPU amongst their 1.5M+ paying customers grew only $24 a month, over 5 quarters from Q3 of 2019 to Q4 of 2020. That’s an average increase of only $4.80 per quarter. Vimeo’s “enterprise” customers saw over 100% growth, from their top ten customers that I estimate to be in the $100k-$250k a year contract size. So a small segment of Vimeo’s customers is making up the largest percentage of their highest ARPU growth. On the marketing front, Vimeo calls themselves the “world’s leading all-in-one video solution”, amongst many other high-level and generic marketing terms they use, which means nothing. Kaltura on the other hand is very clear about what they do in the market, with lots of details around product support for solving specific problems based on use case, vertical and size of customer.

Since day one, Kaltura has been in the live streaming space, with deep knowledge and experience in what it takes to be successful with live, which requires solving a very different set of problems. As an example, Kaltura was the platform that powered Amazon's AWS re:Invent conference last year, and virtual trade shows are a use case Kaltura has been successfully selling into. In 2016, Vimeo tried to get into the live business by building the functionality in-house and then realized how hard it really was. They ended up acquiring Livestream in 2017 to get there, but live streaming has never been in Vimeo's DNA as a company. Now, four years after Vimeo acquired Livestream, customers are telling me Vimeo is shutting down the Livestream platform and trying to transition them over to the Vimeo live platform, at a much higher cost, with less functionality.

Vimeo's live platform does not support streams longer than 12 hours, which is a problem for many current Livestream enterprise clients who run 24/7 linear channels. With Vimeo live, you also can't restart a live stream without creating a new event, as you previously could on the Livestream platform. Most importantly, API functionality is essential when it comes to the enterprise video stack and is one of the major deciding factors in which vendors customers use. The Vimeo live API functionality is much weaker compared to the Livestream API. Many of Vimeo's biggest Livestream customers were using the Livestream API deeply, so that's a problem for customers Vimeo wants to move off the Livestream platform. Customers I have spoken to haven't seen the value in paying the additional multiple, so it calls into question what percentage of Livestream customers Vimeo will be able to retain with the change. By comparison, Kaltura has no limit on the length of a live stream and their platform is open source, with one of the deepest sets of video API functionality in the industry.

In addition to the vendors already mentioned, Kaltura competes more closely with Panopto, especially when it comes to the education vertical and use cases around corporate communications. While Panopto is private and doesn't disclose numbers, I put their revenue at around $50M in annual recurring revenue (ARR) last year, with about 40% year-over-year growth. Panopto is well funded, has a very solid platform with scale, and offers lots of product flexibility targeting specific use cases and verticals. Panopto doesn't cross over into all the verticals Kaltura sells into, as they don't sell their platform into telecom companies for cloud TV services. This is a common theme amongst all the vendors: they don't all compete in the exact same verticals for 100% of their revenue.

Brightcove is another vendor that overlaps with some of Kaltura's verticals and competes mostly for media and entertainment customers looking for an end-to-end video stack for publishing, broadcast and ad-supported business models. Brightcove had $194.7M (thank you Brightcove for noticing my wrong number) in 2020 revenue, growing 7% year-over-year, with a net loss of $5.7M, down from a net loss of $21.9M in 2019. The company has projected 2021 revenue to be in the range of $211M-$217M, which at the midpoint would be 8% revenue growth. Over the past four years, Brightcove's average year-over-year revenue growth was 7.7%. What these numbers show is that the market for certain types of video services, and the rate at which they are growing, is not as big or as fast as some want to suggest.

Brightcove ended 2020 with $37.5M in cash and cash equivalents. At the end of Q4 2020, Brightcove had 1,051 customers paying an average of $4,400 a year ($366 monthly) and 2,279 "premium" customers spending an average of $97,200 a year ($8,100 monthly). Brightcove defines a premium customer based on their revenue spend and refers to non-premium customers as those who use "our volume offerings", which it says are "designed for customers who have lower usage requirements and do not typically require advanced features or functionality". You can read Brightcove's recent 10-K from February 24 if you want to see all the detailed language on how they bucket their customers based on various products. Like Kaltura, when it comes to the media and publishing space, Brightcove is very deep with their product functionality and API support for use cases around marketing, events and corporate communications.

There are so many high-level umbrella terms used to describe video platforms that it can be confusing to figure out what vendors really do and who they are targeting with their services. With terms like "enterprise video platform", "video as a service", "online video platform", "business broadcasting platform", "video cloud platform", etc., many companies may or may not be competitive with one another, depending on what is being compared and your definition of these terms. Amongst all the generic terms I mentioned, you will also hear vendor names including Microsoft (Stream), Zype, Wowza, ON24, Hive Streaming, Frame.io, Zoom, Cisco (Webex), Hopin and many others in the discussion.

In most instances, the vendors I mentioned specialize in other core aspects of the video stack, like building video apps, offering API tools for video developers, or focusing on players, video analytics and content management systems. These vendors wouldn't truly be competitive with one another in an apples-to-apples product showdown. That said, Microsoft with their Stream product, Wowza and ON24 would have some competitive crossover with Kaltura based on product functionality in similar vertical markets. Hopin would compete when it comes to live trade shows and multi-day conferences, and Zoom and Cisco with Vbrick compete in the town hall meetings market. If you add up all the use cases for video, verticals, sizes of customers, regions and product functionality needs, there are close to 50 vendors in the "video platforms" market. Throwing many of these vendor names into the same "enterprise video platforms" bucket would simply not be accurate, based on any real methodology.

Note: I am an actual user of a lot of these platforms mentioned. I currently have account access to platforms at Brightcove, Kaltura, Wowza, Vimeo, Panopto and others.

Join me on Thursday for a Live Q&A on Advancing Your Career in the Streaming Industry

Looking for help getting a job in the streaming media industry? Thursday at 6pm ET I'll be hosting a live Q&A chat, taking your questions and giving out some best practices on job search and placement. You can join the Zoom meeting anonymously if you like; I'll be taking questions via chat only and answering them live on video. Below is the Zoom link and the password is: streaming. This is free and open to everyone, so you are welcome to invite others.

Time: Mar 25, 2021 06:00 PM Eastern Time
https://us04web.zoom.us/j/78435691044?pwd=TXBBVkE3YTMwRXJoRlVQQ1lLQlFKZz09
Meeting ID: 784 3569 1044

CDN Limelight Networks Lays Off 16% of Workforce in Necessary Move to Re-Focus Company

This week, Limelight Networks announced it was laying off 16% of its workforce, or approximately 100 people. While it's never good to see people lose their jobs, Limelight's new management team needed to make drastic changes to the business to re-focus the company. New management typically takes the blame for layoffs, but it's simply because prior management didn't take the steps needed to put the company on a path to proper growth and profitability.

Purely from a numbers standpoint, Limelight didn't have the revenue to support such a large headcount. The company ended 2020 with $230.2M in revenue and had 618 employees. Two customers, Amazon and Sony, account for 48% of their revenue. The company missed both their Q3 and Q4 guidance and ended the year with 527 customers, down from 599 the year before. Also, when compared to other similar vendors in the market, Limelight's sales team was nearly two times larger, but didn't have the revenue growth to support it. Over the past four years, Limelight's average revenue growth was just $11.5M per year, going from $184M in total revenue in 2017 to $230.2M in 2020. The company's gross profit percentage fell from 43.7% in 2019 to 30.4% in 2020. Simply put, new management needed to make some drastic changes, and it starts with headcount.

Outside of the numbers, Limelight also had operational issues from a sales, product and technical standpoint that prior management never addressed. Limelight hasn't had a full-time CTO in more than five years, which is unheard of for a CDN vendor selling a technical service. The company also has no dedicated Chief Product Officer, which is negatively impacting Limelight's product road map and ultimately what sales can sell into the market. Limelight also needs to improve their cost structure, which is something the new management team is laser-focused on and, once done, should save them a lot of money. I won't go into specific details, but the way Limelight deploys capacity in certain regions is simply not efficient from a dollars standpoint when compared to other CDNs.

Based on some of the changes new management has already made, Limelight is expected to benefit from annual cash cost savings of approximately $15M. Limelight ended 2020 with nearly $47M in cash and cash equivalents, so the company has capital. The focus for new management will be on guaranteeing better performance at scale (with the right cost structure), offering new products and a clear product road map, diversifying revenue so they aren't dependent on two customers for half their revenue, and becoming profitable. In 2020, Limelight's GAAP net loss was $19.3M, so that's something they need to improve on to get to cash flow break-even or better.

Changes are never enjoyable when they involve layoffs, but in this case it was a necessary task Limelight's new management team had to accomplish so they could put the company on a path to faster growth and profitability.

Podcast Interview: Discussing the Latest OTT Business Models and Subscriber Projections

Thanks to John Clifton and Tim Meredith for having me on "The Tech That Connects Us" podcast, where we discuss some of the latest OTT business models; subscriber projections; what the future of the conference business looks like in a post-Covid-19 world; and how I got started in the industry. It was a great chat about real-world happenings in the streaming media industry. Listen to it below or on Apple Podcasts and Spotify.


Live Discussion Monday 22nd: Encoding Workflows Best Practices, How to Scale for Quality and Cost

On Monday, March 22nd at 1pm ET, I'm moderating a session as part of BitmovinLive on "Encoding Workflows Best Practices: How to Scale for Quality and Cost." Come join this unique conversation with no pitches or demos, just real-world information on the best practices you can apply to improve your OTT video offering. With speakers from Blizzard and Sinclair Digital, we'll discuss how broadcasters and OTT streaming services are prioritizing the optimization of their encoding stack to improve their Quality of Service (QoS) while keeping costs in check. Bring those burning questions to our panel and be part of the discussion. You can register for the event here.