New Podcast: D2C Streaming Service Launches, My First Impressions

Thanks to Beamr for having me back on their Video Insiders podcast to share my first impressions of the state of the new D2C video streaming services. With significant M&A activity closed and press briefings completed, there is a lot to discuss. Companies are now holding investor days before their services even launch, projecting to investors when those services will become profitable. Talk about a shift in our industry. Hear my thoughts on why the stakes are higher than ever for all streaming services.


Amazon Acquires QoE Streaming Tech From Net Insight For $33.5M

As expected, on January 3rd, Net Insight announced they have divested their streaming technology product Sye by selling the business to Amazon for $33.5M. The Sye technology was already being used by Amazon Prime Video for many of their live streaming events; it works to reduce end-to-end latency, sync video across multiple devices and allow instant channel changes. [see my post here: A Detailed Look At How Net Insight Syncs Live Streams Across Devices] The tech also has some dynamic server-side ad insertion, metadata synchronization and time-shift functionality. Sye generated less than $2M in revenue for Net Insight in 2019, and Amazon was their largest customer, making up the majority of that revenue. Net Insight disclosed that to date, they had “invested around $22.9M” in the product. About 30 employees and consultants have been transferred to Amazon in the deal. Sye was built on top of Microsoft Azure, so it’s an easy assumption that Amazon will now further integrate the technology into their Amazon Web Services platform.

Announcing the “Streaming Experience”, a New OTT Demo Area at the NAB Show in Las Vegas

I’m very excited to announce the “Streaming Experience”, a new hands-on area at the NAB Show in Las Vegas where you can test out over 50 OTT platforms and devices in a living room style setting. Taking over the North Hall lobby April 19-22, I will be curating and showcasing the best SVOD, AVOD, live linear and authenticated OTT apps.

From Netflix, Amazon and Hulu to new services by Disney and Apple, consumers now have many choices of where to get their video fix. But what are the REAL differences between these services from a quality, content and cost standpoint? See and compare:

    • Video quality: compression, HDR and 4K
    • Content bundling strategies
    • Video delivery: low-latency and QoS
    • Ad formats: pre/post roll in live and SVOD
    • Connected TV advertising
    • Playback and UI/UX

If you’re interested in sponsoring The Streaming Experience or having your content featured, please contact me directly and check out the website for more details.

The Streaming Experience will feature hardware from Amazon, Apple, Roku, Xbox, PlayStation, LG, TCL, and Samsung – all showcasing dozens of streaming apps including: Disney+, Apple TV+, Netflix, Amazon Prime Video, Hulu, CBS All Access, YouTube TV, AT&T TV, Sling TV, ESPN+, HBO Now, Showtime, Epix, Discovery, Fubo TV, Acorn TV, IMDb TV, Tubi, WWE, MLB.TV, NHL.TV, NBA TV, MLS, DAZN, FOX Sports, NBC Sports, The Roku Channel and many others. New services from HBO Max, Peacock and Quibi will also be showcased, depending on their launch dates. We’ll also have many of these companies speaking during the two-day Streaming Summit.

Videos From NAB Show Streaming Summit Now Online: Call For Speakers Open For Vegas

All videos from the October NAB Show Streaming Summit in NYC are now available online including the keynotes, presentations, fireside chats and round-table panels. You can check them out here and are welcome to share as you like. And if you’re interested in being considered for speaking at the next NAB Show Streaming Summit, April 20-21 in Las Vegas, the call for speakers is now open and I am accepting ideas, pitches and proposals.

Thanks to Mobeon for capturing all the content and to Kaltura for their video platform.

Here’s The Best Black Friday Deals on Streaming Devices ($18 Roku/$20 Fire TV Stick)

Black Friday is almost upon us, which means some of the best pricing you will see all year on streaming media devices. This year, with $18 Rokus and $20 Fire TV Sticks, there’s no reason not to add one to each TV. I’ve looked through all the deals in the market and below are the lowest prices I’ve found on streaming hardware for Apple TV, Roku, Amazon Fire TV Stick/Cube, Chromecast, Xbox and PS4. I will do a separate post on smart TVs shortly. Note that many of these deals say they will have “limited quantities” and you’ll want to check which of these are being offered online vs. in-store, or both. Some don’t have links live as of yet, so check their websites for when they launch.

Roku
Purchase a new Roku device (from any retailer) and upon activation, get 3 months of Hulu and Pandora Premium free.

  • Walmart will have an “exclusive” Roku SE on sale for $18, starting 11/28. [normally $50] The SE model does HD streaming (not 4K) and comes with an HDMI cable.
  • Walmart, Roku.com and other retailers will have the Roku Ultra on sale for $48, starting 11/28. [normally $100]. The Ultra model does 4K, comes with JBL earbuds and a voice remote.
  • Roku.com and other retailers will have the Roku Streaming Stick+ on sale for $30, starting 11/24. [normally $50] The Stick+ model does 4K and comes with a voice remote.

Amazon Fire TV Stick/Cube
Amazon is offering their usual discounts on everything in the Fire TV lineup. I’ll cover their Fire TV enabled TV sets in another post.

  • The Fire TV Stick will be on sale for $20, starting 11/28. [normally $35] The model does HD streaming and comes with an Alexa remote.
  • The Fire TV Stick 4K will be on sale for $25, starting 11/28. [normally $48] This is the model you want if 4K is important to you; it comes with an Alexa remote and also supports Dolby Vision, HDR and HDR10+.
  • The Fire TV Cube will be on sale for $90, starting 11/28. [normally $120] The Cube does 4K streaming and has Alexa built in, so you can use your voice to control compatible soundbars and A/V receivers and change live cable or satellite channels. It also supports Dolby Vision, HDR and HDR10+.

Apple TV
Almost no discounts on Apple TV devices have been announced, which seems to be the norm each year; Apple TV devices are rarely discounted for Black Friday.

  • Meijer is offering a credit of $50 off a future order, if you buy an Apple TV for $150 or more.
  • Your best bet for a discounted Apple TV is to keep checking Apple’s refurbished Apple TV store page, because when they do have inventory, they typically discount them by $30-$50 and they come with the same 1-year warranty.

Chromecast
Most retailers online and in-store will have Chromecasts discounted by $10-$20.

  • Best Buy and Target will have the Chromecast on sale for $25. [normally $35]
  • Best Buy will be offering the Chromecast Ultra 4K on sale for $50. [normally $70]

Xbox One/PS4
There are a ton of great deals on Xbox and PS4 consoles this year and far too many bundles with extra games and controllers to list. Here’s the best pricing I’ve seen so far on some of the more basic bundles, for those more interested in the hardware.

If you see any better deals that I missed, please feel free to add them to the comments section.

Disney+ Tech Problems: What We’ve Learned and What Needs Fixing

With the launch of Disney+ being such a big deal for consumers and the streaming media industry, many have been giving their take on what caused the problems, why it happened, or want to point to poor planning on Disney’s part. But the fact is, the majority of those commenting don’t have any details on what Disney has built, don’t know what needs to be fixed, and have no experience themselves in building such services – at scale.

I’m not going to speculate myself as to what caused all of the different problems as I don’t know all the details. But from talking to some folks involved and looking at all of the different problems I’ve seen personally, or that have been reported by users, there are some things we do know.

Some want to immediately suggest that Disney “doesn’t have enough servers”, or “isn’t doing load balancing right”, or “needs a multi-CDN strategy”. For starters, public data shows Disney has a multi-CDN strategy, using at least six CDNs for different parts of the workflow from vendors including Verizon Media, Akamai, Fastly, Limelight Networks, CenturyLink and AWS. Almost no one is complaining about the video streaming; the problems are around non-streaming issues like account login, setting up profiles, apps freezing or crashing, media not able to be found and other user management issues.

Disney announced today that they got more than 10M signups for Disney+ and even if 100% of them were all streaming at the same time, delivering 10M video streams across so many different CDNs would not be a problem. 10M simultaneous streams across multiple CDNs is a very low number, so it’s not a “capacity” problem when it comes to video. Disney is also using services from AWS in at least three regions including us-east-1, us-west-2 and eu-west-1. So the idea that Disney isn’t load balancing the website or other parts of their platform is not true, and suggesting Disney isn’t purchasing enough capacity or resources from cloud and CDN providers is simply not accurate. [side note: For all the media outlets reporting that Disney+ got more than 10M signups “on day one”, or “10M subscribers”, that’s not accurate. Subscribers were signing up for Disney+ months in advance and I have 2 accounts, but am only 1 “subscriber”. Hence Disney correctly said “sign ups”, not subs.]
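
To put the capacity point in perspective, here’s a rough back-of-envelope calculation. The 5 Mbps average bitrate and the even split across six CDNs are my assumptions for illustration, not Disney’s numbers:

```typescript
// Back-of-envelope math for the capacity claim. Assumptions (mine, not Disney's):
// ~5 Mbps average bitrate per stream, traffic split evenly across 6 CDNs.
const concurrentStreams = 10_000_000;
const avgBitrateMbps = 5;
const cdnCount = 6;

const totalTbps = (concurrentStreams * avgBitrateMbps) / 1_000_000; // Mbps -> Tbps
const perCdnTbps = totalTbps / cdnCount;

console.log(`~${totalTbps} Tbps total, ~${perCdnTbps.toFixed(1)} Tbps per CDN`);
// => ~50 Tbps total, ~8.3 Tbps per CDN -- a large but manageable load when spread
//    across multiple commercial CDNs, which is why video delivery, rather than the
//    account and login systems, was never the likely bottleneck here.
```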

To build a platform like Disney+ there are more pieces in the backend than most are ever aware of. The website, authentication, billing, subscription management, user profiles, account personalization, error beacons, API calls, Swagger tools, support for Widevine and FairPlay, QoS monitoring by Conviva, metadata, transcoding, storage (of more than just video), plus all the delivery, playback, reporting and analytics, etc., and that’s only scratching the surface. All of these components have to talk to one another; some can have issues and not impact other parts of the workflow, while others, when they fail, cascade down the line and cause other unintended problems. That’s the nature of the streaming media workflow when building such a complex platform.
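
Purely as an illustration of that cascading point (this is not how Disney’s backend is actually structured), here’s a minimal sketch of a playback-start flow where critical-path failures necessarily cascade while non-critical ones can be isolated behind fallbacks. All of the service names here are stubs I made up:

```typescript
// Illustrative only -- not Disney's actual architecture. Critical-path failures
// (auth, entitlement) cascade to the whole session; non-critical services
// (profiles, personalization) can be wrapped so their failures degrade gracefully.

type Session = { userId: string; token: string };

// Stubbed service calls standing in for real backend components.
async function login(userId: string): Promise<Session> {
  return { userId, token: "example-token" };        // would call the auth service
}
async function checkEntitlement(s: Session, titleId: string): Promise<boolean> {
  return true;                                       // would call billing/subscriptions
}
async function getResumePoint(s: Session, titleId: string): Promise<number> {
  throw new Error("profile service unavailable");    // simulate a partial outage
}

// Wrap non-critical calls so a failure falls back instead of breaking playback.
async function withFallback<T>(call: () => Promise<T>, fallback: T): Promise<T> {
  try { return await call(); } catch { return fallback; }
}

async function startPlayback(userId: string, titleId: string) {
  const session = await login(userId);                          // must succeed
  if (!(await checkEntitlement(session, titleId))) {
    throw new Error("no active subscription");                  // cascades by design
  }
  const resumeAt = await withFallback(() => getResumePoint(session, titleId), 0);
  return { titleId, resumeAt };                                 // playback still starts
}

startPlayback("user-1", "some-title").then(console.log);
```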

While I expected there to be some issues on day one, I was disappointed in how widespread the problems were and the number of different issues viewers experienced. No doubt Disney Streaming Services is learning a lot from this, as have all of the OTT services that have launched in the market. But for those who suggest that Disney Streaming Services didn’t plan in advance, or didn’t test things first, that’s simply not accurate. You can do all the testing you want at scale in a controlled environment, but until you put real traffic over the platform, you can only know so much. No amount of pre-testing can replicate what will happen when a service goes live, which is why Disney+ was rolled out in the Netherlands first, so they could get some real-world feedback.

What problems Disney saw in the Netherlands (few were reported) and how much of an insight that gave them at scale, I don’t know. I also don’t know to what degree Disney tested each portion of the workflow separately or together, and at what scale. But we’ve seen everyone including HBO, WWE, Sling TV, AT&T, Hulu, Amazon Prime Video, CBS and now Disney all have problems, of varying degrees. It’s also worth noting that HBO had repeated problems with Game Of Thrones, which we now know from AT&T’s presentation last month, peaked at 5M simultaneous viewers. What this shows us is that it’s difficult and complex to build out a consumer video service, even when you are talking 5-10M subscribers.

While I don’t know why Disney made some of the decisions they did, I was surprised that they decided to push out the Disney+ app across all platforms, at the same time. Many video services in the past would allow users to download and login to the app before the service launched, so any authentication issues could be worked out in advance. That seemed like a risky approach, but maybe there was a good reason why Disney did it. I also found it odd that platforms like Roku and Fire TV were promoting the Disney+ app, but you couldn’t download the app as it wasn’t yet in the store since that takes time to propagate. We know the app can’t show up at once for everyone, but why not wait to showcase it on the main screen until it’s actually available for download? The very first experience I get with Disney+ is that the app can’t be found.

For me, the big failure on Disney’s part was the lack of communication and setting customers’ expectations properly. Complaints were coming in for almost 4 hours before Disney put out a short statement on Twitter saying “The consumer demand for Disney+ has exceeded our high expectations. We are working to quickly resolve the current user issue. We appreciate your patience.” But they didn’t acknowledge which issues they were working on or give an ETA as to when they would be resolved. Blaming customers for signing up for a service you asked them to buy is a really bad strategy. It also didn’t help that wait times with support would say 20 minutes, but hours later you still could not get through. Disney set expectations and then didn’t live up to them, and saying on Twitter, “please note that due to the overwhelming response to Disney+, wait time may be high for customer service”, is a really poor reply. Especially since the support chat window says, “Please expect wait times to be longer than normal.” What is normal? It’s day one of the service.

As of now, Disney has 30 different error codes listed on their website, but yesterday when I was on their help page looking for specific ones, they didn’t come up. Today they do, but some are confusing as they give two very different explanations for the same error. Error code 9 says they can’t process my payment, but it also says it could be a not-logged-in error. Also, if you want to send Disney an email to let them know of a problem, you can’t type in your problem and can only select from four options in a drop-down menu, none of which are related to logging in or any of the issues people are having. They also ask you to select which device you are using, but Roku isn’t listed in the drop-down menu, and if I select Apple TV 4 as my device, why am I getting an option to say it’s connected via 3G, 4G or 5G? When you do send in a report of a technical issue, you get a pop-up window telling you, “Unfortunately sometimes we just aren’t able to show or license a particular movie or show.” That has nothing to do with the subject of the message I sent in.

While it appears the service is now working for the majority of people and complaints on social media have died down from day one, there are a lot of little bugs that still need to be fixed. For instance, if I start playing a video and exit it, when I go back in, sometimes it won’t pick back up where I left off. Disney is working to fix issues like this, but since they haven’t published any kind of functionality spec of what the user experience supports, we also don’t know which features are having problems or simply are not supported in the current build. For instance, the second episode of The Mandalorian comes out on the 15th, but nowhere on the show page does it tell me that. How can that be missing? I see lots of things that need to be fixed across different platforms: on my MacBook, every show I start is muted by default, but that’s not the case on my iPad, Amazon or other devices. If you are watching something episodic, it will start the next show in the series automatically, but it doesn’t tell you that something is coming “up next”. These are just a few of the user experience issues that I and others have documented.

Disney Streaming Services knows what needs to be fixed, what’s not working right and what they need to do to improve the user experience. I expect they will resolve all of these issues sooner than later and the next major build and release of Disney+ should include all kinds of new features and improvements. But what those will be and when they will come out we just don’t know. While Disney Streaming Services has been working on the Disney+ platform for a while leading up to launch, now their work really begins to make it a feature-rich and reliable service that can scale to tens of millions of users, with good quality and a great user experience. The hardest part is now over with the launch, but they still have a lot of work ahead of them.

Just Announced! Marc DeBevoise, President & COO of CBS Interactive to Keynote Streaming Summit

I’m excited to announce that Marc DeBevoise, President & COO of CBS Interactive will be the closing keynote on day one at the Streaming Summit, taking place October 16-17 as part of the NAB Show New York event. Taking place at Stage 1 on the exhibit floor from 5:15pm-6:00pm, you can see Marc’s keynote for FREE by getting a pass here bit.ly/2ASEsy4. And stick around after the keynote for free drinks and networking! #streamingsummit

Hosting Private Dinner With Akamai for Wall Street Investors/Analysts, Nov 7th in NYC

For those on Wall Street that would like to get a better understanding of Akamai’s business, especially in the web performance and security markets, I’ll be hosting a private dinner on Thursday November 7th in NYC. I will be leading a discussion with Rick McConnell, President of Akamai and GM of the Web Division, along with Josh Shaul, VP Product, Web Security. The discussion will be centered around Akamai’s performance and security solutions detailing customer use cases, how these services are deployed from a technical standpoint, the value they offer to customers and short and long-term trends the company is seeing in the market.
If you are interested in attending the dinner, please reach out to me (917-523-4562) for details but note that the dinner is only for Wall Street investors and analysts.

Come Hear All About AVOD, SVOD and D2C Business Models: Battle For The Living Room

Come to the Streaming Summit, on October 16-17 at the NAB Show New York, and hear all about “AVOD, SVOD and D2C Business Models: Battle For The Living Room.” Learn which business models are expected to get the most traction, what consumers want when it comes to a content bundle, how OTT services will stand out from the crowd and what the future of the video experience looks like for live, on-demand, AVOD and SVOD streaming services.

The panel features Andrea Clarke Hall from Tubi, Richard Shirley from A&E Networks, Cameron Douglas from Fandango and Matt Smith from Comcast Technology Solutions, moderated by Jon Watts from MTM. Get your ticket before it’s too late!

Streaming Mixer, Wednesday October 16th in NYC, Free Drinks and Networking

Come by the Streaming Mixer Wednesday October 16th at the Streaming Summit in NYC. Network and have a drink with industry peers thanks to Limelight Networks, Conviva and Verizon Media, from 5-6pm on the show floor of the NAB Show New York. Contact me for a free exhibits pass to attend.

AWS Hosting Free Workshops at Streaming Summit: Learn “How To Design & Build Well-Architected, Serverless Media Solutions”

I am pleased to announce that Amazon Web Services (AWS) is offering FREE hands-on workshops at the Streaming Summit, taking place on Oct. 16th in NYC, led by their Training & Certification/Media Solutions Architect teams. Learn “How to design & build well-architected, serverless media solutions”. You can register for free here and also see a list of all the classes. Learn how to build well-architected, serverless media solutions using AWS Elemental and Amazon Web Services API-driven artificial intelligence and machine learning services. Developers (including asset managers, digital team members, DevOps and streaming engineers) are encouraged to attend.

Thursday, September 26th Webinar: Investing in Your Enterprise Video Strategy

Thursday, September 26th at 11am ET, Frost & Sullivan will be hosting a webinar entitled “Investing in Your Enterprise Video Strategy – Don’t Get Left Behind”. Hear best practices on how to articulate the value of the investment to gain internal agreement, and how enterprise video can deliver real business results and complement the digital transformation agenda. Learn about:

  • The opportunity cost of not actively developing and investing in an enterprise video strategy
  • The return on investment and total economic impact of video in the enterprise
  • Top-line benefits and bottom-line efficiencies companies are unlocking through their deployments
  • The role of video in enabling your company’s digital transformation initiative
  • The pitfalls of further delaying the execution of a comprehensive video strategy

Register for free and hear from two Frost & Sullivan analysts on this important topic.

CenturyLink Acquires Streamroot, Will Use Mesh Technology to Extend CDN Capacity Globally

Last week, CenturyLink announced they had acquired privately held Streamroot, a CDN provider with an underlying P2P technology. Streamroot had raised $6M to date and the company is expected to do sub-$5M in revenue for this year. Based on what Streamroot’s valuation would have been for a Series A round, CenturyLink valued the company between $20M-$30M.

CenturyLink’s acquisition is motivated by improving performance for viewers of live events, day/date content releases and catch-up TV, reducing buffering and failed starts during peak hours. Streamroot leverages a software-based mesh network and a deterministic data science methodology, providing device awareness to ensure that consumers have optimal experiences without creating any data privacy concerns. In short, this extends the capabilities of CenturyLink’s existing CDN to offer the promise of better performance in hard-to-reach places and during volume spikes, when delivery is most difficult. Of course, we have heard these value propositions before; however, the vision that CenturyLink and Streamroot share, and what’s under the hood, is admittedly more nuanced.

While most previous P2P attempts have failed at scale, Streamroot has actually delivered premium content to a million-plus simultaneous viewers for national broadcasters like TF1, Canal Plus and RTVE in Europe and LATAM, during the World Cup, Copa América and other events. To date, most vendors focused on P2P have tended to roll out “cheap” CDN alternatives, but Streamroot was quick to identify that “cheap” wasn’t what the market wanted. They had to deliver visibility into material QoS improvements while handling complex workflows, including proprietary ABR, multiple SSAI providers and DRM considerations.

CenturyLink said what makes the combination of the companies stand apart is their belief in using data from the device to improve content delivery at both a micro and a macro level. At the micro level, that means Streamroot’s products take into account each device’s instantaneous conditions to select the best delivery source; at the macro level, it’s the data gathered on the network topology of ISPs globally and on device behavior depending on OS, version, content type, etc. In other words, this isn’t just “peer-to-peer.” It’s about device telemetry; it’s about adapting delivery to each individual user, content and network; it’s about broadcasters being able to customize a technology solution to their unique use case and all the variables that come along with it: encoding ladder, ABR, ISP peering, networks, topology, and more.
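
To make the micro-level idea concrete, here’s a minimal sketch of per-device source selection. The scoring weights, inputs and source names are my own illustration, not Streamroot’s actual algorithm:

```typescript
// Illustrative sketch of per-device delivery-source selection -- not Streamroot's
// actual algorithm. Each candidate source (CDN edge or mesh peer) is scored from
// the device's own instantaneous measurements, and the next segment is fetched
// from whichever source currently looks best.

interface SourceStats {
  name: string;            // e.g. "cdn-a", "cdn-b", "peer-42" (made-up names)
  throughputMbps: number;  // measured by this device for this source
  rttMs: number;           // last observed round-trip time
  recentErrors: number;    // failed/timed-out segment requests in the last window
}

function scoreSource(s: SourceStats): number {
  // Arbitrary illustrative weights: favor throughput, penalize latency and errors.
  return s.throughputMbps - s.rttMs / 100 - s.recentErrors * 5;
}

function pickSource(candidates: SourceStats[]): SourceStats {
  return candidates.reduce((best, s) => (scoreSource(s) > scoreSource(best) ? s : best));
}

// Example: this device currently measures better conditions to a nearby peer.
const next = pickSource([
  { name: "cdn-a", throughputMbps: 12, rttMs: 80, recentErrors: 1 },
  { name: "cdn-b", throughputMbps: 9, rttMs: 35, recentErrors: 0 },
  { name: "peer-42", throughputMbps: 15, rttMs: 20, recentErrors: 0 },
]);
console.log(`Fetch next segment from ${next.name}`);
```

The macro level is essentially the same measurements aggregated server-side across ISPs, OS versions and content types to tune those per-device decisions globally.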

Thinking holistically about delivery has allowed Streamroot and CenturyLink to find a common avenue to improve content delivery, and this offers applications far beyond just P2P for video. Client-side load balancing, file download, ISP-aware routing and devices localizing to the best possible topological and performance-based cloud edge serving locations could all be use cases. They have the potential of using device-side technology to provide the best user experience, helping ISPs route traffic flows more efficiently and helping content providers, web applications and edge cloud services localize connections.

Client-side integration is never a pleasant subject, but we might as well take it as a given in a multi-CDN, multi-OS, multi-hardware world. The company says pre-integrations with pretty much every major web player make web deployment possible in a matter of minutes. Streamroot has also worked to make its mobile SDKs light years simpler than the average advertising (whether SSAI or CSAI) or DRM deployment. It essentially boils down to roadmap prioritization, as the actual man-hours to integrate the solution on any device can be counted on two hands. And when benchmarking that against the quality improvement, the company says the ROI calculation is pretty simple.

I’m very interested in seeing where CenturyLink is going with this acquisition and with its media delivery services in the short term. The CenturyLink team is making smart moves to incorporate technologies that may very well redefine content localization in an otherwise lackluster market, especially in Europe. Right now the Streamroot technology works just for video content, both live and on-demand, but CenturyLink said they plan to invest in the technology, add more developers, and add download functionality in the future. It’s a good move on their part to pick up some proven technology and a good engineering team in Streamroot, and overlay it on top of their network, supported by CenturyLink’s sales and marketing team.

How CDN Switching Blind Spots Lead To Rebuffering

Reducing video rebuffering can be difficult. One solution that many people are talking about these days is moving to a multi-CDN architecture, a topic I’ve written a lot about. But will going multi-CDN magically reduce your rebuffering and drive away all of your streaming ills? The answer, of course, is complicated. Going multi-CDN can provide several benefits for customers, such as better geographic coverage and, possibly, better economics. Adding live switching logic between the CDNs goes a step further and enables load balancing and redundancy in case of problems. But what are the problems that customers are likely to encounter? Let’s examine some common CDN problems and their impact.

Catastrophe and Chaos
Once in a blue moon, a CDN will experience a major outage that affects a large geographic area. These outages are so extreme that they shut down a large portion of the Internet for a non-trivial amount of time. Recent examples of such outages:

Detectability: Very easy to detect as any metric that you care to measure will explode. Your alerts will fire or, if you don’t have alerts, you’ll get messages and phone calls from users.
Solution: A good CDN switching engine will attempt to re-route users to a working CDN within the limits of the outage.

PoP Flop
Occasionally, a CDN might experience a local issue in one of its PoPs (points of presence), which means all of the users that are routed through that specific PoP will have problems fetching video segments and will most likely experience rebuffering.

Detectability: Medium or hard depending on the percentage of users that are affected. A large portion of the traffic would skew the metrics enough to create an anomaly, while a small portion might get swallowed within the geographic granularity of the monitoring system.

Solution: A good CDN will re-route the impacted traffic to a different PoP within its network which will cover for the faulty PoP with best-effort performance. A good CDN switching engine will eliminate the faulty CDN altogether from that region and just use a non-faulty one.

Smaller scale outages are so hard to detect that you might never know they ever happened. Have your users experienced such an outage? The answer is most likely yes, since all CDNs see dozens of them as part of their daily monitoring efforts. In the industry, we call these events “blind spots”.

The Blind Spots of CDN Switching
There are 3 main blind spots that server-side CDN switching engines do not address very well:

  • #1. The DNS Propagation Problem. “We already know there’s a problem, but we have to wait at least 5-10 minutes for DNS to propagate.” A common CDN switching implementation is based around DNS resolving. The DNS resolver incorporates a switching logic that responds with the best CDN at that given moment. If one of the CDNs in the portfolio experiences an outage or degradation, the DNS resolver will start responding with a different (healthy) CDN for the affected region. The blind spot of DNS would be its propagation time. From the moment the switching logic decides to change CDNs, it might take several minutes (or longer) until the majority of traffic is actually transitioned. Moreover, while most ISPs will obey the TTL (DNS response lifetime) defined by the DNS resolver, some will not, causing the faulty CDN to remain the assigned CDN for the users behind that ISP. Rebuffering on existing sessions is inevitable, at least until the DNS TTL expires on the user’s browser.
  • #2. The Data Problem. “We select CDNs based on a synthetic test file, but real video delivery is much slower.” Any switching solution must implement a data feed that reflects the performance of the CDNs in different regions and from different ISPs. A common approach for gathering such data is to use test objects that are stored on all of the CDNs in the portfolio. The test objects are downloaded to users’ browsers, which then report back the performance that was observed. Oftentimes, for various reasons, the test objects don’t represent the actual performance of the video resources. For example, imagine that the connection between the origin and the edge server is congested: the test object will not be impacted since it’s already warmed in the cache of the edge server and does not need to use the congested middle-mile connection to the origin. This performance gap will cause a CDN to be erroneously selected as the best one even though the reality differs. It’s also possible that the test objects and the actual video resources do not share the same CDN bucket configuration. If the video resources bucket is misconfigured, some users might get unoptimized or even faulty responses which will never be detected because the test objects bucket functions properly. This kind of disconnect between synthetic performance measurements and actual delivery performance often generates degraded performance that goes undetected for a very long time.
  • #3. The Granularity Problem. “We select CDNs based on overall performance in each region, but this stream is performing poorly for a subset of the users.” A typical CDN Switching flow might be:
    • measure CDN performance across different regions
    • report results to the server
    • server chooses the “best” CDN
    • users are assigned to the best CDN for their region

    Unfortunately, not all regions have fresh performance data all the time, so a fallback logic is usually applied. When there isn’t enough data for a specific region, data from its greater containing region is used instead. It’s possible that a region with a small number of users gets swallowed up by a larger fallback region, in which case an outage might not be detected at all because the affected users comprise a small portion of total traffic that is not enough to “move the needle”.

    This is a data granularity problem. A broadcaster might have 100k users spread across 15 countries, 1,000 unique regions, 5,000 ISPs and a host of other parameters. Taken together, these parameters segment the user base into millions of tiny dimensions, none of which will have enough data to support meaningful switching decisions, not to mention the load it would create on the switching system. For this reason, server-side switching is inherently limited to a coarser grouping that is technically and mathematically viable. This reality creates a blind spot when it comes to smaller regions that might get hit by a local, undetectable outage. A minimal sketch of this regional fallback logic follows below.
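
The sketch below is illustrative only; the region keys, sample threshold and metrics are mine, not any particular vendor’s switching logic. It simply shows how a small segment’s bad experience gets averaged away once its data falls back to a larger parent region:

```typescript
// Minimal sketch of the granularity problem in server-side switching.
interface RegionStats {
  samples: number;   // performance reports received for this region
  bestCdn: string;   // CDN with the best aggregate metric in this region
}

const MIN_SAMPLES = 1000;   // below this, fall back to the parent region (illustrative)

// Aggregated performance data keyed by increasingly coarse regions.
const stats: Record<string, RegionStats> = {
  "us":              { samples: 250_000, bestCdn: "cdn-a" },
  "us/oregon":       { samples: 8_000,   bestCdn: "cdn-a" },
  "us/oregon/isp-x": { samples: 120,     bestCdn: "cdn-b" }, // too few reports to trust
};

function chooseCdn(region: string): string {
  // Walk up the hierarchy until a region has enough data.
  let key = region;
  while (key) {
    const s = stats[key];
    if (s && s.samples >= MIN_SAMPLES) return s.bestCdn;
    key = key.includes("/") ? key.slice(0, key.lastIndexOf("/")) : "";
  }
  return "cdn-a"; // global default
}

// Users behind isp-x are suffering on cdn-a, but their reports are swallowed by the
// parent region's aggregate, so the switcher keeps assigning cdn-a.
console.log(chooseCdn("us/oregon/isp-x")); // => "cdn-a"
```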

Real World Example of The Granularity Problem
In the last week of August, an outage occurred in the U.S. which demonstrates the granularity problem. An outage at a CDN I won’t name caused a significant drop in request performance that, in turn, led to rebuffering. Thanks to Peer5 for sharing the screen grabs (below) from their monitoring tool, showing that at 7:20 AM an increase in Time-To-First-Byte (TTFB) was observed, from an average of 850ms to a peak of 6700ms. When comparing the 95th percentile TTFB of the affected area to its greater containing region, it’s clear that the affected area didn’t constitute enough data to move the overall metric.

[Chart: 95th Percentile TTFB – affected area vs. greater containing region. The greater containing region (blue) doesn’t show any anomalies throughout the outage. (less is better)]

[Chart: Rebuffer time as a percentage. Rebuffering spikes render the playback unwatchable. (less is better)]

[Chart: Allocated CDNs over time. While this chart might seem dull, it illustrates that throughout the entirety of the outage, no CDN switching took place for the given region.]

Enter: Per User CDN Switching
Video playback is a very fragile thing. A user might have just a couple of seconds of content buffered ahead, and any slowdown in fetching segments can easily consume that buffer and freeze the playback. For this reason, vendors in the market are coming up with ways to fix the problem. For instance, Peer5 created a client-side switching feature which constantly monitors the playback experience for each individual user and is able to react to poorly performing CDNs within a split second (literally, milliseconds), preventing rebuffering from ever happening. This means that even an outage that only affects one user will be accounted for. The charts below show the performance during the outage described above, with and without the client-side switching feature.

[Chart: 95th Percentile TTFB – affected area vs. greater containing region. The TTFB of the client-side switching group (green) was affected as well, but much less than the other group. (less is better)]

[Chart: Rebuffer time as a percentage. The client-side switching group (green) experiences almost no interruption in playback. (less is better)]

As seen in the charts above, users that relied solely on server-side switching (red line) were impacted significantly compared to users with client-side switching. Server-side CDN switching was not granular enough to detect the local outage, and the assigned CDN for that region remained the same even though some users experienced terrible performance degradation. The client-side switching, with its per-user granularity, was able to change the mixture of CDNs within the region and avoid the issue in real time. Rebuffering was reduced from 11.2% to 0.2% for users with client-side switching enabled, and the overall region’s rebuffering was reduced by 70%, from 1% to 0.3%.
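
For readers who want to picture what per-user, per-request switching looks like inside a player, here’s a minimal sketch of the general technique. It is not Peer5’s actual implementation; the host names and the 500ms threshold are purely illustrative:

```typescript
// Minimal sketch of client-side, per-user CDN switching -- the general technique,
// not Peer5's implementation. The player keeps a ranked list of CDN hosts and
// retries/switches on a per-request basis when a segment fetch is slow or fails.

const cdnHosts = ["https://cdn-a.example.com", "https://cdn-b.example.com"]; // hypothetical
let preferred = 0;
const SLOW_MS = 500;   // illustrative threshold: treat slower responses as degraded

async function fetchSegment(path: string): Promise<ArrayBuffer> {
  for (let attempt = 0; attempt < cdnHosts.length; attempt++) {
    const idx = (preferred + attempt) % cdnHosts.length;
    const started = Date.now();
    try {
      const res = await fetch(`${cdnHosts[idx]}${path}`);
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      const data = await res.arrayBuffer();
      if (Date.now() - started > SLOW_MS) {
        // This CDN is degraded for *this* user: demote it for subsequent segments.
        preferred = (idx + 1) % cdnHosts.length;
      } else {
        preferred = idx;
      }
      return data;
    } catch {
      // Hard failure: immediately try the next CDN for this very request,
      // before the playback buffer drains and the user sees rebuffering.
      continue;
    }
  }
  throw new Error("all CDNs failed for segment " + path);
}
```

The key property is that the decision is made on the device, per request, so a single user’s degraded path can be routed around without waiting for a server-side aggregate to notice.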

Summary
When CDNs experience outages, users will encounter rebuffering. There are multiple types of outages; some will fly below the radar, completely undetected, while others you will notice immediately. Different layers of redundancy and different levels of granularity try to address the various outages an online delivery pipeline might experience, and a combination of several such redundancy tools is likely to achieve the best UX. Employing server-side switching alongside client-side switching allows customers to:

  • Reduce rebuffering by monitoring video playback constantly for all users
  • Allow existing sessions to respond to outages very quickly by switching CDNs on a per-request level
  • Improve bitrate and quality by increasing the granularity of CDN selection to a per-user level

There are lots of ways to solve the video buffering problem, depending on what type of video you are delivering (live vs. VOD), the platform or devices you are delivering it to and the user experience you are looking to achieve. What’s your take on the best ways to reduce video buffering? Feel free to leave your thoughts in the comments section.

Streaming Summit Program and First Set of Speakers Announced: Hear from CBS, Quibi, NBC, YouTube, Twitter, Amazon, WarnerMedia/HBO

I’m pleased to announce the first set of speakers for my Streaming Summit, at NAB Show New York, taking place Oct 16-17. The program schedule has also been added to the website and when completed, we’ll have over 100 speakers, across two days of the show. Newly added speakers include executives from CBS Interactive, Quibi, NBC Sports, YouTube, Twitter, Amazon Fire TV, WarnerMedia/HBO – with lots more on the way! Register before Sept 12th using code “early” for a discount on your ticket. You can see the entire agenda on the schedule page. #streamingsummit #nabshowny

Podcast: Disney+, Quibi, HBO Max. What Happens When Content Owners Go Direct

Thanks to Beamr, Mark Donnigan and Dror Gill for having me on their “Video Insiders” podcast to talk about Disney+, Quibi, HBO Max, Hulu, ViacomCBS, and what the forthcoming D2C launches mean for incumbents, including Netflix and pay TV operators. Hear my thoughts on content aggregation and ideas for measuring success in OTT, along with the technical plans and platform choices being made by these developing services. This is a frank and honest, real-time and real-world conversation about the groundswell of direct-to-consumer OTT services which will be unleashed over the next few quarters.

You can listen to the podcast here: https://thevideoinsiders.simplecast.com/episodes/episode24

The Challenges and Best Practices For Inserting Ads Into OTT Downloadable Videos

When Disney+ launches in November, one of the unique features of the service is that 100% of their video catalog will be available via download, for offline viewing. As OTT services evolve, more consumers are going to expect content to be available offline as part of the user experience. While Disney+ won’t include ads within their videos, many AVOD providers are looking at how the feature presents a huge revenue opportunity, especially for mobile. Yet, given how new the tech is, there are many questions AVOD providers have about how the technology works and whether or not it can truly integrate into an ad-supported business model.

For the most part, the reason AVOD providers haven’t offered download capabilities in the past (while some SVODs have) is that delivering video advertising offline adds a lot more technical complexity. Most importantly, any ad-based video download feature must include processes to ensure that ads always remain timely and monetized, and it’s harder to do measurement and reporting of ad viewership. If a viewer is served an ad past the date when it can be monetized, time is wasted and money is lost. So if a viewer downloads a video, providers have to be certain the ads attached will still be current, even if the viewer doesn’t watch the asset for two weeks. This is crucial because if the initial ads downloaded with a video asset are, for example, promoting a Labor Day sale at a retail store, they will no longer be relevant if the viewer watches after the holiday, and the ad creative is then no longer profitable.

I recently spent a few hours in person with Penthera to learn how their platform uses dynamic ad insertion, so that after a video is downloaded onto a user’s device, the ads are regularly refreshed in the background (when the user isn’t using the app but is connected to cellular or WiFi) to ensure that the ads served are up to date and monetized. The company says video ads are typically monetized if played within 3 to 4 days, on average. Based on the configured refresh window, Penthera receives a notification that tells their platform to update the ad in the background before the monetization expires, typically about 2 days before. This guarantees that downloaded ads are always timely and monetized when attached to a video stream.
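
Here’s a minimal sketch of that refresh-before-expiry idea, using the windows described above (ads monetizable for roughly 3-4 days, refreshed about 2 days before expiry). The types and function names are my own illustration, not Penthera’s SDK:

```typescript
// Illustrative sketch of refreshing downloaded ads before their monetization
// window expires -- not Penthera's actual SDK, just the scheduling idea described
// above. The connectivity check and ad-server call are passed in as assumptions.

interface DownloadedAd {
  adId: string;
  videoId: string;          // the downloaded asset this ad is attached to
  expiresAt: number;        // epoch ms after which the impression won't monetize
}

const REFRESH_LEAD_MS = 2 * 24 * 60 * 60 * 1000;   // start refreshing ~2 days early

async function refreshStaleAds(
  ads: DownloadedAd[],
  isConnected: () => boolean,                                // WiFi or cellular
  fetchFreshAd: (videoId: string) => Promise<DownloadedAd>,  // would call the ad server
): Promise<DownloadedAd[]> {
  if (!isConnected()) return ads;                  // only refresh in the background when online
  const now = Date.now();
  return Promise.all(
    ads.map(async (ad) =>
      ad.expiresAt - now <= REFRESH_LEAD_MS ? fetchFreshAd(ad.videoId) : ad
    )
  );
}
```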

Both providers and advertisers need accurate information about who is seeing ads, but tracking viewership becomes more complicated when users aren’t connected to a Wifi or cellular network. Penthera calls its solution agnostic, which means it can plug into existing ad systems such as FreeWheel, Google Ads, and SpotX. Because of this, the downloaded ads have the same targeting capabilities and analytics reports on ad performance as streaming ads. The ad server is still able to provide all the standard insights into the ad performance, and Penthera’s SDK provides additional analytics around when the downloaded content starts or stops, download progress, and whether the downloaded video was watched. This means, by integrating into existing advertising infrastructure, ad-supported video downloads can act as an extension of streaming ad campaigns.

The success of certain digital ad campaigns isn’t only measured in terms of impressions, but also by viewer click-throughs. Naturally, clicking through to an online landing page is impossible when the user isn’t connected to the internet. But Penthera has come up with an interesting work-around for this, so that offline ads can still promote engagement: their SDK is built to track those clicks. This means that, even when offline, a viewer can click on an ad. Later, once they are connected to the internet again, the viewer receives a notification reminding them that they were interested in the ad content, with a link directing them to the appropriate landing page, website, or app store.

Penthera says their data shows that a large majority of users watch downloaded video while their device is online. This is an interesting insight, as it demonstrates that users are downloading content either because they value the experience they get from offline playback over streaming, or because they are generally watching content while on cellular networks in order to limit data plan usage. What this means for advertisers, however, is that in many instances, playing downloaded video with ads functions much the same as if the user were online. The advertising beacons (the network calls that inform the ad networks that an ad was played) can be reported immediately when the impression occurs, just like existing beacons. However, if the user happens to be offline when they play their video, the technology steps in to support the process. Penthera’s SDK catches the beacons and records the exact moment they were triggered. Then, later, when the device gets back online (even if the user doesn’t open the video app again), the beacons and the times they were recorded are sent to the advertising platforms to be recorded. Thus, all offline advertising impressions have a chance to be monetized.
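
As a rough sketch of that beacon-deferral pattern (this is not Penthera’s code; the queue structure and the timestamp parameter name are my own assumptions):

```typescript
// Illustrative sketch of offline beacon handling. Beacons fired while offline are
// stored with the moment they occurred, then replayed to the ad platforms once the
// device reconnects (e.g. from a background task, even if the app isn't reopened).

interface QueuedBeacon {
  url: string;          // the tracking URL the ad server expects to be called
  firedAt: number;      // epoch ms when the impression actually happened
}

const pending: QueuedBeacon[] = [];

function reportBeacon(url: string, isOnline: () => boolean): void {
  if (isOnline()) {
    void fetch(url);                              // normal, immediate reporting
  } else {
    pending.push({ url, firedAt: Date.now() });   // capture for later replay
  }
}

async function flushBeacons(): Promise<void> {
  while (pending.length > 0) {
    const b = pending.shift()!;
    // Pass along the original timestamp so the platform can attribute the
    // impression to when it really happened (the parameter name is hypothetical).
    await fetch(`${b.url}&firedAt=${b.firedAt}`);
  }
}
```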

In addition to beacon management, you also have to manage dynamic offline advertising loads. The value of an advertisement typically diminishes rapidly from the point it is initially delivered; the ads with the highest value usually have a short validity window before they don’t pay out. An effective offline advertising platform needs to be able to balance the needs of the publisher to deliver high-value ads with the needs of the advertiser to have ads only display when they’re valid. Penthera says their solution allows advertisements, delivered via both server-side and client-side insertion methods, to be refreshed and updated over time without requiring the video to be re-downloaded. They are also considering the ability to download multiple ad loads simultaneously (some with high value but short expiry, and some with lower value but longer expiry) to better ensure that when the customer wants to play a video, there’s always the ability to include a monetizable ad.
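
If multiple ad loads are downloaded as described (a feature Penthera says is under consideration, so this is my illustration of the idea rather than anything shipping), the playback-time selection reduces to picking the highest-value ad that is still inside its validity window:

```typescript
// Illustrative playback-time ad selection across multiple downloaded ad loads:
// drop anything past its validity window, then take the highest-value remaining ad.

interface AdLoad {
  adId: string;
  cpm: number;          // relative value of the ad
  expiresAt: number;    // epoch ms end of the validity window
}

function pickAd(loads: AdLoad[], now = Date.now()): AdLoad | undefined {
  return loads
    .filter((ad) => ad.expiresAt > now)     // drop expired creatives
    .sort((a, b) => b.cpm - a.cpm)[0];      // highest value first
}

// Example: the short-expiry, high-value ad has lapsed, so the longer-lived,
// lower-value ad still lets the playback be monetized.
const dayMs = 24 * 60 * 60 * 1000;
console.log(
  pickAd([
    { adId: "flash-sale", cpm: 25, expiresAt: Date.now() - dayMs },
    { adId: "evergreen", cpm: 8, expiresAt: Date.now() + 14 * dayMs },
  ])
);
```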

I’ve written before about the importance of download as a feature within OTT services and how it may be a big business opportunity for AVOD providers. But this will only hold true if the technology can work seamlessly with existing apps and ad-based revenue models. From what I’ve learned about Penthera, it appears as though they’ve solved for some of the biggest challenges of taking AVOD offline. Now we just need more OTT services, both AVOD and SVOD, to start including downloads as an option, to enable additional monetization options.

Job Opening NYC – Solution Architect, Front End Development, Video Applications, $130K-$150K

There is an immediate opening for a Solution Architect, Front End Development, with one of the largest public M&E companies in the world ($200B+ market cap). They have multiple live/VOD OTT offerings and will be coming out with more. I am helping the person you would report to find the right candidate for this job, which is based in NYC (not negotiable) and pays $130k-$150k. This job is not currently listed online. I’ll also add, your boss is someone you would want to work for. I know them on a personal level and they have a very unique background. You would learn a lot from them and be given an opportunity to be amongst some extremely smart individuals. If you are interested in learning who the company is and more about the job, please email me or just give me a call anytime at 917-523-4562. Candidates are being interviewed immediately.

Job Description: Own the process of solving high impact, highly technical problems that span the purview of multiple organizations and stakeholders, where requirements and direction are often yet to be defined or discovered. This role is one part technical evangelist, and one part technical architect. Success in this role requires effectively working with various technical leaders from different organizations to design solutions that work for all parties involved, and to evangelize these solutions and ensure teams can execute effectively.

Preferred Qualifications

  • Able to bridge communication and technical knowledge between multiple engineering and product teams
  • Well organized with good written and verbal communication skills
  • Self-learner, independent, and easily adaptable
  • Architecting resilient applications that handle failure gracefully
  • RESTful web service development
  • Other Tools
    • API testing – PAW and/or Postman
    • Plantuml or other similar sequence diagram tool
    • Jira/Confluence
    • Github
    • Jenkins
  • Scripting Language – node/ruby/python/etc

Industry Disconnect: As Cord Cutting Grows, Live OTT Services Aren’t Seeing Big Share Gains – Does Live Matter Anymore?

In the second quarter of this year, Dish, Comcast, Spectrum and DIRECTV combined lost 1.3M pay TV accounts. Add in what Verizon may have lost in Q2, when they report earnings on Thursday, and we could see a number near 1.5M pay TV subscribers lost in the quarter. Projections are that, combined, the cable TV and satellite companies will lose about 5M pay TV subscribers in 2019.

While no one can debate these numbers, the big disconnect is that live OTT services aren’t seeing a big percentage of cord cutters sign up for their streaming services. This raises the question: where are all these cord cutters going, and do consumers really care about live TV anymore, outside of sports and some other specific big events? In the first six months of this year, Sling TV gained 28,000 subscribers. DirecTV Now lost 520,000. We don’t know how many subs YouTube TV, Hulu Live, PlayStation Vue, fuboTV or Philo have, but combined they didn’t gain the 2M subs that left pay TV and DirecTV Now in Q2.

As viewers’ content habits have shifted to an on-demand world over the past few years, one could argue that without sports content, live streaming would be a thing of the past. The Grammys, the Olympics, news and some other one-off events would still garner interest, but it’s clear that the live OTT services simply aren’t resonating with consumers in large numbers. A big part of that is due to the rising costs of live OTT services and the constant change in channel lineups and packaging. Make no mistake, live OTT is simply the new pay TV bundle. It can be called something else, but in reality it is priced like pay TV, bundled like pay TV, and has more restrictions than pay TV, with a limit on the number of concurrent streams from one account. Many will say the benefit is that OTT services have no contracts, which is true, but some pay TV providers don’t have them anymore either.

That’s not to say live content is completely dead; personally, I love live content because it’s a different type of viewing experience. Twitch and other platforms like ESPN+ are seeing some great growth in consumption, but that content is targeting a very specific user demographic, with specific content, and isn’t hitting the largest swath of the market. Facebook is seeing huge growth in live, but most of that is short-form content. As an industry, the real question we have to ask is: what does the future of live video consumption look like, and who’s going to control the market?

At some point, Disney will offer a bundle of their Disney+ service with Hulu Live. And HBO Max will bundle live content in, or offer some kind of add-on option for live streaming. One could debate whether the new streaming aggregators like AT&T and Disney will end up controlling the live viewing experience, or whether the majority of consumers will still stick with pay TV from traditional cable and satellite providers. We could also see a world where live TV isn’t that important anymore, outside of some specific large-scale live events and sports, with more money being put into original content creation for on-demand viewing. This year alone, it’s estimated that more than $10B is being spent on original content creation across all the major SVOD services in the market.

Consumers’ viewing of live TV has drastically changed and, as an industry, we need to re-think the impact that consumers’ content choices, wallet spend and viewing habits are going to have on live TV, in any form. This is an important topic and one that we’re going to discuss and debate more, with many of the leading OTT providers in the space, at the next Streaming Summit, part of the NAB Show New York, taking place October 16-17. You can join the debate and register with the discount code “streaming” to get another $100 off your ticket and pay only $595, if you register before September 12th.

Survey Of 238 Akamai, Cloudflare and Fastly Security Customers Shows Akamai’s Dominance, Limited Pressure on Pricing

In June, I completed a survey of 238 customers using on-prem and cloud-based security solutions from Akamai, Fastly and Cloudflare. The survey collected data on customers’ deployment architecture, spend per year, preference for bundling security with other cloud/CDN services, pricing changes, transition from on-prem to cloud, and which vendors are being used, amongst other data points. If you are interested in purchasing the full set of data, please contact me for more details.