Why Apple’s HEVC Announcement Is A Big Step Forward For The Streaming Media Industry

The battle for bandwidth is nothing new. As CE manufacturers push the boundaries of display technologies, and as 360 and VR production companies demonstrate ever more creative content, the capacity of networks will be taxed to levels much greater than we see today. For this reason, Apple’s announcement at its 2017 Worldwide Developers Conference that it will support HEVC in macOS High Sierra and iOS 11 is a big deal for the streaming media industry. There is little doubt that we are going to need the big bandwidth reduction that HEVC can deliver.

While HEVC has already been established on some level, since Netflix, VUDU, Fandango Now, and Amazon Instant Video have been distributing HEVC-encoded content, that delivery has all been to non-mobile devices to date. But what about the second screen, where more than 50% of viewing time is occurring? With this announcement, Apple set the de facto standard for the premium codec on second-screen devices. We know that H.264 is fully supported across the mobile device ecosystem, and any codec that is going to replace it must have a realistic path to being ubiquitous across devices and operating systems. That’s why the argument from some that VP9 will win on mobile never made sense, as I don’t see any scenario where Apple would adopt a Google video codec. But prior to Monday morning June 5th, 2017, and the WWDC 2017 HEVC announcement, no one could say for certain that this wouldn’t happen.

We’ll likely never know what the considerations were for Apple to select HEVC over VP9. With VP9 supported only on Android devices, while HEVC is supported on Apple devices as well as certain Android devices such as the Samsung Galaxy S8 and Galaxy Tab, streaming services now face a conundrum. Do they encode their entire library twice (HEVC and VP9), or only once (HEVC) to cover iOS devices and connected TVs? The decision is an obvious one. HEVC should receive priority over VP9, as most services have too much content to maintain three libraries (H.264, HEVC, VP9). When you consider that HEVC decoding is available in both software and hardware for Android, HEVC is the clear choice as the next-generation codec beyond H.264.

With Beamr and other HEVC vendors supporting OTT streaming services in production since 2014, we are well down the road with HEVC as a proven technology. And as we heard from Joe Inzerillo, CTO of BAMTech, during his keynote talk at the Streaming Media East show, serious companies should not be wasting time with “free” technology that ultimately is unproven legally. Though Joe may have been thinking of VP9 when he made this statement, he could also have been referring to the Alliance for Open Media codec AV1, which has been receiving some press of late, mainly for being “free.” My issue with AV1 is that the spec is not finalized, so early proofs of concept can be nothing more than just that: proofs of concept that may not see production for at least 18-24 months, if not longer. Then there is the issue of playback support for AV1, which, to put it simply, does not exist.

What Apple delivers to the industry with its adoption of HEVC is 1 billion active iOS devices across the globe, at a time when consumer demand for video has never been higher. Until today, OTT services have been limited to H.264 across the massive Apple device ecosystem. I predict that the first “user” of the HEVC capability will be Apple itself, as it will likely re-encode its entire library, including SD and HD videos, to take advantage of the 40% bitrate reduction Apple claims HEVC can deliver over H.264. Streaming services with apps in the App Store, or those who deliver content for playback on iOS devices, will need to be mindful that consumers will be able to see the improved user experience, higher quality, and bandwidth savings coming from iTunes.

I reached out to Beamr to get their take on the Apple HEVC news, and Mark Donnigan, VP of Marketing, made three good points to me. The first is that higher quality at lower bitrates will be a basic requirement to compete successfully in the OTT market. As Mark commented, “Beamr’s rationale for making this claim is that consumers are growing to expect ever higher quality video and entertainment services. Thus the service that can deliver the best quality with the least amount of bits (lowest bandwidth) is going to be noticed and in time preferred by consumers.” Beamr has been hitting its speed claim hard, saying it can deliver an 80% speed boost with Beamr 5 compared to x265, which removes the technical overhead of HEVC.

Mark also suggested that “there is no time to wait to integrate HEVC encoding into content owners’ video workflows.” Though every vendor makes the “time is of the essence” claim, in this case it’s possible they aren’t stretching things. With iOS 11 and High Sierra betas rolling out to developers in June, and to users this fall, video distributors who have not yet commissioned an HEVC encoding workflow don’t have a good reason to still be waiting. It’s well known that outside of Netflix, VUDU, Amazon Instant Video and a small number of niche content distributors, HEVC is not in wide use today. However, active testing and evaluation of HEVC has been going on for several years now, which means it’s possible that there are services closer to going live than some realize.
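
To make the workflow point concrete, here is a minimal sketch of what a single HEVC encoding step for iOS playback might look like, using ffmpeg with the open-source x265 encoder rather than any particular commercial encoder. The source file name, target bitrate and the presence of an ffmpeg binary are all assumptions; the one detail worth calling out is the “hvc1” sample-entry tag, which Apple’s players expect for HEVC in MP4/HLS.

```python
import subprocess

def encode_hevc_for_ios(src: str, dst: str, video_bitrate: str = "3000k") -> None:
    """Transcode a mezzanine file to HEVC in an MP4 container.

    Assumes ffmpeg is built with libx265. The '-tag:v hvc1' flag writes the
    sample-entry code that Apple's HLS/AVFoundation players expect for HEVC.
    """
    cmd = [
        "ffmpeg", "-y",
        "-i", src,                  # hypothetical source mezzanine file
        "-c:v", "libx265",          # open-source HEVC encoder
        "-b:v", video_bitrate,      # target video bitrate (illustrative)
        "-tag:v", "hvc1",           # needed for HEVC playback on Apple devices
        "-c:a", "aac", "-b:a", "128k",
        dst,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    encode_hevc_for_ios("mezzanine.mov", "output_hevc.mp4")
```

In a real workflow this would be one rung of an adaptive bitrate ladder feeding an HLS packager, but it illustrates how little is required to start producing HEVC renditions for testing.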

Finally, Mark also correctly pointed out that Apple is clearly planning to support HDR with both displays and content. With the announcement that the new iMacs will sport a 500-nit display, 10-bit graphics support (needed for HDR) and will be powered by the 7th generation Intel Kaby Lake processor with Iris Pro GPU, Apple is raising the bar on the consumer experience. Not every home may have an HDR-capable TV, but with Apple pushing its display and device capabilities ever higher, consumers will grow to expect HDR content even on their iOS devices. Soon it will not be sufficient to treat the mobile device as a second-class playback screen. As Mark told me, “Services who do not adopt HDR encoding capabilities (and HEVC is the mandatory codec for the HDR10 standard), will find their position in the market difficult to maintain.” Studies continue to show that higher resolution is difficult for consumers to perceive, but HDR can be appreciated by everyone regardless of screen size.

Apple drives many trends in our industry, and history has shown that those who ignore them do so at their peril. Whether you operate a high-end service that differentiates on video quality and user experience, or a volume-based service where delivery cost is a key factor, HEVC is here. With HEVC as the preferred codec supported by the TV manufacturers, adopted by some Android devices, and with Apple bringing on board up to 1 billion HEVC-capable devices, HEVC has clearly been prioritized by the industry as the next-generation codec of choice.

The Business Benefits Of Using A Hybrid DIY CDN Approach To Content Delivery

No CDN provider can guarantee best performance for all kinds of content and across all geographies 100% of the time. For some use cases, this can result in a lack of control over content delivery when and where it is most critical for customers. To counter this, I’m starting to see a move by some companies toward hybrid CDN delivery, defined as building a private DIY CDN on top of a third-party CDN. For some this might seem like overkill, but in business-critical operations, slow performance, downtime and giving up too much control of content can dull a company’s competitive edge.

The hybrid CDN approach enables business advantages that ultimately boil down to optimizing performance within a content delivery ecosystem the operator creates, controls and oversees to ensure availability, reliability, flexibility, and user experience. At the CDN Summit last month, there was a lot of talk about the business benefits of doing a hybrid CDN. The four most important are: availability and risk distribution; performance improvement; flexibility and control; and security.

Availability and risk distribution
With online services at the heart of doing business, downtime or outages equal losses. As such, the goal is availability as close to 100% as possible. Coupling a DIY, private CDN with a third-party CDN helps ensure this uptime by avoiding a single point of failure. When a third-party CDN goes down, websites go down with it – the “all eggs in one basket” syndrome. Risk can be distributed more effectively by assembling your own private CDN and adopting a hybrid CDN approach that allows better insight into, and control over, what is happening in real time. The result is less avoidable downtime and fewer problems overall, along with the ability to spot issues before or as they hit and to troubleshoot and mitigate their effects early and fast.

Performance improvements
As usual, one size does not fit all, nor does one tool solve every complex problem. This is why, when looking for performance gains, layering a private DIY CDN on top of a third-party CDN ensures that content is delivered by the best-performing CDN components in real time, under both regular and surging traffic. It can also allow for content delivery from the PoP closest to your primary region(s), ensuring the fastest possible content delivery.

Flexibility and control
Implementing a hybrid CDN approach lets you gain better oversight of traffic management and better control of content. By creating your own CDN ecosystem, purpose-built for your business and traffic patterns, you can direct traffic to your own private CDN nodes or to third-party nodes to gain maximum performance and efficiency where demand is, and run your business and application logic as close to your users as possible. This flexibility also gives you more control over your costs. Private CDNs enable greater scalability, as costs don’t rise based on capacity but rather on the number of nodes and requests. This makes a big difference when heavier content (e.g. high-res video) requires more bandwidth. You gain control over your own traffic without adding significant cost: you pay for bandwidth, which you would do anyway, but with a private CDN layer you also gain oversight into the costs you incur. Commercial CDNs are basically rented resources, which can become expensive and out of your control.
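
To illustrate the kind of steering logic a hybrid setup makes possible, here is a minimal sketch that prefers a private CDN node and falls back to the third-party CDN when the private node fails a health check. The hostnames, health-check path and timeout are all hypothetical.

```python
import requests

# Hypothetical endpoints for a private CDN node and a third-party CDN.
PRIVATE_CDN = "https://edge1.private-cdn.example.com"
THIRD_PARTY_CDN = "https://cdn.vendor.example.net"

def healthy(base_url: str, timeout: float = 0.5) -> bool:
    """Return True if the CDN endpoint answers its health check quickly."""
    try:
        resp = requests.get(f"{base_url}/healthz", timeout=timeout)
        return resp.status_code == 200
    except requests.RequestException:
        return False

def resolve_delivery_host(path: str) -> str:
    """Steer a request to the private CDN when healthy, else fall back."""
    base = PRIVATE_CDN if healthy(PRIVATE_CDN) else THIRD_PARTY_CDN
    return f"{base}{path}"

if __name__ == "__main__":
    print(resolve_delivery_host("/videos/master.m3u8"))
```

In production this decision would normally live in a DNS-based or API-driven global traffic manager rather than in application code, which is the role services like Cedexis play, but the principle is the same.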

Security
Part of “taking back control” of content includes ensuring its security. For many content owners, having valuable content on a multi-tenant, third-party CDN is not optimal for security. In addition to the aforementioned limitations of “renting resources”, letting your proprietary content travel across someone else’s infrastructure removes a layer of privacy and security some companies need. The private CDN infrastructure can also act as a WAF to provide protection against DDoS and other attacks, since with a hybrid CDN solution your private content is not on an open, public network.

Sometimes the adage “less is more” doesn’t apply. In the case of business-critical content delivery, more is more. A hybrid CDN approach (your own DIY CDN combined with your third-party CDN) lets you address your content delivery challenges on a global scale to get the best of what the hybrid solution offers for tangible, positive business results.

Summing it all up, a DIY CDN solution allows you to:

  • Mitigate outages and downtime.
  • Eliminate the “single point of failure” problem.
  • Set up private nodes that will act as an “origin shield” to optimize content delivery and cache-hit ratio, protecting the origin from sudden floods of calls from the CDN.
  • Focus on your performance: with a DIY CDN, you are not sharing performance with the thousands of other customers of the third-party CDN.
  • Direct traffic to your own private CDN nodes or third-party nodes to gain maximum performance and efficiency where demand is and run your business and application logic as close to your user as possible while controlling your costs.
  • Secure your proprietary content by delivering it over your own private CDN.

Recognizing the need for the kind of flexibility and control a DIY CDN offers, Varnish Software and Cedexis partnered last year to create a solution delivering all of these benefits: Varnish Extend. Underpinned by high-performance caching from Varnish Plus and global traffic management from Cedexis, Varnish Extend is a DIY content delivery solution designed specifically to provide tailored, individualized content delivery infrastructure options. [See my post entitled: Varnish Software And Cedexis Announce A New Private Content Delivery Offering]

For more in-depth details on a hybrid CDN approach, check out the video of Varnish Software’s presentation entitled, “Combining Your Existing CDN With a Private Content Delivery Solution” from the CDN Summit last month.

Wednesday Webinar: Using Data To Enhance QoE and QoS

Wednesday at 2pm ET, we’ll have a StreamingMedia.com webinar on the topic of “Using Data: Enhancing Quality of Experience and Quality of Service.” Online video viewers now expect a broadcast-quality viewing experience, and the only way to deliver it is with highly granular analytics that help assess and improve Quality of Experience and Quality of Service. Join this roundtable to hear insights into how to get the most out of your data, and how to make sure your viewers get the quality they expect. Topics we’ll cover include the following:

  • The four key indicators for analyzing video quality of experience for an enterprise webcast
  • How to utilize system, stream, network, and user level data
  • How to use analytics to improve the customer experience of your OTT service
  • The ways prescriptive analytics can ensure webcasting success
  • Encoding considerations for the various types of networks
  • Measuring QoE in DASH-based adaptive streaming—start-up delay, buffering, quality switches and media throughput
  • Optimizing adaptive streaming with the QoE data—pick the most appropriate quality level based on measured parameters
  • Real-time insights with real-time monitoring—use cases and customer examples

REGISTER NOW to join this FREE Live web event.

Best Practices For CDN Origin Storage

Because a CDN is a highly scalable network, it handles most requests from edge cache without impacting the content distributor’s origin and application infrastructure. However, the content must be available for retrieval on a cache miss or when the request has to pass to the application. Whether the assets are videos, images, files, software binaries, or other objects, they must reside in an origin storage system that is accessible by the CDN. Thus, the origin storage system becomes a critical performance component on a cache miss or application request.

Most CDNs have historically offered file-based solutions designed and architected to permanently store content on disk and act as the origin server and location for CDN cache-fill. Other alternatives include object stores, general-purpose cloud storage services including Amazon S3, and CDN-based cloud storage solutions with object-based architectures. What’s interesting about origin storage services within CDNs is that they should be able to offer an advantage over monolithic cloud environments. CDNs are essentially large distributed application platforms, and one of those applications is storage.

To be clear, origin storage is fundamentally different from the caching applications that CDNs also provide. Storage implies some permanence and durability, whereas with caching, the objects are ultimately evicted when they become less popular or expire. CDNs all operate some form of distributed architecture that is connected with last mile telco providers. If storage is distributed throughout the CDN in multiple locations, requests from end-users for content that is not already in cache can be delivered significantly faster from a nearby storage location. However, performance suffers if a request has to traverse the CDN network and potentially the open Internet to access remote origin storage.
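
As a minimal sketch of the pull-through behavior described above: on a cache hit the edge answers locally, and only on a miss does the request travel back to origin storage, which is why the origin’s proximity and performance matter. The in-memory cache and origin hostname below are stand-ins for illustration only.

```python
import requests

# Hypothetical origin storage endpoint; in practice this would be the CDN's
# integrated origin storage or a cloud storage bucket.
ORIGIN_BASE = "https://origin-storage.example.com"

# A trivial in-memory stand-in for an edge cache.
edge_cache: dict[str, bytes] = {}

def get_object(path: str) -> bytes:
    """Serve from edge cache when possible; pull from origin on a miss."""
    if path in edge_cache:
        return edge_cache[path]             # cache hit: no origin round trip
    resp = requests.get(f"{ORIGIN_BASE}{path}", timeout=10)  # cache miss
    resp.raise_for_status()
    edge_cache[path] = resp.content         # fill the cache for later requests
    return resp.content
```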

Some content distributors have accepted the risk that the availability and durability metrics for a single storage location offered by cloud storage providers are good enough, and that their applications will continue to work even if their cloud provider experiences issues. With hindsight and experience though, it is clear that is not always the case, as can be seen from the fallout of the US-East S3 outage in early 2017. The solution offered by Amazon is to use its tools to architect and build your own high-availability and redundancy setup, using multiple storage locations and versioning your objects. This is a complex change in operations from simply uploading content to a single location, and the operational overhead and cost of doing this for multi-terabyte or petabyte libraries is significant.
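
To give a sense of that operational overhead, here is a rough sketch of two of the steps involved in building that redundancy yourself with Amazon’s own tools: enabling versioning and cross-region replication on a bucket via boto3. The bucket names, region and IAM role ARN are placeholders, and a real setup also requires the destination bucket, IAM policy, lifecycle rules and monitoring to be managed.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

SOURCE_BUCKET = "my-video-library"                          # hypothetical bucket
DEST_BUCKET_ARN = "arn:aws:s3:::my-video-library-replica"   # hypothetical replica
REPLICATION_ROLE = "arn:aws:iam::123456789012:role/s3-replication-role"  # placeholder

# Replication requires versioning to be enabled on the source bucket.
s3.put_bucket_versioning(
    Bucket=SOURCE_BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Replicate new objects to a bucket in another region.
s3.put_bucket_replication(
    Bucket=SOURCE_BUCKET,
    ReplicationConfiguration={
        "Role": REPLICATION_ROLE,
        "Rules": [
            {
                "ID": "replicate-all",
                "Prefix": "",                  # replicate every object
                "Status": "Enabled",
                "Destination": {"Bucket": DEST_BUCKET_ARN},
            }
        ],
    },
)
```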

I also hear from a lot of customers who focus on the cost of storage at rest but don’t consider the additional costs of replicating content or the storage access fees. For traditional cloud storage workflows, the costs of accessing content can be even higher than the costs of storing it. Content owners should pick a CDN that charges a flat fee to store multiple copies of a customer’s content, without any additional charges for moving content into storage or accessing the content when it is requested by users. In many cases, storage from a CDN is actually more cost effective than a traditional cloud storage provider for customers who need to access content from storage frequently. Storing content in multiple locations also allows faster delivery of content that is not already in the cache. While it can be difficult to assign a specific value to delivery performance, the improved customer satisfaction of faster delivery can potentially outweigh any additional cost of replicating content closer to users. Storage costs can potentially be relatively small compared to the benefits of customer satisfaction and retention from the improved performance.

At the Content Delivery Summit last month, I had a conversation with Limelight Networks about what customers are asking for when it comes to origin storage and what a CDN should be doing to provide better performance than cloud storage providers. What Limelight said they have done is consider why and how a company would choose object storage integrated with a CDN, as well as the challenges of migrating content, and architect a solution around that. The result is something they purpose-built called Intelligent Ingest, which automates the movement of objects into integrated origin storage based on either audience demand or a manifest of files. In load-on-demand mode, audience requests that require retrieval from origin storage deliver the content and load it into edge cache, and the content is also automatically stored in Limelight’s origin storage service. In manifest mode, content distributors provide a list of content to migrate and parameters to control the rate of migration.

When picking the best origin storage solution, content owners should look for one that has automatic replication to multiple locations based on regional policies. Customers can choose policies based on audience location, such as a single region like the Americas, EMEA or APAC, weighted policies across geographies, or fully global policies. That way, content is automatically positioned close to the audience, and future origin storage calls, whether due to cache misses or refresh checks, are automatically served from the best origin storage location available for that request.

There are a number of workflows and use cases where these features could be useful for customers. For new content production, one could automate the movement of new content into the CDN storage environment as it is published and end users start requesting it. It could also be useful if you are migrating a library from your existing solution to a CDN’s origin storage, which Limelight said is a frequent use case. Enabling load-on-demand lets audience requests determine which assets to migrate, while providing a manifest of files automates migration of those assets. Another use case is pre-positioning, or what is sometimes known as pre-caching or cache warming. In advance of a launch, the CDN can distribute all the necessary files across its origin storage, and when the launch goes live, the CDN handles the subsequent traffic spikes, offloading demand from the customer’s infrastructure.
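
As a simple illustration of the pre-positioning use case, here is a hedged sketch that walks a manifest of asset paths and requests each one through the CDN ahead of a launch, so the edge caches (and, in a pull-through origin storage setup, the origin itself) are already populated. The manifest file format and CDN hostname are hypothetical, and this is generic cache warming, not any vendor’s API.

```python
import requests

CDN_BASE = "https://cdn.example.com"    # hypothetical CDN hostname

def warm_cache(manifest_path: str) -> None:
    """Request every asset listed in a manifest so edge caches are pre-filled."""
    with open(manifest_path) as f:
        paths = [line.strip() for line in f if line.strip()]
    for path in paths:
        resp = requests.get(f"{CDN_BASE}{path}", timeout=30)
        print(f"{path}: {resp.status_code}")

if __name__ == "__main__":
    warm_cache("launch_assets.txt")     # one asset path per line (assumed format)
```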

Looking at the CDN market today, it is clear the emphasis is not only on high quality and highly efficient delivery solutions but also on the range of services provided to help manage production workflows and improve the end-user experience. Moving more logic to the CDN edge to incorporate smart solutions for request handling — and as discussed above, automating content asset migration and distribution to improve performance and QoE — are areas where CDNs can make a clear difference.

Third HEVC Patent Pool Launches With Ericsson, Panasonic, Qualcomm, Sharp & Sony

For content owners and broadcasters looking to adopt HEVC, two patent pools, MPEG LA and HEVC Advance, have been offering licensing terms around HEVC patents for some time. But if dealing with two pools wasn’t already confusing enough, a third pool has now entered the market. In April, Velos Media launched a new licensing platform that includes patents from Ericsson, Panasonic, Qualcomm Incorporated, Sharp and Sony.

As all of these patent pools claim, Velos Media says it is making it “easy and efficient” to license from the “innovators that have made many of the most important contributions to the HEVC standard.” Of course, they also say that their pool “reduces transaction costs and time to market”, that their terms are “reasonable”, with “rates that will encourage further adoption of the technology.” For a pool that tries to sound professional, their website is a joke. It contains no real details of any kind, such as which patents are covered in the pool and by which companies. Nor does it give any details on what the licensing terms are or whom exactly they cover.

In their press release they make a mention of “device manufacturers” but give no other context on whom they are targeting. To make matters more confusing, Velos Media is being run by the Marconi Group, “an entity formed to create, launch and support new patent licensing platforms.” It’s clear Velos Media has no understanding of the market, doesn’t know the use cases and doesn’t realize the importance of transparency when it comes to patent pools.

Recovering From The Flu, Back To Blogging Shortly

I’m recovering from the flu and will be back online shortly. I’m catching up on emails this week and will be responding to all follow up from the Streaming Media East and CDN Summit events.

MLBAM CTO To Kick Off Day Two Of Streaming Show With Fireside Chat

I’m excited to announce that Joe Inzerillo, Executive VP and CTO of Major League Baseball Advanced Media will sit down with me for a fireside chat to kick off day two of the Streaming Media East show, on Wednesday May 17th at 9am. From BAMTech’s recent billion dollar investment by Disney to their new CEO and European office, it’s going to be a great discussion with lots of topics covered. If you have a question you want me to ask Joe, feel free to email it to me. The keynote is free to attend if you register online using code 200DR17 and select a discovery pass. #smeast