New HEVC Patent Pool Launches, Creating Confusion & Uncertainty In The Market

Last month, a new group named HEVC Advance announced the formation of a new HEVC patent pool, with the goal of compiling over 500 patents pertaining to HEVC technology. Some were surprised by the announcement since MPEG LA already offers licensing for HEVC patents, but it’s not unusual for multiple patent pools to emerge. Philips and Mitsubishi have some essential patents and aren’t currently in the MPEG LA pool, so there was always the possibility that another HEVC pool might be formed.

What caught people off guard, and what I don’t like about HEVC Advance’s approach, is their lack of concrete details and clarity about their intentions. In a call with a company spokesperson who works at GE, one of the backers of the pool, the spokesperson would not give me any details on what patents they have, what the licensing terms are, or which HEVC applications they might impact. The company also took a shot at MPEG LA, telling me that some patent holders wanted an alternative to MPEG LA, but wouldn’t tell me why, or what alternative HEVC Advance offers. In their press release, they say their initial list of companies in the pool is “expected to include GE, Technicolor, Dolby, Philips, and Mitsubishi Electric,” but they didn’t break out which patents each company holds and would not share patent numbers with me.

The company has said they will have more details to share in the coming months, yet they acknowledge that the patents have not gone through an independent patent evaluation process. Essential patent evaluation generally works by having an evaluator compare the claims in a patent with the applicable standard specification (in this case HEVC); if one or more claims would necessarily be infringed by using or implementing the standard, meaning the claim reads on the standard, then that patent claim is determined to be essential. A new patent pool should have that completed and in place before announcing itself to the market. Immature licensing programs are a threat to everyone, and that’s what HEVC Advance is. Before launching, HEVC Advance needed to be way more mature, specific and decisive if they want to position themselves as a significant and industry-enabling HEVC patent pool.

Patent licensing and IP uncertainty is always a risk with any new video compression technology, and HEVC is no exception to that rule. Most in the industry have been predicting minimal concern on that front so far, given the well-structured nature of the MPEG LA patent pool, its fair licensing structure, and a general belief within the CE and codec vendor communities that the industry had learned from past experiences and would not adversely hinder HEVC uptake through patent uncertainty.

CE adoption numbers have been looking promising since late 2014, with many smart 4K TVs and the newest smartphones from Apple and Android device makers offering built-in HEVC decoding capability. 4K trials are also underway around the world, most recently by Tata Sky for the World Cup Cricket tournament, powered by Ericsson and Elemental. The recent announcement by HEVC Advance throws a wrench into that momentum for several reasons. First, it offers no clarity or reassurance to potential licensees that they will be given a smooth path to truing up on past shipments and be offered reasonable and financially viable terms. Second, it is heavy on brand names and light on details, which does not generally reflect a mature program designed to maximize adoption.

We also have no clarity on the strength of the claims the group is making, or what exactly these patents relate to. If they cover areas not directly tied to core video processing, for example audio, certain HDR techniques, or specific filters that offer incremental improvement, then their impact could potentially be circumvented. But if they cover core video processing tasks within the HEVC standard, then we have a big problem on our hands. The good news? The community has by and large had enough of patent-related disruptions, so if the latter is indeed the case, expect some heavyweights to jump into the fray quickly and decisively to resolve the mess.

You would think HEVC Advance would offer more details, but so far, they refuse to. They did tell me that their patents were “core” and “essential” to HEVC deployments, but didn’t define exactly what that meant, or which use cases they were referring to. Of the five companies they expect to have in the pool at launch, they offered no opportunity for me to speak with any of them, and it’s clear that GE is currently leading the pool. This might be good for patent holders who have a clear desire to make money from their patents, but bad for the industry participants that might have to license them. Of course HEVC Advance spins it in a positive way, saying it’s good for the entire industry since it allows you to go to one place to license many patents, but if the terms are expensive, then that’s not a positive.

One has to wonder why all five of the companies named in the press release decided not to join the MPEG LA pool. They had the opportunity to join, but clearly felt they could earn more money with a new pool. How much more, we won’t know until we see their licensing terms. If you want a breakdown of MPEG LA’s HEVC licensing costs, see Jan Ozer’s great article, which gives you all the hard numbers.

HEVC Advance has been quoted as saying that the “market is requiring a different approach to aggregating and making HEVC essential patents available for license,” but again, they won’t detail how their approach is different. I also don’t see the “market,” defined as those who license HEVC patents, saying there needs to be an alternative model to what MPEG LA already has in place. The companies behind HEVC Advance simply want to get paid more than they could by being in MPEG LA’s patent pool, which we’ll know for sure when they disclose their licensing terms. As a CNET article pointed out, HEVC Advance promises a “transparent” licensing process, yet won’t share any details. There is nothing transparent about how they have decided to come into the market.

Note: Frost & Sullivan Analyst Avni Rambhia contributed to this post.


Cogent Plans To File Complaint Against Comcast With FCC Over ISP Network Access

Now that the new Open Internet rules have been passed, Cogent plans to file a complaint with the FCC against Comcast regarding access to Comcast’s network. I’m also hearing that Comcast won’t be the only ISP that Cogent plans to go after. The complaint can’t be filed until 60 days after the order goes into effect, which follows publication in the Federal Register, and since that publication hasn’t happened yet, the earliest we could see the complaint is still a few months away. (Thanks to Larry Downes for noticing my earlier error on the timing.) I don’t know if the FCC will even make the complaint public, but in typical Cogent fashion, the company wants all of their access to the ISPs to be free, with as much capacity as they want. It’s the same sorry argument Cogent has been making for years, and they plan to argue about the ISPs’ “monopoly” and “control” over the last mile.

When it comes to Cogent, you have to remember that this is a company that does not apply common sense to their business practices. To date, Cogent has had peering disputes with Level 3, Sprint, Deutsche Telekom AG, Teleglobe, AOL, France Telecom, TeliaSonera and others. Their argument is that they should not have to pay for access to an ISP’s network, and they think they should be able to send any ISP as much traffic as they want, with no cost to Cogent of any kind.

I find it funny that the only ISP that has actually violated net-neutrality principles with prioritized traffic is the one complaining. And yes, Cogent is an ISP. Maybe not at the scale of a Comcast or Verizon, but Cogent offers access to the Internet as a service. Cogent thinks they should get access for free, but if I am a CDN or a non-peer of Cogent, will Cogent give me free peering to reach their customers? Of course not. So why does Cogent expect the same from all other networks until the end of time? Because they don’t use any common sense or intelligence when it comes to this topic. They simply aren’t rational.

The bottom line is that interconnects and transit are competitive, paying for them is standard industry practice in the U.S., and this model has been working just fine for years, except for Cogent’s history of disputes. Some folks are also telling me that Cogent has told them that ISPs are in violation of the new Open Internet rules, which of course is factually inaccurate. The Open Internet rules do not “regulate” interconnection agreements in any way, as of now. Of course, while the FCC applauds itself for forbearing from Title II price regulation (which is a lie), the order provides a backdoor way for the commission to control interconnect pricing if it wanted to.

The Open Internet order uses language that gives the FCC the right to hear interconnection complaints, but the order “does not apply the Open Internet rules to interconnection.” The FCC says the “best approach is to watch, learn, and act as required, but not intervene now, especially not with prescriptive rules.” I don’t think the FCC will change their stance on this any time soon, since they have made it clear they don’t understand the interconnection market, but I’m sure Cogent is hoping to convince them otherwise.

At least Cogent’s argument never changes. They want it all for free, to improve their bottom line. Trying to disguise that argument as a violation of Open Internet rules is not only factually wrong, it’s just grandstanding on their part.

Enterprise/Edu Speaking Spots Open At Streaming Media East Show

The final program for the Streaming Media East show is nearly complete with just a few speaking spots still open. If you are interested in joining any of the enterprise/education focused sessions below, please contact me.

Tuesday, May 12, 2015
10:30 a.m. – 11:30 a.m.
Enterprise Delivery: Building an Internal Streaming Solution
Two round-table speaking spots open

Tuesday, May 12, 2015
4:00 p.m. – 5:00 p.m.
From the Classroom to the Athletic Fields: Streaming In Educational Institutions
One round-table speaking spot open

Wednesday, May 13, 2015
3:15 p.m. – 4:00 p.m.
Integrating Streaming, Video Conferencing, and Unified Communications Solutions
One round-table speaking spot open

White Paper: Shifting Live-To-VOD Media Processing To The Edge

To date, the most popular paradigm for multiscreen content transcoding has been a just-in-time-packaging (JITP) approach, driven by low-cost storage, manageable content volumes, and expensive live transcoders. However, soaring content volumes and growing profile complexities on the one hand, and increasing transcoder densities and falling transcoding costs on the other, are shifting economics in favor of just-in-time-transcoding (JITT). This is particularly true of network DVR deployments where operators are required to maintain one copy of recorded content per user.

In a new Frost & Sullivan white paper, we take a comprehensive look at the CAPEX and OPEX economics of JITT deployment as compared to JITP deployment. Data shows that when considering a steady audience with consistent consumption of time-shifted linear content, the five-year total cost of ownership (TCO) of JITP infrastructure is nearly twice that of the JITT alternative.

This is because the CAPEX for JITT transcoders, plus the reduced storage that they enable, is 30% lower than the JITP alternative. In addition, we see a 40% annual savings in OPEX in the JITT scenario as compared to JITP, in part because volume consumption in this use case is predictable, so capacity can be carefully planned to optimize utilization.
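
To make the directional comparison concrete, here is a minimal back-of-the-envelope sketch in Python. The dollar figures are hypothetical placeholders, not numbers from the paper; only the relative relationships (roughly 30% lower CAPEX and 40% lower annual OPEX for JITT) mirror the findings described above, and the resulting ratio will vary with the actual cost mix.

```python
# Illustrative five-year TCO comparison of JITP vs. JITT deployments.
# All dollar amounts are hypothetical placeholders; only the relative
# relationships (JITT CAPEX ~30% lower, annual OPEX ~40% lower) reflect
# the directional findings summarized above.

def five_year_tco(capex, annual_opex, years=5):
    """Simple TCO model: upfront CAPEX plus flat annual OPEX."""
    return capex + annual_opex * years

jitp_capex, jitp_opex = 10_000_000, 4_000_000   # placeholder values
jitt_capex = jitp_capex * 0.70                  # ~30% lower CAPEX
jitt_opex = jitp_opex * 0.60                    # ~40% lower annual OPEX

jitp_tco = five_year_tco(jitp_capex, jitp_opex)
jitt_tco = five_year_tco(jitt_capex, jitt_opex)

print(f"JITP 5-year TCO: ${jitp_tco:,.0f}")
print(f"JITT 5-year TCO: ${jitt_tco:,.0f}")
print(f"Ratio (JITP/JITT): {jitp_tco / jitt_tco:.2f}x")
```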

The paper also provides details on total cost of ownership, CAPEX considerations, types of transcoding workflows, sources of savings, and other industry trends. You can download the paper for free here.

Limelight Looking To Improve Content Delivery Efficiency With A Smarter Purge Solution

As content owners have become smarter about delivering digital content, they have learned to develop workflow processes and integrated systems designed to publish, manage and deliver digital content. From websites to software downloads to games, these workflows have become the backbone of how organizations deliver their content and determine what the end-user experience looks like. CDNs have become a critical element of these workflows by enabling organizations not only to publish their digital content more quickly but also to reach viewers with a quality of experience they might not be able to achieve on their own.

Content delivery networks empower organizations with faster content delivery, but they also introduce complexities into the workflow, and therein lies the rub. Now organizations need to implement processes that enable them to manage objects not only on origin or storage but also in the CDN cache. With many CDNs, the process of cache management can be slow and tedious. Most CDNs support both cache invalidation, which marks content as stale and triggers a refresh the next time the content is requested, and cache eviction, which physically deletes the content from disk. But on some CDNs, these requests to invalidate or remove objects from cache can take hours, resulting in users receiving incorrect or stale content in the interim.
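
As a rough illustration of the difference between the two operations, here is a minimal sketch of how an edge cache might treat invalidation versus eviction. This is a toy model for explanation only, not Limelight’s or any specific CDN’s implementation.

```python
class EdgeCache:
    """Toy cache illustrating invalidation vs. eviction (not a real CDN API)."""

    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch   # callable: url -> content bytes
        self.store = {}                    # url -> {"body": ..., "stale": bool}

    def invalidate(self, url):
        """Mark content stale; it stays cached and is refreshed on the next request."""
        if url in self.store:
            self.store[url]["stale"] = True

    def evict(self, url):
        """Physically remove the object from the cache."""
        self.store.pop(url, None)

    def get(self, url):
        entry = self.store.get(url)
        if entry is None or entry["stale"]:
            body = self.origin_fetch(url)  # go back to origin for a fresh copy
            self.store[url] = {"body": body, "stale": False}
            return body
        return entry["body"]               # serve directly from cache
```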

While invalidations are much more common, as they don’t re-download the entire content unless something has actually changed, evictions are sometimes required by policy, and they generally take much longer to complete, since the files must be physically deleted from every server. And most CDNs only allow exact URL matches, so when removing lots of small objects (say from chunked video, or many versions of the same HTML with differing query strings), the list of URLs can be daunting and error-prone to cobble together.

As part of a host of new services and features that Limelight Networks recently announced, one of the things they are looking to dramatically improve is CDN cache management. With this new technology, which Limelight Networks has named SmartPurge, customers can fire off a purge request and be assured that within seconds, all delivery edges stop serving content subject to the purge request. In the case of a cache eviction, while that’s happening, the SmartPurge technology removes the physical content object in the background. Their new SmartPurge user interface and API allows submitting a mix of up to 100 URLs or patterns in a single batch. Patterns can be automatically generated by the UI, and allow end-users to purge exact URLs, entire directories, all files with a particular name, all files matching a list of extensions, and more – all while specifying whether query terms should be included in the match or not. The purge request gets compiled into a single optimized request, and occurs atomically. Here’s a screenshot from their system:

With SmartPurge, a Limelight Networks customer could purge all JavaScript files within seconds and all at once, without worrying that users will be exposed to script errors between the time the first JS file is purged and the time the final one is. While other CDNs offer what they call “fast” purge, many are not flexible, as they only support exact URLs. Other CDNs support limited wildcard scenarios such as directory name or file extension, but the requests can take anywhere from 15 minutes to many hours, and they occur without guaranteed order and without guaranteed execution, so if a server is unreachable, it is skipped.

For many, the choice has been fast or flexible, but not both. According to Limelight Networks, SmartPurge is the first CDN cache management solution that is near instant, completely flexible, atomic (the entire request completes at once), and also reliable (not best effort execution). It is worth noting that even prior to this launch, Limelight was already offering full Regular-Expression-based purging, providing total flexibility, but the requests could take a long time, were not atomic, and were best effort. Therefore, this announcement is a major leap forward for them, really leaving nothing to be desired when it comes to CDN cache management.

The point about flexibility isn’t just an academic one. A major challenge with CDN cache purging is determining the URLs or cache keys for all the content you need to purge. In many cases, URLs are obfuscated and player range requests result in hundreds of small objects in cache making up a single video. Furthermore, multi-bitrate HTTP segmented streaming can result in thousands of HLS, HDS, DASH, or MSS chunks per bitrate per asset. Building the complete list of URLs would be a daunting, near-impossible task. With SmartPurge, Limelight says they have solved this by allowing organizations to submit a single pattern that will match all the chunks related to a particular media asset, or a single bitrate of that chunked asset.
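
To see why pattern support matters for chunked video, here is a hypothetical sketch of how a single pattern can cover every segment of a multi-bitrate asset. The URL layout and regular-expression syntax are my own assumptions for illustration, not SmartPurge’s actual pattern format.

```python
import re

# Hypothetical cached URLs for one HLS asset stored at two bitrates.
cached_urls = [
    "/video/asset123/1080p/segment_0001.ts",
    "/video/asset123/1080p/segment_0002.ts",
    "/video/asset123/720p/segment_0001.ts",
    "/video/asset123/720p/segment_0002.ts",
    "/video/asset456/1080p/segment_0001.ts",
]

# One pattern matches every chunk of asset123 across all bitrates,
# instead of enumerating thousands of exact URLs by hand.
pattern = re.compile(r"^/video/asset123/\d+p/segment_\d+\.ts$")

to_purge = [url for url in cached_urls if pattern.match(url)]
print(to_purge)  # the four asset123 segments; asset456 is left untouched
```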

Because CDN cache management has generally been either fast or flexible, but not both, organizations have learned to set their CDN TTLs (the maximum time content is served from cache before the CDN checks for a newer version) lower than they would like. That hurts performance, since each refresh check requires going all the way back to the origin, either to confirm the content in cache is still fresh or to fetch a newer version. With SmartPurge, organizations can set TTLs arbitrarily high, which is great for performance, and programmatically invalidate all content, or, if using a content management system, invalidate just the updated content, and be assured the new content will be visible globally as quickly as if they were still using very low TTLs.
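
The trade-off can be reduced to a simple freshness check. The sketch below is illustrative logic only, not any CDN’s actual algorithm: with a low TTL, age alone forces frequent trips back to the origin, while with a high TTL plus reliable purging, revalidation only happens when content has actually been purged.

```python
import time

def needs_revalidation(entry, ttl_seconds, purged=False):
    """Return True if the edge should go back to origin for this object.

    Illustrative only: 'entry' is assumed to carry a 'fetched_at' timestamp,
    and 'purged' reflects whether an explicit purge has hit this object.
    """
    age = time.time() - entry["fetched_at"]
    return purged or age > ttl_seconds

# With ttl_seconds set very high, revalidation is driven by purges alone;
# with a low ttl_seconds, the age check sends traffic back to origin often.
```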

Limelight is also providing advanced reporting that gives organizations a clear record of what was purged and when, adding a validation check to the workflow process, in what they say is the first “closed loop” guaranteed CDN cache management solution. This is especially important when content must be deleted from cache for policy or regulatory reasons, like DMCA takedown notices. From a product standpoint, Limelight’s been hard at work rolling out new features and functionality like SmartPurge, a new DDoS solution, self-service configuration management tools, and upgraded storage and analytics offerings.

Streaming Vendor News Recap For The Week Of March 30th

Here’s a recap of all the announcements from streaming media vendors that I saw for the week of March 30th. We had three more labeled as “world’s first” and one as “groundbreaking.” Vendors really should leave the marketing phrases out of the titles of their releases and instead use that space to better describe what the product or service actually is. Also, release titles that say things like “Powers Next Generation Video Experience” or offer a “One Stop Solution” don’t mean anything. There is no way to tell from the title of the release if the company has launched a new product, service, etc., or what “solution” they are referencing.

Many are prepping lots of announcements for NAB, so April should be a busy month for news.

Kwicr Unveils Its Mobile Delivery Network

Burlington, Mass.-based Kwicr, a company backed by Sigma Prime and Venrock with $11.5M in funding, unveiled its Mobile Delivery Network earlier this week. Founded three years ago, the 35-employee firm says it is working with more than 80 mobile apps, including those of some well-known media companies, carriers, development houses and enterprises.

Kwicr’s departure from stealth mode is yet another indication that demand for protocol manipulation technologies that impact the quality of delivery for streaming content is heating up. Two weeks ago, Twin Prime announced its launch and the completion of its initial $9.5M funding round, and other companies are soon to follow.

Certainly the timing makes sense. Mobile broadband performance is a problem, with poor quality delivery being the key reason consumers give up on apps and streaming content. As I wrote last week, a recent consumer survey from Conviva noted that 29% of viewers will close a video and try a different app or platform when they encounter a poor experience. And 75% abandon the app in four minutes or less.

In reality though, the issues content owners face today are just the tip of the iceberg. Demand for streaming content is of course increasing, and the problem, both as it relates to delivery quality and the resulting user experience, will only get worse. Just last week Facebook unveiled new video features which, if successful (and there’s a good chance they will be), will drive a rapid increase in the volume of streaming video shared by Facebook’s users. That alone may present issues for some mobile networks already struggling to keep up.

Kwicr stresses that its mobile delivery network is complementary to existing CDNs, addressing mobile broadband performance from where the content is served to the mobile device. In essence, Kwicr aims to do for the last mile what firms like Akamai, Limelight and others do for the Internet backbone. At a high level, here’s how Kwicr’s technology works.

Mobile app owners and developers integrate Kwicr’s SDK into their app, a process that takes minimal coding and enables the app to utilize the company’s technology, which it describes as Dynamic Packet Recovery and Intelligent Rate Control Technology. Customers access the technology through the corresponding mobile delivery network, which is delivered as a cloud-based service.

The Kwicr functionality, which increases the performance of any mobile app, and therefore of the streaming content delivered through it, works on any 3G, LTE or Wi-Fi network as well as any Android or iOS device. App owners can monitor the performance of their apps via a dashboard and turn on Kwicr when needed, or set it to turn on automatically when the mobile network’s performance degrades to a set level or, for example, when usage rates for the app are particularly high.

Kwicr uses a protocol solution that adapts to changing network conditions, something that happens often as power levels change from cell towers or Wi-Fi, additional users join and leave a node, and different applications place varying demands on the network. Data from testing the company did for a customer showed that available bandwidth varied by an average of 239% during 10-minute sessions.

With network conditions changing on a millisecond basis, static profiles are of limited value. Kwicr’s solution dynamically adapts to these rapidly changing conditions by measuring the available bandwidth and adjusting the rate of transmission accordingly. In addition, the protocol has built-in redundancy to recover lost packets without the need for retransmission. The real impact is that the user sees much higher performance (higher video resolution and less buffering), which allows the app owner to better monetize their content. Because the Kwicr technology is built into the app, it monitors all traffic, whether accelerated or not, and by providing this analytics data to customers, Kwicr lets them set policy based on their business goals.
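
As an illustration of the general approach, here is a minimal sketch of measure-and-adjust rate control with a fixed share of redundant repair packets. This is a toy model built on my own assumptions; Kwicr’s actual Dynamic Packet Recovery and Intelligent Rate Control technology is proprietary and not described in enough detail to reproduce.

```python
import random

class AdaptiveSender:
    """Toy model of adaptive rate control with packet redundancy (illustrative only)."""

    def __init__(self, initial_rate_kbps=2000, redundancy=0.10):
        self.rate_kbps = initial_rate_kbps
        self.redundancy = redundancy   # fraction of extra repair packets sent

    def measure_bandwidth_kbps(self):
        # Stand-in for a real measurement; mobile bandwidth swings widely
        # as users move, cells load up, and Wi-Fi signal strength changes.
        return random.uniform(500, 6000)

    def adjust(self):
        available = self.measure_bandwidth_kbps()
        # Send a bit below measured capacity, reserving headroom for the
        # redundant packets that allow recovery without retransmission.
        self.rate_kbps = available * (1 - self.redundancy) * 0.9
        return self.rate_kbps

sender = AdaptiveSender()
for _ in range(3):
    print(f"new send rate: {sender.adjust():.0f} kbps")
```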

Kwicr says the resulting performance gains are significant and apply not only to streaming video and audio, but to content loads of all kinds. In a recent test over five days, Kwicr did a side-by-side comparison using a national sports media outlet’s video player, with what they say are impressive results. During the test, and in the same elapsed time as a control where Kwicr was not deployed, the company delivered a 30% improvement in throughput on average, with some devices seeing an uplift of more than 180%. Kwicr also accelerated download times by 30% and delivered a 10x reduction in severe video stalls. During the test, users were also 4K-ready 39% of the time when Kwicr was on, versus 19% of the time when it was not. All of the test results reflect performance on the same network and devices. (Note: Kwicr plans to make this testing data available in the next few days)

Beyond the acceleration Kwicr delivers via its over-the-top model, it also enables app owners to gather detailed data on app traffic, making it possible to look in detail at app performance and streaming media usage. Both are of value when determining how to best address consumers’ preferences for streaming content. Kwicr says pricing for its mobile delivery network is based on accelerated traffic and starts at $500 per month. Kwicr is also offering a complimentary 45-day trial in which app owners and developers can try the service, analyze the results, and see the performance increase in content delivery firsthand.

Kwicr will be presenting a talk on “The Impact of Mobile Broadband on Mobile Video” at the Content Delivery Summit, on Monday May 11th in NYC. Use discount code DR100 and get a pass for only $395.