Akamai Rolls Out New “Fast Purge” Solution; Questions Remain About Speed and Scale

For well over a year, customers of Akamai have been complaining about the company’s cache invalidation times, which have impacted manifest caching, real-time news feeds, and any time-sensitive content. Historically, Akamai purges took at least 15 minutes and, in a couple of really bad cases I’ve heard of, hours. Competitors like Fastly have been quick to jump on Akamai’s purging limitations and have been winning deals in the market based on their ability to purge content within hundreds of milliseconds.

It seems that some CDNs’ caching strategies are old school and built around the idea that you have little to no real-time control of your caches. You set a TTL, and if you need to invalidate, it can’t be mission critical; you just have to wait minutes or hours. So you set your caching strategy by identifying what you can afford to behave like this and build against that, which means your home page, news feeds, APIs, dynamic elements, HLS manifests and the like can’t be cached. Customers tell me that with Fastly, they have turned that caching strategy on its head. They cache everything (except truly uncacheable content, like PII or content specific to a single user) and then invalidate it as needed. It’s the difference between a 90% cache hit rate and 99.999%. The New York Times is a classic example of an Akamai customer: it serves all of its HTML from origin and only caches images on Akamai because it doesn’t have this capability.
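
To make the contrast concrete, here is a minimal sketch of the “cache everything, invalidate on change” model: the origin marks even semi-dynamic responses as cacheable with a long edge TTL, and the publishing flow calls a purge endpoint the moment content changes. The purge endpoint, token and data source below are placeholders for illustration, not any specific CDN’s API.

```typescript
// Hypothetical origin handler: semi-dynamic responses are marked cacheable with
// a long edge TTL, and the publishing flow explicitly purges them on change
// instead of waiting for the TTL to expire.
import express from "express";

const app = express();

app.get("/news/latest", (_req, res) => {
  res.set("Cache-Control", "public, s-maxage=86400"); // cache at the edge for 24h
  res.json({ headlines: ["placeholder headline"] });  // stand-in for real data
});

// Called from the publishing flow the moment content changes.
// PURGE_ENDPOINT and PURGE_TOKEN are placeholders, not any specific CDN's API.
async function purgeOnPublish(urls: string[]): Promise<void> {
  await fetch(process.env.PURGE_ENDPOINT!, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.PURGE_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ objects: urls }),
  });
}

app.listen(3000);
```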

Akamai has been aware of the major limitations of its platform when it comes to purging content (see their blog post from January) and has been building out a new system that allows purging in as little as a few seconds. That is a dramatic improvement, but content owners have been asking me how widespread Akamai’s new system is, whether it is available on the entire platform, and whether there is any rate limiting. Some Akamai customers tell me they are still in the 15-minute purge range. By comparison, customers say Fastly’s entire platform supports instant purge, it isn’t rate limited, you can purge the entire cache if you want, and it’s all API driven. Fastly customers tell me they see purge times of 150ms or less.

So with all these questions out in the market, I had a chance to speak to Akamai about how they are addressing their purging issue and got details from the company on their new platform, made available this week, which looks to address some of their customers’ purging complaints. Akamai’s new “Fast Purge” solution enables Akamai customers to invalidate or delete content from all of Akamai’s Edge servers via API and UI in “approximately 5 seconds”. With Akamai’s Fast Purge API, the company said customers “can automate their publishing flow to maximize performance and offload without compromising on freshness.” With this “Hold Til Told” methodology, Akamai customers can now cache semi-dynamic content with long TTLs and refresh it near instantly as soon as it changes.
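
As a rough illustration of that “Hold Til Told” workflow, the sketch below issues an invalidation request the moment a cached URL changes. The host, endpoint path, request shape and response fields are assumptions based on how Akamai’s CCU/Fast Purge API is generally described, and real requests must be signed with EdgeGrid credentials; treat this as pseudocode for the flow, not a drop-in client.

```typescript
// Hedged sketch of a "Hold Til Told" publish step: cache with a long TTL,
// then invalidate changed URLs via the purge API as soon as they change.
// Host, path, auth and response fields are placeholders/assumptions.

interface PurgeResponse {
  estimatedSeconds?: number; // assumed field name
  purgeId?: string;          // assumed field name
}

async function fastPurge(urls: string[]): Promise<PurgeResponse> {
  const res = await fetch(
    "https://EXAMPLE-HOST.purge.akamaiapis.net/ccu/v3/invalidate/url/production", // assumed path
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: "EG1-HMAC-SHA256 ...", // EdgeGrid signature goes here
      },
      body: JSON.stringify({ objects: urls }),
    }
  );
  if (!res.ok) throw new Error(`Purge failed: ${res.status}`);
  return res.json() as Promise<PurgeResponse>;
}

// Usage: called from the CMS/publishing flow right after a story is updated.
// fastPurge(["https://www.example.com/news/latest.json"]);
```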

Akamai says the Fast Purge UI will complete its roll-out to all customers this week and is already available to 85% of them. The Fast Purge API has been adopted by almost 100 Akamai customers so far, and the company said it supports a “virtually unlimited throughput of over 100x that of our legacy purge APIs per customer.” Early adopters include major retailers caching their entire product catalog and major media companies caching news stories and live API feeds for day-to-day operations. In Q1 2017, Akamai says Fast Purge will support purge by CP code and purge by content tag. With Fast Purge by content tag, customers will be able to apply tags to content and then, with one purge-by-tag request, refresh all content containing that specific tag. For example, eCommerce customers will be able to tag search result pages with the SKUs on them, and then, when a SKU is out of stock, remove all pages referencing it with one request.
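
To make the SKU example concrete, here is a hypothetical sketch of how tag-based purging typically works: the origin attaches tag identifiers to each cached page via a response header, and one purge-by-tag request later removes every page carrying that tag. The header name and endpoint below are illustrative assumptions, since Akamai had not published the purge-by-tag API details at the time of writing.

```typescript
// Illustrative sketch of tag-based purging (header name and endpoint assumed).

// 1) When rendering a search results page, tag it with every SKU it contains.
function tagSearchResults(
  res: { set(name: string, value: string): void },
  skus: string[]
): void {
  // Hypothetical cache-tag header; the real header name depends on the CDN.
  res.set("Edge-Cache-Tag", skus.map((sku) => `sku-${sku}`).join(","));
}

// 2) When a SKU sells out, one request removes every page referencing it.
async function purgeBySku(sku: string): Promise<void> {
  await fetch("https://EXAMPLE-HOST/ccu/v3/invalidate/tag/production", { // assumed path
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "EG1-HMAC-SHA256 ...", // EdgeGrid signature goes here
    },
    body: JSON.stringify({ objects: [`sku-${sku}`] }),
  });
}
```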

It’s good to see Akamai finally offering a better purging solution in the market, but only customers will determine whether what Akamai now offers fits the bill. The keys to instant purge are speed and reliability at scale. Customers say their experience with the “Hold Til Told” approach is that you need to trust that purges will happen, and they need to be reliable across the world and at scale. If your site depends on being updated in real time to ensure you don’t sell something you don’t have, or serve outdated information, users need to trust that it will work. If purges do not happen reliably, it creates mistrust and damages the entire premise of “hold til told”. So customers of any CDN should test purge times under many different conditions and in various regions on the production network to ensure it actually works as advertised. Even more so for Akamai customers, since we don’t know what scale or reliability the new Fast Purge solution has. While Akamai said they now have 100x the throughput of the legacy system, the old system was so limited that a 100x increase may simply not be enough to meet the needs of many large customers.
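
One simple way to sanity-check a CDN’s purge claims is to publish a versioned test object, purge it, and then poll until the edge returns the new version, recording the elapsed time. The sketch below is a minimal version of that test, assuming a generic purge helper like the one sketched earlier; run it from probes in several regions against the production hostname to see real propagation behavior.

```typescript
// Measure purge propagation: purge a test URL, then poll until the edge
// serves the expected new version, and return how long that took (ms).
async function measurePurgeTime(
  testUrl: string,
  expectedVersion: string,
  purge: (urls: string[]) => Promise<unknown> // e.g. the fastPurge sketch above
): Promise<number> {
  const start = Date.now();
  await purge([testUrl]);

  while (Date.now() - start < 60_000) { // give up after 60 seconds
    const res = await fetch(testUrl);
    const body = await res.text();
    if (body.includes(expectedVersion)) {
      return Date.now() - start; // ms until the new version was served
    }
    await new Promise((r) => setTimeout(r, 100)); // poll every 100ms
  }
  throw new Error("Purge did not propagate within 60 seconds");
}
```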

Another unanswered question is what Akamai has done to integrate its underlying purging system into major CMS vendors and platforms, so that you get this feature as part of a basic CMS install. Akamai has not traditionally worked well with the partner ecosystem, and it will be interesting to see how they plan to be enabled by default in the key CMSs and platforms. Competitive CDNs have historically been developer friendly and have well-documented APIs for integrating with other platforms, which has traditionally been a challenge for Akamai.

On the speed front, it’s good to see Akamai improving, but many businesses would not function with 5-second purge times. For example, customers that have real-time inventory that cannot be oversold. I see this 5-second limitation, and the unknown scale and reliability of the system, being a huge challenge for Akamai in a market that is truly milliseconds based. It is great they went from minutes to seconds, but the performance game is now measured in milliseconds. Scale, reliability and speed are words everyone uses when it comes to delivering content on the web, but for purging of content, customers use real-world methodology to measure the impact it has, positive or negative, on their business. Customers are the ultimate judge of any new service or feature in the market, and at some point, as more look to adopt Akamai’s Fast Purge solution, we’ll find out if 5 seconds is fast enough or not.

If you are a customer of Akamai or any other CDN, I’d be interested to hear from you in the comments section on how fast you need purging to take place.


As Pay TV Flocks To Devices, Multi-DRM Can Make Or Break Service Success

[This is a guest post by my Frost & Sullivan colleague Avni Rambhia]

It’s no longer news that TVE and OTT have graduated from experimental endeavors to full-fledged service delivery channels. On metrics such as subscriber growth, growth in hours viewed, and growth in advertising revenue, OTT services are surpassing traditional Pay TV services. That is not to say that OTT services are fully monetized today. Revenue generation, whether ad-based, transactional or subscription, remains an ongoing challenge for TVE/OTT services despite growing uptake and aggressive infrastructure investments.

The quest to bring a consistent, managed-quality experience to an unruly stable of unmanaged devices is a formidable challenge. Maintaining support across all past and present devices in the face of changing streaming and delivery standards is an undertaking in its own right. Nonetheless, secure multimedia delivery holds the key to delivering premium content to subscribers. With competing services only a few clicks away, ongoing growth relies heavily on the ability to deliver a service transparently across the many underlying DRM systems, device platforms, browsers and streaming protocols currently in use and on the horizon.

HTML5, with its related EME and CDMi standards, was envisioned as a way to unify cross-platform fragmentation and simplify cross-platform app development and content delivery. Things didn’t quite materialize that way, with the result that content companies need to manage secure content delivery and handle back-end license and subscriber management across all major DRM platforms. While there is a perception that “DRM is free”, stemming primarily from the royalty-free nature of Widevine and the falling costs of PlayReady licensing, in reality the total cost of ownership is quite high. Moreover, DRM needs to be treated as a program rather than a project, often subject to unexpected spikes in R&D and testing overhead when a new operating system is released, a new device surges in popularity, old technology is deprecated, the DRM core itself is revised, or a new standard takes hold. While the client side of the problem is often the first concern, server-side components play an important role in service agility and scalability over the longer run.
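
As a small browser-side illustration of why “one standard” still means multiple DRM stacks, the EME sketch below probes which key system a given device actually supports so the app can route license requests accordingly. The configuration is deliberately simplified; a real deployment needs per-system robustness levels, codec lists and license server URLs.

```typescript
// Probe which DRM key system this browser/device supports via EME.
const KEY_SYSTEMS = [
  "com.widevine.alpha",      // Chrome, Android, many smart TVs
  "com.microsoft.playready", // Edge, Xbox, many smart TVs
  "com.apple.fps.1_0",       // Safari / FairPlay (license flow differs in practice)
];

const config: MediaKeySystemConfiguration[] = [{
  initDataTypes: ["cenc"],
  videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.640028"' }],
}];

async function pickKeySystem(): Promise<string | null> {
  for (const keySystem of KEY_SYSTEMS) {
    try {
      await navigator.requestMediaKeySystemAccess(keySystem, config);
      return keySystem; // first supported system wins
    } catch {
      // not supported on this device/browser; try the next one
    }
  }
  return null; // fall back to clear content or an error message
}
```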

As part of our content protection research coverage at Frost & Sullivan, we took an in-depth look at the factors affecting total cost of ownership for content companies (including broadcasters, new media services and video service operators) as well as for OVPs, who increasingly take on OTT workflows on behalf of content companies. The findings from this research are reported in a new white paper sponsored by Verimatrix. We’ll be discussing many of these factors, and their real-life impact on customers, during a live webinar on Wednesday, September 28th at 10am ET. Divitel will be joining to discuss their experiences first-hand.

As we’ll talk about in the webinar, agility and scalability are crucial to OTT services as TV by appointment fades away and customers continue to trend towards device-first viewing behavior. While some companies may have the engineering talent and budget capacity to build and maintain their own multi-DRM infrastructure, our best practice recommendation in the majority of cases is to work with a security specialist vendor instead of going DIY. If you would like to share your own stories, whether as a vendor or a customer, or if you have any questions about DRM and available options, feel free to comment here or reach out to Avni Rambhia, Industry Principal, ICT/Digital Transformation at Frost & Sullivan.

Tuesday Webinar: Accelerating Your OTT Service

Tuesday at 2pm ET, I’ll be moderating a StreamingMedia.com webinar on the topic of “Accelerating Your OTT Service”. The OTT market is expected to generate an estimated $15.6 billion in revenue globally through 2018. Join Brightcove’s Vice President of OTT Solutions Luke Gaydon for an interactive discussion about the state of the OTT landscape and the key opportunities and challenges facing media companies looking to capitalize on this thriving market.

Luke will review the latest data on the growth of OTT and discuss complexities including device fragmentation and how to address them. Then, he will showcase Brightcove OTT Flow – powered by Accedo, including key product features, and share how this innovative turnkey solution enables the seamless, rapid deployment of OTT services across multiple platforms. Join this webinar to learn:

  • The latest data on the growth of OTT across devices, platforms and audiences
  • The growing challenges, including device fragmentation and technical scope
  • Strategies for taking your content OTT
  • Key features, analytics and how OTT Flow provides a consistent user experience across devices

REGISTER NOW to attend this free live web event.

Correcting The Hype: Twitter’s NFL Stream Lacks Engagement and A Profitable Business Model

With two NFL games under Twitter’s belt now, I’m reading far too many headlines hyping what it means for Twitter to be in the live streaming business. Phrases like “game changing,” “milestone,” and “make-or-break moment” have all been used to describe Twitter’s NFL stream. Many commenting about the stream act as if live video is some new kind of technology breakthrough, with some even suggesting that “soon all NFL games will be broadcast this way.” While many want to talk about the quality of the stream, no one is talking about the user interface experience or the fact that Twitter can’t cover its licensing costs via any advertising model. What Twitter is doing with the NFL games is interesting, but it lacks engagement and is a loss leader for the company. There is nothing “game changing” about it.

The first NFL game on Twitter reached 243,000 simultaneous users and 2.3M total viewers. But looking further at the data, the average viewing time was only 22 minutes. Most who tuned into Twitter didn’t stick around. Many, like myself, tuned in only to see what the game would look like and how Twitter would handle live video within their platform. For week two, Twitter reached 2.2M total viewers and had 347,000 simultaneous users, but the NFL isn’t saying what the average viewing time was. Twitter and the NFL are also counting a viewer as anyone who watched a “minimum of three seconds with that video being 100% in view”, which is a very low bar to be using.

Unfortunately, the whole NFL experience on Twitter was a failure at what Twitter is supposed to be about: engagement. Watching the stream in full screen, on any device, felt like I was watching the game via any other app. Twitter didn’t overlay tweets in any way, some commercial breaks had no ads shown at all, and the tweets weren’t curated. Far too many tweets added nothing to the game, with comments like “Jets Stink.”

Streaming live content on the web has been around for 21 years now, and it’s a sad state of the industry when the most exciting part of the event was that people could not believe the video didn’t buffer or have widespread quality issues. It’s not hard to deliver a video stream to 243,000/347,000 simultaneous users, especially when Twitter hired MLBAM, which then used Limelight, Level 3 and Akamai to deliver the stream. Some suggested that the “NFL on Twitter opens an enticing lucrative new revenue stream,” which of course isn’t the case at all. We don’t know for sure what Twitter paid for the NFL games, but if the rumors of $1M per game are right, Twitter can’t make that back on advertising. They don’t have a large enough audience tuning into the stream and would never get a CPM rate to cover their costs. Some have even suggested that the NFL stream on Twitter is a “model for other revenue-generating live stream events,” but of course that’s not the case. One-off live events can’t generate any substantial revenue, as the audience size is too small and the length of the event too short.

There is nothing wrong with Twitter using NFL content to try to attract more users to the service and grow their audience, but the NFL content itself isn’t a revenue generator for the company. Some, including NFL players, suggested that soon all NFL games will be broadcast online and that what Twitter and the NFL are doing is the future. That idea isn’t in touch with reality. The NFL is getting paid $28B from FOX, CBS and NBC over the course of their contracts, which end in 2022. That averages out to $3.1B per year the NFL is getting from just those three broadcasters. The NFL has no financial incentive to put more NFL games online, without restrictions, or they risk losing their big payday from the broadcasters. It’s not about what consumers want, it’s about what’s best for the NFL’s bottom line. Economics drives this business, not technology or platforms.

If Twitter has a game plan for all the live video they are licensing, it’s not clear what it is. In a recent interview, Twitter’s CFO commented that Twitter’s goal with the NFL games is to “help the NFL reach an audience it was not otherwise reaching.” How does Twitter know they are doing that? There were plenty like me who were watching the game on TV and Twitter at the same time. The NFL didn’t need Twitter to reach me. And when the CFO uses the term “high fidelity” to describe the stream, what does that mean? Twitter keeps saying they have the “mobile audience,” but they won’t break out numbers on what the usage was on mobile versus the web, or any data on average viewing time on mobile. Twitter also said, “there was evidence that people who had not watched the NFL in a while were back engaged with it.” What kind of evidence is that exactly? Twitter can’t tell if I was on NFL.com the day before, or watching the game on TV today, so what kind of data are they referencing?

Twitter also says that they were “incredibly pleased with how easy it was for people to find the Thursday night game on Twitter.” Making a live link available isn’t hard. A WSJ article said there are other “live sports streaming technologies out there” but Twitter’s was “easy to use.” All the live linear video on the web is using the same underlying technology, Twitter isn’t doing anything special. They are using the same platform that MLB, ESPN, PlayStation Vue, WWE and others use, as they are all MLBAM customers. Many in the media made it sound like Twitter did something revolutionary with the video technology, which wasn’t the case at all.

Someone commented online that the reason Twitter’s NFL stream is so “successful” is because “advertisers love the mass that live sports delivers.” But it doesn’t deliver that mass audience online, only on TV. And that’s the rub. The NFL and every other major sports league isn’t going to do anything to mess with their core revenue stream. So for at least the next six years, until their contracts with the broadcasters come up for renewal, the NFL isn’t going to do anything more than experiment with live streaming. And there will always be another Twitter, Yahoo, Verizon, Google, or someone else who wants to give the NFL money, more for the publicity of the deal, than for anything that actually increases their bottom line.

Moderating NYME Session On Future Of Live Streaming Thursday, Looking For Speaker

Thursday at 12:20pm I am moderating a session on “The Future Of Live Streaming: Sports, Linear TV & Social” at the New York Media Festival in NYC. It’s a short 30-minute round table panel, with lots of Q&A from the audience. I am looking for one more speaker to join the panel, preferably a content owner/broadcaster/aggregator etc. Email me if interested.

The Future Of Live Streaming: Sports, Linear TV & Social
From NFL games on Twitter, to upcoming live linear services from Hulu and AT&T joining Sling TV, live streaming is exploding on the web. With rumors of Amazon wanting to license live sports content, Disney’s investment in MLBAM, and Twitch pumping out millions of live streams daily, consumers now have more live content choices than ever before. Attendees of this session will have the opportunity to participate in the discussion about the most important obstacles being faced when it comes to live streaming. Topics to be covered include content licensing costs for premium content, monetization business models, what consumers want to watch and the impact social services could have on live video. This will be an interactive session with plenty of Q&A from the audience. http://mefest.com/session/the-future-of-live-streaming/

How To Implement A QoS Video Strategy: Addressing The Challenges

While the term “quality” has been used in the online video industry for twenty years now, in most cases, the word isn’t defined with any real data and methodology behind it. All content owners are quick to say how good their OTT offering is, but most don’t have any real metrics to know how good or bad it truly is. Some of the big OTT services like Netflix and Amazon have built their own platforms and technology to measure QoS, but the typical OTT provider needs to use a third-party provider.

I’ve spent a lot of time over the past few months looking at solutions from Conviva, NPAW, Touchstream, Hola, Adobe, VMC, Interferex and others. It seems every quarter there is a new QoS vendor entering the market, and while choice is good for customers, more choices also mean more confusion about the best way to measure quality. I’ve talked to all the vendors and many content owners about QoE, and there are a lot of challenges when it comes to implementing a QoS video strategy. Here are some guidelines OTT providers can follow.

One of the major challenges in deploying QoS is the impact that the collection beacon itself has on the player and the user experience. These scripts can be built by the content owner, but building them for an ecosystem of platforms, then developing dashboards, creating metrics and analyzing the data, is highly resource intensive and time consuming. Most choose to go with a third-party vendor who specifically offers this technology; however, choosing the right vendor can be another pain point. There are many things to consider when choosing a vendor, but with regard to implementation, content owners should look at the efficiency of the proposed integration process (for example, whether the vendor has standardized scripts for the platforms/devices/players you are using, and the average time it takes to integrate) and the vendor’s ability to adapt to your development schedule. [Diane Strutner from Nice People At Work (NPAW) had a good checklist on choosing a QoE solution in one of her presentations, which I have included with permission below.]

Another thing to consider is the technology behind the beacon itself. The heavier the weight of the plug-in, the longer the player load time will be. There are two types of beacons: ones that process the data on the client (player-side), which tend to be heavier, and ones that push information back to a server to be processed, which tend to be lighter.
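
As an illustration of the lighter, server-processed approach, the sketch below only timestamps raw player events on the client and ships small batches to a collector, leaving all aggregation to the back end. The event names are standard HTML5 video element events; the collector URL is a placeholder.

```typescript
// Minimal client-side beacon: record raw player events with timestamps and
// batch them to a collector; all aggregation happens server-side.
type BeaconEvent = { name: string; t: number };

function attachBeacon(video: HTMLVideoElement, collectorUrl: string): void {
  const events: BeaconEvent[] = [];
  const record = (name: string) => events.push({ name, t: performance.now() });

  ["loadstart", "playing", "waiting", "stalled", "ended", "error"].forEach((name) =>
    video.addEventListener(name, () => record(name))
  );

  // Flush a small batch every 10 seconds; sendBeacon survives page unloads.
  setInterval(() => {
    if (events.length === 0) return;
    navigator.sendBeacon(collectorUrl, JSON.stringify(events.splice(0)));
  }, 10_000);
}
```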

One of the biggest, if not the biggest, challenges in implementing QoS is that it forces content owners to accept a harsh reality: that their services do not always work as they should all the time. It can reveal that the CDN, DRM, ad server or player provider the content owner is using is not doing its job correctly. So the next logical question to ask is what impact this has. And the answer is that you won’t know (you can’t know) until you have the data. You have to gather this data by properly implementing a QoS monitoring, alerting and analysis platform, and then apply the insights gathered from it to your online video strategy.

When it comes to collecting metrics, there are some that matter most to ensure broadcast-quality video looks good. The most important are buffer ratio (the amount of time spent buffering divided by playtime), join time (the time to the first frame of video delivered), and bitrate (the bits per second that can be delivered over the network). Buffer ratio and join time have elements of the delivery process that can be controlled by the content owner, and others that cannot. For example, are you choosing a CDN that has POPs close to your customer base, has consistent and sufficient throughput to deliver the volume of streams being requested, and peers well with the ISPs your customer base is using? Other elements, like the bitrate, are not something a content owner can control, but they should influence your delivery strategy, particularly when it comes to encoding.

For example, if you are encoding in HD bitrates but your user base streams at low bitrates, quality metrics like buffer ratio and join time will increase. One thing to remember is that you can’t look at just one metric to understand the experience of your user base; looking at one metric alone can be misleading. These metrics are all interconnected, and you must have the full scope of data in order to get the full picture needed to really understand your QoS and the impact it has on your user experience.
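
To show how the two headline metrics fall out of the raw event stream, here is a minimal sketch that derives join time and buffer ratio from beacon events like those collected above. It follows the definitions used in this article and is a simplification: real tools also handle pauses, seeks, ads and multiple rebuffer patterns.

```typescript
// Derive join time and buffer ratio from a raw player event stream.
// join time = load start to first frame; buffer ratio = rebuffer time / play time.
type BeaconEvent = { name: string; t: number };

function computeMetrics(events: BeaconEvent[]) {
  const loadStart = events.find((e) => e.name === "loadstart")?.t ?? 0;
  const firstPlay = events.find((e) => e.name === "playing")?.t ?? loadStart;
  const joinTimeMs = firstPlay - loadStart;

  // Sum the gaps between each rebuffer start ("waiting"/"stalled") and the
  // next "playing" event.
  let bufferingMs = 0;
  let bufferStart: number | null = null;
  for (const e of events) {
    if (e.name === "waiting" || e.name === "stalled") bufferStart = e.t;
    if (e.name === "playing" && bufferStart !== null) {
      bufferingMs += e.t - bufferStart;
      bufferStart = null;
    }
  }

  const sessionEnd = events[events.length - 1]?.t ?? firstPlay;
  const playTimeMs = Math.max(sessionEnd - firstPlay, 1); // avoid divide-by-zero
  return { joinTimeMs, bufferRatio: bufferingMs / playTimeMs };
}
```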

Content owners routinely ask how they can ensure a great QoE when there are so many variables (user bandwidth, user compute environment, access network congestion, etc.). They also want to know, once the data is collected, what industry benchmarks they should compare it against. The important thing to remember is that such benchmarks can never be seen as anything more than a starting block. If everything you do is “wrong” (streaming from CDNs with POPs halfway across the world from your audience base, encoding in only a few bitrates, and other industry mistakes) and your customer base and engagement grow (and you earn more on ad serving and/or improve retention), then who cares? And if you do everything “right” (streaming from CDNs with the best POP map and encoding in a vast array of bitrates) and yet your customers leave (and the connected subscription and ad revenue drops), then who cares either?

When it comes to QoS metrics, the same logic applies. So what should content owners focus on? The metrics (or combination of them) that are impacting their user base the most. How do content owners identify what these are? They need data to start. For one it could be join time; for their competitor it could be buffer ratio. Do users care that one content owner paid more for a CDN, or has a lower buffer ratio than their competitor? Sadly, no. The truth behind what matters to a content owner’s business (as it relates to which technologies to use, or which metrics to prioritize) is in their own numbers. And that truth may (and will) change as your user base changes, viewing habits and consumption patterns change, and your consumer and vendor technologies evolve. Content owners must have a system that provides continual monitoring to detect these changes at both a high level and a granular level.

There has already been widespread adoption of online video, but the contexts in which we use video, and the volume of video content that we’ll stream per capita, still have a lot of runway. As Diane Strutner from Nice People At Work correctly pointed out, “The keys to the video kingdom go back to the two golden rules of following the money and doing what works. Content owners will need more data to know how, when, where and why their content is being streamed. These changes are incremental and a proper platform to analyze this data can detect these changes for you.” The technology used will need to evolve to improve consistency, and the cost structure associated with streaming video will need to continually adapt to fit greater economies of scale, which is what the industry as a whole is working towards.

And most importantly, the complexity currently involved with streaming online video content will need to decrease. I think there is a rather simple fix for this: vendors in our space must learn to work together and put customers (the content owner, and their customers, the end-users) first. In this sense, as members of the streaming video industry, we are masters of our own fate. That’s one of the reasons why, last week, the Streaming Video Alliance and the Consumer Technology Association announced a partnership to establish and advance guidelines for streaming media Quality of Experience.

Additionally, based on the culmination of ongoing parallel efforts from the Alliance and CTA, the CTA R4 Video Systems Committee has formally established the QoE working group, WG 20, to bring visibility and accuracy to OTT video metrics, enabling OTT publishers to deliver improved QoE for their direct to consumer services. If you want more details on what the SVA is doing, reach out to the SVA’s Executive Director Jason.

Flash Streaming Dying Out, Many CDNs Shutting Down Support For RTMP

I’ve gotten a few inquiries lately from content owners asking which CDNs still support Adobe’s proprietary Flash streaming format (RTMP). Over the past 12 months, many, but not all, of the major CDNs have announced to their customers that they will soon end support for Flash streaming. Industry wide, we have seen declining requirements for RTMP for some time, and with most of the major CDNs no longer investing in Flash delivery, they have been able to remove a significant third-party software component from their networks. Flash Media Server (FMS) has been a thorn in the CDN service providers’ sides for many years operationally, and killing it off is a good thing for the industry. HLS/DASH/Smooth and other HTTP streaming variants are the future.

Since it’s confusing to know which CDN may still support Flash Streaming, or for how much longer, I reached out to all the major CDNs and got details from them directly. Here’s what I was told:

  • Akamai: Akamai still supports RTMP streaming and said while they are not actively promoting the product, they have not announced an end-of-life date. Akamai said they are investing in RTMP streaming but that their investment is focused on ensuring continued reliability and efficiency for current customers.
  • Amazon: Amazon continues to support RTMP delivery via CloudFront streaming distributions, but the company has seen a consistent decrease in RTMP traffic on CloudFront over the past few years. The company doesn’t have a firm date for ending RTMP support, but Amazon is encouraging customers to move to modern, HTTP-based streaming protocols. 
  • Comcast: Comcast does not support RTMP on their CDN and chooses to support HTTP-based media in all its formats (HLS, HDS, Smooth, etc.). The only principal requirement they see in the market that involves RTMP is the acquisition of live mezzanine streams, which then get transcoded into various bitrate variants and HTTP-based formats.
  • Fastly: Fastly has never supported RTMP to the edge/end-user. Their stack is pure HTTP/S, and while they used to support RTMP ingest, the company retired that product in favor of partnering with Anvato, Wowza, JWPlayer and others.
  • Highwinds: Highwinds made the decision to stop supporting RTMP back in 2012 in favor of HTTP and HTTPS streaming protocols and have since helped a number of customers transition away from RTMP delivery to an HTTP focus.
  • Level 3: Level 3 stopped taking on new Flash streaming customers a year ago and will be shutting down existing customers by the end of this year.
  • Limelight Networks: Limelight still supports RTMP streaming globally across their CDN. The company said their current investment focus for video delivery is in their Multi Device Media Delivery (MMD) platform which can be used to ingest live RTMP feeds and deliver RTMP, RTSP, HLS, HDS, SS and DASH output formats. Limelight is encouraging customers to move away from RTMP and to HTTP formats for stream delivery.
  • Verizon Digital Media Services: Verizon announced plans to no longer support Flash streaming come June of 2017. They are actively working to decommission the RTMP playout infrastructure based on FMS 4.5. Verizon has written their own engine to continue to support RTMP ingest and re-packaging for HLS/DASH playout that is more natively integrated with their CDN, but they will no longer support RTMP playout after that time. Verizon is no longer actively onboarding new RTMP playout customers (since June 2016).

While many of the major CDNs will discontinue support for RTMP, a lot of smaller regional CDNs still support Flash streaming, so options do exist in the market for content owners. But the writing is on the wall and content owners should take note that at some point soon, RTMP will no longer be a viable option. It’s time to start making the transition away from RTMP as a delivery platform.