Thursday Webinar – Transcoding and Prepping Content for The Multiscreen Mandate

Thursday at 2pm ET, I’ll be moderating another StreamingMedia.com webinar, this time on the topic of “Transcoding and Prepping Content for The Multiscreen Mandate.” Encoding your video is one thing; transcoding is another ball of wax entirely. As multiscreen video consumption continues to grow, how does a company with a large video library ensure the best viewing experience, especially when new devices — and their wide array of screen sizes and operating systems — continue to enter the playing field? We’ll discuss workflows and on-prem and cloud solutions that offer ease, efficiency, and cost savings for your video publishing, both on-demand and live. Join iStreamPlanet, Akamai, Adobe, and Ooyala to learn about:

  • The top three challenges facing content owners and distributors in delivering multiscreen services
  • Secure, cloud-based media processing that offers high-quality content preparation through an efficient turnkey media solution
  • One-stop content preparation and packaging for multiple platforms for both live and on-demand media
  • Supporting a wide range of source codecs, from established broadcast codecs to newer codecs like H.264/H.265
  • Integration with a content management system and playback infrastructure to support existing and new business models

These questions and many more will be covered in this webinar so REGISTER NOW to join us for this FREE Web event.


New Findings Show Cloud Solutions Deliver an Outstanding Price-Performance Value Proposition

We all know that video volume is skyrocketing, millennials are far more likely to watch video on their devices than on their primary screens, and end users are only a mouse click away from an alternative content source should your service fail to delight. Online video offerings are no longer optional for content companies: users demand their content when they want it, where they want it, and services must deliver or risk subscriber flight. However, other factors related to online video are less understood.

One of those factors is how complex online streaming is, and how quickly its parameters can change. For example, the relative volume of video consumed on portable devices such as smartphones and tablets has more than quadrupled over the last two years, while the absolute volume has risen by more than 500x in the same period. In terms of streaming formats, Flash once dominated the scene but is nearly archaic today, with HLS and HDS now playing key roles, Silverlight fading and HTML5 right around the corner. Where HD was a challenge as recently as a few years ago, today 4K is the new differentiator. Another factor is the speed with which new services must be taken online: in today’s experimental environment, services need to go live in days, but companies often spend weeks if not months procuring and deploying equipment.
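The difference between relative share and absolute volume is worth pinning down. With purely illustrative numbers (not figures from the article), a device category's share of viewing can quadruple while its absolute volume grows far faster, because total volume is itself exploding:

```python
# Illustrative numbers only -- not figures from the article.
share_then, share_now = 0.05, 0.20   # mobile's share of viewing quadrupled
total_growth = 125                   # total video volume grew 125x overall

absolute_growth = (share_now / share_then) * total_growth
print(absolute_growth)  # 500.0 -- share up 4x, absolute volume up 500x
```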

This constant disruption of technical workflows, coupled with the need for agility, creates difficult challenges for content companies. Soaring volumes, fluctuating requirements and elusive monetization are a tough combination to handle in a financially sustainable manner. Cloud-based online workflow solutions can go a long way toward mitigating these challenges. Since vendors of online video services specialize in this specific business, they can devote much higher levels of resources to optimizing their online video workflows, stay closely in tune with changing requirements and the latest innovations, and amortize the costs of innovation and scale across a number of customers. As a result, content owners can reap the benefits of rapid time to market with new online products, near-zero CAPEX and comparable OPEX when all factors related to the total cost of ownership are considered. Having realized that many content companies underestimate the true cost of on-premises workflows and are unduly wary of the usage-based pricing models typical of the cloud, Frost & Sullivan worked on a white paper in partnership with iStreamPlanet to quantify and compare the actual costs of on-premises and cloud-based workflows. You can download the paper and see the results of the study here.

One surprising finding was that the economics work out not only in use cases related to capacity spikes – such as short-term coverage of sports tournaments or elections, or sudden events that result in viral video consumption – but also in the case of 24×7 linear live channels. In other words, even if a content company had adequate space in its data center to host transcoding and streaming equipment for a full range of 100-200 content channels (which, it turns out, most companies actually do not have), it is more cost-efficient to host that workflow in the cloud. The case is even more compelling for small and medium-sized content companies, whose viewership or content volume is small enough that the overhead of an on-premises installation would be very difficult to recoup from uncertain online revenues. Another surprising finding is that cloud-based solutions can pay for themselves simply by minimizing the opportunity cost associated with the long delays that on-premises deployment inevitably incurs.

All this aside, not all cloud implementations are created equal. When choosing a cloud partner, it’s important to ensure that the solution has been architected from the ground up to be optimized for the cloud and to handle the fluid nature of its underlying computational resources. Broadcast-quality SLAs and excellent quality of experience remain must-have features for competitive services, and this can only be achieved when your vendor provides robust, industrial-grade solutions. Fortunately for the industry, the maturity of available solutions is growing even as costs continue to fall, making the business case for moving online video workflows to the cloud ever stronger. You can download the white paper for free here.

Majority Of Mobile Video Viewing Still Under 3 Minutes In Length

This morning Ooyala issued its Q2 2014 Global Video Index Report, providing insights into video viewing trends on mobile, desktop, tablet and TV screens. Not surprisingly, multi-screen video consumption is growing, and in the past year mobile video viewing has more than doubled to account for over 25% of all online viewing. While that’s impressive growth, the data also shows that the majority of users who watch video on mobile devices are still consuming short-form clips, under three minutes in length. As the report details, viewers are looking to big screens for big chunks of their entertainment. Here are some highlights from the report, but to get the full picture I suggest you download it.

  • On connected TVs, viewers spent 65% of their time watching videos 30 minutes or longer; and over half of that time (54%) was with content longer than 60 minutes.
  • On tablets, viewers spent 23% of their time watching video of 30–60 minutes in length, more than on any other device.
  • 81% of time watched on the largest screen, connected TVs, was with videos longer than 10 minutes.
  • Mobile video share has increased 127% year-over-year and 400% in the past two years.


Transparent Caching Provider PeerApp Now Has 450 Deployments, New CEO

Transparent caching provider PeerApp has been pretty quiet over the past year, with new startup Qwilt seemingly getting all of the attention in the transparent caching space. Still the leader in the market based on revenue, PeerApp is looking to accelerate its business and has made a number of new executive hires recently, including the appointment of a new CEO in June. This morning, the company announced it has added 50 new customers since the start of the year, and that its solution is now deployed at over 450 network operators and enterprises worldwide. The company also disclosed that many customers are managing 100-500 Gbps of capacity on PeerApp’s platform.

As I detailed in my last transparent caching report, I have some concerns around the long-term viability of the transparent cache as a stand-alone product. Content delivery and Web acceleration vendors are moving towards integrating transparent caching technology into a broader set of Web-optimization platforms. This will create a bigger ecosystem, faster deployment, and more traction for the technology as a whole, but it will also cause transparent caching to no longer be thought of as a stand-alone offering in the market. Vendors will need to move up the stack with their offerings and integrate their platforms into larger delivery ecosystems.

Transparent caching vendors also face other problems: the growing volume of HTTPS traffic (YouTube, for example) that can’t be cached, content owners like Netflix deploying their own caches directly inside operator networks, and the collapse of per-Mbps pricing, which makes expanding the business very price sensitive. However, the good news is that the global transparent caching industry is still healthy and I expect it to grow at a compound annual growth rate (CAGR) of 30.2% from 2012 to 2017. Content delivery on the web is constantly changing, requiring caches that can intelligently and dynamically identify and adapt to shifting content access patterns. Higher-quality video is coming, live video is exploding, operators are demanding better QoE, and the market for transparent caching solutions is only going to accelerate.
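As a quick sanity check on what that projection implies, a 30.2% CAGR compounds to roughly a 3.7x larger market over the five years from 2012 to 2017:

```python
# Compound annual growth: market multiple after n years at rate r.
cagr = 0.302
years = 2017 - 2012  # 5 years

multiple = (1 + cagr) ** years
print(round(multiple, 2))  # 3.74 -- i.e. ~3.7x the 2012 market size
```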

IBC News Recap: HEVC, Cloud Workflows, Media Management, Video Encoding & Optimization

I wasn’t at the IBC show this year, but here’s a rundown of the news announcements that I saw. Lots of focus on HEVC as expected, but less talk about 4K compared to last year. If I had to pick one theme from this show, it would be cloud-based media management platforms. I found two announcements particularly interesting: Brightcove’s new stand-alone video player platform, decoupled from their OVP services, and Microsoft Azure Media Services’ new live streaming platform. I’ll have more details and thoughts on both of these new offerings later in the week.

Inside Apple’s Live Event Stream Failure, And Why It Happened: It Wasn’t A Capacity Issue

Apple’s live stream of the unveiling of the iPhone 6 and Watch was a disaster today right from the start, with many users like myself having problems trying to watch the event. While at first I assumed it must be a capacity issue pertaining to Akamai, a deeper look at the code on Apple’s page and some other elements from the event shows that decisions Apple made pertaining to its website, and problems with how it set up storage on Amazon’s S3 service, were the biggest contributors to the event’s problems.

Unlike Apple’s last live stream, this time around Apple added JavaScript code (fetching JSON data) to the apple.com page to power an interactive element at the bottom showing tweets about the event. That code caused the page to make refresh calls every few milliseconds, which made the apple.com page uncachable. Apple usually has Akamai cache the page for its live events, but this time there was no way for Akamai to do that, which had a huge impact on performance when it came to loading the page and the stream. And since Apple embeds their video directly in the web page, any performance problem on the page also impacts the video. Akamai didn’t return my call asking for more details, but looking at the code shows there was no way Akamai could have cached it. This is also one of the reasons why, when I tried to load the Apple live event page on my iPad, it would crash Safari. That’s a problem with the code on the page, not with the video.
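A CDN edge can only cache a response the origin marks as cacheable. As a simplified sketch (the directive names are standard HTTP, but real edge-cache logic is far richer than this), a page served with `no-store`, `no-cache`, `private`, or `max-age=0` will be fetched from origin on every request:

```python
# Simplified sketch of an edge cache's cacheability decision based on
# the Cache-Control response header. Real CDN logic considers many
# more signals (Expires, Vary, cookies, vendor-specific rules).
def is_edge_cacheable(cache_control: str) -> bool:
    directives = [d.strip().lower() for d in cache_control.split(",")]
    if {"no-store", "no-cache", "private"} & set(directives):
        return False
    for d in directives:
        if d.startswith("max-age="):
            return int(d.split("=", 1)[1]) > 0
    return False  # no freshness lifetime given: treat as uncacheable

print(is_edge_cacheable("public, max-age=300"))  # True
print(is_edge_cacheable("no-cache, max-age=0"))  # False
```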

Because of all the refresh calls from the JSON-related JavaScript code, it looks like the player was artificially forced to degrade the quality of the video, dropping down to a lower bitrate, because it thought there were more requests for the stream than there actually were. As for the foreign-language translation we heard for the first 27 minutes of the event, that’s all on Apple, as they do the encoding themselves for their events, from the event location. Clearly someone on Apple’s side didn’t have the encoder set up right, and their primary and backup streams were also way out of sync. Whatever Apple sent to Akamai’s CDN is what got delivered, and in this case the video was overlaid with a foreign-language track. I also saw at least one instance where I believe Apple’s encoder(s) were rebooted after the event had already started, which probably also contributed to the “could not load movie” and “you don’t have permission to access” error messages.
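Adaptive players pick a rendition from the bandwidth they think they have. A generic sketch of that selection logic (this is not Apple's actual player code, and the bitrate ladder here is invented for illustration) shows how a throughput estimate dragged down by unrelated page traffic pushes the player down the ladder:

```python
# Generic adaptive-bitrate selection sketch -- not Apple's player logic.
# Hypothetical bitrate ladder in kbps; pick the highest rendition that
# fits within a safety margin of the estimated throughput.
LADDER = [400, 800, 1500, 3000, 6000]

def pick_rendition(throughput_kbps: float, margin: float = 0.8) -> int:
    budget = throughput_kbps * margin
    candidates = [b for b in LADDER if b <= budget]
    return max(candidates) if candidates else LADDER[0]

print(pick_rendition(8000))  # 6000 -- a healthy connection gets top bitrate
print(pick_rendition(1200))  # 800  -- a congested page drags it down the ladder
```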

Looking at the metadata from the event page, you could see that Apple was hosting the content for the interactive element on Amazon’s S3 cloud storage service. From what I can tell, Apple put that content in a single, poorly configured bucket, resulting in little to no cache hit ratio. Amazon didn’t reply to my request for more info, but it’s clear that Apple didn’t set up their S3 storage correctly, which caused huge performance issues when all the requests hit Amazon’s network in a single location.

As for Akamai’s involvement, they were the only CDN Apple used. Traceroutes from all over the planet (thanks to all who sent them in to me) showed that Apple relied solely on Akamai for the delivery. Without Akamai being able to cache Apple’s webpage, the performance of the videos took a huge hit. If Akamai can’t cache the website at the edge, then all requests have to go back to a central location, which defeats the whole purpose of using Akamai or any other CDN to begin with. Every CDN’s architecture is based on being able to cache content, which in this case Akamai clearly was not able to do. The chart below from third-party web performance provider Cedexis shows Akamai’s availability dropping to 98.5% in Eastern Europe during the event, which isn’t surprising if no caching is being used.

The bottom line with this event is that the encoding, the translation, the JavaScript code, the video player, the calls to the single S3 storage location and the millisecond refresh calls all failed to work together properly, and that combination was the root cause of Apple’s failed attempt to deliver the live stream without problems. So while it would be easy to say it was a CDN capacity issue, which was my initial thought considering how many events are taking place today and this week, it does not appear that a lack of capacity played any part in the event not working properly. Apple simply didn’t provision and plan for the event properly.

Updated Thursday, Sept. 11th: From talking to transit providers and looking at DeepField data, Apple’s live video stream peaked at 6-8 Tbps. The World Cup peak on Akamai was 6.8 Tbps. So the idea that this was a capacity issue isn’t accurate, and the event didn’t come close to some of the numbers I’ve seen people cite, like “hundreds of millions” watching the stream.
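To put that peak in perspective, assuming an average stream bitrate of 2 Mbps (my assumption for illustration, not a published figure), 7 Tbps supports only a few million concurrent streams, nowhere near “hundreds of millions”:

```python
# Back-of-envelope: concurrent streams at peak. The 2 Mbps average
# bitrate is an assumption for illustration, not a published figure.
peak_tbps = 7.0          # midpoint of the 6-8 Tbps estimate
avg_stream_mbps = 2.0

concurrent_streams = peak_tbps * 1_000_000 / avg_stream_mbps
print(f"{concurrent_streams:,.0f}")  # 3,500,000 concurrent streams
```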

Updated Thursday, Sept. 11th: While some in the comments section want to argue with me that problems with the Apple.com webpage didn’t impact the video, here is another post from someone who explains, in much better detail than I can, many of the problems Apple had with their website that contributed to the live stream issues. See: Learning from Apple’s livestream perf fiasco

Internet Traffic Records Could Be Broken This Week Thanks To Apple, NFL, Sony, Xbox, EA and Others

Thanks to so many large-scale live events and large file downloads taking place this week, it’s going to be a huge week of traffic on the Internet, with content delivery networks and last-mile providers preparing for what’s to come. Tomorrow in particular will be a big day on the net, with so many things taking place on the same day. Apple product announcements always make for a busy day on the web, and while iOS 8 won’t be available for download tomorrow, here is a list of the other events taking place tomorrow, or later this week.

  • Monday Night Football (WatchESPN)
  • Apple’s Product Announcement (Tuesday)
  • Microsoft Security Patches (Tuesday)
  • NFL Now/Game Rewind Highlights (Tuesday is busiest day for NFL videos)
  • Yahoo! Aerosmith Concert (Tuesday)
  • Bungie’s Release Of Destiny Game (Tuesday)
  • EA Sports Fifa 15 Game Beta (Tuesday)
  • EA Sports NHL 15 Game (Tuesday)
  • League Of Legends NA/Europe Seed Matches (Tuesday)
  • Xbox Free Game Releases (including Halo: Reach)
  • Sony PS4 White Destiny Bundle (Tuesday)
  • New York Fashion Week Live (Monday-Thursday)
  • Apple’s iOS 8 Download (Probably Thursday)
  • President’s ISIS Speech (Wednesday)

Delivering video over the Internet at the same scale and quality as over a cable network isn’t possible. The Internet is not a cable network, and if you think otherwise, you will be proven wrong this week. We’re going to see long download times, more buffering of streams, more QoS issues, and ISPs taking steps to deal with the traffic, knowing it will have a negative impact on the user experience. When iOS 8 comes out, some last-mile providers are going to struggle, and some will rate-limit their network connections, as we saw the last time an Apple iOS download was available. For some ISPs, iOS 7 downloads took up 40% of their traffic. All other content providers are going to have to compete with this traffic, and many I spoke to are keeping an eye on their quality guarantees and SLAs with their CDNs this week.

As for which CDNs are delivering all this content, I’ll be doing a lot of traceroutes this week, but Akamai, Limelight and Level 3 are all in the mix. I know Akamai will see the most web traffic from news sites covering Apple’s product announcements. Last time I looked, Microsoft’s security patches were being delivered by Akamai, Limelight and Level 3. Yahoo’s concert is being done by Akamai. Level 3 and others do the Xbox releases, Limelight was doing a lot of Sony downloads last I checked, and when iOS 8 is available, I expect a lot of it to be delivered by Apple themselves, along with Akamai and maybe Level 3. Many of these events, both live and downloads, push more than 1 Tbps via a single CDN, let alone those that use dual vendors, so it’s going to be a very busy week on the web for CDNs.

For all the talk of paid interconnects being such a bad idea, or causing great harm to the Internet, none of what is going to take place this week would be possible if these paid connections between CDNs and ISPs were not in place. So complain all you want, but it’s why the Internet works the way it does and why hopefully, all will go smoothly this week.

I will be collecting as many traceroutes as I can from multiple regions during all of these events, so if you can do a traceroute from your location, please send it to me at mail@danrayburn.com