Server Side Ad Stitching Finds A Home In Live and Catch-up

Server-side ad insertion has traditionally been about tackling the problems of ad blocking and device fragmentation. But we are now seeing platform shifts that move the sweet spot of ad stitching toward live and catch-up TV. Client-side ad SDKs have grown in sophistication, the platforms they run on have become more capable, and the transition to native apps brings tools for platform normalization. This enables publishers to deliver VOD client-side ads very effectively. Even with these improvements, client SDKs don’t deliver a true broadcast experience for live or catch-up TV. This shift in focus to live imposes additional requirements beyond ad stitching’s traditional ad-block and device-fragmentation coverage, such as frame-accurate inserts and high performance in a real-time live broadcast environment.

To frame the discussion, it’s important to understand why client-side fulfillment is desirable in the first place. Fulfilling ads on the client satisfies two main areas: user authenticity and interactivity. Interactive components, such as clickable ads and companion banners, can largely be accomplished with more complex server-side ad stitching solutions, but as the client-side SDK grows in complexity, publishers are quickly left asking why they are not using a client-side SDK in the first place. This problem is mostly negated where the player and ad stitching come from the same vendor, since they will be pre-integrated. Authenticity requirements are more difficult to satisfy server side, since hybrid systems only give you partial coverage if the server is ultimately making the fulfillment request on behalf of the user.

In a recent discussion I had with Michael Dale at Kaltura, which just announced an ad-stitching collaboration with WeatherNationTV, we discussed why the traditional strong points of ad stitching are losing favor. As Michael pointed out, while ad blocking tools have increased in capability, their ability to target closed platforms such as OTT streaming devices and native applications on iOS and Android has been limited. As video consumption continues its aggressive transition to these devices, ad blocking becomes less of a reason to use ad stitching.

Additionally, in desktop environments there are emerging solutions such as Secret Media that work around ad blockers while retaining client-side fulfillment. Ad blocking at the router level, via “Tomato” firmware or Adblock Plus-supplied blacklists, is a complicated technical task, and has been made more complex by the transition to HTTPS everywhere. For example, iOS 9 recommends using HTTPS only for new apps. While lots of press has been given to iOS 9 “ad block” support, it will probably have little impact on video experiences taking place in native apps, and ad playback within non-native experiences was already hampered by Apple’s native iOS controls.

Device fragmentation was universally recognized as an inhibitor to high-quality consumer media delivery, and has largely been addressed by several industry initiatives. The transition to native application runtimes has enabled publishers to ship their own “players” with first-class ad integration. For example, Google’s ExoPlayer on Android provides both HLS normalization and a reference DoubleClick integration, and online video platforms have heavily invested in native player SDKs for easy video app creation. There is still a valid argument around fragmentation on streaming devices and smart TVs. But even here, we are increasingly seeing “capable enough” HTML5 runtimes via the HbbTV initiative, and on popular streaming devices such as Chromecast and Fire TV. The evolution of these platforms to include a capable HTML5 runtime enables syndication of player runtimes and player business logic.

Live is a different story. Unlike VOD, where fulfillment delays have a negligible impact on the user experience, in a live broadcast frame-accurate cuts are important, as live content comes back from a commercial break straight into critical content. Likewise, catch-up TV infrastructure can require blackouts or ad inserts on top of existing ads before the content is segmented and re-ingested as “VOD” content. The value of client-side heuristics for ad beaconing and interactivity is still important, so working with a live ad stitching solution that is integrated with the player matters. This integration enables the player to consume ad timing, companion click-through and ad-viewing beacon targets from the server and handle them appropriately client side. Because native SDK integration is already needed to address fragmentation and client-side ads, a holistic solution covering both client-side and server-side ad interactivity is important for controlling integration costs for publishers that stream both live and VOD content.
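To make that integration concrete, here is a minimal sketch assuming a hypothetical ad-timeline payload delivered alongside the stitched stream; the field names are illustrative, not any vendor’s actual schema. The server stitches the ad into the broadcast, while the client uses the supplied timing to fire impression beacons and enable clickthrough:

```typescript
// Hypothetical ad-timeline metadata a stitching server might expose to the
// player; the shape is illustrative, not a real vendor schema.
interface StitchedAd {
  startTime: number;        // position of the ad in the stream, in seconds
  duration: number;         // ad length, in seconds
  impressionUrl: string;    // beacon the client should fire
  clickThroughUrl?: string; // optional interactive target for an overlay
}

// Called on each player time update: fire each ad's impression beacon from
// the client once playback enters that ad's window in the stitched stream.
function onTimeUpdate(currentTime: number, ads: StitchedAd[], fired: Set<string>): void {
  for (const ad of ads) {
    const inAd = currentTime >= ad.startTime && currentTime < ad.startTime + ad.duration;
    if (inAd && !fired.has(ad.impressionUrl)) {
      fired.add(ad.impressionUrl);
      void fetch(ad.impressionUrl); // beacon originates from the viewer's device
      // If ad.clickThroughUrl is set, the player would enable a clickable overlay here.
    }
  }
}
```

Because the beacon originates from the viewer’s device, the publisher keeps the authenticity signal that pure server-side fulfillment gives up, while the ad itself stays frame-accurately stitched into the broadcast.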

Because of these unique requirements, live ad stitching is a critical component of an overall solution that includes native SDKs and client-side fulfillment for publishers and service providers making ad-monetized live and VOD content available for distribution to native devices and set-top boxes. And with all the recent vendor acquisitions in the market, especially around live linear workflows, ad stitching for live is going to become even more important in the monetization discussion moving forward.


iOS 9 Download Traffic Peaks At 13Tbps Across 70 Telco & University Networks

For the past few days, transparent caching provider PeerApp has been monitoring apple.com activity for almost 70 of their telco and university customers across the globe. [Updated 9/21: PeerApp says that the 70 customers do not include their Tier 1 customers.] Not surprisingly, around 1pm ET on Wednesday after iOS 9 was released, combined traffic from iOS 9 downloads spiked to almost 13Tbps. Nearly 80% (over 10Tbps) of this traffic is being delivered through PeerApp’s caches, avoiding congestion on the operator and university networks. Looking at individual systems, PeerApp said they are seeing speed improvements on the order of 10-20x, meaning customers served by their caches are getting the apple.com traffic 10-20x faster from cache. One university in particular is experiencing up to a 100x speed improvement in downloads.

In conversations I’ve had with some ISPs in the U.S., iOS 9 downloads have been accounting for anywhere between 8%-15% of the traffic inside their networks. Since this isn’t the first time ISPs have had to deal with iOS downloads, most are well set up to handle the spike and use caches to absorb the extra traffic. I don’t know the breakdown on the percentage of downloads Apple is handling via their own CDN, but like iOS 8, they are using a combination of their own CDN and third-party CDN providers. Downloads appear to be going smoothly for most consumers, with very few complaints, and amongst the 12 devices I personally upgraded to iOS 9, the download and install process was quick and painless.

Thursday Webinar: Delivering Broadcast Quality Video to Mobile Screens

Thursday at 2pm ET, I’ll be moderating a StreamingMedia.com webinar on the topic of “Key Considerations for Delivering Broadcast Quality Video to Mobile Screens”. What is the best way to meet your audience’s expectations for quality live and on-demand mobile viewing? The answer is: it depends. Adaptive delivery is complex, and implementing the right strategy requires careful consideration of a range of technical and business factors. In this webinar, a technical expert with experience helping companies implement video delivery solutions will guide you through the decision-making process. He’ll share the questions you need to ask to make the right decisions for your organization and use real-life use cases to illustrate specific decision points such as:

  • How best to handle transcoding and transmuxing to support multiple formats, devices and screens
  • When to use your own origin and when to leverage the cloud
  • Whether to invest in web servers and storage, or a content delivery network (or two)
  • How monetization impacts your strategy
  • How to evaluate your reach and availability requirements

The latest trends in video consumption, audience expectations, content protection, and access control will also be shared. Register Now to attend this FREE live webinar.

Cedexis Announces New Video Service For Improving QoS

Cedexis, long known for its multi-CDN/multi-cloud solutions for general-purpose HTTP and HTTPS traffic, has announced a product called “Cedexis Buffer Killer” that is specifically aimed at the online video space. The Cedexis video solution allows content owners that use multiple CDNs to shift traffic, in real time, based on QoS metrics. Cedexis’ new offering will directly compete with Conviva’s Precision product line.

According to Cedexis, its initial set of beta customers has experienced, on average, a 56% improvement in buffering, an 18% improvement in video start time, and a decrease in video failures of up to 66%. These performance benefits have been realized by a growing list of OTT video customers who are using Cedexis to make real-time load balancing decisions between multiple CDNs and/or origins, avoiding congestion and outages, be they related to CDN, data center or ISP performance issues.

The critical competitive edge of this solution is the real-time performance monitoring provided by the free Cedexis Radar community of 800+ enterprises and every major cloud and CDN provider in the world. Collecting billions of Internet performance metrics a day provides a real-time map of where the Internet is humming along nicely, and where the interconnections between ISPs and the cloud service providers who host and distribute the world’s content are congested or disrupted. Importantly, this crowd-sourced approach lets companies that may not have a suitable density of audience in a specific geography, or on a particular ISP, take advantage of QoS measurements made by other Radar community members like LinkedIn, Tumblr, Microsoft, Vidme or LeMonde.

Realizing that player integrations take time and money, Cedexis provides a range of deployment options for this new product. Any player can be pointed at the Openmix platform via existing DNS flows to select the optimum CDN or origin, optimum being defined by the OTT video provider as highest availability or throughput, lowest latency, best cost, or any combination.
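As a rough illustration of that DNS flow, assume a hypothetical media hostname CNAMEd to an Openmix-managed record; each resolution can steer the player to whichever CDN currently scores best, with no change to the player itself:

```typescript
import { promises as dns } from "node:dns";

// "media.example.com" is a made-up hostname standing in for a customer
// record whose DNS answers are managed by the Openmix platform.
async function currentEdge(): Promise<string> {
  const [target] = await dns.resolveCname("media.example.com");
  return target; // resolves to the CDN hostname chosen at this moment
}

currentEdge().then((host) => console.log(`Streaming from: ${host}`));
```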

For customers who want to make calls via HTTP, which ensures every player gets the freshest decision data outside of DNS resolver TTLs, Openmix can be called directly from the video player or from the content management system itself. CMS integration is emerging as the preferred method among early customers, as modifying a CMS can be done more quickly and cost-effectively than updating a range of video players.
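For the HTTP path, here is a hedged sketch of what such a decision call might look like from a player; the endpoint and response shape are illustrative, not Cedexis’ documented API:

```typescript
// Hypothetical decision endpoint and response shape, purely for illustration.
type Decision = { provider: string; host: string };

async function manifestUrl(): Promise<string> {
  const res = await fetch("https://decisions.example.com/openmix/app.json");
  const { host } = (await res.json()) as Decision;
  // Build the stream URL against whichever CDN host was just recommended.
  return `https://${host}/live/channel1/master.m3u8`;
}

manifestUrl().then((url) => console.log(`Play: ${url}`));
```

The same call can just as easily be made by the CMS at request time, which is why early customers favor that route over modifying every player.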

Combining the DNS and HTTP services from Cedexis allows customers to optimize the entire end-user experience, including website and ad delivery, player downloads and video streaming. Cedexis states that usage from the two services can be aggregated for better volume pricing. Importantly, Cedexis has a simple utility-based pricing model based on the number of monthly queries made to its platform. Cedexis already has a group of interesting customers using its services for OTT video delivery, such as PBS, Viaplay (a division of Viasat/MTG), ViewLift (Snagfilms and other OTT properties), Ora.tv and MassMotionMedia, amongst others.

This emerging market in the video space is interesting because it shows that OTT has evolved into a principal delivery vehicle for corporate value, and there is a growing realization that single-sourcing the delivery of that value represents an unacceptable corporate risk. Further, as shown by technology leaders on the web, end-user experience is quickly emerging as the end-game for defining success, and real-time optimization solutions like Cedexis provide the visibility and control needed to build the real-time feedback loop between end-user QoS and networks.

If you have not seen the Cedexis Radar community data in action, you can see what their billions of daily measurements reveal via their newest free public map of CDN, cloud and ISP performance at Radar Live.

HTTP/2 Will Bring Significant Speed Improvements For CDN Customers

Lately, content owners have been asking me about HTTP/2 and what my opinion is on how it will impact the future of the content delivery business. The current HTTP protocol has worked well for a long time and was aligned with the needs of web developers when it was widely adopted around 1997. Since then, however, webpages have come to require many more external resources and bigger images, and today’s webpages are significantly different from those of the late 90s. HTTP/2 brings multiple ways to align the protocol with the needs of content owners and developers, and it is how most content will be delivered in the near future.

One of the biggest advantages of the new protocol version is a significant speed improvement in data transfers, mainly due to lower latencies. At average bandwidth speeds, latency has been shown to have a bigger impact on loading time than the actual connection speed. Simply put, the HTTP/2 protocol enables serving multiple requests in parallel over a single TCP connection, instead of one request per connection in the legacy version. Loading times are also reduced by compressing HTTP headers, transferring binary data (as opposed to textual in HTTP 1.x) and many other features. It also works well for connections that are unstable or have high latencies (typical of mobile devices and developing regions), though the difference is visible on almost any network or device. HTTP/2 is currently supported by all major browsers. It is safe to use with the latest versions of Chrome and Firefox (over TLS), Internet Explorer 11 (on Windows 10), Opera 28+ and Safari 9+.
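To see the multiplexing in action, here is a small sketch using Node’s built-in http2 module; the host is illustrative, and any HTTP/2-enabled server will behave the same way. All three requests travel as parallel streams over one TCP connection:

```typescript
import { connect } from "node:http2";

const session = connect("https://www.example.com"); // one TCP connection
const paths = ["/", "/styles.css", "/app.js"];

let pending = paths.length;
for (const path of paths) {
  const stream = session.request({ ":path": path }); // one HTTP/2 stream per request
  stream.on("response", (headers) => console.log(path, headers[":status"]));
  stream.resume(); // discard the body; the point is that the streams run in parallel
  stream.on("end", () => {
    if (--pending === 0) session.close(); // all responses served over a single connection
  });
}
```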

Since HTTP/2 is still new, there are certain applications (like CDNs, for example) where no one has run HTTP/2 at large scale yet. Some CDNs are still testing it in beta, some are thinking about it, and some are waiting for others to confirm it is stable. The first CDN I saw put it into action is CDN77. The company told me they decided to make it live across their entire network only “after extreme load testing”, since servers generally don’t support HTTP/2 by default; you need to activate it manually and make sure it scales. Some CDN77 customers I’ve spoken to are already using it and report that it is stable and works fine. The company’s testing showed that a site which needed 6.5 seconds to load via HTTP 1.x takes only 2.7 seconds to load over HTTP/2, a 58% improvement.

HTTP/2 is all about speed and reliability. Given the way it works, it can send all the resources in a much shorter time, in extreme cases several times faster, though usually in around half the time. Implementing HTTP/2 brings two advantages at once to content owners: it combines the new, faster protocol with the usual advantage of a CDN, serving content from the location closest to the user. HTTP/2 is the future of the content delivery business, and by next year I expect all third-party CDN service providers to have rolled it out across their networks. If you want to see the difference between the two protocols in action, check out the live demo page CDN77 has set up at www.http2demo.io

Google Cloud Announces CDN Interconnect Program With Level 3, Fastly, Highwinds & CloudFlare

Google Cloud has announced a new collaboration with four CDN providers, Level 3, CloudFlare, Fastly and Highwinds in a program they are calling CDN Interconnect. The goal is to allow joint customers of these CDN providers and Google’s Cloud Platform to pay reduced prices for in-region Cloud Platform egress traffic. I’m also hearing that additional CDNs will join the program before too long, giving content owners even more flexibility. For customers using Google Cloud as their origin source, this will lower their delivery costs and should improve delivery performance.

CDN Interconnect allows select CDN providers to establish direct interconnect links with Google’s edge network at various locations. Customers egressing network traffic from Google Cloud Platform through one of these links benefit from the direct connectivity to the CDN providers and are billed according to Google’s pricing. Intra-region carrier interconnect traffic will cost Google Cloud customers $0.04/GB in NA, $0.05/GB in EU and $0.06/GB in APAC.
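As a quick worked example of those rates (the volume is made up purely for illustration):

```typescript
// Published CDN Interconnect rates, in USD per GB of intra-region egress.
const ratePerGB = { NA: 0.04, EU: 0.05, APAC: 0.06 } as const;

function monthlyEgressCost(region: keyof typeof ratePerGB, gigabytes: number): number {
  return ratePerGB[region] * gigabytes;
}

// e.g. 50 TB (50,000 GB) of North America egress in a month:
console.log(monthlyEgressCost("NA", 50_000)); // => 2000 (USD)
```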

Google notes that this pricing only applies to intra-region egress traffic that is sent to CDN providers approved by Google at specific locations approved by Google for those providers. So not every location a CDN provider has would necessarily be approved for the CDN Interconnect program. You can read more about the announcement on Google’s cloud blog.

Former KIT Digital CEO and CFO Arrested: Charged With Accounting Fraud

It was only a matter of time. KIT Digital’s former CEO and Chairman Kaleil Isaza Tuzman and former CFO Robin Smyth have been arrested and charged with accounting fraud. Both are being held pending extradition to the U.S. and face up to 20 years in jail if convicted on the most serious charge. In 2013, I published a long blog post with details on accounting irregularities at KIT Digital, based on information given to me by former employees: stories of missing money, lack of accounting controls and deliberate intent to change KIT’s numbers. At the time I was only able to say there was a “possible” SEC fraud investigation in the works, but I knew it was already well underway internally at the SEC.

The SEC was already investigating Kaleil Isaza Tuzman for insider trades, and he landed in their crosshairs even more when several law firms filed securities class action lawsuits against KIT Digital. At the time I investigated the company, insiders told me that KIT Digital simply made up revenue that did not exist and counted revenue from contracts that had been cancelled or had expired. It sounds like that’s exactly what happened, as the SEC is charging Kaleil Isaza Tuzman and Robin Smyth with recognizing improper revenue, overstating company assets, and understating losses between 2010-2012. Kaleil Isaza Tuzman is also charged with conspiring with a hedge fund operator from 2008 to 2011 to inflate KIT’s share price, which allowed him to conceal purchases of his own company’s stock.

The SEC complaint says that the defendants knowingly or recklessly caused KIT Digital to divert nearly $8M into a slush fund, which resulted, among other things, in a phony reduction of receivables. They also falsely recognized approximately $1.5M in revenue for a product that was never delivered, nor ever paid for. They are further charged with hiding the loss of $2M with an offshore money management firm and hiding the fact that they were aggressively trading KIT Digital stock on the open market. In one instance, they took money they said was being used to buy a competitor and instead gave that money to their customers, who then paid it back to KIT Digital to make it appear as if it were revenue. Reading through the SEC complaint gives you insight into just how arrogant these guys were and how many schemes they came up with, and it contains copies of emails where they openly discuss what they were doing behind the scenes.

There is so much damning evidence against them that even one of the outside accounting firms they used said there was “material weakness” in the company’s internal controls over financial reporting, stating that “KIT digital and Subsidiaries has not maintained effective internal control over financial reporting.” Two former employees told me that some of KIT’s executives instructed them that they needed to be able to “massage the numbers each quarter” and have “more control over the numbers we show”. And between KIT Digital reporting Q4 results and filing their 2011 10-K, $2.14M in cash disappeared that the company could not account for.

KIT Digital was a bomb waiting to go off. The signs were there. The company missed its revenue guidance, took goodwill write-offs, missed or delayed multiple SEC filings, restructured its board and management multiple times, delayed putting out public news for days, fired two accounting firms, defaulted on a debt covenant, had cash disappear and acquired more than ten companies. This company screamed warning for a long time. No one should ever have put any trust in Kaleil Isaza Tuzman. I had a back-and-forth argument with him in the comments section of a blog post dating back to 2008, when I accused him of making false statements and lies. He responded by saying the company’s “operating/financial results will tell the true story”. They sure have.

Kaleil Isaza Tuzman was arrested Monday in Colombia and Robin Smyth was arrested Tuesday in Australia. Bloomberg lists more details on the dockets: the criminal case is U.S. v. Tuzman and the SEC case is Securities and Exchange Commission v. Tuzman, 15-cv-07057, U.S. District Court, Southern District of New York (Manhattan).