New Study From M-Lab Sheds Light On Widespread Harm Caused By Netflix Routing Decisions

On Tuesday, M-Lab released a new study on the impact of network interconnection on consumer Internet performance. The report, entitled “ISP Interconnection and its Impact on Consumer Internet Performance,” details findings based on the speed test results collected by its test servers for various ISPs throughout the country over a roughly two-year period. For those not familiar with M-Lab, they provide the largest collection of open Internet performance data, which is used by the FCC, among others, for its Measuring Broadband America program.

M-Lab data shows that around May 2013, suddenly and simultaneously throughout the country, speed test results for many ISPs (AT&T, Comcast, CenturyLink, Time Warner Cable, and Verizon) showed a significant decline in performance to a specific set of transit providers (Cogent, Level 3 and XO). Just as suddenly, around March 2014, performance returned to normal for most of these same ISPs. Not coincidentally, a few other ISPs with which Netflix had negotiated direct Open Connect connections (Cablevision and Cox) did not experience a similar decline in performance. The data presented in the study confirms what I and others have surmised: that Netflix was ultimately responsible for the dramatic, simultaneous decline in Netflix performance for all non-Open Connect ISPs.

If you look at the M-Lab measured history of the congestion, you will notice that these timelines line up very closely with Netflix’s migration from third-party CDNs onto its own Open Connect platform. The performance impact also maps closely to the ISPs that did not agree to provide Netflix with free peering, while the ISPs that did agree experienced no performance impact.

Looking at Figure 1 from the report (below), we can see that, according to an M-Lab test server housed on Cogent’s network in NYC, performance suddenly degrades for three of the four major broadband companies in the NY metro area around May 2013, and then suddenly improves for all three around March 2014. This tight coordination of impact for multiple ISPs simultaneously suggests that the cause was not something done by the ISPs, but rather by another entity. (Note: I added the heading and arrows to the chart.)

What entity might be responsible? Well, Figure 2 shows us that Cablevision, the fourth broadband ISP in the NY metro area measured by the M-Lab server on Cogent’s network (and the only one of the four with a direct connection to Netflix’s Open Connect CDN), did not experience the same sudden drop and rise in performance over its link to Cogent.


Finally, M-Lab’s report also helpfully includes performance results for all four broadband ISPs in NY from a test server located on a different backbone network (one not providing transit service to Netflix), showing no sudden performance changes for any of the ISPs.


The report also shows that the direct interconnection agreement between Comcast and Netflix increased performance for other ISPs. Unless there were performance issues further upstream of the interconnection, the Comcast/Netflix agreement should have had no impact on other ISPs’ networks. And according to M-Lab’s findings, the performance issues on ISPs’ networks were not due to technical issues but rather to the business deals between ISPs. They say, “we were able to conclude that in many cases degradation was not the result of major infrastructure failures at any specific point in a network, but rather connected with the business relationships between ISPs.”

While some may want to take this report as a smoking gun that ISPs are causing congestion, they may forget, not understand, or purposely leave out the fact that large content providers control the delivery of their traffic and can AVOID congestion. A recent MIT study, “Measuring Internet congestion: A preliminary report,” pointed out that the ISPs singled out in this report have multiple alternative paths to reach them. The report states that, “Congestion at interconnection points does not appear to be widespread. Apart from specific issues such as Netflix traffic, our measurements reveal only occasional points of congestion where ISPs interconnect. We typically see two or three links congested for a given ISP, perhaps for one or two hours a day, which is not surprising in even a well-engineered network, since traffic growth continues in general, and new capacity must be added from time to time as paths become overloaded.”

Most agree that when Netflix, again, moved its traffic off of these newly congested paths and onto direct connections, performance improved both for Netflix services and for other services impacted by this new congestion. What is puzzling, however, is the timing of this improvement. If you look at the graph above, you will notice that all ISPs improved simultaneously in Feb 2014, the exact same time that Netflix and Comcast migrated traffic to their direct connection. While it is understandable that Comcast would improve, no one has explained how a Comcast direct connection would improve AT&T, Verizon, and Time Warner unless there were additional problems between the Netflix servers and their transit ISPs themselves. When Netflix moved this traffic, the congestion within its transit ISPs eased, improving performance to other destinations as well.

What M-Lab is trying to do is good for the Internet, but they need to expose more of the end-to-end problem. If they truly want to understand Internet congestion and user experience, they need to not only focus on interconnection, but should also expand their measurements to the quality of transit ISPs and acknowledge the choices content sources make when delivering traffic to their customers. For example, a measurement can identify whether there are material differences between a variety of OTT sources such as Amazon Prime, Netflix, Hulu and YouTube on a given ISP. If Amazon Prime HD video quality was excellent, but another source was poor, it would be interesting to determine why that is occurring and what options the content provider has to improve its service.
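To give a rough sense of what such a comparison could look like, here is a minimal sketch (my own illustration, not M-Lab’s methodology; the test URLs are hypothetical placeholders) that measures achievable download throughput to a few different sources from the same connection:

```python
# Rough sketch: compare download throughput to several video sources from one
# ISP connection. The URLs are hypothetical placeholders; a real methodology
# would use each service's actual player/CDN endpoints, longer samples, and
# repeated runs at different times of day.
import time
import urllib.request

TEST_OBJECTS = {
    "Source A": "https://cdn-a.example.com/sample.mp4",
    "Source B": "https://cdn-b.example.com/sample.mp4",
}

def throughput_mbps(url: str, max_bytes: int = 5_000_000) -> float:
    """Download up to max_bytes from url and return achieved throughput in Mbps."""
    start = time.time()
    received = 0
    with urllib.request.urlopen(url, timeout=15) as resp:
        while received < max_bytes:
            chunk = resp.read(64 * 1024)
            if not chunk:
                break
            received += len(chunk)
    elapsed = time.time() - start
    return (received * 8 / 1_000_000) / elapsed if elapsed > 0 else 0.0

if __name__ == "__main__":
    for name, url in TEST_OBJECTS.items():
        print(f"{name}: {throughput_mbps(url):.1f} Mbps sustained")
```

A persistent gap between sources on the same ISP would point to differences in how each provider chooses to deliver its traffic, which is exactly the end-to-end context I’d like to see added to interconnection measurements.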

While many were quick to blame ISPs for problems consumers were having with their Netflix streaming experience, we now have a lot of data in the market showing that the choices Netflix made directly impacted the quality of its own video and of other services as well. Between this new M-Lab data, the interconnection findings published by David Clark at MIT/CAIDA, and a recently published research report that says Netflix is using calls for greater net neutrality to drive down the prices it pays, it’s now clear just how much control Netflix really has over the quality of the video it delivers.


Netflix Keynote Presentation To Detail Encoding Specs, 4K Video Encoding Pipeline

When it comes to streaming video online, Netflix has been doing it longer than almost anyone and has some of the best quality video available. If you’ve ever wanted to know how Netflix gets such good quality and how they encode their videos, then you won’t want to miss their keynote on November 19th at the Streaming Media West Show. David Ronca, Director of Encoding Tools at Netflix, will detail their encoding service and the key engineering decisions they made around video encoding. Some of the topics David will cover include scalability, progressive and parallel video encoding, and Netflix’s 4K video encoding pipeline. Register online using the code 200DR for a “Discovery Pass” and get free access to the keynotes, exhibit hall, discovery track sessions, and receptions. #smwest

Amazon Announces New Streaming Device, Fire TV Stick, Special Limited Pricing Of $19

This morning Amazon announced a new HDMI streaming device called the Fire TV Stick, which is already available for pre-order and will ship before Black Friday (Nov. 19th). Amazon is offering Prime members, or those who sign up for Prime, special pricing on the stick of only $19 for the next 48 hours, after which it will retail for $39. Many might be quick to compare this to Google’s Chromecast stick, but unlike Chromecast, Amazon’s Fire TV Stick won’t require another device to use it. Amazon’s streaming stick comes with a remote and most, but not all, of the functionality of their $99 Fire TV box.

The Fire TV Stick has a dual-core processor (Broadcom Capri 28155, dual-core 2xARM A9), 1GB of RAM, 8GB of storage, Bluetooth 3.0 and dual-band, dual-antenna Wi-Fi (MIMO). From a CPU standpoint, on paper it beats Roku’s Streaming Stick and Google Chromecast in terms of performance many times over. The Fire TV Stick comes with the same user interface as the Fire TV and has support for all of the current content channels on the Fire TV, with the exception of some gaming apps. Casual games like Angry Birds can still be played on the stick, but some more complex games requiring additional controls aren’t supported.

The remote that comes with the Fire TV Stick doesn’t have the same voice search functionality as the Fire TV remote, but the stick is compatible with the voice remote if bought separately for $29. Users can also use a free Android app on their phone to search for content with their voice, and Amazon says an app for iOS is coming shortly. Non-gaming apps written for the Fire TV will automatically work on the Fire TV Stick, which is also based on a flavor of Android. Amazon also confirmed that they are on track to have support for HBO Go on the Fire TV before the end of December, with the Fire TV Stick getting HBO Go in the new year.

Amazon told me they have “made a lot” of the Fire TV Sticks in preparation for the pre-orders, but in usual Amazon fashion, won’t disclose how many are available before they sell out or if they expect there to be a shortage during the holidays. Amazon has placed a buying limit of two devices per Prime member, at the special $19 price point.

While Amazon’s new stick is good for consumers, it’s bad news for Roku, which now has even more competition, at a lower price point. I love my Roku box and it still has more premium content channels than any other $99 or less device, but it’s only a matter of time before Amazon’s streaming stick, or Google’s new Nexus Player for that matter, catches up with Roku in terms of content choices. Roku simply can’t compete with Amazon or Google’s marketing dominance and their ability to package these products in with other services.

In the long run, the non-gaming streaming device space is going to be won by Apple, Google and Amazon. They all control multiple devices that tie into a larger ecosystem, make money from many avenues and at some point, Amazon or Google is going to get the price on these sticks down to where they are free. It’s not hard to imagine Amazon giving them away with a Prime membership or Google giving it away with a Nexus tablet or phone. At $19, the price point doesn’t have far to go before it reaches zero.

Just Announced: Cory Mummery, VP and GM of NFL Now To Keynote Streaming Media West Show

I’m pleased to announce that Cory Mummery, VP/GM of NFL Now, will be the keynote speaker on the first day of the Streaming Media West Show, taking place November 17–19 at the Hyatt Regency Huntington Beach Resort & Spa in Huntington Beach, CA. Come hear about their new cross-platform digital product, NFL Now, which packages all available NFL-produced live and on-demand video from NFL Films, NFL Network, NFL games and all 32 NFL teams for viewing on multiple devices. Register online using the code 200DR for a “Discovery Pass” and get free access to the keynotes, exhibit hall, discovery track sessions, and receptions. #smwest

Google Launches New $99 Streaming Box With Android TV; Joins Apple & Amazon With Ecosystem Play

The $99 streaming device market just got a little more crowded with Google’s announcement of their new streaming box, dubbed Nexus Player. Built in partnership with ASUS, it’s the first device to run Android TV and also allows you to play Android games on your TV, with a separately priced gamepad. Unlike Google’s Chromecast stick, the Nexus Player includes an app guide, content recommendations and voice search controls. It’s also Google Cast Ready, so you can fling content from Chromebooks and Android/iOS devices to your TV. Inside the device is a 1.8GHz quad-core Intel Atom chip, 1GB of RAM, 8GB of storage, HDMI out, and Wi-Fi support for 802.11ac 2×2 (MIMO). The lack of an Ethernet port is a big downside, as I’ll take the reliability of an Ethernet cable over Wi-Fi any day. The box is up for pre-order tomorrow and will be in stores on November 3rd.

The initial content options available at launch are limited, with the major ones being Google Play, Netflix, Hulu Plus, YouTube, Vevo, Pandora, iHeartRadio and support for Plex. Other apps, from the likes of Food Network and PBS Kids, are also available, but content choices are pretty limited right now. Other boxes, including Amazon’s Fire TV, were also limited in their content choices at launch, so we can expect to see a lot more content come to Google’s new box fairly quickly. I expect it won’t take more than a year before Google’s $99 Nexus Player is very similar to Roku, Apple TV and Amazon’s Fire TV with regard to content choices.

With so many streaming options in the market when it comes to smart TVs, game consoles, connected Blu-ray players and $99 streaming boxes, Google is entering a very crowded consumer market. But the advantage Google, Apple and Amazon have over everyone else is that they all operate and control an end-to-end video ecosystem. In the long run, they will all be the winners in the $99 streaming box market, and devices from Netgear, Sony, Western Digital and others stand no chance with their boxes. Just within the last year, devices from Vizio (Co-Star), D-Link (MovieNite Plus), Hisense (Pulse), Sony (SMP-N200) and Seagate (GoFlexTV) have all been discontinued.

Long-term winners in the space will be Apple, Amazon, and Google for dedicated streaming boxes and sticks, and Microsoft and Sony with their more expensive gaming consoles. Where this leaves Roku in the long term is unknown, as they still have the best $99 streamer in the market today when it comes to content choices, but that gap between them and the others keeps shrinking. Roku doesn’t have the marketing power of Apple, Amazon or Google, so it’s going to continue to be a tough fight for them in a very crowded market.

Thursday Webinar On DASH and Multimedia Streaming

Thursday at 2pm ET, I’ll be moderating another StreamingMedia.com webinar, this time on the topic of “DASH and Multimedia Streaming.” By now, you’ve probably read enough to understand what DASH is and why it’s important. For studio-approved distribution models like OTT and SVOD, DASH is now DRM-compatible, using the CENC standard with a variety of DRMs for various playback platforms. Learn more about DASH DRM during this DASH Roundtable webinar with presenters from the DASH Industry Forum, BuyDRM and VisualOn. We’ll also have an extensive Q&A session, so REGISTER NOW to join us for this FREE Web event.

Web App Acceleration Market Heating Up: Service A Key Requirement For SaaS Providers

Software as a Service presents a tempting value proposition to businesses all over the world. The benefits are obvious – lower TCO and cost savings, flexibility, ease of access as well as relief from maintenance and upgrade hassles. According to recent data from multiple outlets, SaaS adoption rates continue to outperform those of on-premise enterprise applications.

SaaS provides a significant business opportunity to ERP, CRM and business intelligence vendors, as well as to companies in the enterprise content management, supply chain management and project management space. These vendors are putting a web front end on their application portfolios so that their customers’ users can easily access them from any browser or device. These offerings are now mainstream, and independent SaaS vendors and CDNs continue to roll out solutions to capture market share. With such fierce competition and an unusually high churn rate, customers of these solutions tell me that SaaS vendors continually struggle to enhance the end-user experience and increase user stickiness.

Similarly, today’s CIOs, CTOs and IT managers also struggle with a plethora of challenges when migrating from an on-premise application to a SaaS-based deployment. While cost is a key driver for SaaS deployment, the implementation may also be accompanied by a dip in productivity due to slow access to the SaaS application when the distance between the provider and the user is more than a few hundred miles, or even tens of milliseconds of network latency. A case in point is enterprise deployments of cloud-based Office 365 and the post-deployment performance issues created by distance and network latency.
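To put rough numbers on that point (my own back-of-the-envelope figures, purely illustrative), light in fiber covers roughly 200 km per millisecond one way, so distance alone sets a floor on round-trip time, and a chatty application pays that floor on every sequential round trip:

```python
# Back-of-the-envelope latency math. All values are rough assumptions for
# illustration, not measurements from any specific deployment.
FIBER_KM_PER_MS = 200            # ~2/3 the speed of light, one-way propagation in fiber
distance_km = 4_800              # e.g. a roughly 3,000-mile path between user and data center
sequential_round_trips = 20      # assumed number of chained request/response cycles

one_way_ms = distance_km / FIBER_KM_PER_MS           # ~24 ms one way
rtt_ms = 2 * one_way_ms                               # ~48 ms round trip, before routing/queuing
total_latency_ms = rtt_ms * sequential_round_trips    # ~960 ms of pure distance-driven delay

print(f"RTT floor: {rtt_ms:.0f} ms; cost of {sequential_round_trips} "
      f"sequential round trips: {total_latency_ms:.0f} ms")
```

Close to a second of delay from propagation alone, before any packet loss or congestion is factored in, is exactly the kind of productivity dip users notice.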

It is essential for SaaS vendors to live up to the performance benchmarks set by on-premise applications and to the technological advancements made to optimize those applications for a global user base. Enterprises with application servers on-premise had multiple optimization strategies in place when it came to accelerating such applications – including MPLS for stable-latency connectivity, WAN optimization for bandwidth reduction and application acceleration systems for improved performance.

However, a SaaS application is a web-based application hosted in a data center owned and operated by the SaaS vendor, providing services to multiple enterprises from a centralized location. The application is typically accessed over an Internet connection, and while bandwidth costs have been continually falling, the public Internet is anything but reliable. As businesses go global, concerns with latency and packet loss only grow bigger, and MPLS and WAN optimization strategies cannot stem the tide to the cloud. And there you have it – a slow-running SaaS application and dipping productivity levels. If unchecked, this could cause enterprises to re-evaluate their SaaS deployment decision and perhaps switch back to their on-premise application provider.

In parallel, it’s important to look at the typical life cycle of SaaS vendors as they grow. Most vendors start off small and pick up local or regional customers. Since one of the key drivers for SaaS adoption is global scalability, it is likely for them to see an uptick in the number of international users because:

  1. Their existing customers are undergoing global expansion, or
  2. Their marketing efforts have resulted in increased global brand equity, leading to international customer acquisitions, or
  3. They just acquired a huge customer, headquartered in the same region, but with a large number of offices overseas.

As the international user base expands, so, many times, do customer complaints. And a major chunk of these complaints is due to performance issues such as “taking too long to upload content” or a “slow reporting service.” More often than not, SaaS vendors are not prepared for this eventuality, which in turn leads to churn and a slower growth rate. In order to retain customers, SaaS vendors need to have an application acceleration strategy in place so as to meet customers’ performance expectations and deliver the best end-user experience. One option they have is to deploy multiple data centers, duplicate their software stack and mirror data near customer locations. However, this is not a long-term strategy for growth.

SaaS vendors therefore resort to CDNs, or more precisely, to web application acceleration or dynamic content acceleration services. Capabilities of web application acceleration solutions include (a simple sketch of a couple of these follows the list):

  1. Intelligent routing over the Internet middle mile to choose the minimum latency path
  2. TCP Optimization – to quickly recover from network congestion and packet loss
  3. Persistent connections – to minimize the number of round trips
  4. Connection pooling – for better origin offload, and
  5. On-the-fly compression
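
To make a couple of these concrete, here is a minimal sketch (my own illustration at the HTTP client layer, not how any particular acceleration vendor implements it; the origin URL is a hypothetical placeholder) of persistent connections/connection pooling and compressed responses:

```python
# Minimal sketch of persistent connections / connection pooling and on-the-fly
# compression at the HTTP layer. The origin URL is a hypothetical placeholder.
import time
import requests

ORIGIN = "https://saas.example.com/api/report"  # hypothetical SaaS endpoint

def fetch_repeatedly(n: int = 10) -> float:
    """Issue n requests over one pooled, keep-alive session and time them."""
    start = time.time()
    with requests.Session() as session:
        # A Session reuses TCP (and TLS) connections across requests, avoiding a
        # fresh handshake -- and the extra round trips it costs -- on every call.
        session.headers["Accept-Encoding"] = "gzip"  # ask the origin for compressed payloads
        for _ in range(n):
            resp = session.get(ORIGIN, timeout=10)
            resp.raise_for_status()
    return time.time() - start

if __name__ == "__main__":
    print(f"10 pooled requests took {fetch_repeatedly(10):.2f}s")
```

An acceleration service applies the same ideas, plus middle-mile routing and TCP tuning, between its edge and the SaaS origin, where the end user’s browser has no control.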

These acceleration services are built as an overlay on the public Internet, which is still a major bottleneck. The ‘middle mile’ between the content and the edge location of the CDN provider may be intelligent with the capabilities above but is certainly not dedicated, guaranteed or private. SaaS vendors do experience better response times with these solutions but consistency in application performance continues to elude them, something I hear from many customers.

While the market for these solutions is still small overall when compared to other segments of the content delivery industry, web app acceleration is one of the most important requirements for SaaS providers that want to compete in the CDN space. The margins on these services are high, customers understand the impact that fractions of a second have on their business, and most importantly, they are willing to buy these solutions based on real performance benchmarking and metrics. This is the opposite of how customers buy large file downloads or streaming video delivery. If a video takes half a second longer to start up due to a performance problem, in most cases it won’t impact the content owner in a negative way, and they won’t pay more for such a small uptick in performance. But half a second slower or faster when it comes to commerce, CRM and other applications can mean the difference between making money and losing money.

There is a lot of competition in the SaaS web application acceleration space right now from Akamai, Amazon, Aryaka, CDNetworks, EdgeCast by Verizon, Fastly, Instart Logic, Limelight Networks, Riverbed, Yottaa and others, all vying for a piece of the business. To me, the most interesting up-and-comers right now are Aryaka [WAN Optimization & CDN Provider Aryaka Carves Out A Niche To Address Enterprises’ Content Delivery Problems], Instart Logic [How Instart Logic Wants to Solve Web Application Delivery], and Yottaa [Retailers: Your End-User Experience Is Lacking, Here’s How to Fix It]. Web application acceleration isn’t simply a hot buzzword, it’s the future of the CDN market and a key service requirement for any SaaS provider that wants to be successful in the CDN space.