WSJ Report Inaccurate: Content Owners Not Asking ISPs For “Separate Lanes”

Yesterday, a story in the Wall Street Journal created a lot of stir by implying that HBO, Sony and Showtime were asking ISPs for their content to be given "special treatment" by delivering it via a "separate lane" within the ISP's network. I spoke to multiple ISPs and some of the content owners mentioned in the story, and they tell me the WSJ piece is inaccurate and that they don't expect any ISP to treat their content differently from anyone else's.

Those I spoke with were confused as to what exactly the WSJ is implying when terms like "special treatment" are used without any definition of what is "special" about the treatment. There is also no agreed-upon definition of what a "managed service" is, and the article doesn't detail how it defines one. The piece also references a "separate lane" within the ISP's network, but there is only one lane into your house on the Internet. Again, lots of buzzwords, no definitions.

The article says the reason the content owners would want to do this is to "move them away from the congestion of the Internet." The problem with this idea is that neither HBO, Sony, nor Showtime owns its own CDN. They rely on third-party CDNs like Akamai, Limelight and Level 3 to deliver their content, and these CDNs already have their servers inside ISP networks, or connected directly to them via interconnection deals. Avoiding congestion is the main value of using a service-based CDN in the first place, and it's something HBO and the others are already doing. In fact, HBO has been doing this with Verizon since 2010, by allowing Verizon to cache HBO's content inside Verizon's network. But that content is not "prioritized" or given any "special treatment" of any kind inside the last mile.
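For those who want to picture what that caching architecture does, here's a minimal sketch of the concept (my own illustration, not code from Akamai, Limelight, Level 3 or any ISP): requests that hit a cache inside the ISP's network are served locally and never touch the interconnect, which is exactly where the congestion lives.

```typescript
// Hypothetical sketch of an edge cache sitting inside an ISP's network:
// a cache hit is served locally and never crosses the interconnect;
// only a miss goes back across the peering/transit link to the origin.
const edgeCache = new Map<string, string>();

async function fetchFromOrigin(url: string): Promise<string> {
  // Stand-in for a request that traverses the interconnect back to the origin/CDN.
  const res = await fetch(url);
  return res.text();
}

async function serveFromEdge(url: string): Promise<string> {
  const cached = edgeCache.get(url);
  if (cached !== undefined) {
    return cached; // served entirely from inside the ISP's last-mile network
  }
  const body = await fetchFromOrigin(url); // only cache misses leave the network
  edgeCache.set(url, body);
  return body;
}

// e.g. serveFromEdge("https://example.com/video/segment1.ts")
```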

The article also says that media companies feel that the "last mile of public Internet pipe, as it exists today, won't be able to handle the surge in bandwidth use for all the online-video services." The problem with that argument is that the congestion we see on the Internet isn't taking place in the "last mile"; it's taking place at network access points outside the last mile. To prove that, just look at the latest Measuring Broadband America report by the FCC, which measures ISPs' advertised speeds versus delivered speeds. The data shows that there is very little congestion in the actual last mile. So the WSJ's argument as to why HBO and other content owners would want to do this doesn't make sense and doesn't take into account the technical details of how delivery actually works.

The WSJ article waits until halfway through the piece to mention that no ISP has actually agreed to whatever it is that the WSJ is suggesting content owners want. The article says that Comcast "wasn't willing to do anything for any one content provider that it couldn't offer to every other company." So the WSJ is saying that content owners asked for something that ISPs said no to. But the piece then goes out of its way to make it sound like this is a potential problem and ties in the topic of Net Neutrality, yet never defines what exactly is being proposed. What does "special treatment" mean? Are they implying the "prioritization" of packets? We simply don't know, as they use high-level terms without any definition of how they are applying them.

Another argument the WSJ makes for why content owners would want this is that some content owners don't want their service to count against the ISP's bandwidth cap. The problem with that argument is that you don't need a "managed service" to make that happen. Netflix recently struck deals in Australia where its content does not count against the ISP's cap, with no "managed services" taking place.

The WSJ also says, "media companies say the costs of guaranteeing problem-free streaming for users are rising." What they don't say is for whom those costs are rising. The content owners? The ISPs? The consumer? It sounds like they are saying the cost to deliver video for the content owner is increasing, but in fact, it's the opposite. Costs to deliver video via third-party CDNs have fallen at least 15% each year since 2008. (Source: one, two) Also, there is no way to "guarantee" problem-free streaming no matter how much money you spend, so that notion is false. CDNs offer SLAs, but they don't "guarantee" anything outside their network once traffic hits the last mile. And ISPs only guarantee customers' access out of their last mile, which is done on a "best effort" basis. For the WSJ to imply otherwise is inaccurate.
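As a quick back-of-the-envelope illustration of what that pricing trend means (my own math based on the 15% figure above, not a number from either source), here's what a steady 15% annual decline compounds to over that period:

```typescript
// Back-of-the-envelope: what "at least 15% per year since 2008" compounds to by 2015.
const annualDecline = 0.15;
const years = 2015 - 2008; // 7 years
const remaining = Math.pow(1 - annualDecline, years); // ≈ 0.32
console.log(`Per-GB delivery pricing is roughly ${(remaining * 100).toFixed(0)}% of its 2008 level`);
// i.e. about a two-thirds drop, the opposite of "rising" delivery costs
```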

ISPs I spoke to made it clear that they are not in discussions with OTT providers to manage their traffic differently from other content owners or provide them with special treatment of any kind. What they think the WSJ might be confusing is the idea of caching content inside the last mile, but again, that doesn't come with "special treatment" or prioritization of any kind. The WSJ story uses a lot of generic, undefined words that sound very scary, but when you look at the details rationally, you can see that they simply created controversy where none exists.


Video Platform Provider Voped Looking To Sell Company, OTT Platform Available

There continues to be a shakeout within the tier-2 video platform space (see Volar Video Selling Stream Stitching & Video Platform Assets), with the latest coming from Voped. I recently heard from Voped President and sole investor Mark Serrano, who tells me that he has decided to offer the platform for acquisition.

Mark tells me the company is already in preliminary discussions with a couple of large companies, but he also wanted to put the word out about its availability considering what's happening in the space and the technology jump-start that his platform can offer. Voped offers an end-to-end solution to manage, encode, secure, deliver, and monetize video globally on the web, mobile, and other connected devices. So for the right company, acquiring versus building can give them the advantage of time to market and the extensive experience of the team that built the platform.

Mark sees an advantage to the small size of his team (four original team members; the parent company provides numerous support services separately), in that it will make for an easy transition to bring the technology under a new banner. He says the company has a very efficient turnkey offering and has built it at a fraction of the cost compared to what the large platforms have invested. They have a lot of experience with custom development, from features to larger integrations – such as with Widevine DRM, payment gateways, a turnkey website solution, and custom user interfaces.

For information on Voped’s technology highlights you can check out this PDF deck and for those interested, you can contact Mark Serrano directly.

Free Book Download: Hands-On Guide To Webcasting Production

Webcasting guru Steve Mack and I wrote a webcasting production book entitled "Hands-On Guide To Webcasting" (Amazon), which we're now giving away as a free PDF download. You might notice that the book was published in 2005, and since that time, webcasting has evolved into the mainstream application it is today. But some of the best practices regarding encoding, connectivity, and audio and video production techniques have never changed. We felt the book could still be a valuable resource to many and wanted to make it available to everyone, with webcastingbook.com now redirecting to this post.

This book was one of eight books in my series, which combined have sold more than 25,000 copies, with the webcasting book being the most popular. So we're happy to have gotten the rights to the book back so we can share it with everyone. The help email included in the book still works, so those with questions can still reach out to us, and we'll try to answer any follow-up questions. You may re-purpose content from the book as you like, as long as you don't charge for it and you credit the source and link back to webcastingbook.com. Here's a quick breakdown of the chapters:

  • Chapter 1 is a Quick Start, which shows you just how simple webcasting can be. If you want to start webcasting immediately, start here.
  • Chapters 2 and 3 provide some background about streaming media and digital audio and video.
  • Chapters 4 and 5 are focused on the business of webcasting. These chapters discuss the legal intricacies of a webcast, along with expected costs and revenues.
  • Chapters 6 through 8 deal with webcast production practice: planning, equipment, crew requirements, connectivity, and audio and video production techniques.
  • Chapters 9 and 10 cover encoding and authoring best practices. This section also covers how to author simple metafiles and HTML pages with embedded players and how to ensure that the method you use scales properly during large events.
  • Chapter 11 is concerned with distribution. This section discusses how to plan and implement a redundant server infrastructure, and how to estimate what your infrastructure needs are.
  • Chapter 12 highlights a number of case studies, both successful and not so successful. These case studies provide you with some real-life examples of how webcasts are planned and executed, how they were justified, what went right, and possibly more important, what went wrong.

I’ll also be giving away my business book in the coming days.

The Impact Of HTTPS On Caching Deployments In Operator Networks

When Google made the decision in 2013 to move all of its properties and data, including YouTube, to HTTPS delivery, many began asking what impact this has had on open caching deployments inside operator networks. Some have suggested that HTTPS delivery is becoming a trend, but based on what I've heard from other content owners and from talking to last-mile providers, I don't expect it to become a broader industry trend in the long run.

In many cases, we can use the publicly stated plans of large streaming services like Netflix as a proxy for the outlook of the industry as a whole. In short, the decision to stream all content via HTTPS is an expensive one, and the business goals of long-form video streaming services like Netflix, Amazon, ESPN, and Hulu can be met through more efficient and far less costly streaming infrastructure and best practices. To this point, Netflix publicly stated it would not implement SSL, given its assessment that "costs over time would be in the $10's to $100's of millions per year" to fully encrypt all of its streaming traffic. [Source: one, two]

Indeed, we know that content providers worldwide have adopted best practices to manage content security and consumer privacy for streaming media. Through the use of DRM to protect content rights and URL obfuscation combined with control plane encryption to secure consumer privacy, content providers can meet their obligations to both content rights owners and consumers. These streaming media best practices also support the deployment of open caching solutions in operator networks to optimize online video for both network utilization and Quality of Experience (QoE).  Going forward, content providers will continue to rely on these best practices to scale their streaming offerings worldwide and the majority won’t move to HTTPS delivery.
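To make the "URL obfuscation combined with control plane encryption" idea more concrete, here's a minimal sketch of the general pattern (a generic, hypothetical example of my own, not any particular provider's implementation): playback is authorized over an encrypted control channel, while the media segments stay on plain, cacheable HTTP protected by an expiring signed token.

```typescript
import { createHmac } from "node:crypto";

// Hypothetical signing key; in practice something like this would be exchanged
// over the encrypted control plane (the HTTPS session where playback is authorized).
const SIGNING_KEY = "example-secret";

// Produce an expiring, tamper-evident URL for an otherwise plain-HTTP segment,
// so the segment stays cacheable by an open cache while the token discourages
// URL sharing and tampering.
function signSegmentUrl(path: string, ttlSeconds: number): string {
  const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
  const token = createHmac("sha256", SIGNING_KEY)
    .update(`${path}:${expires}`)
    .digest("hex");
  return `http://cdn.example.com${path}?expires=${expires}&token=${token}`;
}

console.log(signSegmentUrl("/video/episode1/segment42.ts", 300));
```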

There will be significant and long-term value in the deployment of open caching as a critical part of the overall open architecture for streaming video. Operators can invest in open caching platforms with confidence, knowing that their investment will continue to deliver value in the form of network cost savings and improved QoE over the long run.

In just a few instances, as seems to be the case with YouTube, some content providers may take the extreme and costly step of encrypting both control and data plane traffic for the sake of consumer privacy. Full SSL encryption is generally considered to be cost prohibitive and few, if any, other content providers can afford to implement such a model. However, even in the case of fully encrypted traffic, it’s a safe bet to expect that content providers will continue to work collaboratively with caching technology providers to support traffic optimization and open caching in last mile networks.

Limelight Launches New DDoS Solution & Research Findings About The Security Market

DDoS and other cyber attacks are clearly on the rise. According to Akamai's recent State of the Internet Report, between 2013 and 2014, DDoS attacks rose 90%. And not only is the number of attacks rising, but the volume of those attacks is growing as well. Numbers from Radware's 2014-2015 Global Application and Network Security Report state that 29% of attacks are over 1Gbps in size. It's probably safe to say that attack volumes and frequency will only continue to increase, especially as companies continue to rely on the Internet to conduct their business.

Many organizations already recognize the need for security. According to recent research by Limelight Networks, only 8% of surveyed executives indicated that they weren’t using some sort of security for the delivery of their digital content. What’s more, 76% indicated that the delivery of digital content is “extremely important” to their business.

So what are organizations doing today to mitigate potential attacks that might interfere with their ability to deliver digital content? For many, it's on-premises equipment (CPE). Of those surveyed in Limelight's research, 31% are handling security themselves. Others are employing a hybrid approach, using some CPE combined with cloud-based services. But there are a variety of problems with both of these approaches (pure CPE and CPE plus cloud). First, using any kind of CPE has both CAPEX and OPEX requirements. You not only need to purchase the hardware (redundantly, of course), but you also need people to manage, update, upgrade, and operate it. Second, you need excess bandwidth (transit) to support an attack while also handling "good" traffic. Finally, combining CPE with cloud services adds significant complexity to your content delivery architecture.

What’s the alternative? CDN-based security. More than half (53%) of respondents in Limelight’s research plan to rely on their CDN provider to handle content delivery security concerns in the future. And for many customers, it makes total sense for several reasons:

  • Upstream—if an organization is already using a CDN provider to deliver its digital content, detecting and mitigating an attack can happen at the network edge, potentially thousands of miles from the origin, thereby sparing the organization's network from any potential fallout or impact. When combined with scrubbing, only good traffic is returned to the origin, preventing the organization's bandwidth from being flooded with bad traffic.
  • Absorption—as distributed networks, most CDNs have thousands of servers across which they can spread out an attack, even preventing Layer 3 and Layer 4 attacks (two common DDoS vectors) from ever reaching the origin.
  • Resiliency—with those thousands of servers and terabits of egress capacity, the CDN quickly returns to normal operations in the wake of volumetric DDoS attacks. Even while under duress, the CDN can still continue to provide accelerated content delivery services.

Last week, Limelight announced its CDN-based security offering—DDoS Attack Interceptor. This solution, integrated directly with Limelight's content delivery services, provides proactive detection and cloud-based mitigation, protecting customers against the downtime, lost business and brand-reputation damage associated with DDoS attacks. The solution is virtually transparent to customers and, at a high level, works the following way (a rough sketch of the decision flow follows the list below):

  • Prior to an attack, Limelight's detection technology is constantly fingerprinting a customer's traffic to learn what "good" traffic looks like. This fingerprint is sent continuously to "off-net" scrubbing centers. According to Limelight, the scrubbing centers are in different data centers and do not share bandwidth with Limelight's delivery POPs, so attack traffic does not share resources with the good, or clean, traffic.
  • An attack presents itself against a target protected by Limelight
  • The Limelight CDN begins to absorb most of the attack while, at the same time, proactive monitoring detects the DDoS attack and notification alarms are raised in the network operations center
  • The customer is notified that they are under attack. If the attack is small enough and the customer has enough bandwidth to handle both good and bad traffic, they can opt to just let the CDN do what it does best. But if they don’t want to chance that the attack volume will increase, or if they don’t have the resources to handle it, they can opt to have the traffic scrubbed
  • When scrubbing is enacted, traffic is rerouted to the off-net scrubbing centers
  • The scrubbing centers already have a very detailed fingerprint of good traffic, so they may immediately begin aggressively mitigating the attack without having to be configured manually and without a lengthy “learning” period. The scrubbing centers return the clean traffic directly to Limelight’s CDN for delivery as usual using dedicated network interconnects for increased performance.
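For those who like to see it in code form, here's a rough, hypothetical sketch of that detect-and-reroute decision flow (my own simplification for illustration, not Limelight's actual implementation):

```typescript
// Rough, hypothetical sketch of the detect-and-reroute flow described above;
// not Limelight's implementation, just the decision logic expressed in code.
type TrafficState = "normal" | "attack-detected" | "scrubbing";

interface AttackStatus {
  underAttack: boolean;
  attackGbps: number;
}

function nextState(
  current: TrafficState,
  status: AttackStatus,
  customerOptsToScrub: boolean
): TrafficState {
  if (!status.underAttack) return "normal"; // attack over, clean traffic flows as usual
  if (current === "normal") return "attack-detected"; // raise alarms, notify the customer
  if (current === "attack-detected" && customerOptsToScrub) {
    return "scrubbing"; // reroute traffic to the off-net scrubbing centers
  }
  return current; // CDN keeps absorbing; customer may choose to ride it out
}

// Example: a 5 Gbps attack is detected and the customer opts in to scrubbing.
let state: TrafficState = "normal";
state = nextState(state, { underAttack: true, attackGbps: 5 }, false); // "attack-detected"
state = nextState(state, { underAttack: true, attackGbps: 5 }, true);  // "scrubbing"
console.log(state);
```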

Limelight’s detection system constantly monitors for malicious traffic. However, since this monitoring is not happening in-line, Limelight claims it has no performance impact on a customer’s traffic. The detection covers the broadest range of DDoS attacks—both infrastructure as well as application layer attacks. According to Limelight, their solution can prevent certain zero day attacks using “behavior-based” techniques that compare measured baselines of both volume and patterns to more intelligently differentiate good traffic from bad.

It’s clear from the research that not only will DDoS attacks continue to rise (both in scope and scale) but that executives are worried about how to mitigate them. When the results can be loss of revenue, everyone starts to pay attention. And because the CDN as a cloud-based security solution provides a number of benefits over CPE or hybrid architectures, it’s no wonder that the major CDNs (Level 3 and EdgeCast by Verizon were the most recent before Limelight) have all added the service to their portfolios. It good to see Limelight moving up the stack with their product portfolio and offering more value-added-services, like security, to help them diversify their revenue away from purely storage and bit delivery. As DDoS and other attacks continue to grow in size and sophistication it will be interesting to see how these services evolve in an otherwise crowded security market with many different approaches and solutions to the DDoS problem.

How To Quantify The Value Of Your CDN Services

As mobile applications become more sophisticated, many congestion points have been identified, which has given rise to a number of specialized solutions to address them. The primary solution for working around Internet congestion and slowdowns has long been the edge delivery and caching provided by content delivery networks. But those tactics have become commoditized, with asset delivery performance becoming table stakes delivered as a service. As a result, vendors have been working hard to offer true performance solutions beyond storage, large software downloads and streaming video delivery services.

Over the past few years, the CDN market has spawned a number of specialty solutions to overcome specific challenges in the form of video streaming, web security, and dynamic applications. It has been well documented that web and mobile application performance is critical for e-commerce companies to achieve maximum transaction conversions. In today's e-commerce landscape, where even milliseconds of latency can impact business performance, high CDN performance isn't a nice-to-have, it's a must-have. But the tradeoffs between rich functionality and performance have created a polarizing effect between business units and the IT teams that support them. Modern marketers and e-commerce practitioners focus on engaging users with third-party content in the form of social media integration, localized reviews, trust icons and more, all of which need to perform flawlessly across a range of devices and form factors to keep users focused, engaged and loyal. The legacy attitude of one-size-fits-all for a CDN has become outdated as businesses seek out best-of-breed solutions to keep them competitive and to drive top-line growth. This is one of the main reasons why many customers have a multi-CDN strategy, where they might use one CDN specifically for video streaming, but another for mobile content acceleration.

One of the primary challenges in all of this is arriving at measurable proof of the business impact. Historically, it has been extremely hard for e-retailers to quickly analyze the effectiveness of the solutions they've put in place to help drive web performance. Enterprise IT departments often find it difficult, if not impossible, to prove the impact their efforts have on customer satisfaction and top-line growth because analytic tools have historically been siloed by business specialties – IT has Application Performance Management (APM) tools and the business units have business analytics solutions. Whereas marketing and e-commerce teams have a variety of A/B testing solutions at their disposal, the IT team often struggles to show measurable business improvements.

Last week, adaptive CDN vendor Yottaa unveiled a new A/B testing methodology called ValidateIT that enables enterprises to easily and quickly demonstrate the business value of their CDNs and other web performance optimization investments. Yottaa developed the methodology in 2013 and has been using it successfully with many of its customers since then. Through ValidateIT, enterprises can predictably and accurately split traffic in real time, allowing them to verify the immediate and long-term business benefits of optimizing their web applications. As the first vendor in this market that I know of to offer this type of methodology, Yottaa is enabling enterprises to make an informed and confident buying decision by demonstrating the business value of web application optimization.

But Yottaa and its customers are not the only ones to have "validated" ValidateIT; the methodology has also earned a certification from Iterate Studio, a company that specializes in bringing business-changing technologies to large enterprises. Iterate Studio curates, validates and combines differentiated technologies that have repeatedly delivered positive and verifiable business impact across a broad set of metrics. Working together with customers, Yottaa applies the ValidateIT methodology to split, instrument and measure web traffic using trusted third-party business analytics tools. The important aspects of the methodology include:

  • Control over the flow of visitor traffic. Yottaa typically splits traffic 50/50 in proof-of-concept scenarios to highlight the benefit its technology is having over an existing solution. In head-to-head "bake-off" scenarios, Yottaa can split the traffic into thirds or more, depending upon the competition. (A minimal sketch of this kind of deterministic split appears after this list.)
  • Conducting a live, simultaneous A/B test. Online businesses frequently say that it's impossible to accurately compare two different time periods to one another because of the variables that would impact the results. Campaigns, seasonality, breaking news and events, and any number of competing factors can influence visitor behavior. So Yottaa ensures that ValidateIT highlights the business-impacting results of its solution in real time by randomly sending visitors to either the incumbent solution or the Yottaa (and possibly other competing vendors') optimized solution and then measuring the results. This eliminates objections about performance versus content or campaigns, since everything is equal except the web performance optimization techniques applied to the visitor sessions.
  • Leverage in-place third-party analytics solutions. IT vendors have attempted to bring proprietary business analytics to market, but Yottaa felt it was important to lean on the business analytics solutions companies already use to ensure a credible test and validation. Plus, by using existing business analytics, marketing, e-commerce and IT leaders can leverage any existing custom metrics, analysis methodologies, and reports to drill-down into the details.
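To make the traffic-splitting mechanics a bit more tangible, here's a minimal sketch of a deterministic, hash-based split (a generic pattern used for illustration, not Yottaa's actual implementation): each visitor is consistently assigned to the same variant, so results stay comparable across sessions.

```typescript
import { createHash } from "node:crypto";

// Deterministically assign a visitor to one of N variants (e.g. incumbent CDN
// vs. optimized delivery) so the same visitor always sees the same variant.
function assignVariant(visitorId: string, variants: string[]): string {
  const digest = createHash("sha256").update(visitorId).digest();
  const bucket = digest.readUInt32BE(0) % variants.length;
  return variants[bucket];
}

// 50/50 proof-of-concept split; thirds for a head-to-head "bake-off".
console.log(assignVariant("visitor-1234", ["incumbent", "optimized"]));
console.log(assignVariant("visitor-1234", ["incumbent", "vendor-a", "vendor-b"]));
```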

The most legitimate use case for Yottaa's solution is that you don't know whether your one-size-fits-all CDN solution is right for you, or whether you need a specialty CDN, until you actually measure, evaluate and analyze the results. That's the reason for, and beauty of, ValidateIT and why the company offers it at no cost. It's free as part of their solution validation process because they want 100% of companies to better understand which solutions in the market truly work versus ones that don't. It's a nice tool to arm enterprise buyers with to show them the real business benefit rather than relying on a blue-chip logo.

The Code Problem for Web Applications & How Instart Logic Is Using Machine Learning To Fix It

The adoption of mobile devices with powerful CPUs and full HTML5 browsers is enabling a new wave of increasingly sophisticated applications. Responsive/adaptive sites and single-page applications are two obvious examples. But the increased sophistication is also creating new performance bottlenecks along the application delivery path. While the industry can continue to eke out incremental performance gains from network-level optimizations, the innovation focus has shifted to systems that are application-aware (like FEO) and now execution-aware. It's the new frontier for accelerating application performance.

To deliver web experiences that meet these new demands, developers are increasingly using JavaScript in web design. In fact, according to httparchive, the amount of JavaScript used by the top 100 websites has almost tripled in the last three years, and the size of web pages has grown 15 percent since 2014. The popularity of easily available frameworks and libraries like angular.js, backbone.js and jQuery makes development easier and time-to-market faster.

Unfortunately, there is a tradeoff for these rich web experiences. As web pages become bloated with JavaScript, there are substantial delays in application delivery performance — particularly on mobile devices, which have slower CPUs and smaller memory and cache sizes. It's not uncommon for end users to wait for seconds, staring at a blank screen, while the browser downloads and parses all this code.

A big part of the bottleneck causing these performance delays lies within the delivery of JavaScript code. When a site loads and a browser request is made, traditional web delivery approaches respond by sending all of the JavaScript code without understanding how the end users’ browsers will use it. In fact, many times, more than half of the code is never even used. Developers have tried to mitigate this challenge by turning to minification – an approach that removes unnecessary data, such as white-spaces and comments. But this approach provides only minimal benefits to web performance.

Now imagine instead if the browser could intelligently determine what JavaScript code is actually used and download that code on demand. While the performance benefit could be substantial, demand-loading code without breaking the application would be a very challenging problem. This is exactly what a new technology called SmartSequence with JavaScript Streaming from Instart Logic does. It's the first such innovation I have seen that applies machine learning to gain a deep understanding of how browsers use JavaScript code in order to optimize its delivery and enhance performance.

By using real-time learning coupled with a cloud-client architecture, their technology can detect what JavaScript code is commonly used and deliver only the necessary portions. The company says this approach reduces the download size of a typical web application by 30-40%, resulting in dramatic performance improvements for end users. With this new method, developers can now accelerate the delivery of web applications even as the use of JavaScript continues to rise.
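To illustrate the general idea of loading only the JavaScript a page actually uses, here's a minimal sketch (my own simplified illustration of the concept; Instart Logic's SmartSequence works inside its cloud-client architecture and is not this code): a learned usage profile decides which modules load up front and which are deferred until a feature needs them.

```typescript
// Illustrative only: defer rarely used code until a feature actually needs it,
// based on a usage profile learned from observing real sessions.
// In a real app each loader would be a dynamic import() of a module.
type ModuleLoader = () => Promise<unknown>;

// Hypothetical registry of lazily loadable features (stand-ins for real modules).
const loaders: Record<string, ModuleLoader> = {
  core: async () => ({ name: "core" }),
  router: async () => ({ name: "router" }),
  checkout: async () => ({ name: "checkout" }),
  videoPlayer: async () => ({ name: "videoPlayer" }),
};

// Learned profile: which features real users actually touch at startup.
const usedAtStartup = ["core", "router"];

async function loadStartupCode(): Promise<void> {
  // Only commonly used code is fetched and parsed up front.
  await Promise.all(usedAtStartup.map((name) => loaders[name]()));
}

async function loadOnDemand(name: string): Promise<unknown> {
  // Everything else is pulled in only when a feature needs it,
  // e.g. loadOnDemand("videoPlayer") when the user presses play.
  return loaders[name]();
}

loadStartupCode().then(() => console.log("startup code loaded"));
```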

This gives web and application developers the freedom to push the boundaries of web development without sacrificing performance, opening up opportunities for revolutionizing web and mobile applications. The way Instart Logic is looking to solve this problem is interesting, as I haven't seen this approach in the market before, so it's definitely one to watch as it evolves. For more details on the technology, check out the company's blog post entitled, "Don't Buy the JavaScript You Don't Use."