Is Adobe Paying The NFL To Use Flash?

Last night, the NFL and NBCSports.com broadcast the first of 17 free games this year, dubbed "Sunday Night Football Extra," using Adobe Flash video. Some industry people I spoke to seemed surprised that the NFL and NBC were using Flash, considering that NBC just did the Olympics in Silverlight. Others are suggesting that Adobe, which announced the games with a joint NFL press release, might be helping to subsidize the broadcasts for the NFL and NBCSports.com by covering some of the bandwidth costs associated with delivering the games online.

While Adobe would not comment on the financial details of the deal with the NFL, it would not surprise me if Adobe is taking the same approach Microsoft took with NBC for the Olympics, helping to cover the content delivery costs. Some might ask, if Adobe is willing to pay to get the NFL to use Flash, why didn't Adobe pay to keep Microsoft's Silverlight platform from being used for the Olympics? The answer is, Adobe didn't need to. Outside of the NBC Olympics website, most of the other portals around the world were already using Flash video for the Olympics.

The NFL games, however, represent a huge opportunity for Adobe down the road. If the games are successful, you could imagine the NFL really ramping up their online video strategy, and it would be in Adobe's best interest to make sure a property like the NFL uses their platform moving forward. There are only a few major sports leagues in the U.S., and the NFL probably has one of the most loyal fan bases around, outside of car racing, not to mention one of the largest marketing and promotional arms. While I have not been able to confirm that Adobe is in fact helping cover the costs of broadcasting the NFL games online, it's a smart move on their part if they are. It will be very interesting to see if Adobe and Microsoft start bidding on some of the same high-profile events down the road as more mainstream content gets broadcast live over the web.

Speakers Wanted: Evaluating and Choosing The Right Methods Of Video Delivery

Due to some last-minute changes, I now have two open spots for a round-table session at the Streaming Media West show entitled "Evaluating and Choosing The Right Methods Of Video Delivery," which takes place on Thursday, September 25, from 2:00-3:00pm. I am looking for two customers who can talk about how they deliver video today and the pros and cons of the many different delivery techniques available on the market. I am not accepting any vendor submissions for these speaking spots, but vendors are welcome to submit one of their customers. If interested, please e-mail me right away as these spots will go fast.

Thursday, September 25, 2008
Evaluating and Choosing The Right Methods Of Video Delivery
With all the various means of distribution and protocols available for video today (CDN, P2P, streaming, progressive download), there is still no single solution that will meet all customers' needs perfectly across all platforms and devices. Learn the various methodologies for content distribution, as well as the pros and cons of each type. Speakers will also discuss which methodologies apply best to which platforms and geographic locations based on type of content, length and format of video, and target audiences. Panelists will also provide you with guidelines and formulas for determining the best single and/or hybrid solution for your online video distribution needs.

Google’s New Business Video Offering Not A True Enterprise Product

On Tuesday, Google added video sharing to their paid business version of Google Apps, calling the offering Google Video for Business. While some reviewers are calling it an "enterprise" video offering, it's not even close to being something a real enterprise company would use. While the definition of what classifies a company as an enterprise organization can be debated, the majority of users for Google Apps are not Fortune 500 corporations, which is what I classify as enterprise.

For what the product does, share videos inside a corporate network, it works well. But the idea that this is going to be a product to replace enterprise video offerings in the market is not the case, and I'm surprised that Google would be quoted saying otherwise. Matthew Glotzbach, management director for Google Enterprise, is quoted as saying, "video hasn't been a big part of the enterprise space because the costs to build a video infrastructure behind a firewall are enormous." Say what? The enterprise market has been using and deploying online video since around 1996 and continues to build out their infrastructure, as the costs are quite low for the value it brings. There are literally hundreds of examples and case studies of enterprise corporations who have been using online video for years.

Dave Girouard, president of enterprise for Google, is quoted as saying, "YouTube has enabled millions of consumers to easily capture and share video at an unprecedented level, yet corporate video has remained expensive and complicated." Corporate video is not expensive and complicated. Why is Google trying to make it sound like the enterprise market does not know what it is doing? Enterprise companies have been using video, and doing it well, for many, many years. It's not hard to find use cases.

The key here is that Google Video is a very limited collaboration tool, not a video distribution tool. It does not support live video, has a cap on the file size, has very limited functionality, and you can't use it to share video with anyone who doesn't have a log-in account on your Google Apps domain. Not to mention it has no real functionality for creating video, only sharing it. Some blogs I read think Google now competes with Cisco's Enterprise TV offering or other true enterprise video platforms, which is not the case. Some are even comparing the Google offering to Brightcove, which baffles me as they are not even in the same ballpark in terms of functionality.

Another reason that enterprise companies won't use the service is that they won't trust Google. Some of the enterprise companies I have asked about the service have already said they don't trust the same infrastructure that serves YouTube to deliver their content securely, let alone without buffering issues. I know that for me, it still often takes ten or fifteen seconds for a clip on YouTube to start, even when the video is only thirty seconds in length. Other users report the same problems, and they have persisted even since Google acquired YouTube.

The bottom line: Google's new video sharing functionality for business is an offering that will work well for its sole purpose, limited video sharing. But please, Google, don't try to make it sound like no one in the enterprise vertical was doing successful video delivery until YouTube came along. That's just not the case.

Amazon’s New Video On Demand Streaming Service Using H.264 At 1.2Mbps

Yesterday, Amazon launched their new streaming-based service for the newly branded Amazon Video On Demand offering. While there is not too much to review, as the service is pretty straightforward, it is interesting to see that Amazon is using H.264 and encoding content at variable bit rates of up to 1.2Mbps.
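
For context, here is a minimal sketch of how one might produce a comparable encode with ffmpeg and libx264, capping the variable bit rate around 1.2Mbps. The flags and rate-control numbers are my own assumptions for illustration; Amazon's actual encoding pipeline and settings are not public.

```python
# A hypothetical encode with settings comparable to what Amazon appears
# to be using; Amazon's real pipeline and parameters are not public.
import subprocess

def encode_h264(src, dst, target_kbps=1200):
    cmd = [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-b:v", f"{target_kbps}k",          # target average video bit rate
        "-maxrate", f"{target_kbps}k",      # keep peaks near the target
        "-bufsize", f"{2 * target_kbps}k",  # VBV buffer for rate control
        "-c:a", "aac", "-b:a", "128k",      # assumed audio settings
        dst,
    ]
    subprocess.run(cmd, check=True)

encode_h264("source_master.mov", "vod_1200k.mp4")
```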

The videos are being delivered via Limelight Networks and not from Amazon’s own Amazon Web Services network. While HD quality content is not available today, I’m really interested to see when Amazon adds HD and what bitrate will be used for streaming.

While some users have been commenting that the quality of the stream is not DVD quality, it's not supposed to be, nor does Amazon set customers' expectations incorrectly with any promise of "DVD-like quality." It's a good move on their part, as we have seen other content owners compare their video to "DVD quality" when, in fact, it isn't. The one thing I think Amazon does need to improve is the seek functionality when skipping ahead in the video. For me, it takes too long, more than two or three seconds, for the video to start back up after a skip.

When It Comes To Content Delivery Networks, What Is The “Edge”?

When it comes to content delivery networks, there are a lot of words we use in the industry that are difficult to define. Words like performance, scalability and quality are used every day, as is the term the "edge". But depending on who you ask, definitions of what the "edge" is, and the role it plays in delivering video, vary greatly.

For starters, the "edge" is really not a meaningful word if you are trying to define how a CDN is architected and where it distributes traffic from. It has become a misused term that many of the CDNs use to indicate that traffic is coming from the location closest to the user. Just because the content may be coming from the closest location does not guarantee quality. And in fact, many times the content is not even being delivered from the closest location, even though the CDN says it is. You also have the assumption by many in the industry that CDNs cache all video, or replicate content at every "edge" location they have, which is simply not the case.

Customers need to ask CDN providers where the servers that distribute their specific content are physically located. You have to ask the CDN: where are you actually streaming that video from? In most cases, you can do simple trace routes to see for yourself, as sketched below. As an example, there was a lot of debate the past few weeks about the BBC's iPlayer traffic and what impact it is having on ISPs. But if you do a trace route for iPlayer traffic today, you will find that a lot of it is coming from CDN servers outside the UK. Almost none of the traffic comes from "within" an ISP network, which is where most CDNs classify the "edge" to be. There are a couple of reasons for this.
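
Here is a minimal sketch of that kind of check in Python, assuming a Unix-like system with traceroute installed. The hostname is a made-up placeholder, not a real CDN name; in practice you would use the hostname your video player actually requests.

```python
# Resolve the hostname the player fetches video from, then trace the
# route to see where that server actually sits on the network.
import socket
import subprocess

host = "video.example-cdn.com"  # hypothetical placeholder hostname
addr = socket.gethostbyname(host)
print(f"{host} resolves to {addr}")

# "traceroute" on Linux/macOS; use "tracert" on Windows
subprocess.run(["traceroute", addr])
```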

When moving small objects off a CDN, the latency associated with the distance from the CDN server to the consumer's computer dominates the speed with which an object like an image loads. As such, those servers need to be placed as geographically close to the consumer as possible. Those images are tiny, so those servers are configured with minimal storage. In addition, you can afford to replicate those objects on many, many servers because the total storage cost is low. Compare that to a large object like a video, though, and the latency becomes almost irrelevant next to the overall time it takes to move the whole object. There is an impact on start time, but storage now becomes a much bigger cost.
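
Some back-of-the-envelope math makes the point. The sizes, latencies and throughput below are assumptions for illustration, and the model deliberately ignores real-world effects like TCP's own sensitivity to round-trip time:

```python
# Simple model: fetch time = latency + size / throughput.
def fetch_time(size_bytes, latency_s, throughput_bps):
    return latency_s + (size_bytes * 8) / throughput_bps

for name, size in [("20 KB image", 20_000), ("500 MB video", 500_000_000)]:
    near = fetch_time(size, 0.010, 5_000_000)  # server 10ms away, 5Mbps line
    far = fetch_time(size, 0.120, 5_000_000)   # server 120ms away, same line
    print(f"{name}: near server {near:.2f}s, distant server {far:.2f}s")
```

For the small image, the distant server is several times slower; for the video, the two totals are indistinguishable.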

CDN providers who originally built hugely distributed systems with little storage cannot make use of many of those previously deployed servers, as they cannot store large libraries of video, in some cases not even more than a handful of videos. But you wouldn't want them to in any case, as you don't want to replicate the videos unnecessarily. A more centralized architecture with very large storage, replicating only for actual demand, is much more efficient. The number of locations in which you place servers is then mostly economically driven. It is a trade-off between storage and bandwidth consumption, and it's a balance based on how many objects you are distributing from the library and the popularity distribution across that library. While most CDN providers talk about how "unique" their networks are, nearly every CDN has almost the same architecture for distributing large objects, whether cached or streamed.
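
A toy cost model shows the shape of that trade-off. Every price, size and popularity figure below is an invented assumption, purely to illustrate why demand-driven replication beats copying the full library everywhere:

```python
# Invented figures: a 50TB library, 50 edge sites, and a popularity
# curve where 20% of titles serve ~90% of requests.
library_gb = 50_000        # total library size in GB
sites = 50                 # number of edge locations
storage_per_gb = 0.10      # assumed monthly storage cost per GB
transit_per_gb = 0.05      # assumed cost per GB to fill a miss centrally
served_gb = 400_000        # assumed total GB served per month

# Option 1: replicate the full library at every site (no miss traffic)
full_replication = sites * library_gb * storage_per_gb

# Option 2: one central copy, plus the hot 20% cached at each site;
# the remaining ~10% of requests are filled from the central copy
central = library_gb * storage_per_gb
hot_edge = sites * (0.20 * library_gb) * storage_per_gb
miss_fill = 0.10 * served_gb * transit_per_gb
demand_driven = central + hot_edge + miss_fill

print(f"full replication: ${full_replication:,.0f}/month")
print(f"demand-driven:    ${demand_driven:,.0f}/month")
```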

Another reason almost none of the traffic comes from within an ISP network is DNS resolution. The ability of a CDN to localize traffic is somewhat limited by the resolution of the ISP's DNS. Some ISPs will not enable resolution any finer than the ISP as a whole. So the whole idea of placing CDN servers within ISPs whose networks cover large geographies becomes pointless.
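
The underlying issue is that a classic DNS-based CDN maps the client based on the recursive resolver that asks the question, not the end user's own address. A small sketch using the dnspython library shows the idea; the hostname is a hypothetical placeholder, and the two resolver addresses simply stand in for two different ISPs' resolvers:

```python
# Requires the dnspython package (pip install dnspython).
import dns.resolver

host = "media.example-cdn.com"  # hypothetical CDN-mapped hostname

for ns in ["8.8.8.8", "1.1.1.1"]:  # stand-ins for two ISPs' resolvers
    resolver = dns.resolver.Resolver()
    resolver.nameservers = [ns]  # the CDN's DNS sees this resolver, not the user
    answers = resolver.resolve(host, "A")
    print(ns, "->", [a.address for a in answers])
```

If the ISP funnels every query through one central resolver, the answer can only ever be as "local" as that resolver, no matter where the user actually sits.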

In addition, CDN load balancing plays a big role. CDN providers determine where individual objects are served from based on many factors, and the sophistication of the particular CDN's algorithms determines how many factors are taken into account. This is a real-time, dynamic system in most cases, and factors like the performance of connected networks, the load on the CDN itself, and costs to the CDN provider will all be taken into account. This is fully under the control of the CDN provider and has nothing to do with the ISP. Even if an ISP houses a CDN server, there is absolutely no guarantee it will actually be used. And as mentioned above, in relation to large objects and cost, it is most unlikely to be used.
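
To make that concrete, here is a minimal sketch of the kind of weighted, multi-factor selection being described. The factors, weights and numbers are all my assumptions; real CDN mapping algorithms are proprietary and far more involved:

```python
# Lower combined score wins; weights decide how much each factor matters.
def score(server, weights):
    return (weights["latency"] * server["latency_ms"]
            + weights["load"] * server["load_pct"]
            + weights["cost"] * server["transit_cost"])

servers = [
    {"name": "isp-embedded", "latency_ms": 8,  "load_pct": 85, "transit_cost": 4},
    {"name": "regional-pop", "latency_ms": 25, "load_pct": 30, "transit_cost": 1},
]
weights = {"latency": 1.0, "load": 0.5, "cost": 10.0}

best = min(servers, key=lambda s: score(s, weights))
print("serve from:", best["name"])
```

In this made-up example, the regional POP wins even though the ISP-embedded server is closer, which is exactly the point: proximity alone guarantees nothing about which server actually gets used.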

One final point is that a CDN server placed inside an ISP network needs to be "filled". The cache fill is data from the CDN's origin (or the CDN's customer's origin), and in 99% of cases this fill will come from outside the ISP's network. The cache hit ratio should then become a very important factor for the ISP, but how many think about that? The cache fill data, plus the cost to house and power the CDN's server, is borne by the ISP in many cases. Yet large object traffic, video, is what is causing cost increases for all ISPs. So if video is not being served from the CDN servers within the ISP network, is there a real benefit to having them there?
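
The arithmetic is simple but worth spelling out. Using an assumed monthly traffic figure, here is how much of that "local" server's traffic actually crosses into the ISP from outside at different hit ratios:

```python
# Assume the in-ISP cache serves 100TB a month; every miss is fill
# traffic that enters the ISP from outside anyway.
served_gb = 100_000

for hit_ratio in (0.95, 0.60, 0.30):
    fill_gb = served_gb * (1 - hit_ratio)
    print(f"hit ratio {hit_ratio:.0%}: {fill_gb:,.0f} GB of fill enters the ISP")
```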

The bottom line is that, like many other topics in the content delivery market, people assume they know what terms mean, how things work, or, more importantly, the impact they think it is having on themselves, ISPs or content owners. Content delivery networks as a whole need to do a much better job explaining exactly how they deliver video. Too many are so concerned with not giving out technical details, yet those details are exactly what we need from the industry so we can educate customers and start to debate in an open manner how one network can operate more effectively than another. Over ten years in, there are still too many confusing questions about the content delivery business and how all of this actually works.

While I have a good understanding of the technology, I don't pretend to be a network engineer who builds networks for a living. I'd like to see a really good discussion take place in the comments section, with feedback directly from the CDNs as well as from those who work at ISPs. This is a topic many want to know more about and one that could benefit from additional input.

Moderator Wanted: How Old Media Is Embracing Online Video and New Media

One of the moderators at the Streaming Media West show is no longer able to moderate the panel entitled "How Old Media Is Embracing Online Video and New Media," which takes place on Wednesday, September 24, from 4:15-5:15. I am looking for someone who would like to moderate the discussion with the list of panelists below. Ideally, I am looking for a fellow blogger or member of the media who is already covering this topic and can bring their expertise to the discussion. If interested, please e-mail me right away so we can finalize the details.

How Old Media Is Embracing Online Video and New Media
This session will discuss how converging media technologies are redefining traditional distribution methods, how interactive and on-demand services are changing, and how entertainment and news video is being consumed. Come hear from some of the leading publishers, broadcasters, and advertisers about the impact that video and new media are having on their business models.

  • Panelist: Evan Young, Director of Broadband Services, TiVo
  • Panelist: Evan Hansen, Editor In Chief, Wired.com
  • Panelist: Dan Goldman, Executive Director, thirteen.org, Thirteen/WNET
  • Panelist: Stephen Chao, CEO, Co-Founder, WonderHowTo.com

If you are a fellow blogger or member of the media and can’t moderate this session, but would like to attend, press registration is open.

Patent Details Emerge In Level 3’s Suit Against Limelight Networks

While I have not yet seen any detailed documentation or records filed with the court regarding Level 3's patent infringement suit against Limelight Networks, with trial slated to start on October 14th, I have been able to confirm that the Level 3 patents at the heart of the suit are 7,054,935; 6,654,807; and 6,473,405.

Patents 807 and 935 talk to the same abstract, which is: "Resource requests made by clients of origin servers in a network are intercepted by reflector mechanisms and selectively reflected to other servers called repeaters. The reflectors select a best repeater from a set of possible repeaters and redirect the client to the selected best repeater. The client then makes the request of the selected best repeater. The resource is possibly rewritten to replace at least some of the resource identifiers contained therein with modified resource identifiers designating the repeater instead of the origin server."

Patent 405 talks to the same idea of routing traffic to the best source through a selection process. While the abstract of patent 405 is similar to patents 807 and 935, it is more detailed and talks to measuring traffic on the network, stating "…. is based on real-time measurement of costs associated with the alternative paths, in response to a user request for transmission of message data to a destination on the network. Cost metrics include delay, throughput, jitter, loss, and security."

Reading through the filings, I notice that patents 807 and 935 specifically talk to HTTP delivery, so one has to wonder whether any of Limelight's traffic that is not delivered via HTTP, for instance video streamed via RTSP or RTMP, would fall under the patents at all. It could be a similar situation to the Akamai patent suit, where only a portion of Limelight's business falls under the technical description of the patents in question.

All three of the patents were filed in 2001 or 2002, which places them in the Digital Island/Cable & Wireless time frame. However, it is interesting to note that the 807 and 935 patents list as inventors employees from the Sandpiper days who are now employed at Level 3.

I don't have enough of the details from Level 3 or Limelight, or access to all of the data, to guess whether Limelight is in violation or whether Level 3's patents will hold up in court. And at the slow rate that patent infringement cases move, I don't expect we'll hear any real information one way or another from this case for at least a year or more.

Many of the content delivery related patents going to court these days sound awfully broad, and it's getting harder and harder to decipher exactly what these patents mean, what type of content they cover and what type of data transmission they are referencing. While many content delivery networks have sued one another over the past ten years, none of the suits has yet had a major impact on any one vendor. That could change years from now as the content delivery market truly grows and as more companies, like Level 3, make intellectual property a big part of their strategy.

Note: As my bio states, while I have worked as an expert on various patent suits pertaining to IP-based video, I am not working on any case involving Akamai, Limelight or Level 3. And while I have been asked by firms in the past to work on some CDN-related cases, I have never worked on any lawsuit involving any content delivery network.