Not All “HTTP Streaming” Is Created Equal, Nor Is It Always Actually Streaming

Tim Siglin has an excellent article over on StreamingMedia.com today that explains the differences between delivering video via HTTP from a web server and delivering video via HTTP from a streaming server. And he's dead-on accurate when he writes about how confusing it can be for content owners when CDNs use the term "streaming" to describe video delivered via HTTP. In most cases, I think some CDNs are trying to say that their HTTP delivery services can mimic some of the functionality that streaming provides, but they do a poor job of explaining the differences to the customer. Tim's article gives a clear explanation of the differences between the two and how they relate to the Windows Media and Flash platforms.
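
To make the distinction concrete, here is a minimal sketch of what "streaming-like" behavior over plain HTTP really amounts to. The URL is a placeholder, not any vendor's actual service, and this is only an illustration of the general idea, not how any particular CDN implements it.

```python
# Rough sketch of HTTP "pseudo-streaming": a plain web server only sends file
# bytes, so the closest a player gets to seeking is asking for a byte range.
# The hostname and path below are hypothetical.
import urllib.request

req = urllib.request.Request(
    "http://cdn.example.com/videos/game.mp4",
    headers={"Range": "bytes=1000000-1999999"},  # "seek" by grabbing a byte range
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)                 # 206 Partial Content if the server honors ranges
    print(len(resp.read()), "bytes downloaded")

# A true streaming server (Windows Media Services, Flash Media Server) instead
# keeps a session over a stateful protocol such as RTSP/MMS or RTMP, accepts a
# seek to a *time* offset, and only sends the bits for the moment being watched,
# which is why HTTP delivery from a web server is not the same thing as streaming.
```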

Layoffs Not Affecting All Vendors: CDNs Have Over 200 Open Positions

While some vendors related to the online video industry have started reducing their headcount and laying off part of their staff, vendors in the content delivery industry are still hiring like mad. A quick look at their websites shows more than 200 open positions among the bunch. These openings are no surprise considering how much money the vast majority of the delivery networks have raised and the rate at which they are trying to expand. So if you are looking for a new job in the CDN space, hit up the links below. Not on the list? Send me the link to your open jobs and I’ll add them.

Akamai, 58 openings:
http://www.akamai.com/html/careers/current_openings.html

Limelight Networks, 14 openings:
http://www.jobing.com/cc/limelight-networks120

CDNetworks, 1 opening:
http://www.us.cdnetworks.com/about/careers.php

BitGravity, 25 openings:
http://bitgravity.com/about/careers/

Highwinds, 5 openings:
http://highwinds.com/careers.html

AudioVideoWeb.com, 2 openings:
http://www.audiovideoweb.com/employee_op.html

Mirror Image, 2 openings:
http://www.mirror-image.com/site/company/Careers/tabid/89/Default.aspx

Level 3, lots of openings and at least 12 related to video:
https://recruiting.level3.com/ENG/Candidates/default.cfm?szCategory=JobList&szFormat=search

EdgeCast, 4 openings:
http://www.edgecast.com/cdn_careers.htm

Streaming Media Hosting, 3 openings:
http://www.streamingmediahosting.com/careers.htm

EdgeStream, 1 opening:
http://www.edgestream.com/about_careers.html

Move Networks, 12 openings:
http://www.movenetworks.com/company/careers

Digital Fountain, 7 openings:
http://www.digitalfountain.com/careers.html

Internap, 15 openings:
https://jobs-internap.icims.com/jobs/search?ss=1&searchLocation=&searchCategory=

Panther Express, 4 openings:
http://pantherexpress.com/careers/

Pando Networks, 1 opening:
http://pandonetworks.com/jobs

Velocix, 5 openings:
http://velocix.com/aboutus_careers.php

Voxel.net, 5 openings:
http://voxel.net/about/jobs

AT&T, tons of openings and at least 12 related to video:
http://www.att.com/gen/careers?pid=10631#

Vusion, 4 openings:
http://vusion.com/company/jobs/

BitTorrent, 1 opening:
http://www.bittorrent.com/company/jobs/

Amazon, at least 4 jobs related to AWS:
http://www.amazon.com/Careers-University-Recruiting/b?ie=UTF8&node=203348011

Grid Networks, 1 opening:
http://gridnetworks.com/about/careers

NFL’s Live Streaming Leaves A Lot To Be Desired: Capping Users, Poor Video Quality

When the NFL announced it would be streaming seventeen games this year on NFL.com and NBCSports.com, dubbed "Sunday Night Football Extra", many were excited to see what kind of video offering the NFL had in store for its fans. With the NFL having now completed nearly half of its broadcasts, the user experience has been anything but what I would call a quality offering, for a variety of reasons.

For starters, I can't figure out why the NFL puts people into a waiting room. In a conversation with the NFL yesterday, they explained that in order to provide the best user experience, they are limiting the number of users who can watch the game at the same time. They also commented that since the wait time is usually very short (I waited between three and five minutes this past Sunday), it's not that big of a deal. But the question is, why cap the number of people at all? Through the first four broadcasts, each NFL game averaged around 125,000 total unique viewers online, which works out to about 50,000 simultaneous users at any given time. Considering the NFL is streaming the games across two content delivery networks (CDNs), Limelight and Akamai, why the limitation? Akamai and Limelight combined can clearly support way more than fifty thousand streams, and capping users does not provide any better experience for those who are already watching. Is the NFL simply trying to reduce the cost of broadcasting the games by capping users and keeping its bandwidth bill lower? To me, the rationale for capping users just doesn't make sense.
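
For anyone wondering how 125,000 uniques turns into roughly 50,000 concurrent viewers, here is the back-of-the-envelope math. The broadcast length and average watch time below are my assumptions for illustration, not NFL figures.

```python
# Concurrency is roughly unique viewers times the share of the broadcast each
# viewer actually watches. These inputs are assumptions, not reported numbers.
unique_viewers = 125_000      # average per game, per the figures above
broadcast_minutes = 210       # assume a ~3.5-hour Sunday night broadcast
avg_minutes_watched = 84      # assumed average watch time (about 40% of the game)

concurrent = unique_viewers * avg_minutes_watched / broadcast_minutes
print(f"~{concurrent:,.0f} simultaneous viewers")   # roughly 50,000
```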

In addition to the strange capping policy, the video feed is also being encoded with less than optimal settings that reduce the quality rather than improve it. The NFL confirmed that the encoding is multi-bitrate, up to 980Kbps, and I have confirmed from others that it is being done in H.264. No problem with the H.264 part, but why is the video letterboxed? They are simply wasting bandwidth and video quality with the black bars at the top and bottom of the video window. And if you remove those bars, the resolution of the actual picture is only 490×280. The typical resolution for a video encoded at up to 980Kbps would be around 640×480. So why is the NFL not converting the broadcast signal correctly before encoding, and why is the window size so small for such a high-quality bitrate? They should be taking the anamorphic feed, stretching it back to its 16:9 ratio, and cropping out the black in the encoder so as not to waste bandwidth.
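
To put a rough number on how much of the frame the letterboxing throws away, here is a quick calculation. The only figure taken from above is the roughly 490-pixel width; the 4:3 frame shape is my assumption, since the NFL has not published its player dimensions.

```python
# Estimate what share of a letterboxed frame is black bars when a 16:9 picture
# sits inside a 4:3 window. Frame shape is an assumption for illustration only.

def letterbox_waste(width, picture_aspect=16/9, frame_aspect=4/3):
    """Return (picture_height, frame_height, fraction_of_frame_that_is_bars)."""
    picture_h = round(width / picture_aspect)   # active 16:9 picture rows
    frame_h = round(width / frame_aspect)       # full letterboxed frame rows
    return picture_h, frame_h, 1 - picture_h / frame_h

pic_h, frame_h, wasted = letterbox_waste(490)
print(f"picture 490x{pic_h}, frame 490x{frame_h}, {wasted:.0%} of the frame is bars")
```

Under those assumptions, about a quarter of every encoded frame is black bars; cropping them before encoding puts those bits back into the picture instead.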

In addition, as many bloggers have stated before, why is there no full-screen option? With a 980Kbps feed, you could get some fairly decent full-screen video quality. But even at the window size they stick to, the video quality for me last weekend was poor. I could not read the numbers at the bottom of the video screen that gave the score, and there was way too much pixelation. And since I was watching on a 20Mbps FiOS connection and a new MacBook Pro, my connection and computer were not the problem.

As some have pointed out previously, switching between camera angles is a pain, as you have to watch a video ad before it switches. So if you want to switch angles in the middle of a play, you are out of luck when it comes to seeing how the play ends. What incentive is there for users to switch between all of the angles? Why offer the different angles as an option if you are going to make switching between them such a bad user experience?

I know that from day one, the NFL has clearly stated that this online video offering is an "experiment" to test the idea of potentially bringing more games online down the road. But even as an experiment, you would think the NFL would want to provide the best quality user experience possible. All they keep saying is that the steps they have taken, like the capping, are meant to improve the "user experience," yet they're doing the exact opposite. What do you think of the NFL's streaming experience to date?

Call For Speakers Now Open For Streaming Media East 09

The call for speakers for the Streaming Media East show, taking place May 12-13, 2009, at the Hilton Hotel in NYC, is now open. The deadline to submit is December 1st, and all speaking requests must be submitted via the online form at: www.streamingmedia.com/east/speakerinfo.asp

I cannot stress enough how important it is to get your submission in on time. Last year, we had over 800 speaking submissions and 110 actual speaking spots. If you are interested in possibly moderating or organizing a session of your own, please contact me immediately.

How To Create A Customized Flash Video Player

For those who wanted to attend the canceled session at the Streaming Media West show entitled "How To Create A Customized Flash Video Player", Adobe has nicely recorded the content from this session at their office and made it available online. You can see the archived presentation on Adobe's website. The presenter, Kevin Towes, has also posted his contact info in the presentation should you have any follow up questions.

Some Venture Capitalists Need To Blame Themselves, Not The Economy

I keep seeing reports, like this one on the New York Times blog, entitled "Venture Capitalists’ Confidence Plummets to an All-Time Low." Of course, many VCs want to simply blame the current economy and point their finger at the global financial troubles as the reason their investments may not be looking so hot. And while some VCs are justified in saying that, it seems to me that the vast majority of them should be blaming themselves, not the economy.

I speak to many VCs, and while some of them are very knowledgeable about the market and product vertical they are investing in, many aren’t. Some have no idea how big the market opportunity is, who the major competitors are, or what their product offerings look like for the industry they are investing in. How many times have we seen VCs give a large chunk of money to someone who has no real business model and no business experience, simply because they claim to have really great technology? And I’m not talking about the content delivery business here. Think about what we saw two years ago in the UGC market, or what we’ve seen with the number of "Internet TV platforms" that have raised a lot of capital but have almost no revenue.

Lately, companies have been coming out of the woodwork approaching me about new compression technology they have, new codecs, etc., all of whom have already raised lots of money but literally have no idea how they are going to turn their technology into a business. All they want to talk about is their technology, without understanding that it is worth nothing if they don’t have a way to monetize it. Or they say things like, "my technology is going to compete with Flash." Ok, good luck.

Of course, we have seen this before. Many VCs are making multiple bets and hoping some of their investments pay off to help cover the ones that don’t. I understand that is how the game is played. But the idea that many VCs simply point to the economy as the sole reason why they have no confidence in any industry is wrong. VCs need to start doing a better job of truly understanding the market opportunity and competitive landscape they are investing in. The most common responses I get from VCs when I ask them why they invested in a particular company are usually, "the founders have great technical backgrounds," "many of the executives have PhDs," or "they have the best technology we have seen." What about the business experience of the executives, the market opportunity for the product or service, and the business model for the company?

Akamai & Limelight Say Testing Methods Not Accurate In Microsoft Research Paper

Last week, I posted about a new technical research paper entitled "Measuring and Evaluating Large-Scale CDNs" that was put out by the Microsoft Research division and the Polytechnic Institute of NYU. The purpose of the study was to conduct extensive and thorough measurements to compare the network performance of Akamai and Limelight Networks.

Some noticed that I did not take any stance either way on the findings of the paper, mainly because I am not a network engineer and don’t pretend to know everything that is involved in properly testing a network. That being said, both Akamai and Limelight Networks responded to my requests to review the paper and provided me with their comments. Both agreed that there is a lot more to properly testing a network than just the two aspects of CDN performance the paper looked at. Limelight has posted their response to the paper on their blog, and Akamai’s response is as follows.

Based on our internal review of the whitepaper, we believe that there are a number of stated conclusions that are incorrect. These include:

1. Akamai is less available
This conclusion is false. The researchers tested the responsiveness of a single server and group of servers independent of our mapping system – note that this is not the same as measuring the general availability of Akamai’s content delivery services. Because Akamai’s software algorithms will not direct traffic to unresponsive machines or locations, all their conclusion really points out is that a portion of our network is not in use at any time. (This may be due to hardware failure, network problems, or software updates.) We believe that any measurement of availability must take into account Akamai’s load balancing, and that if specific IPs are being tested, then the researchers are not doing so.

2. Akamai is harder to maintain
This conclusion is also false. While Akamai has more locations, and more machines, the power of the distributed model with automatic fault detection means that Akamai does not have to keep every machine or location up and running at all times.  It is incorrect to infer from the fact that some servers are down that Akamai’s maintenance costs are higher.

3. With marginal additional deployments, Limelight could approximate Akamai’s performance
We believe that this conclusion is also false. In our opinion, the research team’s performance testing methodology likely overstates Akamai’s latency numbers. This is because any of our server deployments in smaller ISPs that do not have an open-resolver nameserver would have been missed in their discovery process. It is important to note that these are also the locations where we get closest to the end users.  If those locations were discovered by their research, we would expect the average latency numbers derived from the measurements to be lower. If they are missing some of our lowest latency deployments, then naturally the average, median, 90th and 95th percentiles will change for the better. Because these deployments are the best examples of our "deploy close to the end user" strategy, missing them affects our results more than it would Limelight’s. The networks most likely missed are either smaller local ISPs in the U.S. and EU, or providers in specific countries. These are exactly the places where we’d expect Akamai to have very low latency, but Limelight to have higher latency (especially in Asia, etc.) As such, we believe that the research team’s measurement method ultimately under-represents our country, "cluster", and server counts because they missed counting these more local deployments that do not have open-resolver nameservers.

4. After testing akamaiedge.net, they concluded that Akamai uses virtualization technology to provide customers with isolated environments (for dynamic content distribution)
This conclusion is false. The akamaiedge.net domain is used for Akamai’s secure content delivery (SSL) service, used by WAA and DSA. While these services do accelerate dynamic content, Akamai is not using virtualization technology to provide customers with isolated environments – ultimately, the research team reached an incorrect conclusion after observing how we handle hostname to IP mapping for secure content. The measurements done also concluded that akamaiedge.net servers were in a subset of locations as compared to the larger Akamai network – this is correct, as our SSL servers are hosted in extremely secure locations.

Furthermore, while the akamaiedge.net network is in fewer locations than the akamai.net network, it is still in more locations than Limelight’s entire network. In addition, the measurements done for this network also under-counted the number of servers and locations. Finally, the whitepaper did not provide figures on CDN delay for this network, only DNS delay.

It is important to reinforce that the "per server" and "per cluster" uptime and availability measurements in the whitepaper that show Limelight as more "available" bypassed Akamai’s mapping system. As such, even if our mapping system never would have sent traffic to a location, they are counting us as unavailable. 

Having a more distributed model (as Akamai does) de-emphasizes the importance of any one location, so much so that we can have entire locations down without impacting performance. Similarly, the researchers don’t sufficiently consider the penalty associated with an unavailable Limelight cluster. One down location in Japan, when it is the only region in Japan, would ultimately have a much greater performance impact than having one of 20 locations in Japan become unavailable.

Additionally, it is also important to reinforce that the research performed did not measure general performance of Akamai’s services (as we would do for a customer trial), but rather DNS lookup delays, and the delay to reach the server selected by Akamai’s mapping system – these are only two components of a full performance measurement. By unintentionally filtering out many of the best examples of our "deploy close to end user" strategy, the research team has grossly misrepresented our availability numbers and also over-estimated our latency.
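
To give readers a concrete sense of the distinction Akamai is drawing between probing a fixed server and measuring what the mapping system actually serves, here is a minimal sketch of the two kinds of probes. The hostname and IP are placeholders, and this is not the paper's actual methodology, just an illustration of the difference.

```python
# Two ways to probe a CDN edge, timed via a TCP handshake (a rough RTT proxy).
# The hostname and IP below are placeholders for illustration only.
import socket
import time

def connect_time(host_or_ip, port=80, timeout=3.0):
    """Return the TCP connect time in seconds to one address."""
    start = time.monotonic()
    with socket.create_connection((host_or_ip, port), timeout=timeout):
        return time.monotonic() - start

# 1) Let the CDN's DNS-based mapping choose the edge: unhealthy or drained
#    servers are never handed out, so this reflects what a real user sees.
hostname = "cdn-customer.example.com"
mapped_ip = socket.getaddrinfo(hostname, 80)[0][4][0]
print("mapped edge:", mapped_ip, round(connect_time(mapped_ip), 3), "s")

# 2) Probe one fixed IP from a pre-discovered server list: if that box happens
#    to be down for maintenance, the probe fails even though the mapping system
#    would never have sent traffic to it in the first place.
fixed_ip = "192.0.2.10"   # placeholder from a hypothetical discovered list
try:
    print("fixed edge:", fixed_ip, round(connect_time(fixed_ip), 3), "s")
except OSError:
    print("fixed edge unreachable -- counted as 'unavailable' in that methodology")
```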