Q&A With John Dillon, CMO of Hybrid CDN Velocix

Following up on my post last week about hybrid CDN Velocix, I spent some time chatting with their CMO John Dillon about the company’s hybrid offering, what he’s seeing from P2P customers in Europe, and how that compares with the U.S. market.

Question: When do you expect the company to be profitable?

John: We don’t make any forward-looking statements about our financial position. Velocix is privately held and backed by two of Europe’s leading venture capital firms – 3i and Amadeus.

(Note from Dan: To date, Velocix has raised just over $40 million and is not yet profitable. I estimate they will do between $6-9 million in revenue for 2008.)

Question: How is the European market for P2P services different from the U.S. market?

John: Outside of the UK, the understanding of and level of interest in the use of P2P technology for delivery of legitimate commercial services is fairly consistent. It is widely accepted that this technology will play a fundamental role in shaping the future of online video. The reality is that the majority of the accounts we have today are interested in P2P but are using our traditional http and streaming services. Right now, they just want to get launched so that they can start to build an audience.

The benefits of hybrid P2P are not always obvious at the outset. It is only when online services begin to gain traction with significant audiences that the cost of delivery and scalability become significant factors. This is when P2P begins to look increasingly attractive as an option. In the UK, however, there is a micro-climate around a number of the major broadcasters. The BBC, C4 and Sky have all successfully launched P2P download-based catch-up TV services.

Question: What percentage of your revenue comes from the U.S. today, and how do you expect that to grow moving forward?

John: Our business splits out at approximately 40% U.S., 50% EMEA (Europe, Middle East, Africa), 10% Asia Pacific. The US is a key growth market for us. As a European headquartered company, it is fair to say that we currently have a stronger market presence in our home market than elsewhere. However, we have just secured a number of key strategic wins in the U.S. and will be looking to accelerate our growth plans in this geography off the back of these deals.

(Note from Dan: Some of these U.S. based wins John references are significant and this is not a case of the CMO just giving marketing speak. I’ll detail some of these wins at a later date, when I am allowed to talk about them.)

Question: Why do you think so many broadcasters in the UK have started to use P2P in some form, but no major broadcasters in the U.S. have adopted it as of yet?

John: I referred to this phenomenon earlier as a micro-climate surrounding leading broadcasters in the UK. The reality is that these guys were the pioneers. There were few if any other examples for them to follow at the time. They were blazing a new trail. In late 2005, the BBC began what was, at the time, one of the first commercial P2P trials. The first to launch, however, was UK satellite TV provider Sky, with their Sky by Broadband service, subsequently re-branded as Sky Anytime. Next to launch was Channel 4 with its 4oD catch-up service. Finally, and arguably most significant of all, was the launch of the BBC iPlayer service last year, augmented with streaming services this past Christmas.

I’m guessing that in these early days, ideas and plans were shared and they all ended up taking a very similar approach. What is interesting to note is that, looking forward, they plan to officially collaborate, learning from their collective experiences to date to unify their approach, with a project announced and code-named Kangaroo.

Question: What is the barrier to entry for CDNs to make their stand-alone CDN offering a hybrid one, and what is the cost/development involved?

John: A number of traditional CDNs have made noises about hybrid P2P. Some have made technology acquisitions and others have formed strategic partnerships. Few have actually launched commercially available services, however, and little if any focus or marketing is evident. This is most likely due to both economics and technology.

First, economics. Say, for example, a major customer of a traditional CDN provider could achieve 30% peer efficiency (30% of delivery served from peers rather than from CDN caches) on average. This would be a 30% reduction in revenue and a significant reduction in profit contribution for the CDN. Significant market uptake would challenge quarterly-driven, publicly quoted CDN providers, placing intense pressure on existing business models and cost structures.
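The revenue math John describes is straightforward; here is a sketch with hypothetical numbers (the 30% peer-efficiency figure is from his example, while the traffic volume and per-GB price are purely illustrative):

```python
# Illustrative only: revenue impact of peer efficiency on a traditional CDN.
# Assumes the CDN bills per GB delivered from its own caches, while bytes
# served by peers generate no revenue. All figures below are hypothetical.

def cdn_revenue(traffic_gb: float, price_per_gb: float, peer_efficiency: float) -> float:
    """Revenue from the bytes the CDN itself delivers (peers serve the rest)."""
    return traffic_gb * price_per_gb * (1.0 - peer_efficiency)

traffic_gb = 1_000_000    # hypothetical monthly traffic for one large customer
price_per_gb = 0.25       # hypothetical per-GB price in dollars

before = cdn_revenue(traffic_gb, price_per_gb, 0.0)   # pure CDN delivery
after = cdn_revenue(traffic_gb, price_per_gb, 0.30)   # 30% served by peers

print(f"revenue falls from ${before:,.0f} to ${after:,.0f}")
```

Same traffic delivered, 30% less billable — which is exactly why a public CDN built on per-GB pricing has little incentive to push hybrid P2P.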

From a technological perspective, bolting a P2P client network onto an http caching infrastructure is clunky at best, with caches providing “fill-in” via http byte-range requests. Custom routing and delivery logic and algorithms are required at both the client and server end, to force the network into performing unnatural acts to fulfill the delivery requirement. Hybrid P2P requires a company to have the appropriate business model and a network architected in a fundamentally different way. It took us roughly 18 months to build out our network, so barriers to entry are significant.
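To make the “fill-in” idea concrete, here is a minimal sketch (not Velocix’s or anyone else’s actual implementation) of how a hybrid-P2P client might fall back to an http cache for pieces the swarm failed to deliver, using standard byte-range requests; the piece size is an assumption for illustration:

```python
# Sketch: computing the Range header a hybrid-P2P client would send to an
# http cache to "fill in" pieces missing from the swarm. The HTTP Range
# header uses inclusive byte offsets: "Range: bytes=start-end".

PIECE_SIZE = 256 * 1024  # hypothetical piece size: 256 KiB

def range_header(piece_index: int, file_size: int) -> str:
    """Build the Range header value for one missing piece (end is inclusive)."""
    start = piece_index * PIECE_SIZE
    end = min(start + PIECE_SIZE, file_size) - 1
    return f"bytes={start}-{end}"

# A 1 MiB file where the swarm failed to deliver pieces 1 and 3:
file_size = 1024 * 1024
for idx in (1, 3):
    print(range_header(idx, file_size))
# bytes=262144-524287
# bytes=786432-1048575
```

The clunkiness John mentions comes from everything around this: the cache has no notion of pieces, so the client-side logic must map swarm state onto byte ranges and stitch the results back together.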

Question: Please explain a little bit about how your network was built to support P2P from day one and how that is different from the other providers.

John: From the outset, we wanted to create a CDN optimized for delivery of large assets such as video, software and games. We realized very early on that the http protocol is not great for this. Http is ideal for serving web pages where connections are maintained for a few seconds. Even if a request fails, simply clicking refresh is acceptable and usually fixes the problem. There are two fundamental limitations:

1) For delivery of larger assets, like video for example, connections can last anywhere from a few minutes to several hours. Clicking refresh for a failure midway through is not an acceptable option. Http is a single-source protocol that represents a single point of failure.

2) CDNs essentially replicate popular content on cache servers located around their networks. This is relatively straightforward for small files, but becomes problematic for larger multi-gigabyte files. A significant shift in traffic profile can blow existing CDN routing and caching algorithms out of the water! Distributing and storing multi-gigabyte files on a caching server network is a major operational challenge, particularly for traditional CDN providers who created and optimized their networks for website acceleration.

What is interesting to note is that http limitations are essentially where P2P’s strengths lie. P2P is a proven technology optimized for delivery of massive files over time, rather than web pages in a few seconds. P2P protocols are designed to take content from multiple source locations rather than a single source, eliminating a single point of failure. Also, rather than the entire file being the smallest unit of currency, P2P slices large files into thousands of pieces, making them much easier to propagate across highly distributed networks.
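The piece-splitting idea John describes can be sketched in a few lines; the piece size and use of SHA-1 here are illustrative assumptions (this is the general BitTorrent-style approach, not Velocix’s specific scheme):

```python
# Sketch of piece-splitting: a large file is cut into fixed-size pieces,
# each hashed so a copy fetched from any peer can be verified on its own.
# Piece size and hash algorithm are hypothetical choices for illustration.

import hashlib

PIECE_SIZE = 64 * 1024  # 64 KiB pieces (hypothetical)

def split_and_hash(data: bytes) -> list[tuple[bytes, str]]:
    """Return (piece, sha1 hex digest) pairs for each fixed-size piece."""
    pieces = []
    for start in range(0, len(data), PIECE_SIZE):
        piece = data[start:start + PIECE_SIZE]
        pieces.append((piece, hashlib.sha1(piece).hexdigest()))
    return pieces

data = bytes(200 * 1024)      # a 200 KiB dummy "file"
pieces = split_and_hash(data)
print(len(pieces))            # 4 pieces: 64 + 64 + 64 + 8 KiB
```

Because each piece verifies independently, pieces can arrive from many sources in any order — which is precisely what lets the small units "propagate across highly distributed networks."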

These observations served as the design goal and fundamental architectural principle for the build-out of our CDN. Within our network, our cache and storage servers communicate using P2P protocols. All routing and management intelligence is based on P2P principles. We essentially have a high performance P2P Cloud network, where the peers are high performance cache servers. 

When a file is requested, our network uses sophisticated cache selection algorithms to identify a number of suitable cache servers for the delivery. This selection is made using both performance and economic criteria to maintain the required delivery speed at the lowest possible cost. These delivery cache servers communicate with each other and also “chatter” with other servers on our network, to make sure they have the content required to service the delivery need. The delivery process dynamically blends content and bandwidth from the selected cache servers, ensuring that the resulting bitrate meets committed service levels. If at any time performance from a delivery cache server degrades, either the others up their output or an alternative cache server is brought on stream.
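The selection logic John outlines — meet the committed bitrate at the lowest cost — can be sketched as a toy greedy algorithm. This is my illustration of the idea, not Velocix’s actual algorithm, and the server names, capacities, and costs are all hypothetical:

```python
# Toy sketch of cost-aware cache selection: pick the cheapest subset of
# cache servers whose combined bandwidth covers the committed bitrate.
# Velocix's real algorithm is not public; this only illustrates the idea.

def select_caches(servers: list[tuple[str, float, float]],
                  required_mbps: float) -> list[str]:
    """Greedily add the cheapest servers until the target bitrate is met.

    servers: (name, capacity_mbps, cost_per_mbps) tuples, all hypothetical.
    """
    chosen, capacity = [], 0.0
    for name, mbps, cost in sorted(servers, key=lambda s: s[2]):
        chosen.append(name)
        capacity += mbps
        if capacity >= required_mbps:
            return chosen
    raise RuntimeError("not enough aggregate capacity to meet the SLA")

servers = [("edge-a", 4.0, 0.05), ("edge-b", 3.0, 0.02), ("edge-c", 5.0, 0.08)]
print(select_caches(servers, 6.0))  # ['edge-b', 'edge-a'] (cheapest first)
```

The failover John describes would then be a re-run of this selection whenever a chosen server’s measured throughput drops, with the remaining servers covering the gap in the meantime.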