Netflix & Level 3 Only Telling Half The Story, Won’t Detail What Changes They Want To Net Neutrality

Late yesterday, Netflix’s CEO published a blog post making the argument that “stronger” net neutrality is needed and that Netflix will “fight for the Internet”. This follows a post earlier in the week from Level 3 in which they said some ISPs are playing “chicken” with the Internet and purposely causing “congestion”. Unfortunately, both companies are only telling half the story on this subject and are leaving out some very important facts. They are also making some statements that are factually inaccurate, contradicting themselves and providing nothing in the way of clarity on the topic. [If you want to know how the interconnection deal between Comcast and Netflix works, at a technical level, with numbers on what it costs, read my other post: Here’s How The Comcast & Netflix Deal Is Structured, With Data & Numbers]

If anything, Netflix, Level 3 and Cogent are doing the opposite and muddying the conversation by using vague, generic and high-level terms, with no definition of what they mean or how they think they should be applied. Even worse, they aren’t detailing any business or technical alternatives on how they think the problem could be solved. They are all doing a lot of posturing and complaining in the media, yet to date, none have outlined any detailed proposal on how they would like to see interconnection relationships regulated or how they want net neutrality rules changed.

Netflix’s main argument in their post is that in this day and age, they feel “stronger” net neutrality rules are required. While that’s their main point and a valid one to make if they want to argue it, they don’t detail how it could be made stronger. They use the term “stronger” twelve times in their post, yet never once define it. Generic phrases aren’t what we need on a topic that is already very confusing. We need clear, concise and well articulated details on how things could change for the better. What exactly does Netflix want to see done and why can’t they come right out and say it? It bothers me that Netflix is telling us that they “will continue to fight for the Internet”, as if they are championing the fight for all of us, but won’t tell us how and what they want as the end result. Netflix does say that ISPs “must provide sufficient access to their network without charge”, but once again doesn’t define what “sufficient access” means or how it should be measured.

What also makes this complicated is that Netflix is only highlighting the things that benefit their argument and isn’t telling the whole story. In some cases, they are also making statements that aren’t accurate. Netflix likes to make it sound like they have no choice when it comes to sending their traffic into ISPs’ networks, when in fact, they have many choices. The transit market is extremely competitive, with at least a dozen major providers who offer transit services at different price points and with different SLAs. Netflix could use multiple providers to connect to ISPs and could also use third-party CDNs like Akamai, EdgeCast and Limelight, who are already connected to ISPs, to deliver their traffic. In fact, this is how Netflix delivered 100% of their traffic for many, many years, using third-party CDNs. Netflix likes to make it sound like there is only one way to deliver videos on the Internet when in fact, there are multiple ways. No one who understands how the Internet works would debate this.

Netflix’s whole argument is that ISPs are purposely letting their peering points get congested, but what Netflix isn’t talking about is some of the stuff they have done behind the scenes to make matters worse. Saturating a peering point can easily be prevented if you buy transit from multiple providers, which Netflix does. But the reason Cogent is the one transit provider we always seem to hear about is because Netflix continued to push their traffic through Cogent even though they knew it was already congested. Even though Netflix was buying transit from multiple providers, it wasn’t routing around capacity issues, like all the other CDNs do. So why did Netflix continue to push their traffic through Cogent even though they knew the link was congested? That practice is abnormal for any CDN, as it impacts the quality of the video being delivered. Remember, no ISP decides how the traffic comes into their network or which transit providers Netflix uses. When you use third-party CDNs, they also buy transit from multiple providers so that they can route traffic in real time around places where there is congestion.
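The routing-around behavior described above can be sketched in a few lines. All provider names, capacities and loads below are hypothetical, and real CDNs make this decision with live telemetry and routing policy rather than a static table; this only illustrates the idea of steering traffic toward the link with the most headroom:

```python
# Illustrative sketch: a content source that buys transit from several
# providers can steer traffic away from a saturated link. All numbers
# below are hypothetical, in Gbps.

def pick_transit(links):
    """Pick the provider with the most spare capacity (capacity - load)."""
    return max(links, key=lambda name: links[name]["capacity"] - links[name]["load"])

# Hypothetical state of three transit links during peak viewing hours.
links = {
    "Provider A": {"capacity": 100, "load": 99},  # saturated -- avoid
    "Provider B": {"capacity": 100, "load": 60},
    "Provider C": {"capacity": 80,  "load": 30},  # most headroom
}

print(pick_transit(links))  # "Provider C" has 50 Gbps of headroom
```

The point is simply that when one link is full, a multi-homed sender has the option to shift bits elsewhere in real time.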

In reality, the blame could fall on Netflix for continuing to send traffic over a link they know is congested, when alternatives exist in the market. The blame could also fall on the transit provider who sold Netflix capacity it knew it couldn’t deliver based on its current peering arrangements. It could also be that the broadband service provider isn’t upgrading a link that is still under peering compliance, per their peering policy. Laying the blame on the ISP isn’t always fair or accurate. It may be at times, but the company that should be blamed will be different depending on the business situation of the companies involved.

So as much as Netflix wants to make this into a net neutrality issue, it’s a business issue. Netflix has alternatives; they chose not to use them. One could also argue that, by Netflix not routing around the performance issues with Cogent, the end result is that it forces the ISP to take angry calls from consumers. And if the ISP gets enough of those calls, maybe the ISP would then agree to join Netflix’s Open Connect program and allow Netflix to come into the ISP’s last mile to place their own servers. Netflix’s motive in this whole argument is to protect their business, which is fine, but then they should not portray their argument as one where they are “fighting for the Internet”.

In Netflix’s post they said that Cablevision is “already practicing strong net neutrality”, but neglect to remind us that Cablevision is in Netflix’s Open Connect program. So is that what Netflix defines as “strong” net neutrality, any ISP that agrees to Netflix’s terms? And those that don’t join Open Connect and don’t agree to Netflix’s terms, are those the ISPs that have “weak” net neutrality principles? To me, that sounds more like Netflix defining who is “strong” or “weak” based on Netflix’s own business terms and not true net neutrality. Netflix also calls out Comcast as “supporting weak net neutrality”, but the fact is Comcast is the only ISP that legally even has to follow the rules, due to their purchase of NBC.

Netflix also mentions in their post that strong net neutrality prevents ISPs from charging a toll for interconnection to services like “Netflix, YouTube, or Skype” and intermediaries such as “Cogent, Akamai or Level 3”. So why aren’t Google, Akamai, Yahoo!, AOL, Facebook, Twitter, Microsoft, Apple, Limelight Networks, EdgeCast and other CDNs and content owners who have built their own CDNs also complaining as Netflix is? Netflix is the only major content owner whom we have heard from, in a public forum, that thinks interconnection should be covered under net neutrality rules. It’s interesting to note that Netflix mentions YouTube by name, but the vast majority of Google’s content is already delivered inside the last mile via GGC (Google Global Cache). Not every ISP, but the majority of them, have been happy to work with Google and place Google caches within their network. So one has to ask why Google has had success working with ISPs, but Netflix hasn’t.

Moving on to Level 3’s blog post, they too use a lot of generic statements saying they want “reasonable terms” when it comes to getting capacity from ISPs, but don’t define what that means. Level 3 could outline technical alternatives, like bringing the content further into the last mile at their own cost, but to date, they haven’t proposed such a plan. Level 3’s post says that the way the Internet works, “providers must spend money and connect their networks together.” So on one hand they outline how it needs to be done, but then also argue no payment should be made.

If you thought that was confusing, just think about this. When Cogent wanted to send more traffic into Level 3’s network than Level 3 was sending in return, Level 3 told Cogent it had to pay. Level 3 said, “There are a number of factors that determine whether a peering relationship is mutually beneficial. For example, Cogent was sending far more traffic to the Level 3 network than Level 3 was sending to Cogent’s network. It is important to keep in mind that traffic received by Level 3 in a peering relationship must be moved across Level 3’s network at considerable expense. Simply put, this means that, without paying, Cogent was using far more of Level 3’s network, far more of the time, than the reverse. Following our review, we decided that it was unfair for us to be subsidizing Cogent’s business.” You read that right. Level 3 thinks Cogent should have to pay them, because the balance of traffic isn’t equal, but Level 3 doesn’t think it should have to pay Comcast, even though that traffic is also lopsided. Confused yet?

Level 3 did talk about wanting to implement a peering system based on “bit miles”, but that’s a new and unknown unit of measure with no common industry definition that I can find. Level 3 proposes that many of its CDN bits travel very few bit miles because they offer to place CDN servers in or near ISP networks, so there is very little burden on the ISP. That makes sense on paper, but it ignores the whole balance of trade or investment value proposition on which peering is based. The CAPEX and OPEX differentials between building and growing a broadband network and building a CDN are immense. For the bit-miles approach to gain traction, it would likely require an organization like the IETF to define precisely what bit miles are, how they are measured and how an exchange of traffic could take place using such an approach. Defining a new measurement, basing new rules off of it and trying to enforce those rules without industry acceptance seems impossible.
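To make the idea concrete, here is a rough sketch of how a bit-mile calculation might work. Since the industry has no agreed definition, the formula and every number below are my own illustrative assumptions, not anything Level 3 has published:

```python
# Hypothetical sketch of a "bit mile": traffic volume weighted by the
# distance a network carries it. No standard definition exists; this is
# one plausible interpretation with made-up numbers.

def bit_miles(gigabits: float, miles: float) -> float:
    """Traffic carried (Gb) weighted by distance traveled on a network (miles)."""
    return gigabits * miles

# If a CDN node sits 10 miles from the ISP's edge, the CDN carries the bits
# a short distance, while the ISP may still haul the same bits hundreds of
# miles across its own footprint to reach subscribers.
cdn_burden = bit_miles(1000, 10)    # 1,000 Gb carried 10 miles  = 10,000 bit-miles
isp_burden = bit_miles(1000, 400)   # same bits carried 400 miles = 400,000 bit-miles
print(isp_burden / cdn_burden)      # 40.0 -- the same traffic, very different burdens
```

Even this toy version shows why the metric is contentious: the answer depends entirely on where you measure the miles, which is exactly what a standards body would have to pin down.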

My goal when writing about this subject is to try and bring some transparency to what is taking place, which is hard to do when the companies involved want to be vague and all have business motives behind their decisions. This whole debate is not going to move forward until these companies start detailing what they actually want, the real impact it will have on their business (with numbers) and give specifics on exactly what should be regulated and how it should be measured, monitored and priced. Until that happens, all they are doing is confusing the media, consumers and legislators, and keeping this debate from moving forward.

Any company that wants new rules, regulations and the government to get more involved in their business, without defining what they mean, had better be ready to deal with laws that might get passed in the end that they didn’t actually want. In my opinion, these companies that are asking for regulation, but not providing the details on what they really want, are asking for potential trouble. There are a lot of smart people at these companies and all of them should have already proposed detailed alternatives on how this could be solved. I know that some of them plan to make additional filings today, but from what I have already seen and heard, none of these filings will detail or outline any real alternatives to the situation.

[My first post on the deal can be found here: Inside The Netflix/Comcast Deal and What The Media Is Getting Very Wrong]

Netflix’s CEO Publishes Post Calling For “Strong Net Neutrality”

Netflix’s CEO Reed Hastings just published a post on Netflix’s blog saying the Internet needs “strong net neutrality” and explains his case as to why. Problem is, he does not suggest any alternatives on how to fix the problem. I’ll have more thoughts on the topic in a follow up blog post shortly.

Global Licensed/Managed CDN Market To Reach $60M In 2014

In addition to my role at StreamingMedia.com, I’m also a Principal Analyst at Frost & Sullivan and I’ve recently released my latest report on the size of the “Global Licensed/Managed CDN Market”. The report details the market drivers, restraints to market growth, product and pricing trends, competitive landscape, and market forecasts and trend analysis broken out by region of the world for the next four years.

Outside of commercial deployments, nearly all large scale telcos have already deployed or are building out CDNs internally to handle the flow of video across their network. It’s a move they all had to make for cost savings and quality of experience (QoE) benefits, and while most of them are simply deploying boxes from major hardware vendors or building it on their own, some telcos are working with vendors who offer a licensed and/or managed CDN offering, so they don’t have to start from scratch.

While there was an uptick in the number of Licensed CDN (LCDN)/Managed CDN (MCDN) deals in 2013, the market opportunity is very small, at just over $60M globally this year. The market for LCDN/MCDN services will never be large, and only a few vendors even offer the service, with most of them doing it for ancillary benefits to their core business rather than to generate significant revenue from licensing the software or doing a managed CDN build-out. The LCDN/MCDN model is still evolving and its success as a stand-alone offering is still to be determined, but LCDN/MCDN is starting to become more of a check box for CDN service providers to help strengthen their core business.

The key takeaways from the report are:

  • Most telcos and carriers are not expected to offer commercial CDN services, but will likely leverage LCDN/MCDN for internal CDN deployments
  • Very few vendors offer a true LCDN/MCDN solution; those who do, provide it as it offers ancillary benefits to their core business
  • LCDN/MCDN is a small market today, just over $60M in total revenue with at most, two dozen or so deployments
  • LCDN/MCDN offerings are mostly being sold by commercial CDNs, but over time, should be offered by hardware vendors already selling to telcos and carriers
  • It won’t take long for LCDN/MCDN to go away as a stand-alone offering and instead, be sold as part of a larger solution set

Copies of the report are available to any customer who has a subscription to Frost’s Digital Media research service, and anyone interested in getting a subscription can contact me for more details. Also, while many research analysts at other firms won’t talk to someone unless they are a customer of that firm, I always have and always will talk to any company interested in getting more details on any aspect of the video, streaming and content delivery ecosystem. You don’t have to be a customer of Frost & Sullivan for me to take your call and do a briefing with you, so call anytime.

How The Olympics Were Streamed Online: Q&A With Microsoft & iStreamPlanet

Last week StreamingMedia.com hosted a webinar with Microsoft and iStreamPlanet talking about the backend technology and platforms that powered the online streaming of the Winter Olympics. During the Q&A portion of the event, the companies gave out details on server technology, protocols, encoding bitrates, transport services and other pieces of information. Below are their answers to some of the most frequently asked questions during the webinar.

Question: How many peak concurrent connections were recorded during the Sochi Olympics?
NBC Sports hasn’t officially confirmed their peak concurrent viewership numbers, but the New York Times did report that the USA vs. Canada men’s hockey game viewership peaked at 850,000 concurrent viewers before the end of the game.

Question: What was your highest & lowest bit rate per device?
The highest bit rate was 3.5 Mbps (1280×720) and the lowest bit rate was 200 kbps (340×192). Alex Zambelli’s blog contains additional information about the Olympics encoding specifications.

Question: How many total GBs of live video were delivered from opening to closing ceremonies?
NBC Sports reported that 10.8 million hours of streamed video were consumed during the Sochi Winter Olympics, of which 80% were live streams consumed via NBCOlympics.com and NBC Sports Live Extra apps. Those numbers are cumulative and NBC hasn’t officially stated how many hours or gigabytes of video were actually delivered.
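As a rough back-of-the-envelope illustration of why viewing hours don’t translate directly into a byte count without more information, here is the conversion under an assumed average bitrate. The 2 Mbps figure below is my assumption for an average across the bitrate ladder, not an NBC number:

```python
# Back-of-the-envelope conversion: viewing hours at an assumed average
# bitrate -> terabytes delivered. NBC never published actual byte totals,
# so the average bitrate here is a guess for illustration only.

def hours_to_terabytes(hours: float, avg_mbps: float) -> float:
    """Viewing hours at a given average bitrate (Mbps) -> terabytes delivered."""
    megabits = hours * 3600 * avg_mbps   # hours -> seconds, times Mbps = megabits
    return megabits / 8 / 1_000_000      # megabits -> megabytes -> terabytes

# 10.8M reported viewing hours at an assumed 2 Mbps average:
print(round(hours_to_terabytes(10_800_000, 2.0)), "TB")  # 9720 TB
```

The real total depends heavily on the actual mix of bitrates viewers received, which is why the hours figure alone can’t answer the question.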

Question: What is the latency you have for the live streaming?
Latency depends on multiple variables in the workflow, including the geographical distance between the content acquisition and the content delivery, the encoding buffer size, the duration of the media segments (fragments), CDN caching and edge structure, and the player buffer duration. Typical end-to-end latency observed in HTTP-based adaptive streaming can range anywhere between 15 seconds to over a minute.
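To see how those stages can add up to the 15–60 second range, here is a simple sketch with plausible, not measured, values for each stage of an HTTP adaptive streaming workflow:

```python
# Illustrative breakdown of live-stream latency sources in HTTP adaptive
# streaming. Every value is a plausible assumption, not a measurement from
# the Olympics workflow.

latency_sources = {
    "acquisition + encoder buffer": 5.0,          # seconds
    "segment duration (packager finishes a segment)": 6.0,
    "CDN caching / edge fetch": 2.0,
    "player buffer (often ~3 segments deep)": 18.0,
}

total = sum(latency_sources.values())
print(f"~{total:.0f}s end-to-end with these assumptions")  # ~31s
```

Shrinking any one stage (shorter segments, smaller player buffers) reduces the total, which is the basic trade-off behind low-latency streaming work.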

Question: Why use Flash Player and not Silverlight since you guys used Windows Azure Media Services?
The customer will often determine the playback platforms and formats, and in this case NBC Sports chose Adobe Primetime and Flash for the client experiences. Both Aventus and Windows Azure Media Services are capable of delivering live video in multiple HTTP-based adaptive streaming formats, such as Smooth Streaming, HDS and HLS.

Question: What kind of redundancy was used to ensure that feeds continued streaming despite any technical failures?
There were multiple levels of redundancy built into every aspect of the live video workflow, from the content acquisition to the content delivery. A few examples: content was acquired via IP over fiber networks, but for key events a satellite backup was ready. iStreamPlanet replicated the streams at Switch SuperNAP and sent them to Azure U.S. East and West data centers for geographical redundancy. Finally, within each data center Aventus used full VM redundancy to ensure uninterrupted publishing even in the case of partial channel failures.

Question: Can you give us some detail about the transport services necessary to convey the live streams from ingest to CDN?
The live feeds from Sochi were delivered over a private IP network from Sochi Olympics Broadcasting Services (OBS) to NBC’s facility in Stamford, CT; to iStreamPlanet’s data center at Switch SuperNAP in Las Vegas, NV; and to Microsoft’s Windows Azure U.S. East and West data centers.

Question: What is the orchestration layer used to deploy and configure additional channel instances in the cloud?
Aventus has a built-in orchestration layer to deploy and launch Aventus channels, while the details of the Azure cloud orchestration layer used to launch Windows Azure Media Services channels will be made available when Windows Azure Media Services officially launches their Live service.

Question: How much consideration is given to content protection/DRM and how is it achieved?
Our customers give a lot of thought to content protection and content security since most of the content we stream is considered premium content. Aventus supports PlayReady DRM and AES-128 encryption although NBC Sports chose not to use these technologies for live streaming the Olympics. NBC Sports used CDN token authentication in addition to TVE authentication to prevent unauthorized access to video streams.
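For readers unfamiliar with CDN token authentication, here is a generic sketch of how a signed, expiring URL scheme typically works: the origin signs the path plus an expiry with a secret shared with the CDN, and the edge recomputes the signature before serving. The secret, parameter names and paths below are illustrative, not NBC’s actual implementation:

```python
# Generic sketch of CDN token authentication (signed, expiring URLs).
# Secret, parameter names and paths are hypothetical.

import hashlib
import hmac
import time

SECRET = b"shared-secret-between-origin-and-cdn"  # hypothetical shared key

def sign_url(path, ttl=300, now=None):
    """Origin side: append an expiry timestamp and an HMAC over path+expiry."""
    expires = (now if now is not None else int(time.time())) + ttl
    token = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&token={token}"

def edge_validate(path, expires, token, now=None):
    """Edge side: reject expired links, then recompute and compare the HMAC."""
    if (now if now is not None else int(time.time())) > expires:
        return False
    expected = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

url = sign_url("/live/hockey/master.m3u8", ttl=300, now=1_000_000)
path, query = url.split("?")
params = dict(kv.split("=") for kv in query.split("&"))
print(edge_validate(path, int(params["expires"]), params["token"], now=1_000_100))  # True
```

Unlike DRM, this only gates access to the stream; it does nothing to protect the content after delivery, which is why the two are often discussed together.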

Question: What KPIs were used in the monitoring? Was the dashboard simply viewed by humans?
Both Aventus and Windows Azure Media Services relied on their own telemetry systems, respectively, to monitor system performance. Due to the high profile nature of the event additional human monitoring was provided by iStreamPlanet and Microsoft operations teams.

Question: Where were the ad triggers inserted (SCTE?) and did the player insert ads or the stream leaving the cloud?  Was Freewheel the ad insertion tool?
iStreamPlanet built a video CMS tool used for ad insertion. Ad markers were inserted into encoded streams via Aventus web-based API, and converted to HLS and HDS-compatible ad markers by Windows Azure Media Services. Ad insertion was then performed client-side by the respective player apps.
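Since the ad splice happened client-side, the player has to discover break points from markers carried in the stream itself. Here is an illustrative sketch of scanning an HLS media playlist for the common (de facto, non-standard) cue-out tag; the exact tags and formats used in NBC’s workflow are not public:

```python
# Illustrative only: how a client-side player might locate ad-break markers
# in an HLS media playlist. "#EXT-X-CUE-OUT" is a widely used de facto tag,
# but the actual markers in NBC's workflow are not public.

def find_ad_breaks(playlist):
    """Return (start_offset_seconds, duration_seconds) for each cue-out tag."""
    breaks, offset = [], 0.0
    for line in playlist.splitlines():
        line = line.strip()
        if line.startswith("#EXTINF:"):
            # Accumulate segment durations to know where we are in the timeline.
            offset += float(line.split(":")[1].rstrip(",").split(",")[0])
        elif line.startswith("#EXT-X-CUE-OUT:DURATION="):
            breaks.append((offset, float(line.split("=")[1])))
    return breaks

sample = """#EXTM3U
#EXTINF:6.0,
seg1.ts
#EXTINF:6.0,
seg2.ts
#EXT-X-CUE-OUT:DURATION=30
#EXTINF:6.0,
ad1.ts
"""
print(find_ad_breaks(sample))  # [(12.0, 30.0)]
```

Once the player knows the break offset and duration, it can request ads from its ad server and splice them in without any server-side stream manipulation.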

Question: What was the origin encoder at iStreamPlanet?  Elemental?  Cisco Media Processor?  Digital Rapids?
None of the above. The Olympics video feeds were ingested and transcoded to multiple bitrates and packaged into Smooth Streaming for delivery to Windows Azure using iStreamPlanet’s proprietary video encoding solution Aventus. Aventus is a cloud-based live video encoding solution, built for the cloud from the ground up, capable of delivering live events and live linear channels in a scalable fashion while utilizing commodity x86-64 hardware.

Question: How was Android supported? Was RTSP in the mix for the native Android browser, was HLS in Chrome utilized, or was an app solely relied on?
Android was supported via a Google Play store app which consumed HLS streams.

Question: What server technology was used to deliver the live multiprotocol streams?
Windows Azure Media Services was used to dynamically remux video into Apple HLS and Adobe HDS format for delivery to compatible devices.

Question: What processing is done at Switch?
In the case of the Olympics the streams were aggregated and replicated at the Switch SuperNAP data center and then sent to Azure U.S. East and West for processing. iStreamPlanet also uses Switch SuperNAP for hosting its own private Aventus SaaS offering.

Question: Was there a way to prevent viewers from recording live or on demand content?
Using content protection solutions such as PlayReady DRM it is possible to restrict media playback to only secure playback platforms over certified output paths (e.g. HDCP/HDMI). This type of content protection was not used by NBC Sports for streaming the Sochi Olympics.

Question: Was this technology used solely for NBC or was it the streaming infrastructure for all global streaming (i.e. CBC in Canada, Network Ten in Australia, etc)?
iStreamPlanet Aventus was used solely for NBC Sports live streaming video coverage of the Sochi Olympics.

Question: What device or appliance was used to deliver the video from NBC Sports Stamford to iStreamPlanet?
NBC Sports used Ericsson video encoders to create broadcast-quality MPEG-2 Transport Streams for delivery to iStreamPlanet.

Question: Were there any major issues or failover situations during the games? I know the infrastructure was built to span multiple data centers for redundancy.
The solution performed extremely well and no failovers were required; however, the solution was architected to fail over to new VMs and servers with minimal to no interruption to the streams.

Question: What kind of CMS was used and how do they relate to each other?
iStreamPlanet worked with NBC Sports to create a custom CMS for the Olympics to perform ad and slate insertion. Other CMS solutions were used by NBC Sports to manage live and on-demand video assets.

Wednesday Webinar: Best Practices For Designing Multi-Device Video Experiences

Tomorrow, at 2pm ET, I’ll be moderating another StreamingMedia.com webinar, this time on the topic of, “Best practices for designing multi-device video experiences.” Delivering a consistent brand experience across multiple devices is paramount, but it is becoming harder to control as some new platforms dictate their own User Experience (UX) and User Interface (UI). There is also the question of relevancy of features and content. Should they be exactly the same across all devices? Or should context of use dictate a different approach? This and other topics will be covered in the webinar with Leigh Brett, VP of Experience & Design at Piksel. Other subjects to be covered include:

  • Brief overview of what UX is and isn’t
  • Multi-device design techniques
  • Designing for context of use
  • Content and feature strategies
  • Delivering design in agile environments

REGISTER NOW to join us for this FREE Web event and learn the best practices in ensuring a ‘familiar’ video-centric service spanning a multitude of devices, as well as how you determine what features add value to your customers amidst content provider and technology constraints.

Recent Live Event Failures Due To Technical Issues With Vendors, Not “Excessive Traffic”

We’ve seen quite a few web events have some pretty big streaming failures lately, including the Oscars, Golden Globes, WWE Network and last night’s True Detective finale on HBOGO. After these kinds of failures, the companies involved are quick to point to a large volume of traffic as the reason for the failure when in fact, that’s not the case at all. Technical issues with vendors, a breakdown in technology and a lack of planning are the true causes of viewers’ frustration. Today HBO said that the problems viewers encountered last night were due to “an excessive amount of traffic”, which is not the real reason there was a problem.

Like almost all content providers, HBO relies on third-party CDNs for the delivery of their videos, and these CDNs are in the business of handling large scale live events. That’s what they do. So making it sound like these CDNs can’t handle the traffic caused by these events, from a bandwidth or server capacity perspective, is simply not accurate. They have the resources to handle it, but many times technical issues in the video ecosystem keep these events from happening without problems, and more often than not, poor planning also plays a part in the failure.

However, it’s not always CDNs that are the cause of the problem; it can also be issues with other platforms in the video chain that all have to work perfectly in order to make the live streaming event a success. In the case of the Oscars, ABC said the live streams were down nationwide “due to a traffic overload/greater than expected“, but that’s not what happened. In fact, the live stream going over CDN providers EdgeCast and Akamai (Updated 3/12: Akamai did not stream the Oscars but was the CDN for the website) was fine, but Verizon had an issue with the signal acquisition portion of the event. Verizon’s Uplynk software runs on the Amazon Web Services platform, which encountered a problem during the broadcast. So it had nothing to do with a “traffic overload” on the CDNs or them not being able to handle the capacity of the live stream itself.

In the case of the recent problems the WWE had when they launched their new streaming platform WWE Network, the company outsourced the infrastructure requirements to Major League Baseball Advanced Media (MLBAM), which had problems with their ecommerce infrastructure that kept people from being able to sign up for the service. The day the service launched, WWE said that MLBAM “was overwhelmed and its systems have been unable to process most orders since 9 am due to demand for WWE Network.” In reality, the “demand” is not what caused the issue, this wasn’t even a live event, but rather technical issues with MLBAM infrastructure and software. And keep in mind, MLBAM does not own their own infrastructure, they run their software on top of other cloud and CDN providers.

As for the media reports that say these live events are failing because there is so much demand, that’s not true. Remember that the live webcast of the Oscars was only available in eight cities in the U.S. and you had to be a cable TV subscriber. In the case of HBOGO, last week HBO’s CEO said that only a small percentage of HBO subscribers actually use the HBOGO service. So user demand and traffic to these live and on-demand events is not what’s causing them to fail, and the number of people trying to watch the video streams isn’t that large. It simply comes down to the technology platforms that are being used, which are not as reliable as broadcast TV. Media reports implying that a “broadband shortage” or “overwhelming popularity” is what’s causing these failures are not accurate.

The fact is the Internet will never be as reliable or scalable as broadcast TV distribution. Some don’t like to hear that, but that’s reality and you can’t argue with it. The Internet was not built to handle large scale live video events with the same reliability, quality and viewership numbers as broadcast TV. We’ve been live streaming events over the Internet for 20 years now and in that time, the average live video stream has only grown in quality from about 300Kbps in 2000/2001 to an average of 1.5Mbps today. I would argue that’s not a lot of quality improvement over the past 15 years, especially when compared to how much broadcast TV has improved, quality-wise, during the same time.
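The math behind that claim is simple to work out. Using the cited bitrates and roughly 14 years between 2000/2001 and today (the exact span is approximate), the compound growth rate is modest:

```python
# Working out the quality-growth claim: ~300 Kbps around 2000/2001 to
# ~1.5 Mbps today. The 14-year span is approximate.

start_kbps, end_kbps, years = 300, 1500, 14
multiple = end_kbps / start_kbps            # 5x overall improvement
cagr = multiple ** (1 / years) - 1          # compound annual growth rate
print(f"{multiple:.0f}x overall, ~{cagr:.0%} per year")
```

A 5x bitrate improvement over roughly a decade and a half works out to around 12% a year, which supports the point about how slowly average stream quality has grown.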

New Speaking Spots Open: 4K, Video Monetization, Advertising Spend, Live Streaming

Some new speaking spots have just opened for the Streaming Media East show, taking place May 13-14 in NYC. In addition to these speaking spots, I also now have room to add two more sessions to the program. If you want to organize and moderate a session around a particular topic, please contact me. One 60 minute round table panel spot is open and I am accepting all suggestions from everyone on topics and pitches. The second round table panel will be about streaming live events, specifically around content like gaming from services like Twitch.tv and others. I am looking for a moderator and speakers for this session as well.

In addition, some of the moderators in the program are looking for specific speakers for their sessions, so contact me if you are interested in any of these openings:

4K Streaming: Cost, QoS, and Cutting Through The Hype
It seems every year the online video space is inundated with the next big thing and a push toward the latest technology must-have. In years past, we saw pushes for 3D streaming and HEVC; this year, it’s all about 4K. This session will cut through the 4K hype and discuss what real-world impact 4K could have and what the requirements really are to stream in 4K. Hear from experts from various sides of the industry to get some clarity on what 4K will cost content owners to implement, how QoS will be addressed, and what the future may hold for 4K streaming.

Looking for one content owner/distributor/syndicator specifically.

How Advertisers Can Master the Spend Between Television and Digital Video
This year’s TV Upfront marks one of the first years that a common set of measurement metrics is available in the advertising world for both television and digital video. Agencies and brands alike are now faced with the challenge to create a holistic spending strategy that brings the most value from both online and television ad dollars. In this session, speakers will explain how technology is now enabling businesses to strategically map cross-screen advertising spend in order to maximize campaign ROI. The panel will discuss how these strategies are effectively optimizing campaign reach and influence, as well as combating struggles of fragmenting reach and inventory restraints that have historically come with television-only campaigns.

Looking for moderator and accepting all speaking submissions.

Paid Media on YouTube: Strategies for Brands
Most consumer brands have developed some presence on YouTube, but paid media on the entertainment behemoth remains a mystery for many. Should YouTube be treated as an extension of search or television, performance or branding? Is cost-per-completed-view an appropriate performance metric? With easy access to TrueView ads through AdWords, should media agencies oversee these budgets? This session will discuss multi-channel networks and YouTube technology partners, looking at how they add value and how brands should work with a third party to manage content on YouTube.

Looking for speakers who are content owners, agencies or brands.

Achieving Video Advertising Campaign Goals Through Data
Traditional video ad buys were broad, untargeted and inefficient. However, these days, with the move to programmatic buying and the expansion of the video market, a whole new wave of efficient buying has emerged. Based on a campaign’s key performance indicators, be they clicks or completes, far greater performance can be achieved at a much lower cost. This session will look at the leading ways to slice and dice audiences by demographic, psychographic, behavioral and contextual data. It will also explore the most important metrics for engagement and performance as viewed by brands, agencies, and networks.

Accepting all speaking submissions.

The Economics of Mobile Video: Building a Profitable Business
Delivering premium content to mobile devices has moved from sideshow to the main event. Publishers and broadcasters are now tasked with making streaming profitable on all screens, which involves a lot of complexity. From the expense of creating a high-quality experience for multiple devices and platforms, to the complexities of transcoding, ad insertion and measurement, monetizing mobile video remains challenging. This session will discuss which approaches are working in the market, what type of content consumers are viewing, how big of an opportunity mobile is to premium content owners and how they can build an effective mobile video strategy.

Looking for content owners/distributors/syndicators specifically.