Survey Of Over 400 Broadcasters/Publishers Using Server Side Ad Insertion For Live Highlights Technical Challenges

Server Side Ad Insertion (SSAI) is one of the hottest topics in the industry right now and I’m tracking over 30 vendors in the market that say they offer some type of SSAI technology. To see what’s really going on in the market, I conducted two surveys with over 400 broadcasters and publishers on their use of SSAI at both a business and a technical level. The survey collected data on how they do ad buys across digital/TV, what their DRM needs are, how their CPMs and fill rates in live compare to VOD, how important frame accuracy is, and the metrics that matter most, along with a host of technical questions.

The results of the survey shed light on just how far SSAI still has to go to be considered a technology that broadcasters can rely on for performance and proper ad triggering. Lots of technical problems exist, especially getting SSAI platforms to work at true scale. I’ve included one slide with results from the survey, but all of the raw data is available for purchase. Please contact me for details.
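
To ground what “proper ad triggering” means at a technical level: at its core, SSAI is manifest manipulation. When a cue marks an ad break, the stitcher rewrites the playlist on the server so ad segments replace the content (or slate) segments, and the player just sees one continuous stream. Below is a minimal Python sketch of that stitching step using HLS-style CUE-OUT/CUE-IN markers; it is illustrative only, not any vendor’s implementation, and real platforms also handle per-user personalization, tracking beacons and segment conditioning, which is where the scale problems the survey highlights tend to show up.

```python
# Minimal illustration of the core SSAI idea: when a cue marks an ad break in an
# HLS media playlist, the stitcher swaps content segments for ad segments
# server-side, so the player sees one continuous stream.
# Simplified sketch only; not any vendor's actual implementation.

def stitch_ad_break(playlist_lines, ad_segment_uris, ad_duration=6.0):
    """Replace everything between CUE-OUT and CUE-IN markers with ad segments."""
    output, in_break = [], False
    for line in playlist_lines:
        if line.startswith("#EXT-X-CUE-OUT"):
            in_break = True
            output.append("#EXT-X-DISCONTINUITY")
            for uri in ad_segment_uris:            # splice in the ad creative
                output.append(f"#EXTINF:{ad_duration:.3f},")
                output.append(uri)
            continue
        if line.startswith("#EXT-X-CUE-IN"):
            in_break = False
            output.append("#EXT-X-DISCONTINUITY")  # mark the return to content
            continue
        if not in_break:                           # drop the segments covered by the break
            output.append(line)
    return output

content_playlist = [
    "#EXTM3U",
    "#EXT-X-VERSION:3",
    "#EXT-X-TARGETDURATION:6",
    "#EXTINF:6.000,", "content_001.ts",
    "#EXT-X-CUE-OUT:DURATION=12",
    "#EXTINF:6.000,", "slate_001.ts",
    "#EXTINF:6.000,", "slate_002.ts",
    "#EXT-X-CUE-IN",
    "#EXTINF:6.000,", "content_002.ts",
]

print("\n".join(stitch_ad_break(content_playlist, ["ad_001.ts", "ad_002.ts"])))
```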

The Challenges With Ultra-Low Latency Delivery For Real-Time Applications

For the past 20+ years, CDNs have dominated the content delivery space when it comes to delivering video on the Internet. Third-party CDNs focus on layers 4-7 in the OSI stack, which includes things like caching and TCP protocol optimization. But they still depend on the underlying routers of the providers they buy bandwidth from, so latency can still vary and is only as good as the underlying network. For ultra-low latency applications, layers 4-7 don’t help, which is why many CDNs are looking at other technologies to assist them with delivering video with only a few hundred milliseconds of delay. For the delivery of real-time applications, there is a debate currently taking place within the industry as to whether or not you need a private core with optimal routing on layers 2-3 to truly succeed with low-latency delivery.

Historically there have been two different ecosystems in networking: private networks and the public internet. Private networks were used by corporations and enterprises to connect their headquarters, branch offices, data centers and retail locations. The public internet was used to reach consumers and have them connect to websites. In the past, these two ecosystems lived separately, one behind the firewall and the other beyond it.

With the rise of the cloud, the world changed. While Amazon deserves much of the credit for ushering in the era of cloud computing (and the demise of physical server data centers), it may have been Microsoft who triggered the tipping point when they shifted from client-server based Exchange to cloud-based Office 365. For all of the enterprises dependent on Microsoft software, the cloud was no longer something they could delay. Once they started to shift to the cloud, their entire architecture needed to change from hardware-based solutions to cloud-based solutions. The success of companies like Zscaler and Box shows this continuing trend.

Lately we’ve seen new vendors come to the market, taking products from the private networking space and using them to try to solve problems on the public internet. As an example, last week Mode launched in the market with $24M in funding with an MPLS Cloud, used in combination with SD-WAN and last-mile internet. MPLS is a 20-year-old technology used by enterprises to create private dedicated circuits between their branch offices (ex: Walmart connects all of their stores back to headquarters using dedicated MPLS circuits). Enterprises buy these lines because they have guaranteed bandwidth and QoS. MPLS costs 10x-20x the price of Internet bandwidth, but you get guaranteed quality, and businesses still pay that difference because the latency variability on the internet is far higher.

However, the problem with MPLS is that it was built for the enterprise market. It is very rigid and inflexible and often takes months to provision. It was designed for a hardware-centric world, where you have fixed offices and you buy multi-year contracts to connect offices together using fixed circuits. But what if you could make MPLS an elastic, cloud-like solution and bring the great qualities of MPLS over to the public internet?

While many vendors in the public internet space continue to evangelize that the public internet is great at everything, the reality is that it is not. If you are thinking in terms of “seconds” – like 2-6 seconds – then the public Internet + CDNs solve the problem. However, when you think in “milliseconds” – like 30 milliseconds being the maximum allowed jitter for real-time voice calls – the public internet is not nearly good enough. If it were, enterprises wouldn’t spend billions of dollars a year, and growing, on private networks that cost up to 20x more than regular internet bandwidth. Instead of rehashing all of the issues here, read the awesome blog post by the team over at Riot Games entitled “Fixing The Internet For Real Time Applications”.
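
To make the millisecond numbers concrete, here is a small Python sketch of the interarrival jitter estimator defined in RFC 3550 (the RTP spec), which is the metric real-time voice and video systems typically track against a budget like the ~30 ms mentioned above. The packet timings below are made-up values, purely for illustration.

```python
# Interarrival jitter estimator in the style of RFC 3550, to make the
# "30 milliseconds of jitter" budget concrete. Times are in milliseconds;
# the sample numbers are invented for illustration.

def rtp_style_jitter(send_times_ms, recv_times_ms):
    jitter = 0.0
    for i in range(1, len(send_times_ms)):
        # D = change in one-way transit time between consecutive packets
        d = ((recv_times_ms[i] - recv_times_ms[i - 1])
             - (send_times_ms[i] - send_times_ms[i - 1]))
        jitter += (abs(d) - jitter) / 16.0  # exponentially smoothed, per RFC 3550
    return jitter

# Voice packets sent every 20 ms; arrival times wobble with network congestion.
send = [0, 20, 40, 60, 80, 100]
recv = [35, 58, 95, 112, 150, 168]
print(f"estimated jitter: {rtp_style_jitter(send, recv):.1f} ms "
      f"(budget for real-time voice: ~30 ms)")
```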

As many have pointed out, the current public internet has been tuned for a TCP/HTTPS world where web pages are the common payload. However, as we transition into an ultra-low latency world where the internet is being used for WebRTC, sockets, APIs, live video and voice, the public internet is definitely not a “good enough” solution. If you’ve ever complained about a terrible video call experience, you know this to be true. Sectors like gaming, financial trading, video conferencing, and live video will continue to leverage the public internet for transport. The growth in ultra-low latency applications (IoT, AR, VR, real-time machine learning, blockchain, etc.) will continue to make this problem worse.

When I talk to CIOs in charge of both sides of the house – consumer/end-user facing as well as in-house employee facing – they say they are interested in a way to get MPLS-level QoS and reliability for millions of their end-users connecting back to their applications (gaming, financial trading, voice calls, video calls). But it has to be super flexible, just like the cloud, so they have total control over the network – create it, manage it, get QoS, define policy – and have it be way more flexible than traditional hardware-defined networks. It can’t be a black box: the customer has to be able to see everything that’s happening, with real transparency into traffic and flows, and it has to be elastic, growing or shrinking depending on their needs. This is the service Mode has just launched in the market, which you can read more about here. I’d be interested to hear what those who know a lot more about this stuff than I do think of the idea.

Videos From NAB Streaming Summit Presentations Now Online

All of the videos from the NAB Streaming Summit in Vegas are now online for viewing. Just scroll through the program to watch the sessions and presentations you like. We are working on a new video portal to host all the videos from future shows as well, so a better user experience is being developed. www.nabshow.com/education/conference/streaming-summit.

Announcing My Investment In Datazoom: A Platform For Video Data Capture & Routing

Within the industry, many content distributors have switched from purchasing end-to-end systems to buying best-of-breed technologies, creating unique tech stack mash-ups that work best given the constraints of their teams, time and budgets. Even if you look at some of the top revenue generators in the streaming media industry (Facebook, Amazon, Netflix, Google), who have endless resources to build everything in-house, they still use a fair amount of outside technologies. However, what these companies have done differently is develop great “glue” for how all the pieces, built in-house or not, work together.

Based on that idea, I am excited to announce that last year I made a personal investment in a new company called Datazoom. While I was the first investor in Datazoom, the company has now raised $700,000 in total, led by Brooklyn Bridge Ventures. The premise of Datazoom is very simple. With a single SDK, Datazoom allows video distributors to collect any data they want from the video player and send it to any supported tool, in under one second.

Datazoom allows content owners to make on-the-fly changes to the data they are collecting and the frequency at which they collect it. And with a set of pre-built SDKs that can capture data from an ecosystem of players, devices and platforms (iOS, Android, Brightcove, JW Player, Anvato, THEOPlayer, and others), customers can make changes to their data collection footprint or integrate new tools at any time, without touching a line of code. The tools integrated into the Datazoom ecosystem (which they call Connectors) include Video Analytics, Data Warehouses and other Data Processing and Visualization tools. Their Connector ecosystem today includes: Google Analytics, Amazon Redshift, NPAW’s YOUBORA, Datadog, Amplitude, Heap Analytics, Adobe, Keen and Google’s BigQuery.
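
To picture the “single SDK, any tool” pattern, here is a hypothetical Python sketch of what capture-and-route looks like in general. The class and method names are invented for illustration and are not Datazoom’s actual API; the point is simply that one capture call fans the same event out to every configured connector, and the collection settings can change at runtime rather than being baked into the player code.

```python
# Hypothetical sketch of the "single SDK -> many connectors" pattern described
# above. Class and method names are invented for illustration; this is NOT
# Datazoom's actual API.

import json
import time
from typing import Callable, Dict, List


class PlayerDataPipe:
    """Collects events from a video player and fans them out to connectors."""

    def __init__(self) -> None:
        self.connectors: List[Callable[[Dict], None]] = []
        self.sample_interval_s = 1.0  # adjustable at runtime, per the post

    def add_connector(self, sink: Callable[[Dict], None]) -> None:
        self.connectors.append(sink)

    def capture(self, event_type: str, payload: Dict) -> None:
        event = {
            "type": event_type,
            "payload": payload,
            "captured_at": time.time(),  # timestamp for end-to-end latency checks
        }
        for sink in self.connectors:     # route the same event to every tool
            sink(event)


# Example connectors; in reality these would be Google Analytics, BigQuery, etc.
pipe = PlayerDataPipe()
pipe.add_connector(lambda e: print("analytics <-", json.dumps(e)))
pipe.add_connector(lambda e: print("warehouse <-", json.dumps(e)))

# A player integration would call capture() on playback events.
pipe.capture("rebuffer_start", {"session_id": "abc123", "position_s": 42.7})
```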

Pricing for Datazoom’s platform is done on a SaaS model: a monthly license plus a usage-based fee. By not pricing based on the number of video views or sessions, customers get much more predictable costs and greater flexibility. Datazoom’s platform is currently hosted on AWS and Google Cloud, with Azure POPs coming soon.

So, what does Datazoom really solve? Today’s methods of integration often rely on hard-coded, static logic, which rules out any ad hoc refinement and adjustment to maximize potential. Think about it: a single video stream often depends on a workflow spanning 20-30 different systems, each of which has ever-changing availability, latency and service reliability. Essentially, the experience behind every video stream is dictated by a dynamic operating environment, created by the combined status of these services at any moment, and yet how they are programmed to work together is static.

The greatest current technology opportunity in the streaming media industry will be solutions that maximize the efficiency of existing systems by building a unifying intelligence layer to manage the workflow of the systems we use to deliver video. This likely can’t be left to dedicated individuals; it will take intelligent systems powered by AI and ML, with the capacity to handle the frequency, scale and volume of adjustments needed on a per-stream basis: technology glue. However, before any type of intelligence can be implemented, a few foundational things need to be established first. At the core of Datazoom’s approach is creating three things in the market: a standardized data “currency” that’s accepted by all systems; a point of exchange that all systems peer into to source data; and a real-time speed at which that data currency moves.
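
One way to picture the “standardized data currency” idea is a normalization step that maps the differing field names coming out of each player onto one shared schema before the data is exchanged. The sketch below is my own assumption about how such a mapping could look, not a published Datazoom schema.

```python
# Illustration of a standardized "data currency": events from different players
# arrive with different field names and get normalized onto one shared schema.
# Field names and mappings are illustrative assumptions, not a real schema.

from dataclasses import dataclass, asdict

@dataclass
class StandardPlaybackEvent:
    session_id: str
    event: str          # e.g. "play", "rebuffer", "error"
    position_s: float
    bitrate_kbps: int

# Per-player field mappings (made up for this example).
FIELD_MAPS = {
    "player_a": {"sid": "session_id", "evt": "event",
                 "pos": "position_s", "br": "bitrate_kbps"},
    "player_b": {"sessionId": "session_id", "name": "event",
                 "playhead": "position_s", "bitrate": "bitrate_kbps"},
}

def normalize(source: str, raw: dict) -> StandardPlaybackEvent:
    mapping = FIELD_MAPS[source]
    mapped = {mapping[k]: v for k, v in raw.items() if k in mapping}
    return StandardPlaybackEvent(**mapped)

print(asdict(normalize("player_a", {"sid": "abc", "evt": "rebuffer", "pos": 42.7, "br": 3200})))
print(asdict(normalize("player_b", {"sessionId": "xyz", "name": "play", "playhead": 0.0, "bitrate": 1800})))
```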

I made an investment in Datazoom because I believe that their offering helps to address many of the issues that face our industry today by providing a data ingest and management platform that allows customers to provide a more consistent end-user experience through service optimization. After launching at NAB, Datazoom has customers in trial, some of whom I will be able to talk about in detail later this year. If you’d like more info on Datazoom, feel free to reach out to me at any time.

Note: While I am always looking at new companies in the space, to date, this is the only private company I have invested in.

The Current Infrastructure Strategy To Support OTT Services Isn’t Economically Sustainable

There is a significant challenge facing the streaming video industry today. It’s a problem that will get much more acute as time passes unless significant technological innovations can change the economics of how video is distributed. The challenge, which I will call the capacity gap, is that the continued improvement in the performance of streaming infrastructure is not keeping up with the rate of growth in streaming demand.

The capacity gap is a result of the incredible success of streaming video combined with the maturation in performance of the platform used to deliver streaming video to consumers. Today streaming video is delivered to consumers using the same basic technology platform that was used to deliver streaming a decade ago: namely an x86 CPU based server platform running an OS and a streaming application. While the traditional server platform has certainly improved in performance over time, the reality is that the success of streaming video is outrunning the capabilities of the underlying technology infrastructure.

In the last ten years, while streaming video was growing from infancy to 20% of consumption hours, content delivery networks and companies that operate their own CDNs, such as Netflix, have accomplished significant feats of software optimization and achieved performance increases that have squeezed most of the available capacity from the software layers of the x86 streaming video delivery platform. In the same timeframe, the regular increase in server processor performance has all but leveled out and currently stands at approximately 3.5% per year.

The combination of these two factors means that the performance improvement curve of streaming video delivery infrastructure has flattened out just as streaming video is achieving mass market adoption and the demand curve is rapidly accelerating. The difference between these two curves is the capacity gap: the gap between the rapidly increasing market demand for streaming video and the inability of the underlying technology to increase performance to deliver on that demand. In a market where the demand curve is increasing 30% per year and the underlying delivery performance curve is increasing by less than 10% per year, the delivery curve will never catch up. This creates a massive service capacity gap that will have to be addressed in order to meet market demand and allow the market to continue to develop and prosper.
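
A quick back-of-the-envelope calculation shows why the curves never converge. Using only the growth rates cited above (roughly 30% annual demand growth against ~3.5% annual per-server performance gains) and normalizing today’s values to 1.0:

```python
# Back-of-the-envelope illustration of the capacity gap: demand growing ~30% per
# year against per-server delivery performance improving ~3.5% per year.
# Starting values are normalized to 1.0; only the growth rates come from the text.

demand_growth = 0.30      # annual growth in streaming demand
capacity_growth = 0.035   # annual server performance gain

demand, capacity = 1.0, 1.0
for year in range(1, 11):
    demand *= 1 + demand_growth
    capacity *= 1 + capacity_growth
    print(f"year {year:2d}: demand {demand:5.2f}x, "
          f"per-server capacity {capacity:4.2f}x, gap {demand / capacity:5.2f}x")

# After 10 years demand is ~13.8x while per-server capacity is ~1.4x: a gap of
# nearly 10x that can only be closed by deploying roughly 10x more servers.
```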

So, what is being done to address the capacity gap? The current solution is to simply build more data centers and to “rack and stack” more servers to fill them. In 2017 Alphabet built ten new data centers. While not all of them were for YouTube, it’s a safe bet that YouTube and streaming video were a significant driver of the $10B Alphabet spent on data centers and data center operations last year. The other large streaming video players, such as Amazon and Microsoft, are spending similar amounts annually. If the streaming video market were close to saturation, the capacity gap problem could likely be solved by simply continuing to build more infrastructure. But this is not the case.

If online video consumption continues to take market share from traditional distribution channels and ultimately replaces traditional broadcast distribution, then streaming video infrastructure will need to increase its capacity by over 3X in the coming years. If all streamed content improves from HD to 4K, the required bandwidth will increase by another 4X. If 8K, which is also required for true VR and AR, becomes widely adopted, there would be another 3X increase in capacity needed. It is not difficult to project a scenario over the next 10 years where the streaming video market requires 20X the delivery capacity that exists today. Building 20X the data centers and filling them with 20X the servers, in addition to constructing the power plants to power this infrastructure, is clearly not an economically viable or sustainable strategy.
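
The 20X figure is just compounded multipliers. The individual factors come from the paragraph above; the assumption about how much 4K traffic actually moves on to 8K is mine, purely to show how quickly the numbers stack up:

```python
# Rough, illustrative arithmetic behind the 20X scenario. The 3X and 4X factors
# come from the text above; the share of traffic moving to 8K is an assumption.

ott_share_growth = 3   # streaming replacing traditional distribution: >3X more hours
hd_to_4k = 4           # HD -> 4K bandwidth: ~4X per stream

all_4k = ott_share_growth * hd_to_4k
print(f"All streams at 4K: {all_4k}X today's delivery capacity")      # 12X

# If one third of that 4K traffic moves on to 8K (another 3X), the total passes 20X.
with_partial_8k = all_4k * (2 / 3 + 1 / 3 * 3)
print(f"With one third of streams at 8K: {with_partial_8k:.0f}X")     # 20X
```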

I’m not the only one to see this problem on the horizon and there are plenty of network engineers much smarter than me who are actively talking about how to scale video delivery as more content goes over-the-top. But if we truly believe that one day, streaming video is going to replace traditional linear TV distribution, then we need to start talking about how we are going to address the most daunting challenge facing the streaming media industry today – the capacity gap.