The Streaming Media Industry Needs A “Real” Awards Program For Vendors, Based On Actual Methodology

There are a lot of awards programs in the streaming media industry and, while many don't want to admit it, most of them are a joke. Awards are given out to vendors with no real methodology behind them, based instead on marketing dollars spent, connections with editorial departments, or stuffed ballots. At the NAB Show in Vegas, nearly a dozen media organizations gave out awards to vendors, and many of them spent no time looking at the product or getting hands-on with it, picking vendors based on what was said in a press release. I've seen products win awards that were still in beta, had no referenceable customers, could not scale or simply didn't do what they claimed. (Note: the NAB Show itself does not give out any awards tied to the streaming industry; news/media companies simply use the NAB Show name in their own awards programs.)

Of course, everyone likes to win awards, but I think they should be based on a ranking system of some kind, with real methodology behind it. Without that, it's hard for vendors who don't spend a lot on marketing to win awards, and many vendors get passed over. I'm also seeing vendors put a good percentage of their marketing budget towards winning awards, with the idea that it somehow helps them win new business. And yet from what I can tell, winning awards doesn't drive new revenue for vendors at all. I have yet to talk to a customer who said they picked one vendor over another because it won an award. They are much more likely to take a recommendation from a current customer, or an expert in the field, over anything else.

If an awards program was backed by a person or an organization that was well respected in the industry and was based on a real methodology of some kind, I do think it would help vendors get new business and help them stand out. But to date, I haven't seen an awards program of that kind, outside of very specific technical programs from standards bodies like SMPTE that are tied to more traditional pay TV products. And while all vendors operate in a competitive environment, I also think peers would be willing to help vote on companies they respect, products they think have merit and live events that provide a good user experience.

With all this in mind, the next logical question is, what would it take to provide a respected awards program for the streaming media industry? One where vendors are highlighted properly and companies know how they are being compared to their peers. What would you want to see included in such a program? I'm interested to hear your feedback, either in the comments section below, or you can drop me an email at any time.

Better Video Compression Can’t Fix The OTT Infrastructure Problem, Hardware Might

Last month I wrote a blog post detailing how the current infrastructure strategy to support OTT services isn't economically sustainable. I got a lot of replies from people with their thoughts on the topic, and some suggested that the platform performance problem will be fixed with better video compression. While it's a great debate to have, with lots of opinions on the subject, personally I don't think better gains in compression are the answer.

While a breakthrough in compression technology would allow the current streaming infrastructure to deliver more video, the unfortunate reality is that significant resources have already been invested in optimizing video compression, and the rate of improvement is far below the video growth rate. Depending on whose numbers you look at, video streaming traffic is currently growing at a CAGR of about 35%. On the other hand, video compression has improved by about 7% CAGR over the past 20 years (halving the video bitrate every 10 years). Unless video compression sees a major breakthrough of 15:1 or more in compression efficiency, better compression alone cannot solve the internet video bottleneck.
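
To make that gap concrete, here is a rough back-of-the-envelope sketch in Python using the growth rates cited above; the 10-year horizon is my own assumption for illustration, and the exact figures will vary depending on whose numbers you use.

```python
# Rough comparison of streaming traffic growth vs. compression gains,
# using the CAGRs cited above. The 10-year horizon is an assumption
# chosen only to illustrate the scale of the gap.

years = 10
traffic_cagr = 0.35       # ~35% annual growth in streaming video traffic
compression_cagr = 0.07   # ~7% annual improvement in compression efficiency

traffic_growth = (1 + traffic_cagr) ** years        # ~20x more traffic
compression_gain = (1 + compression_cagr) ** years  # ~2x smaller bitrates

# Net growth in bits the delivery infrastructure must still carry
net_load = traffic_growth / compression_gain  # ~10x

print(f"Traffic growth over {years} years:   {traffic_growth:.1f}x")
print(f"Compression gain over {years} years: {compression_gain:.1f}x")
print(f"Net growth in bits to deliver:       {net_load:.1f}x")
```

In other words, even after a decade's worth of typical codec gains are factored in, the infrastructure would still need to carry roughly ten times more bits than it does today, which is why only a one-time breakthrough on the order of 15:1 would let compression close the gap on its own.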

Compression is really about making the video itself smaller, and therefore more efficient to deliver, not about the performance of the underlying streaming delivery platform. Any potential breakthrough in video compression technology will likely benefit all streaming video delivery platforms equally and, as such, is a separate topic from the requirement to increase the performance of the streaming video platforms themselves.

Similarly, approaches that improve performance by moving popular content to edge devices closer to the customer, such as at cell phone towers and cable head end locations, have been suggested as ways to further improve throughput and capacity. These techniques should all be pursued regardless of what underlying technologies are used to actually stream the video, but unfortunately, used separately or even together, they offer only incremental benefits to performance. Given the massive scale of the projected demand for streaming video, we need an order of magnitude improvement over today's performance in order to address the capacity gap.

To increase the performance of the streaming video infrastructure, there are only two main areas of focus – the software and the hardware. While great strides have been made over the last decade by many in optimizing the streaming video software layers, even the most capable developers are facing diminishing returns, as additional software optimizations yield progressively smaller performance gains. Continued software optimization will continue to eke out incremental improvements but is not likely to be a significant factor in addressing the overwhelming capacity gap faced by the industry. By process of elimination, this leaves hopes pinned firmly on the hardware side of the equation.

Unfortunately, the latest research shows that the performance curve for increases in CPU performance over time has flattened out. Despite increases in the number of transistors per chip, a number that is now approaching 20 billion on the largest CPUs, and the shrinking of the transistors themselves, to sub-10nm in the latest chips, we have reached diminishing returns. The projected increases in annual CPU performance are so far below the 35% annual growth rate in streaming video traffic demand that I think we can eliminate improved CPU performance as a possible solution for the capacity gap problem.

If the software and CPU platform, which has worked so well since the inception of the streaming video industry, cannot meet the future needs of the market, what can? Since the dawn of computing, the solution to insufficient software and CPU performance has been to offload the workload to a specialized piece of hardware. Examples of specialized hardware being used to enable higher performance include RAID cards in storage, TCP/IP offload engines in networking, and the ubiquitous GPUs that have revolutionized the world of computer graphics. Thanks to GPUs, a consumer today can watch 4K video streams on their choice of viewing devices, including televisions, computers, and mobile platforms. Given this track record of success, it seems clear that the innovation that is required is a piece of specialized hardware that can offload the heavy lifting of streaming video from the CPU and software. This technology exists and is now making its way to market thanks to recent advances in Field Programmable Gate Array (FPGA) technology.

Traditionally, FPGA technologies have not been part of the data center infrastructure, but this has begun to change in recent years. In fact, the current edition of the magazine ACM Queue, a publication of the Association for Computing Machinery, features "FPGAs In Data Centers" as its feature article. Early work on implementing FPGAs in the data center was pioneered by Microsoft which, with an effort called Project Catapult, proved that massive increases in the performance of web queries (for the Bing search engine) could be obtained by offloading the workload from CPUs and software to FPGAs. The effort was so successful that Microsoft now deploys FPGAs as part of its standard cloud server architecture. To address the needs of the streaming video industry, an ideal solution would be to offload video streaming to a specialized FPGA, which isn't a new idea, but something that's just now being seriously talked about. This would essentially mean implementing the functionality of an entire streaming video server on a single chip, which is both possible and highly advantageous to do.

Some of those in the industry who know way more than me when it comes to FPGA technologies have suggested that new offerings in the market, from companies like Hellastorm and others, have the potential to change the way video is distributed. Their solution in particular takes the core functionality of an entire streaming video server and implements it on a chip the company calls the "Stream Processing Unit," which eliminates software layers from the data path. The result is that they can offload the heavy lifting of the streaming video workload, freeing up valuable CPU resources to improve the performance of other tasks and dramatically increasing streaming video performance.

Additionally, if the use case requires only streaming video functionality, the CPU, OS, RAM, and much of the other circuitry normally required to stream video can be eliminated entirely, saving significant amounts of electricity. Going forward, the streaming video functionality can follow the hardware performance curve and benefit from improvements in FPGA technologies, just as the prior generation of streaming video servers benefited from improvements in CPU technologies. The company claims that its Stream Processing Unit technology will keep pace with advances in networking speeds such that the streaming video being delivered from the platform will fully saturate the network connection, be it 10Gbps, 100Gbps, or even 400Gbps in the future.
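
To put those link speeds in perspective, here is a simple sketch of how many concurrent viewers a fully saturated link could serve; the per-stream bitrates are my own illustrative assumptions, not figures from the article or the vendor.

```python
# Rough illustration of how many concurrent viewers a single, fully
# saturated network link could serve. The per-stream bitrates below are
# illustrative assumptions, not vendor-supplied figures.

LINK_SPEEDS_GBPS = [10, 100, 400]
BITRATES_MBPS = {"720p": 3, "1080p": 5, "4K": 15}

for link_gbps in LINK_SPEEDS_GBPS:
    link_mbps = link_gbps * 1000
    streams = {label: link_mbps // mbps for label, mbps in BITRATES_MBPS.items()}
    print(f"{link_gbps} Gbps link -> concurrent streams: {streams}")
```

At an assumed 5 Mbps for a 1080p stream, for example, a fully saturated 100Gbps port works out to roughly 20,000 concurrent viewers from a single box, which is why saturating the network connection matters so much.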

The dual benefits of greater capacity delivered with dramatically less power consumption make video stream offload attractive even before one considers the substantial benefits that can be obtained by freeing up CPU resources to provide other network services. Hardware offload of streaming video, with its ability to rapidly scale in performance over the next few years, appears to be the best way to address the looming capacity gap. Some don't want to invest in hardware because they feel it has too high a CAPEX requirement, but based on what we have seen in the market, and from speaking with those who are tasked with dealing with the massive influx of video traffic on the networks, video compression alone isn't going to solve the capacity gap problem. I am interested to hear what others think can be done to solve the problem in the market, so feel free to leave your comments below.

Review: AT&T’s WatchTV Live Streaming Offering Works Well, But Primarily A Way To Drive Wireless Subscriptions

I recently got hands-on with AT&T's new WatchTV live streaming service and overall, it works well. It's simple to use, the quality of the video was good on all the devices I tested and the interface is easy to navigate. It's basically a stripped-down version of AT&T's more expensive DirecTV Now live streaming service, with no live sports, no DVR, a limited selection of 31 channels and only one concurrent stream allowed at a time. WatchTV costs just $15 a month, or comes free with AT&T's wireless "Unlimited & More" or "Unlimited & More Premium" plans. Although WatchTV doesn't carry 24-hour sports channels, it does carry channels (like TNT) that occasionally air some sports. AT&T said that when live sports air on some of those stations, they will not black out the sports content.

AT&T says that WatchTV was created as a no-frills, "skinnier" streaming option for customers who just want the basics, but it's really a way for AT&T to try to drive more consumers to sign up for its wireless plans, giving it an upsell over the competition. I don't expect many consumers to sign up for the WatchTV service as a stand-alone offering, due to the limited content being offered, and AT&T said it has no plans for additional content options or "bolt-ons" for WatchTV. This makes sense since customers wanting additional functionality and more content options can simply sign up for DirecTV Now, at $40 a month.

If you're already an AT&T wireless subscriber with one of the plans WatchTV comes with, you're getting a limited amount of live content at no additional cost. And if you don't watch live sports and only watch content from stations like TBS, TNT, A&E, AMC, Food Network, BBC, CNN, IFC, TLC, History, HGTV, Discovery and a few others, then you are getting a great deal at only $15 a month.

Announcing The Two-Day Streaming Summit at NAB Show New York: Call For Speakers Now Open

Following the successful launch of my new Streaming Summit event at the April NAB Show in Las Vegas, I am excited to announce that we are expanding the Streaming Summit (www.nabstreamingsummit.com) to two full days at the NAB Show New York, taking place October 17-18. This is the start of a new focus at the show, with dedicated tracks on how to package, monetize and distribute online video – a show for the streaming media industry that will cover the entire streaming stack and all the technologies and platforms that power today's streaming services.

The Summit will be two full days with over 75 speakers covering monetization and technical topics, along with networking opportunities. The format will consist of fireside chats, technical best practices presentations, round-table panels and a demo track to showcase some of the newest technologies and opportunities in the market. From technical topics like CMAF, AV1, blockchain, SRT, transcoding and SSAI to business topics like direct to consumer (D2C), content bundling, OTT monetization and video advertising models, the Streaming Summit will give attendees a holistic view of the entire online video ecosystem.

The call for speakers is now open and you can find all the details on submitting on the website. Some of the topics that will be covered include:

  • The Future of OTT and The Bundling of Content
  • Best Practices for Deploying Server Side Ad Insertion
  • HEVC, AV1 and The Future of Video Codecs
  • How to Build A Robust and Nimble Video Stack
  • Monetizing Video Content Direct-to-Consumer
  • WebRTC and Low Latency Live Streaming Deployments
  • Video Advertising: What’s Working and What Needs to Be Fixed
  • Best Practices for Streaming Live to Millions
  • SRT: Optimizing Streaming Performance Across the Internet
  • Blockchain Based Video Platforms
  • CMAF: Reducing Packaging, Storage and Delivery Costs
  • QoE: Measuring Latency, Buffering and The User Experience
  • The Challenge in Monetizing Live OTT Services
  • Best Practices for Streaming Live on Social Platforms

This is the start of a new, dedicated focus at the NAB Show New York and an opportunity for individuals and companies to get involved. If you want to be involved, have ideas, or want to give me feedback on topics you want to see covered, I want to hear from you! My cell number is 917-523-4562 and you can call it anytime. My goal is to help provide the industry with a place we can all come together to highlight, discuss and debate what is taking place in the streaming media industry. I am looking for moderators, pitches and ideas, and we are also offering vendors the opportunity to get involved via extremely affordable sponsorships. Even ticket prices to attend the event are under $700 for both days, and I am personally giving out a discount code to help showcase all the new content coming to the NAB Show New York.

With about 75 speaking spots, I won’t be able to fulfill everyone’s request to be involved. I can’t stress enough how important it is to get a speaking submission in quickly. Once spots are gone I can’t open up new ones or add more slots to the program. So please visit the speaking submission page or call me (917-523-4562) at any time.

In addition, in conjunction with the NAB Show New York, I will also be launching a new summit, taking place on October 15th, that will focus on content distribution at the edge – a focused one-day event that will explore all that is taking place with CDN, WAF, DDoS, DNS and other services that are disrupting the traditional cloud platforms. The show will highlight how companies continue to search for improvements to the end-user experience with new decentralized services that require getting closer to the eyeballs. I'll have more details on that shortly, but feel free to reach out to me in the meantime if you'd like to learn more.

Instagram TV Launch Is A Mess With No Strategy and Poor Content

Last week Instagram launched Instagram TV (IGTV), describing it as a "long form video app" that lets anyone upload videos up to 10 minutes in length, or up to 60 minutes for verified accounts. The whole point of IGTV is to have longer-form content and yet most of the content on IGTV so far is very short. Scrolling through the top 142 videos recommended "for me" shows that 70% of them are under 3 minutes in length and many are less than 60 seconds. But even worse is the content itself. There are lots of videos of people fighting in the street, someone taking a shower, a guy dressed as a woman dancing, a brawl inside Target, and a video of someone pulling out a real gun and shooting someone in the back. Is this the kind of content IGTV really wants to highlight?

These videos sit alongside content from Oprah, Selena Gomez and other stars, and yet even their videos are super short. Oprah's video is just over a minute long, with other celebrities' videos being well under a minute. With so much content being under a minute in length, one has to ask: what's the point of IGTV when the regular Instagram app already allows videos of up to 60 seconds? It doesn't appear Instagram has a real strategy with IGTV, and I can't find any content partners they worked with to provide high-quality, long-form content at launch.

From a user experience standpoint, I haven't seen any way to customize the content, and whatever algorithm Instagram is using to recommend videos doesn't work. The recommendations being served up to me have no resemblance to the genres of content I would watch. There are also a lot of videos on IGTV in Spanish, Russian, Arabic, etc., yet I can't find any place to filter out content based on the language a user speaks.

Content is king, but apparently not on IGTV where short-form, crappy content is the norm.

Survey Of Over 400 Broadcasters/Publishers Using Server Side Ad Insertion For Live Highlights Technical Challenges

Server Side Ad Insertion (SSAI) is one of the hottest topics in the industry right now, and I'm tracking over 30 vendors in the market that say they offer some type of SSAI technology. To see what's really going on in the market, I conducted two surveys with over 400 broadcasters and publishers on their use of SSAI at both a business and a technical level. The surveys collected data on how they do ad buys across digital/TV, what their DRM needs are, how their CPMs and fill rates for live compare to VOD, how important frame accuracy is, and the metrics that matter most, along with a host of technical questions.

The results of the survey shed light on just how far SSAI still has to go to be considered a technology that broadcasters can rely on for performance and proper ad triggering. Lots of technical problems exist, especially getting SSAI platforms to work at true scale. I've included one slide with results from the survey, but all of the raw data is available for purchase. Please contact me for details.

The Challenges With Ultra-Low Latency Delivery For Real-Time Applications

For the past 20+ years, CDNs have dominated the content delivery space when it comes to delivering video on the Internet. Third-party CDNs focus on layers 4-7 of the OSI stack, which includes things like caching and TCP protocol optimization. But they still depend on the underlying routers of the providers they buy bandwidth from, so latency can still vary and is only as good as the underlying network. For ultra-low latency applications, layers 4-7 don't help, which is why many CDNs are looking at other technologies to assist them with delivering video with only a few hundred milliseconds of delay. For the delivery of real-time applications, there is a debate currently taking place within the industry as to whether or not you need a private core with optimal routing on layers 2-3 to truly succeed with low-latency delivery.

Historically there have been two different ecosystems in networking: private networks and the public internet. Private networks were used by corporations and enterprises to connect their headquarters, branch offices, data centers and retail locations. The public internet was used to reach consumers and have them connect to websites. In the past, these two ecosystems lived separately, one behind the firewall and the other beyond it.

With the rise of the cloud, the world changed. While Amazon deserves much of the credit for ushering in the era of cloud computing (and the demise of physical server data centers), it may have been Microsoft that triggered the tipping point when it shifted from client-server based Exchange to cloud-based Office 365. For all of the enterprises dependent on Microsoft software, the cloud was no longer something they could delay. Once they started to shift to the cloud, their entire architecture needed to change from hardware-based solutions to cloud-based solutions. The success of companies like Zscaler and Box shows this continuing trend.

Lately we've seen new vendors come to market, taking products from the private networking space and using them to try to solve problems with the public internet. As an example, last week Mode launched in the market with $24M in funding and an MPLS cloud, used in combination with SD-WAN and last-mile internet. MPLS is a 20-year-old technology used by enterprises to create private dedicated circuits between their branch offices (for example, Walmart connects all of its stores back to headquarters using dedicated MPLS circuits). Enterprises buy these lines because they have guaranteed bandwidth and QoS. MPLS costs 10x-20x the price of internet bandwidth, but you get guaranteed quality, which is why businesses still pay the difference: the latency variability on the internet is far higher.

However, the problem with MPLS is that it was built for the enterprise market. It is very rigid and inflexible and often takes months to provision. It was designed for a hardware-centric world, where you have fixed offices and you buy multi-year contracts to connect those offices together using fixed circuits. But what if you could make MPLS an elastic, cloud-like solution and bring the great qualities of MPLS over to the public internet?

While many vendors in the public internet space continue to evangelize that the public internet is great at everything, the reality is that it is not. If you are thinking in terms of "seconds" – like 2-6 seconds – then the public internet plus CDNs solve the problem. However, when you think in "milliseconds" – like the 30 milliseconds of maximum allowed jitter for real-time voice calls – the public internet is not nearly good enough. If it were, enterprises wouldn't pay billions of dollars a year, and growing, for private networks priced at up to 20x regular internet bandwidth. Instead of rehashing all of the issues, read the awesome blog post by the team over at Riot Games entitled "Fixing The Internet For Real Time Applications".

As many have pointed out, the current public internet has been tuned for a TCP/HTTPS world where web pages are the common payload. However, as we transition into an ultra-low latency world where the internet is being used for WebRTC, sockets, APIs, live video and voice, the public internet is definitely not a "good enough" solution. If you've ever complained about a terrible video call experience, you know this to be true. Sectors like gaming, financial trading, video conferencing, and live video will continue to leverage the public internet for transport, and the growth in ultra-low latency applications (IoT, AR, VR, real-time machine learning, rapid blockchain, etc.) will continue to make this problem worse.
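
For readers who want to see what "jitter" actually measures in this context, below is a minimal sketch of the interarrival jitter estimate defined in RFC 3550 and commonly used by RTP-based voice and video systems; the packet timings are made-up values purely for illustration.

```python
# Minimal sketch of the RFC 3550 interarrival jitter estimate used by
# RTP-based voice/video systems. Timestamps are made-up values (seconds)
# purely for illustration.

def interarrival_jitter(send_times, recv_times):
    """Return the running RTP-style jitter estimate in milliseconds."""
    jitter = 0.0
    prev_transit = None
    for sent, received in zip(send_times, recv_times):
        transit = received - sent
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            # Exponential smoothing with gain 1/16, per RFC 3550
            jitter += (d - jitter) / 16
        prev_transit = transit
    return jitter * 1000  # convert seconds to milliseconds

# Hypothetical packets sent every 20 ms with variable one-way network delay
send_times = [i * 0.020 for i in range(6)]
delays = [0.040, 0.055, 0.042, 0.080, 0.045, 0.050]
recv_times = [s + d for s, d in zip(send_times, delays)]
print(f"Estimated jitter: {interarrival_jitter(send_times, recv_times):.1f} ms")
```

Against a real-time budget of roughly 30 milliseconds, it doesn't take much variability on a congested public internet path before a call starts to degrade.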

When I talk to CIOs in charge of both sides of the house – consumer/end-user facing as well as in-house employee facing – they have said they are interested in a way to get MPLS-level QoS and reliability for millions of their end users connecting back to their applications (gaming, financial trading, voice calls, video calls). But it has to be super flexible, just like the cloud, so they have total control over the network – create it, manage it, get QoS, define policy – and have it be way more flexible than traditional hardware-defined networks. It can't be a black box. The customer has to be able to see everything that's happening, so they have real transparency into traffic and flows, and it has to be elastic, growing or shrinking depending on their needs. This is the service Mode has just launched in the market, which you can read more about here. I'd be interested to hear from those who know a lot more about this stuff than I do what they think of the idea.