Is WebRTC The Future Technology For Low-Latency Live Video Streaming?

There has been a lot of discussion about low-latency streaming and how Adobe’s upcoming end of support for Flash will impact low-latency workflows. RTMP media delivery had become the standard for many of these workflows. However, when web browsers began deprecating support for Flash, CDNs started dropping RTMP streaming capabilities, and Adobe announced it would stop updating and distributing the Flash Player at the end of 2020, it became clear the industry needed a new solution.

As I have previously noted, HLS/DASH/Smooth and other HTTP streaming variants are the future. They all offer scalable delivery of on-demand content using standard codecs that are widely supported in most end-point devices. These adaptive segmented streaming formats use standard HTTP to deliver content in a variety of bitrates or spatial resolutions. By implementing smaller chunk sizes that require less buffering, stream delays can be significantly reduced. However, when chunk sizes get too small, the extra HTTP requests create additional overhead and raise the potential for higher rebuffering rates.
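As a back-of-the-envelope illustration (my own sketch, not a vendor's numbers): players typically buffer a few segments before starting playback, so the buffering delay scales with segment duration, while the HTTP request rate scales inversely with it. The three-segment buffer below is an assumption for illustration.

```python
def segmented_latency(segment_seconds: float, buffered_segments: int = 3) -> float:
    """Approximate delay contributed by client buffering: the player
    waits for `buffered_segments` full segments before starting playback."""
    return segment_seconds * buffered_segments

def requests_per_minute(segment_seconds: float) -> float:
    """Each segment costs one HTTP request, so shorter segments mean
    proportionally more request overhead."""
    return 60.0 / segment_seconds

# Classic 6-second segments: ~18 s of buffering delay, 10 requests/min.
print(segmented_latency(6.0), requests_per_minute(6.0))  # 18.0 10.0
# 1-second segments: ~3 s of delay, but 60 requests/min of overhead.
print(segmented_latency(1.0), requests_per_minute(1.0))  # 3.0 60.0
```

This is why simply shrinking segments only gets you so far: the latency floor drops linearly, but the per-request overhead (and the chance of a late segment causing a rebuffer) climbs just as fast.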

The CDN Limelight is betting big on WebRTC and has implemented acceleration techniques that allow streaming providers to reduce chunk sizes to the point where HLS and DASH traffic can be delivered with latency as low as four seconds. While this is a big improvement for on-demand workflows, it still isn’t fast enough to replace Flash in live streaming workflows for live sports, gaming, and online gambling, all of which need low-latency streaming.

To successfully replace Flash and still provide low-latency streaming, the industry needs a solution that provides the lowest possible latency from capture to client. It must also use a standard transmission protocol that does not require any special network configurations or optimizations, scales to support millions of simultaneous viewers using standard web clients and browsers, and doesn’t need any special plug-ins. Finally, the solution must have built-in capabilities for secure streaming. It’s a tall order.

Various streaming and CDN vendors are taking different approaches to solving this challenge. Some vendors have begun testing novel implementations of traditional chunk streaming formats such as HLS with very small segment sizes, but these techniques require specialized client software to support this non-standard implementation. Other vendors are pursuing solutions that use UDP for low-latency streaming, but they require specialized plug-ins to be installed on the clients.

WebRTC was originally developed by Google, which released it as an open-source solution for browser-based realtime communication. It uses UDP to stream media without the need to create discrete media segments, which delivers a consistently low latency to all clients. With Apple adding WebRTC support in the Safari 11 release, it is now natively supported by all major browsers, including Google Chrome, Firefox, and Microsoft Edge. The WebRTC protocol was designed to be flexible in its implementation, allowing companies to experiment with solutions geared toward one-to-one, one-to-few, and even one-to-millions delivery. Plus, it encrypts traffic in transit (via DTLS and SRTP) to ensure the security of content.

In addition to low-latency streaming, WebRTC offers a realtime two-way data channel that can be used to send and receive data streams. This two-way data technology offers some interesting possibilities for how online streaming can now become a more interactive experience. Viewers can vote in realtime on what song they would like a performer to sing during a live concert. Sports fans can receive customized live sports statistics during a game or match. Live online shopping channels can display customized offers or pricing for different customers. The possibilities seem like they could profoundly change the live video experience.
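To make the interactivity idea concrete, here is a minimal sketch (mine, not from any vendor) of how a live-concert vote might be serialized for transport over a WebRTC data channel. The message shape and the `encode_vote`/`decode_vote` helpers are hypothetical; in a browser, the encoded string would simply be passed to `RTCDataChannel.send()`.

```python
import json

def encode_vote(viewer_id: str, song: str) -> str:
    """Serialize a hypothetical live-vote message as a JSON text frame."""
    return json.dumps({"type": "vote", "viewer": viewer_id, "choice": song})

def decode_vote(frame: str) -> dict:
    """Parse an incoming data-channel frame, rejecting unknown message types."""
    msg = json.loads(frame)
    if msg.get("type") != "vote":
        raise ValueError("unexpected message type")
    return msg

frame = encode_vote("viewer-42", "Song B")
print(decode_vote(frame)["choice"])  # Song B
```

Because the data channel is bidirectional, the same pattern works in reverse: the broadcaster can push per-viewer payloads (sports statistics, personalized offers) down the channel alongside the media.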

While the benefits of WebRTC are promising, there is no guarantee it will win out. Other protocol-based solutions on the market focus on being mobile-optimized, with advanced packet-loss concealment and recovery capabilities, so there are alternatives. But as more live content is streamed this year, and broadcasters and content owners continue to demand low-latency solutions, the industry is going to need to settle on a technology. Let me know in the comments what technology you think will win out.

  • We see a new wave of protocols based on the WebSockets/MSE approach. It avoids the disadvantages of WebRTC, as it is not based on unreliable UDP. Using WebSockets as a foundation, developers can transfer any media data for playback via the MSE engine in modern browsers.
    Our company has developed SLDP, which uses the described approach. SLDP solves the problem of low latency (giving sub-second delay) and reliability. It also provides flexibility in codec usage (limited only by target-platform playback support) and ABR streaming (fast switching between resolutions).
    There are similar activities by other market participants, so we’ll probably see similar technologies introduced soon.

    • Andrey Okunev

      The WebSocket-based approach does make sense. However, I wouldn’t declare UDP unreliable or claim that WebRTC has disadvantages that make it unusable in video production or content delivery.

      The beauty of WebRTC-based approaches is that most of the requirements are already implemented in the technology: vendors can focus on what’s missing for a particular use case. Customers have already provided great feedback about our own solution and tested it in real-life environments.

      The downsides of SLDP, however, seem to be the following:

      – Latency must be larger than that of WebRTC (specific values have not been mentioned).
      – The WebSocket protocol requires a server to be deployed (WebRTC is P2P).
      – There seems to be no support for adaptive bit rate.

      • Andrey,

        Thanks for sharing your concerns.
        Regarding the downsides:
        – Latency is about the same, as it’s limited only by GOP size.
        – Yes, it’s a server-client technology, as we’re targeting broadcast streaming rather than P2P.
        – SLDP supports ABR on both the server and client sides.

  • No Name

    MPEG-DASH’s support for CMAF (including sub-segments/chunks) and leveraging HTTP + QUIC have shown sub-second (in some demos, even a few hundred milliseconds) end-to-end delay. I wonder if this will be the preferred solution for many. That said, the two-way data channel has to be out-of-band and synchronized with the AV, but I believe that is a problem for the WebRTC data channel too.