A Detailed Look At How Net Insight Syncs Live Streams Across Devices

With last week’s live NFL stream on Twitter, we were all once again reminded of the delay involved in streaming over the Internet. And as more premium video goes online, and more second-screen interactivity is involved, syncing the video is going to become crucial. Two months ago I highlighted in a blog post how Net Insight solves the live sync problem, and I’ve been getting a lot of questions about how exactly it works. So I spent some time with the company, looking deeper at their technology in an effort to understand it better.

Net Insight provides a virtualized software solution that can be deployed over a private, public or mixed cloud. The terminating part of the solution is the client SDK, which has pre-built support for the most popular connected iOS and Android devices. The SDK contains the media distribution termination part, but also media decoding and rendering, i.e. a player. The client SDK enables application developers to add next-generation TV to their apps, cross-platform, with a uniform API.

Net Insight’s product is called Sye, and it uses an optimized streaming protocol made for real-time applications. Sye operates directly on the video stream, avoiding the inherent latency incurred by segment packaging. The benefit of using Sye is higher utilization of the distribution network, which translates directly into a better viewing experience by providing and maintaining a higher viewing profile for longer than legacy HTTP streaming, where TCP inherently forces back-offs and slow starts when packet loss is encountered. Sye uses an enhanced packet recovery protocol as the first line of defense against packet loss. When there are longer periods of bandwidth degradation, an ABR profile level change is enforced, with one unique advantage: knowledge of the currently available client-side bandwidth. With this server-side, network-aware function, restoring the highest possible ABR quality level is as fast as the IDR interval defined in the transcoder.
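
Net Insight hasn’t published Sye’s internals, but the server-side upshift logic described above can be sketched in a few lines of Python. Everything below (the profile ladder, the safety margin, the function names) is my own illustrative assumption, not the product’s API; it simply shows why knowing the client-side bandwidth on the server lets you restore the top profile in one step rather than ramping up gradually, the way client-driven HTTP ABR does.

```python
# Illustrative sketch only: Sye's internals are not public, and the ladder,
# margin and function names here are my own assumptions.

ABR_LADDER_KBPS = [400, 1200, 2500, 4500]   # hypothetical profile bitrates
SAFETY_MARGIN = 0.85                        # keep ~15% headroom below measured bandwidth

def pick_profile(client_bandwidth_kbps: float) -> int:
    """Highest profile that fits within the measured client-side bandwidth."""
    usable = client_bandwidth_kbps * SAFETY_MARGIN
    candidates = [p for p in ABR_LADDER_KBPS if p <= usable]
    return max(candidates) if candidates else ABR_LADDER_KBPS[0]

def on_bandwidth_report(client_bandwidth_kbps: float, current_kbps: int) -> int:
    """Server-side reaction to a client bandwidth report."""
    target = pick_profile(client_bandwidth_kbps)
    if target != current_kbps:
        # The switch itself takes effect at the next IDR frame from the transcoder.
        print(f"switch {current_kbps} -> {target} kbps at the next IDR frame")
    return target

# Example: the client's available bandwidth recovers from ~1.5 Mbps to 6 Mbps.
on_bandwidth_report(6000, current_kbps=1200)   # jumps straight to 4500 kbps
```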

The receiving part of the system consists of a client binary to be included in existing apps, handling resource requests, network termination, media decoding and rendering. The back-end server components are split into two functions: a data plane and a control plane. Together they are responsible for distributing streaming media as an overlay network on top of any type of underlying network infrastructure.

The control plane functions consist of a controller, front-end and front-end balancer. These functions are responsible for client resource requests, load balancing of data and media, configuration, provisioning, alarming, monitoring and metrics, all presented and handled through a dashboard. The system is a pure software solution which Net Insight says is “hyper scalable” and therefore deployable on bare metal, in a fully virtualized environment, or in a hybrid physical/virtual approach. While a fully virtualized, cloud-based deployment is supported, the egress of the data plane will utilize most of the network I/O provided, so dedicated I/O is suggested for the best and most predictable performance.

While a lot of what goes on in the backend to make live stream syncing possible is complex, this is one of those solutions where seeing is believing. I’ve seen the solution in person, and it’s amazing how well it works. Net Insight’s ability to keep the same stream in sync across multiple devices, and in sync with the broadcast TV feed, is impressive. Premium content owners that stream live events, especially sports, are going to have to address the sync problem soon. It’s becoming too much of a problem when consumers purposely stay away from the social element of a live event just so their experience isn’t ruined. Synced live streaming is the next big thing.

Here’s A List Of Best Practices and Tips For Successful Webcasting

I’ve been getting a lot of questions lately about tips and best practices for putting together a good webcast and what pitfalls to look out for. While live webcasts of sports and entertainment events seem to get all the exposure, more live webcasts take place each day in the enterprise market than in any other sector. But no matter the vertical or use case, the same skill set applies. The way I see it, there are two different sets of skills involved: the soft skills and the hard skills. That’s not to say the soft skills are easy, but you need both to be successful, and as an industry we will always be evolving the medium. There is always something new to learn, tweak, or implement to get the most out of a webcast.

So with that in mind, I wanted to outline some of the little things that make the difference between a good webcast and a great webcast. Thanks to Kaltura for sharing with me the questions they get most often from customers.

The Devil is in the (Non-technical) Details

  • Test your delivery outside the office. You’re already testing your equipment, right? Make sure your tests include mobile. These days, it’s a good bet that at least some of your audience is going to tune in while on the go; make sure they have a quality experience, too.
  • Promote internal webcasts as much as external ones. Yes, your employees will dutifully log in because they have to. But if you put the same effort into engaging employees as you do into engaging customers, they’ll be a lot more receptive to your message. Attractive invites, a great title (not just “Q4 Forecast”), short and punchy slides, and a strong call to action will have just as much of a positive effect on your internal audience as an external audience.
  • Think about the experience. There are a lot of cool tools on the market these days that can make webcasts a little more interactive. Find a platform that will keep your viewers engaged.
  • Remember your asynchronous audience. DVR and catch-up aren’t just for TV. Be realistic: people will be late, people will get distracted, people will need to watch this again later. Make it easy for them. Ideally, make sure the recording is easy to find, search, and navigate. You’ve spent this much time creating this content; increase the ROI by extending its shelf life.
  • Get feedback. You’re going to do this again; specifically reach out for feedback so you can improve.

Getting into the Nitty-Gritty
What about the actual technical requirements? It turns out (unsurprisingly) that there is no one-size-fits-all set of specs; a lot of best practices vary depending on what kind of event you want to produce. I talked to some experts, who offered a list of points to consider.

  • Physical venue. Where are you going to put everything? Make sure you have a physical schematic and plot it all out. It’s not enough to just plot your lighting design. Make sure you know exactly how much cabling you’ll need to connect the cameras to the mixer, for example. (And make sure it’s the right kind of cable for the distance you’re trying to cover; HDMI has length limitations, whereas 3G-SDI can be run 250 feet or more without issue. A truly massive venue might call for fiber.) While you’re at it, check on your power requirements; all those lights can add up in a hurry. If you’re not shooting in your own facility, you have additional issues to consider. Check on load-in and load-out restrictions as well as elevator access. And don’t forget to see if the venue has insurance or union labor requirements.
  • Network. Consider your uplink speed first. Are you doing a simple single feed, or multi-bitrate feeds from your encoder to both a primary and a backup publishing point? You’ll want to reserve an uplink capable of 30-40% more output than what you actually intend to send (see the sizing sketch after this list). Make sure to test the uplink speeds in the venue itself, and check latency while you’re at it. You’ll also want to ask for a static IP; you don’t want to hook up your encoder and then get kicked off the network. This is particularly critical when using a hardware encoder that doesn’t have a monitor attached, since if you manage the encoder through a browser, you can’t afford to lose contact. If you’re not on your home network, make sure you can get open access without authentication. Again, a hardware encoder isn’t going to be able to interact with a login screen.
  • AV equipment and team. Here, you have a lot of decisions to make. How many cameras are you planning on? How many speakers onstage at any given time? Will you use fixed or wireless microphones? If wireless, are you planning to just pin lav mics to someone’s tie, or are you going to be taping equipment onto people’s skin? Where are you placing your lights—from the back or up against the stage? Are you going to project from the front or the back, and with a dual projector or a single? Each decision will have its own ramifications and precautions. For example, if you’re using wireless microphones, make sure that no one else in the building is using the same frequency.
  • Output. Are you planning to use interlaced or progressive output to your mixer? For broadcast, the standard has been to take interlaced output. But for the web, viewers may be watching on relatively high-resolution screens, which means they may be able to see that interlacing. In that case, you may want to consider progressive output over an SDI port.
  • Encoder. The most common question is which encoder is the best. It depends on circumstances—how mission-critical the webcast is, the budget, how many bitrates you want, and what inputs you need. Software-based encoders are relatively inexpensive, and generally fine. They can handle multi-bitrate output, but SDI requires a third-party capture card, which can add complications. The big problem is reliability and redundancy. If your OS crashes, you’re out of luck. The lower level of hardware encoder is prosumer, like Teradek. These offer single bitrates, with no power supply redundancy and relatively few input options. These are particularly good if you need a mobile unit with a roaming camera. The top of the line encoders, like those from Elemental and Harmonic, are incredibly reliable, with redundant power supplies and multi-bitrate content in whatever output you might want. They’re also expensive, rack-mounted and can require cooling. Your needs will determine which encoder is best for you.
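
To make the 30-40% uplink headroom guidance concrete, here’s a quick back-of-envelope sizing sketch. The bitrate ladder and audio overhead are made-up example numbers, and the 1.35 multiplier simply sits in the middle of the recommended range.

```python
# Back-of-envelope uplink sizing for a multi-bitrate contribution feed.
# The ladder and audio overhead are hypothetical example values; the 1.35
# factor reflects the 30-40% headroom recommended above.

VIDEO_LADDER_MBPS = [0.8, 1.5, 3.0, 5.0]   # example renditions pushed from the encoder
AUDIO_MBPS = 0.128                         # per-rendition audio
HEADROOM = 1.35                            # 35% headroom, middle of the 30-40% range

def required_uplink_mbps(publish_points: int = 2) -> float:
    """Uplink needed to push every rendition to primary + backup publishing points."""
    per_point = sum(VIDEO_LADDER_MBPS) + AUDIO_MBPS * len(VIDEO_LADDER_MBPS)
    return per_point * publish_points * HEADROOM

print(f"Reserve at least {required_uplink_mbps():.1f} Mbps of upload")   # ~29.2 Mbps
```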

If you’re looking for significantly more details, Kaltura recently held an interesting webinar on the topic where they gave out a lot of really good tips and best-practices, which you can access for free. You can find the recording here. If you have specific webcasting questions, put them in the comments section below and I’m sure others will help answer them.

Twitter’s NFL Stream Looking Good, It’s Business As Usual For MLBAM

Updated 9/16: The Twitter NFL numbers are out: 243,000 simultaneous streams, an average viewing length of 22 minutes, and 2.1M unique viewers in total. A very, very small event.

Twitter’s live stream of the NFL game tonight is looking very good, with no signs of any quality problems. I’ve tested the stream on ten devices including iPhones, iPads, Apple TV, Xbox, Fire TV, a MacBook and various Android hardware. I’m seeing a max bitrate of almost 3Mbps on the MacBook and Apple TV. So far there have been no buffering issues, although the sync on some of the devices is a bit off, with each stream a few seconds ahead of or behind the others. Overall for me, the streams are about 10 seconds behind the TV broadcast.

The fact that there aren’t any streaming problems is really no surprise, because Twitter hired Major League Baseball Advanced Media (MLBAM) to manage the stream, and for them, it’s just another day at the office. MLBAM is the best at what they do, having executed live events for over a decade. The stream is being delivered by Akamai and Level 3 [Updated: and Limelight Networks] and while the companies aren’t discussing traffic numbers, I estimate that at peak they are pushing around 4-5Tbps. We will have to wait to see if Twitter puts out simultaneous stream count numbers after the event, but I would be very surprised if it’s above 2M.
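
For what it’s worth, a peak-traffic estimate like this is just concurrent streams multiplied by an average bitrate. The inputs below are my own round-number assumptions, not figures from Twitter, MLBAM or the CDNs; they only show how an estimate in the 4-5Tbps range is arrived at.

```python
# Rough math behind a peak-traffic estimate in the 4-5 Tbps range.
# Both inputs are my own round-number assumptions, not reported figures.

concurrent_streams = 2_000_000   # assumed peak simultaneous streams
avg_bitrate_mbps = 2.25          # assumed average bitrate across devices and profiles

peak_tbps = concurrent_streams * avg_bitrate_mbps / 1_000_000
print(f"~{peak_tbps:.1f} Tbps at peak")   # ~4.5 Tbps
```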

Akamai Slashing Media Pricing In Effort To Fill Network, Won’t Fix Their Underlying Problem

With Akamai’s top six media customers having moved a large percentage of their traffic to their own in-house CDNs over the past 18 months, Akamai has been scrambling to fill the excess capacity left on their network. Over the past few weeks I have been tracking media pricing very closely and now have enough data points, directly from customers and RFPs, to see just how much Akamai is undercutting Level 3, Amazon, Verizon and Limelight on CDN deals.

On average, Akamai is coming in about 15% cheaper when trying to win new CDN business or keep the traffic it has. The lowest price I have seen Akamai quoting is $0.002 per GB delivered. That is the lowest pricing I have ever seen on any CDN deal. That price is for very large customers, but even for small deals where Level 3 is at $0.005 and Limelight is at $0.0045, Akamai has come in at $0.003. Akamai is making it clear, with renewals and with new deals, that they want to keep or win the traffic. And while lower pricing might help them with some RFPs, I see deals where they don’t win, even at the lower price. And many times when they do, it’s harder for them to keep all the business they have, even with lower pricing, because many content owners are now using a multi-CDN strategy, sharing traffic amongst multiple CDNs. So in many cases, even when Akamai keeps a customer, they are keeping less traffic, at a lower price point.

While selling on price alone has the potential to give Akamai a little bump in revenue if they can grab some more market share, it’s not a long-term strategy. When you can no longer sell media delivery on metrics like performance and have to win business based solely on the lowest price, that’s a recipe for lower margins. In the first six months of this year, Akamai’s margins were down 150 basis points. Value-added services with healthy margins can make up for a lower-margin service like CDN, but Akamai’s year-over-year growth in their performance and security business has also slowed.

Akamai, and all CDNs for that matter, can also get burned if they offer too low a price and then realize a large percentage of the customer’s traffic is coming from regions like India, China or Australia. In those regions it costs them substantially more to deliver the traffic, and when they give a customer CDN pricing, it’s a number picked based on blended traffic from a global audience. Get that mix wrong and your costs are higher, for business you have already quoted at such a low price. While we don’t know Akamai’s true cost to deliver content since they don’t break out CapEx dollars, I guarantee that Akamai is not making money on a CDN contract priced at $0.002 per GB. That’s not a deal that is profitable to the company on its own. Maybe it has a bigger overall impact based on who the customer is, or it gets them other business, but many of the deals I am seeing are for straight CDN, nothing else. Akamai is sacrificing margins on their media business just to add traffic to their network. That’s not healthy for any business.
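
The blended-pricing risk is easiest to see with a toy example. The regional costs and traffic mixes below are purely illustrative numbers of my own, but they show how a deal quoted against an expected global mix can go underwater when more of the audience than planned shows up in expensive regions.

```python
# Toy illustration of blended delivery cost vs. a quoted per-GB price.
# Regional costs and traffic mixes are purely illustrative, not any CDN's real numbers.

REGIONAL_COST_PER_GB = {
    "north_america": 0.0010,
    "europe": 0.0012,
    "apac": 0.0045,
    "india": 0.0060,
}

def blended_cost(traffic_mix: dict) -> float:
    """Weighted-average delivery cost per GB for a given regional traffic mix."""
    return sum(REGIONAL_COST_PER_GB[region] * share for region, share in traffic_mix.items())

quoted_price = 0.002   # the kind of aggressive per-GB quote discussed above

expected_mix = {"north_america": 0.70, "europe": 0.20, "apac": 0.07, "india": 0.03}
actual_mix   = {"north_america": 0.40, "europe": 0.20, "apac": 0.25, "india": 0.15}

print(f"expected cost/GB: ${blended_cost(expected_mix):.4f}")   # ~$0.0015, under the quote
print(f"actual cost/GB:   ${blended_cost(actual_mix):.4f}")     # ~$0.0027, over the quote
```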

Akamai is also facing a massive CapEx problem, where they have to spend a lot of money to constantly refresh their network and the number of servers they have. Akamai has said they have 200,000 edge servers and Limelight has said they have 10,000 edge servers. Limelight has about 1/3 the capacity of Akamai, but spends far less in CapEx. In the first six months of this year, Akamai spent $160M in CapEx; Limelight spent $5M. Even if only half of Akamai’s CapEx is directed at their media business, that’s $80M, or 16 times Limelight’s spend. Based on those numbers and other data I have, my estimate is that it costs Akamai about $5M in CapEx to add 1Tbps of capacity to their network. Compare that to Limelight and Level 3, where I estimate it costs them about $1M in CapEx per Tbps of capacity, in the U.S. or Europe.
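
For those keeping score, the spend comparison above is simple division; the only assumption of mine is the 50% share of Akamai’s CapEx going to media.

```python
# H1 2016 CapEx figures as cited above; the 50% media allocation is my assumption.
akamai_h1_capex_m = 160
limelight_h1_capex_m = 5
media_share = 0.5

akamai_media_capex_m = akamai_h1_capex_m * media_share   # $80M
print(akamai_media_capex_m / limelight_h1_capex_m)       # 16.0x Limelight's H1 spend
```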

Highlighting this point even further, on Limelight’s last earnings call the company talked about their CapEx costs and capacity compared to Akamai. Limelight has less than 1/20th the infrastructure to deliver 1/5th the revenue of Akamai. We don’t know the exact capacity of Akamai’s network, but Limelight’s current egress capability is just shy of 15Tbps, and I think Akamai has said they hit a record 40Tbps. Also, Limelight added almost as much capacity in the first half of 2016 as they did in all of 2015, while spending $7.6M less in CapEx year-to-date. Meanwhile, Akamai’s CapEx costs are accelerating while traffic growth has slowed and revenue growth is declining.

Akamai has a short- and long-term problem with their media business and really needs to decide if they want to be in a business that is so volatile, with little to no margin. You have a commoditized service offering, customers that now compete against you, cloud providers with more scale and more ways to make money, competitors that own and operate the network, and others with distinct CapEx advantages. Akamai would be better off getting out of the media business over time and putting all of their efforts into their web performance and security product lines.

On a side note, Twitter’s NFL stream, taking place Thursday, Sept 15th, will be delivered by Akamai and Level 3, and I do not expect it to have a large simultaneous audience; my estimate is under 2M simultaneous streams. Also, the vast majority of Apple’s iOS 10 update, which came out on Tuesday, is being delivered by Apple’s in-house CDN, with only a small percentage of the overall traffic going to third-party CDN providers.

Conviva Drops Patent Lawsuit Against Nice People At Work

In March, QoE platform provider Conviva filed a patent infringement lawsuit against Nice People at Work, claiming that NPAW was willfully infringing on a number of key patents (8,874,725; 9,100,288; 9,246,965) which relate to the computation and summarization of video streaming metrics as well as delivery resource selection and switching. Last Thursday, the judge in the case heard oral arguments on all of the motions, and two days later, before the court had a chance to render its decision, Conviva decided to dismiss the case and dropped all of its claims, putting an end to the litigation.

I haven’t had a chance to speak to either of the companies involved as they are at IBC this week, but whatever the reason for the change, it’s a smart move for Conviva. Patent suits rarely benefit either company, and while Conviva is larger than NPAW, both companies are small, don’t have the resources for a long, drawn-out lawsuit, and their time is better spent focusing on their customers. The best way to compete with a new competitor in the market isn’t in the courts, it’s with your product. Make it better than your competitors’ and it will stand on its own.

The market for QoE-related services is starting to get very crowded. Along with well-known vendors like Conviva, Cedexis and Adobe, there are a lot of new entrants including NPAW, Hola, Touchstream, Tektronix, Ericsson, VMC (owned by Volt Information Sciences), Interferex and others. And while not all of these vendors are doing the same thing, or would be an apples-to-apples comparison in each case, they are all working on improving QoE in one way or another. You will also see some of the online video platform (OVP) providers and CDNs roll out their own QoE solutions by the end of this year.

Thursday Webinar: Delivering OTT Experiences that Keep Audiences Hooked

Thursday at 2pm ET, I’ll be moderating a StreamingMedia.com webinar on the topic of “Delivering OTT Experiences that Keep Audiences Hooked.” In the hypercompetitive OTT market, where success hinges on a provider’s ability to attract and retain a loyal audience, sharing quality content is just the start. To really stand out from the crowd, you need to tackle a range of challenges that are critical to providing the great viewing experience your audience expects. Join us for an expert discussion on issues you can’t ignore, including:

  • Delivering content to diverse devices without sacrificing quality
  • Ad insertion technologies that optimize monetization
  • Security strategies that protect your content and your viewers
  • How to meet scalability and global reach requirements
  • The surprising role storage can play in reducing latency

This presentation will include case studies of OTT platforms that are successfully providing fast, reliable, secure experiences that keep audiences engaged and coming back for more.

REGISTER NOW to join us for this FREE Live web event or the on-demand recording after September 15th.
