How Mobile App Acceleration SDKs Are Replacing TCP-Based Approaches

The TCP/IP protocol, the communication language of the web and the foundation of how we do things on the internet, was built for a PC-focused internet whose best days are now behind it. Since the late 1990s, when CDNs like Sandpiper Networks and Akamai came to market, CDNs have done a fantastic job of speeding up websites for end users on PCs with great connections to the Internet. When early smartphone adoption began to change how we interacted with the internet, the CDNs sped up mobile websites as well, using many of the same tools they used in the pre-mobile era.

But the traditional PC-focused Internet and the TCP/IP protocol were never designed to support the fast delivery of content to mobile apps. Both introduce a number of delays throughout the mobile app delivery process, making fast mobile app performance on end-user devices an elusive goal for most developers.

HTTP/1.1 only allows a single outstanding request at a time over a TCP connection. To make multiple requests, you have to wait for the first one to finish before starting the second, and so on. If one request takes a long time to complete, it holds up the line, and every request queued behind it has to wait. This problem is called “Head of Line Blocking” (HOLB). To work around it, web and app developers started opening multiple concurrent TCP connections to boost speed. Yet that approach doesn’t scale, since maintaining each TCP connection requires memory and CPU resources.
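
As a rough illustration (using Python’s standard http.client and placeholder URLs, not anything from the article), here is what that serialization looks like in practice: each response must be fully read before the next request can be issued on the same connection.

```python
import http.client

# One HTTP/1.1 connection: requests are strictly serialized, so a slow
# response delays everything queued behind it (head-of-line blocking).
conn = http.client.HTTPSConnection("example.com")
for path in ("/big-image.jpg", "/styles.css", "/app.js"):   # placeholder paths
    conn.request("GET", path)
    resp = conn.getresponse()
    body = resp.read()          # must finish before the next request can start
    print(path, resp.status, len(body))
conn.close()
```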

Google then came up with SPDY, which later became the foundation of the standard we know today as HTTP/2. The idea is to multiplex requests over a single TCP connection. This approach does solve the problem of HOLB at the HTTP layer, since you no longer have to wait for one request to finish before starting others. That said, it is still limited by the same HOLB problem at the TCP layer. This is a fundamental limitation of the TCP protocol itself, because it requires “in-order”, or sequential, delivery of data: a single lost packet stalls every multiplexed stream until it is retransmitted.
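
A minimal sketch of the difference, assuming the third-party httpx library with its optional HTTP/2 support (not something the article uses, and with placeholder URLs): many requests share one TCP connection and no longer queue behind each other at the HTTP layer, though a lost TCP segment still stalls them all.

```python
import asyncio
import httpx

async def main():
    # HTTP/2 multiplexes all of these requests over a single TCP connection,
    # so none of them waits behind the others at the HTTP layer. A lost TCP
    # segment, however, still stalls every stream until it is retransmitted.
    async with httpx.AsyncClient(http2=True) as client:  # pip install httpx[http2]
        urls = ["https://example.com/a.js",               # placeholder URLs
                "https://example.com/b.css",
                "https://example.com/hero.jpg"]
        responses = await asyncio.gather(*(client.get(u) for u in urls))
        for r in responses:
            print(r.url, r.http_version, r.status_code)

asyncio.run(main())
```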

I’ve spent some time recently looking at Neumob, a mobile app acceleration company with offices in Silicon Valley and the UK, which focuses its SDK-based solution on apps. Neumob solves the TCP problem by using UDP under the hood for its own protocol, what it calls the Neumob Protocol. UDP doesn’t suffer from HOLB, as it inherently does not require in-order data delivery. Neumob’s focus has been to create a mobile-first protocol, designed for the mobile apps in which 85-90% of all smartphone activity occurs, rather than taking a legacy protocol designed in the 1990s and retrofitting it to work for the mobile world.
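
As a toy illustration with plain Python sockets (not Neumob’s protocol), UDP treats each datagram independently, so a delayed or missing datagram does not hold back the ones behind it the way a lost TCP segment does:

```python
import socket

# Toy illustration with plain UDP sockets (not Neumob's protocol): each
# datagram is independent, so one delayed or lost datagram does not block
# delivery of the others, unlike a lost segment on an in-order TCP stream.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # OS picks a free local port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in (3, 1, 2):                      # out-of-order sends stand in for reordering
    sender.sendto(f"chunk-{i}".encode(), addr)

for _ in range(3):
    data, _ = receiver.recvfrom(2048)
    print("delivered:", data.decode())   # each chunk is usable as soon as it arrives
```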

The company’s protocol accelerates everything within a mobile app, including all of those great (but heavy) third-party calls like videos, images, ad network SDKs and analytics tools that make an app what it is. It doesn’t cache for one domain only, and it doesn’t meekly tune TCP. Instead, the company says it chose to develop its own robust UDP-based protocol, a 3-POP WAN acceleration architecture and software-defined content routing that together do one thing exceptionally well: speed up the performance of mobile apps, whether their users are in the same city or halfway around the world.

Neumob says one of the differentiating features of its protocol is its network profiles approach. More than half of the connections the company serves are wireless: 4G, 3G (WCDMA, HSDPA, EVDO_A), 2G (EDGE, CDMA) and so on. Even on the same LTE technology, any given mobile carrier will have different coverage and latencies, and all of these networks have different characteristics. The company says it can detect which network a connection is on and tune connection parameters accordingly. With its SDK, the protocol is able to detect the mobile network carrier, the network technology (WiFi, LTE, HSPA, etc.) and the country in which the device is connecting, then apply different protocol parameters to maximize mobile app speed and reduce errors. It’s a pretty simple approach to a complex problem.
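
A minimal sketch of that idea (the parameter names and values here are hypothetical, not Neumob’s actual tuning): detect the carrier, radio technology and country, then look up transport parameters tuned for that combination.

```python
# Hypothetical sketch of a "network profiles" lookup; the parameter names
# and values are illustrative only, not Neumob's actual tuning.
DEFAULT_PROFILE = {"initial_window_kb": 32, "retransmit_timeout_ms": 600, "max_streams": 8}

PROFILES = {
    ("US", "LTE"):  {"initial_window_kb": 128, "retransmit_timeout_ms": 200, "max_streams": 32},
    ("US", "WIFI"): {"initial_window_kb": 256, "retransmit_timeout_ms": 150, "max_streams": 64},
    ("IN", "HSPA"): {"initial_window_kb": 48,  "retransmit_timeout_ms": 800, "max_streams": 12},
}

def pick_profile(country: str, radio: str, carrier: str) -> dict:
    """Return transport parameters tuned for the device's current network."""
    profile = PROFILES.get((country, radio), DEFAULT_PROFILE)
    # A production SDK would also key off the carrier and live loss/latency data.
    return {"carrier": carrier, **profile}

print(pick_profile("IN", "HSPA", "Airtel"))
```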

Historically, web-based CDNs have used edge servers to cache static objects efficiently. This works well for small web sites with a low number of requests, but as the total size of typical libraries grew, CDNs introduced a second level of cache in a few aggregation points (called a parent cache, shield cache, super cache, super POP, etc.) near the origin, in order to improve the cache hit rate at the edge server while reducing trips to the origin. This approach was also useful for accelerating dynamic objects (which are not cacheable and need origin access every time). These days, most CDNs support accelerating dynamic content in their own way, but this 2-POP approach is pretty common. Having an edge POP, another POP near the origin, and various middle-mile acceleration techniques between the two is the foundational architecture that allows CDNs to accelerate dynamic content.
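
A rough sketch of that two-tier lookup (generic Python, not any particular CDN’s implementation): the edge checks its own cache, then the parent/shield cache near the origin, and only then goes all the way back to the origin.

```python
# Generic sketch of a two-tier (edge + parent/shield) cache lookup;
# not any particular CDN's implementation.
edge_cache: dict[str, bytes] = {}     # POP near the end user
parent_cache: dict[str, bytes] = {}   # POP near the origin

def fetch_from_origin(url: str) -> bytes:
    # Placeholder for the slow trip all the way back to the origin server.
    return f"body-of-{url}".encode()

def cdn_get(url: str) -> bytes:
    if url in edge_cache:              # 1. hit at the edge POP
        return edge_cache[url]
    if url in parent_cache:            # 2. hit at the parent/shield POP
        body = parent_cache[url]
    else:                              # 3. miss at both tiers: go to the origin
        body = fetch_from_origin(url)
        parent_cache[url] = body
    edge_cache[url] = body
    return body

print(cdn_get("https://example.com/logo.png"))   # origin fetch, then cached
print(cdn_get("https://example.com/logo.png"))   # served from the edge
```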

Neumob has extended this idea to the actual device in the user’s hand. The company says CDNs take what is basically a server-side-only approach, with no information about the device itself, and simply assume it’s “a good client”. That means assuming the device has a good DNS resolver configuration, so that it can find a nearby edge POP using DNS (or relying on anycast to find one), and that it knows how to connect using an up-to-date protocol.

Neumob’s approach, by contrast, hosts a small, intelligent proxy right on the device itself, by virtue of its SDK embedded within the app being used. Traffic from the app travels through Neumob’s small edge server on the device. This gives the protocol unique information about the client, and it gives Neumob the ability to optimize the last mile from the edge of the internet to the device itself, something the company says is not possible with the traditional CDN approach.

For example, Neumob can identify that the device is connecting over WiFi or over LTE via a specific mobile carrier, without guessing, which enables it to apply the most appropriate protocol parameters. It can also fall back gracefully when anything bad or unexpected happens during content transmission, which reduces errors, and it collects more detailed metrics about each request, alerts on unusual errors, and more. The effect is an intelligent agent on the device that is constantly reporting on network conditions.

So how does all of this reduce errors within mobile apps? Neumob says it’s important to underscore how effective the UDP-driven protocol is at reducing errors within apps. These errors include timeouts, when an app effectively freezes and forces the user to refresh or navigate elsewhere because images or other content can’t be delivered. Errors can also include blank spaces where images should be; third-party-hosted content that never arrives; and even advertisements that are never seen by the user (and therefore can’t be monetized) because of failed delivery.

Neumob says typical mobile app error rates range from 3% on faster networks such as LTE to over 12% on 2G and 3G networks, and in countries such as India and China. By not being inherently limited by HOLB, the Neumob protocol already gives apps a leg up in reducing these frustrating errors. It also uses innovative loss detection and recovery mechanisms, while providing fine-grained control through the aforementioned third POP implemented right inside the SDK.

The traditional PC-focused Internet and TCP/IP protocol were never designed to support the fast delivery of mobile apps. Both introduce a number of delays throughout the mobile app delivery process, making fast mobile app performance (and low error rates) on end-user devices an elusive goal for most developers. Neumob is looking to address these challenges, and because its protocol has been specifically and exclusively engineered for mobile apps, it by necessity incorporates a variety of improvements and network-driven leaps forward. The company says it is able to achieve mobile app speed gains of 30-300% and reduce in-app errors by up to 90%.

The SDK revolution, in which app developers can add small bits of code to their apps that contain everything from robust analytics to advertising solutions, is where that next stage of performance and speed innovation lies. The right SDK can effectively transform the last, mobile mile from a latency-filled bottleneck into a lightning-fast conduit for images, files, high-bandwidth videos and more.

It’s a tricky problem for mobile-first infrastructure providers to solve, but therein lies the kernel of the solution: reimagining how we interact with the internet in this newly-dominant era of mobile, and of mobile apps, versus the way we did things in the now-fading PC internet and mobile web era.

Remembering Sam Blackman: A Quiet & Humble Professional

On Monday news came out that Sam Blackman, former CEO of Elemental Technologies passed away at 41. I first met Sam when he started Elemental ten years ago and over that time, had many conversations with him about how he wanted to change the world, and not just in tech. While I never got to know Sam on a very personal level, he was a good business friend and someone who always loved talking about the industry as much as I did, sharing notes and ideas on what he was working on next. While many will want to remember Sam for the visionary and leader he was, those who really knew him would tell you about his passion for life and his community.

All a man has in his life is his character and integrity. Sam had both, and for that reason more than anything else, I am proud to have called him my friend. He will be missed.

Updated: The Blackman family asked that donations be made in lieu of flowers to three places:

Oregon Food Bank
Forest Park Conservancy
Rosemary Anderson High School

You can also join the 4K 4Charity Fun Run Series events at NAB, IBC, and in Portland, as Sam was so passionate about these events. www.4K4Charity.com

Market For Web Performance Getting Very Competitive, Data Shows Pricing Has Fallen By 30% This Year

Recent pricing data I’ve collected from over 150 customers taking web performance services from third-party CDNs, including Akamai, Fastly, Amazon, Instart Logic, CDNetworks and others, shows that on average, pricing for web performance has dropped 30% since Q4 of last year. Multiple CDNs have also confirmed this number, and even Akamai, in a call with me in April, went on the record and agreed with that figure. That’s not to say every web performance contract has fallen in value; some have actually gone up, as some customers are willing to pay more if the CDN can show a real performance difference over a competitor. And while that’s good, the bottom line is that the web performance market is starting to get very competitive, with performance amongst vendors getting very close to one another, just like we saw in the media delivery business.

Anyone who has been tracking the media CDN business knows that pricing has been at rock bottom for some time, with the largest contracts being under $0.004 per GB delivered. And while web performance had always been the standout product with high margins, we’re already starting to see pricing erode. It won’t fall as far as we have seen in the media segment, but make no mistake: web performance services are no longer a moat around any CDN’s core business. The market for these services is now more competitive than ever, and that’s only going to increase.

Data from the 150 customers (contact me if you are interested in buying the raw data) shows some very interesting trends, not only on price but also on the fact that many customers are now using a dual-vendor approach for web performance services, just like they have been doing for media delivery. The number of customers now using two or more CDNs for web performance was a lot higher than I was expecting. In addition, when customers were asked, “what has disrupted the market for web performance services in the past 12-18 months the most?”, many responded with “startups that are more flexible and nimble” and “vendors offering such similar levels of performance”.

Don’t be fooled by what any CDN vendor may tell you: disruption is coming to the web performance market, and I’m starting to see it show up in contracts with regard to pricing, multi-vendor usage and lower traffic growth than some have predicted. I expect the pressure amongst CDN vendors to compete even harder on web performance services will only intensify over the coming quarters.

If you are interested in purchasing the raw data collected from the 150 customers, please contact me.

Clearing Up The Cloud Confusion re: Amazon, Disney, Hulu, BAMTech, Akamai and Netflix

Over the past few days there has been a lot of infrastructure news surrounding how video is delivered from third-party content delivery networks. Between the news around Disney’s upcoming OTT services, Hulu using Amazon for their live streaming service, and BAMTech now being 75% owned by Disney, some in the media are making inaccurate statements.

Let’s start with the press release that Hulu is using Amazon’s CloudFront CDN to help deliver some of the streams for Hulu’s new live service. This isn’t really “new” news, as Hulu confirmed for me in May that they were using Amazon along with Akamai for their live streams, and other CDNs as well for their VOD content. For live stream ingestion, Hulu is taking all of the live signals via third-party vendors including BAMTech and iStreamPlanet. What the Hulu and Amazon tie-up does show us is just how commoditized the service of delivering video over the Internet really is. Nearly every live linear OTT service is using a multi-CDN approach, even for their premium service. Case in point: AT&T is using Level 3, Limelight and Akamai for their DirecTV Now live service, and this is the norm, not the exception. There was also a blog post saying it’s important for Disney and BAMTech to “own not just content assets, but also delivery infrastructure.” But BAMTech doesn’t own any delivery infrastructure; they use third-party CDNs.

In the news roundup of Amazon and Hulu, multiple blogs are also implying that Netflix uses Amazon to deliver Netflix videos. Statements like “Netflix depends on Amazon to deliver its ever-growing library of shows and movies to customers” are not accurate. Yes, Netflix relies heavily on Amazon’s cloud services, but not for video delivery. Netflix delivers all of its videos from its own content distribution network and doesn’t use Amazon’s CloudFront CDN for video delivery at all.

When it comes to Disney’s new ESPN OTT service, due out in 2018, and their Disney-branded movie/content service due out in 2019, some have said third-party content delivery networks, and in particular Akamai, will see a “boost” or “great benefit” from these new services. But the reality is, they won’t. And not just Akamai, but any of the third-party CDNs that BAMTech uses, of which there are many. If you just run the numbers, you can see what a contract from Disney would be worth to any third-party CDN, specific to the bits consumed. If ESPN had 3M subs from day one, which they won’t, and each user watched 5 hours a day, with 50% of their viewing on mobile and 50% on a large screen, each user would consume about 130GB of data per month. At a price of about $0.004 per GB delivered, each viewer would be worth about $0.52 to a CDN.

And with BAMTech using multiple CDNs for their live streaming, if three CDNs each got 1/3 of the traffic, the value to each CDN would be just over $500k per month. But ESPN won’t have 3M subs from day one, so the value would be even lower. For a company like Akamai, which had $276M in media revenue for Q2, an extra $1M or less in revenue per quarter isn’t a “boost” at all. So for some to write posts saying ESPN’s new OTT service could be a “large business opportunity for the company [Akamai],” it’s simply not true. Reporters should stop using words like “large” and “big” when discussing opportunities in the market if they aren’t willing to define them with metrics and actual numbers.
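
Here is a quick sketch of that back-of-the-envelope math, using only the assumptions stated above (3M subscribers, roughly 130GB per subscriber per month, $0.004 per GB, and traffic split evenly across three CDNs):

```python
# Back-of-the-envelope CDN contract value, using the assumptions in the text.
subscribers = 3_000_000          # optimistic day-one ESPN OTT subscriber count
gb_per_sub_per_month = 130       # ~5 hours/day, half on mobile, half on a big screen
price_per_gb = 0.004             # dollars per GB delivered, large-contract pricing
cdn_count = 3                    # BAMTech splits live traffic across multiple CDNs

value_per_sub = gb_per_sub_per_month * price_per_gb        # ~= $0.52 per viewer
total_per_month = subscribers * value_per_sub               # ~= $1.56M per month
per_cdn_per_month = total_per_month / cdn_count             # ~= $520k per CDN per month

print(f"per subscriber per month: ${value_per_sub:.2f}")
print(f"per CDN per month: ${per_cdn_per_month:,.0f}")
```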

No OTT Service Has Figured Out How To Achieve Service & Monetization Parity Across Traditional & Online Broadcasts

It’s no secret that TV by appointment is giving way to OTT-centric preferences. Frost & Sullivan’s research numbers corroborate this trend at many levels such as growing rates of OTT viewership, falling STB sales, soaring connected device and smart device usage, and thriving growth in multi-screen video transcoding and protection solutions. We also see continued expansion of online video offerings from websites and via apps, both by pay TV service providers and directly by broadcasters.

Against this backdrop, we see recent service offerings in the market such as Hulu Live, YouTube Live and FOX making all of its primetime programming available live to all US markets. Hulu is now nearly a decade old, and broadcasters like CBS, NBC and ABC have offered OTT streaming for some time now, as have HBO and ESPN. Content is often free for pay TV subscribers after username and password authentication; monthly fees for standalone consumption are nominal. And yet, no OTT provider has figured out how to achieve service and monetization parity across traditional and online broadcasts.

FOX has shown some success because it allows local affiliates to control the advertising and branding of channels. All of FOX’s primetime entertainment is streamed live, rather than select shows. 210 regional U.S. markets are covered, as opposed to the more selective coverage of other broadcasters. Consequently, FOX boasts that nearly all pay TV households in the US can now view FOX channels online via their streaming media devices, smart TVs and tablets. The reason FOX was able to achieve this ubiquity of coverage in the U.S., where other broadcasters had so far failed, is precisely that willingness to let local affiliates control the advertising and branding of the channels being offered.

This is in stark contrast to the ongoing trend of disintermediation, where broadcasters seek to go directly to end consumers, bypassing the pay TV service providers. This second difference, in terms of monetization and branding, holds the promise of solving one of the most vexing challenges with OTT today: monetization. Targeted ads and usage fees have thus far fallen short of their promise. Programmers, service providers and broadcasters have all been challenged to maintain their brands in a market where consumers often confer loyalty to specific shows, specific talent or select social media destinations more than to channels or service providers. By managing to cooperatively partner with affiliates on advertising and branding, and thereby avoiding conflict and competition, FOX may have found a win-win middle ground.

This is of course easier said than done, and much will depend on the quality of experience and inventory of ads that will be delivered. The initial statistics are certainly promising. The third difference appears to be that this will truly be live-streamed content, in contrast to other offerings where episodes are made available for on-demand viewing concurrently with or at a short delay after the conventional broadcast goes live. While this technological difference is significant and noteworthy to infrastructure vendors, I’m also of the opinion that everyday users should neither care about this distinction nor become aware of it.

Which brings us to the flip side of these services: they shed light on the many shortcomings of the OTT ecosystem today. FOX is not currently providing sports content through this framework; sports continue to be provided through a separate app and presumably a separate set of agreements. Viewers, even pay TV subscribers, continue to be subject to disparity and a lack of consistency in content access across types of content, channels, resolutions, regions and, in some cases, device support. Service levels can vary dramatically by location, even for the same user. Service provider apps and destinations offer overlapping content with broadcaster apps and destinations, with online video services often joining the same fray. Users are left to figure out the nuances of true live streaming, catch-up TV, cloud DVR and video on demand, all of which should “ideally” simply be “TV on any screen”.

Content services are most beloved when they offer delightful, consistent, cross-device OTT experiences that are on par with conventional live linear managed experiences. While tier-1 services such as Comcast and others are coming closer to this ideal in the U.S., the overall problem is far from solved. Even a decade after Netflix and Hulu first began to stream content, no one has fully figured out how to achieve service and monetization parity across traditional and online broadcasts.

Apple’s Adoption Of HEVC Will Drive A Massive Increase In Encoding Costs Requiring Cloud Hardware Acceleration

For the last 10 years, H.264/AVC has been the dominant video codec used for streaming, but with Apple adopting H.265/HEVC in iOS 11 and Google heavily supporting VP9 in Android, a change is on the horizon. Next year the Alliance for Open Media will release its AV1 codec, which will improve video compression efficiency even further. The end result is that the codec market is about to get very fragmented, with content owners soon having to decide if they need to support three codecs (H.264, H.265 and VP9) instead of just H.264, and with AV1 expected to arrive in 2019.

As a result of what’s taking place in the codec market, and with consumers demanding better quality video, content owners, broadcasters and OTT providers are starting to see a massive increase in encoding costs. New codecs like H.265 and VP9 need roughly 5x the server cost of H.264 because of their complexity. Currently, AV1 needs over 20x the server cost. The mix of SD, HD and UHD continues to move toward better quality: HDR, 10-bit color and higher frame rates. The server encoding cost to move from 1080p SDR to 4K HDR is about 5x. 360° video and Facebook’s 6DoF video are also growing in consumer consumption, which again increases encoding costs by at least 4x.

If you add up all these variables, it’s not hard to do the math and see that for some, encoding costs could increase by 500x over the next few years as new codecs, higher quality video, 360° video and general demand all increase. If you want to see how I get to that number, here’s the math (with a quick calculation sketched out after the list):

  • 5x number of minutes to be encoded over today
  • 5x the encoding costs for new codecs like VP9 and HEVC over H.264
  • 5x as more video is in higher resolution, higher frame rate, HDR (e.g. 4Kp60 HDR has 4x the pixels of 1080p60 SDR, plus 10-bit color)
  • 2x as now you have to support two codecs (H.264 & HEVC or VP9)
  • 2x if you have to support 360 video and Facebook’s 6DoF (Degrees of Freedom)
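
Multiplying those factors out, here is a simple sketch of the arithmetic above (not a cost model):

```python
# Rough multiplication of the cost factors listed above.
factors = {
    "more minutes encoded": 5,
    "HEVC/VP9 complexity vs H.264": 5,
    "higher resolution / frame rate / HDR": 5,
    "supporting two codecs": 2,
    "360 and 6DoF video": 2,
}

total = 1
for multiplier in factors.values():
    total *= multiplier

print(f"combined increase: {total}x")   # 5 * 5 * 5 * 2 * 2 = 500x
```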

This is why, over the past year, a new type of accelerator in public clouds, the Field Programmable Gate Array (FPGA), has been growing in the market. Unlike CPUs and GPUs, FPGAs are not programmed using an instruction set but by wiring up an electrical circuit. This is similar to how traditional Application Specific Integrated Circuits (ASICs) are designed, but a big difference is that an FPGA can be reprogrammed “in the field”. This means it can be programmed on demand in the cloud, just like CPUs and GPUs are provisioned. Fortunately, customers just need to change a single line of code to replace a software encoder with an FPGA encoder and still get the benefits of using common frameworks like FFmpeg.
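
As a rough sketch of that “single line of code” claim (the FPGA encoder name and file names below are hypothetical placeholders, not a real vendor plugin), only the encoder argument changes in an otherwise ordinary FFmpeg invocation:

```python
import subprocess

def encode(input_file: str, output_file: str, use_fpga: bool = False) -> None:
    """Encode with FFmpeg, swapping only the video encoder argument."""
    # "fpga_hevc" is a hypothetical name standing in for a vendor's FPGA-backed
    # FFmpeg encoder; "libx265" is the standard software HEVC encoder.
    if use_fpga:
        codec_args = ["-c:v", "fpga_hevc"]
    else:
        codec_args = ["-c:v", "libx265", "-preset", "veryslow"]
    subprocess.run(["ffmpeg", "-i", input_file, *codec_args, output_file], check=True)

encode("master.mp4", "out_hevc.mp4")                   # software x265
encode("master.mp4", "out_hevc.mp4", use_fpga=True)    # hypothetical FPGA encoder
```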

Encoding software such as x265 contains a great many presets that allow the user to customize settings and trade off overall computing requirements against the size of the encoded video. x265 can produce very high-quality results with the “veryslow” preset. The coding rate (frames per second encoded) is low, yielding the best compression but at considerable cost in encoding resources. An AWS EC2 c4.8xlarge instance running x265 delivers only about 3 frames per second (fps) of 1080p video. Hence, to deliver 60fps, roughly 20 c4.8xlarge instances would be required, which would cost around $33 an hour.

By comparison, video compression vendor NGCodec’s encoder running on the AWS EC2 FPGA instance f1.2xlarge can deliver better visual quality than x265 ‘veryslow’ while encoding over 60 fps on a single f1.2xlarge instance. The total cost would be around $3 an hour, including the cost of the f1 instance and the cost of the codec. That is a savings of over 10x, while also avoiding the complexity of parallelizing live video across multiple C4 instances. This cost and quality benefit is why public cloud providers like Amazon, Baidu, Nimbix and OVH have already deployed FPGA instances that their customers can use on demand. Many other data center providers tell me they also have public FPGA instances in development, and I expect this trend to continue.
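
A quick sketch of that comparison, using only the figures above (the hourly costs are the approximations implied by the text, not quoted AWS list prices):

```python
# Rough cost comparison for live 1080p60 HEVC encoding, using the figures above.
target_fps = 60

# Software path: x265 "veryslow" on c4.8xlarge instances.
x265_fps_per_instance = 3                                     # ~3 fps of 1080p per instance
instances_needed = -(-target_fps // x265_fps_per_instance)    # ceiling division -> 20 instances
software_cost_per_hour = 33.0                                 # ~$33/hour for those 20 instances

# FPGA path: a single f1.2xlarge plus the codec, per the article.
fpga_cost_per_hour = 3.0                                      # ~$3/hour

print(f"c4.8xlarge instances needed: {instances_needed}")
print(f"savings vs FPGA: {software_cost_per_hour / fpga_cost_per_hour:.0f}x")  # ~11x
```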

I’d be interested to hear what others think of FPGA and welcome their comments below.

Why Vendors Need To Hire Marketing Leaders From Outside Their Industry

With the growth we have seen in the streaming media market, most vendors have a long list of open reqs, all desperately trying to hire the same people. And while it makes sense to hire sales and product folks from within the industry, I’m a firm believer that when it comes to marketing positions, vendors need to start hiring from outside our industry – immediately.

At first, many might think that’s an odd statement for me to make, but marketing is a skill set. Either you understand how to market a product or you don’t. Understanding the technology helps to a degree, but companies aren’t selling technology, they are selling a service. No offense to the marketers in our industry, but we need some fresh blood: people who know how to market a service or product and who bring a different perspective to the industry. Even for myself, I recently hired a marketing specialist to help me re-think and re-imagine my brand in the market. A good marketing person knows how to tell a story and transform a product or service into something compelling, even if they don’t know how to encode a piece of video.

As an industry, we are all using far too many high-level, generic words like speed, quality, performance and scale, with no real meaning behind them. Good marketing and branding is an art. It involves knowing how to price, package and productize a product or service, and doing it in a way that resonates with the customer, be it B2B or B2C. Those with good marketing skills know how to transcend verticals and markets while delivering a clear message. And the really good marketers can bridge companies and industries and make brands more valuable and relevant.

As I have seen firsthand from the marketing person I am working with, great marketers are remarkable observers. They love to observe people’s behaviors and can quickly tell what a person likes, what resonates with them, and what creates the experience the client is looking for. A good marketing person is also extremely curious: they constantly ask questions and want to know what businesses and people think of things. Skillful marketers always have questions to ask and never run out of ways to think about how a person or business reacts to a name, a brand, a service or a feeling. In short, really good marketing folks are geniuses because they aren’t afraid to try something new, to disrupt the market, to change how people think.

A good marketing person doesn’t work 9 to 5. They spend a lot of personal hours watching people, questioning the norm, researching, looking at data and advancing their skills. They tend to read everything they can and absorb information like a sponge, constantly retaining it for later. They are great planners, but even better doers. Marketing professionals live in the trenches, because it’s where they get their energy from and they don’t use buzz words or quotes from books, because they have been there and done it. They have the hands-on experience, are always thinking, coming up with ideas, and trying something new. They also love their community, are aware of their surroundings, love challenges, and I’ve found, they never start a conversation with a list of their achievements. They are most interested in their client’s challenges and how they can solve them.

I have also found that really good marketers are quite humble, don’t come with an ego and are not seeking glory. They take great pride in their work and they love to see a campaign and branding exercise be successful. Great marketers believe in accountability, are not afraid of data and reporting, and have a tangible methodology for determining the client’s ROI.

When it comes to marketing products and services in the online video industry, it’s time for our market to be disrupted. We need change. We need to evolve. We all need a fresh perspective. Even me. It does not matter how long you have been in the space; in fact, I think the longer you have been in the industry, the more of a disservice it is when it comes to bringing a fresh marketing approach to the market. Right now I am having someone look at what I do, critique it, change it, and find ways to make it even more relevant and make it transcend verticals, which is the only way for any business and brand to grow. And that is the true value of a marketing genius: growing a company. If you are interested in branding, marketing and packaging help, feel free to reach out to me and I’ll put you in contact with the marketing genius I am using.