Former CEO of KIT Digital, Found Guilty on All Charges, Gets Three Years Probation

Kaleil Isaza Tuzman, the former CEO of KIT Digital who was found guilty of market manipulation, wire fraud, defrauding shareholders, and accounting fraud, was sentenced on September 10th to three years probation by the judge in the case. This is astonishing, as the prosecutors had sought a sentence of 17-1/2 to 22 years in prison. The judge also ordered three years of supervised release.

U.S. District Judge Paul Gardephe sentenced Kaleil to only probation, saying the 10 months Kaleil spent in Colombian prisons was so horrible it would put him at little risk of committing further crimes. “The risk associated with sending Mr. Tuzman back to prison, the risk to his mental health, is just too great,” Gardephe said. “While in many other cases it has been my practice to sentence white-collar defendants for these sorts of crimes to a substantial sentence, in good conscience I can’t do that here.”

It’s a sad state of affairs in our legal system when someone who defrauded investors, lied, caused many employees to lose their jobs and was found guilty on every charge, gets only probation. Reading through some of the legal filings, though, there appears to be more to the story about what might have impacted his sentence. A Supplemental Sentencing Memorandum filed on July 7th of this year says:

“While incarcerated and during the five years since his release from prison, Kaleil has repeatedly provided material and substantial assistance to the Anti-Corruption Unit of the Colombian Attorney General’s Office, which ultimately resulted in the indictment of a number of government officials in the Colombian National Prison Institute known as “INPEC” (Instituto Nacional Penitenciario y Carcelario)—including the prior warden of La Picota prison, César Augusto Ceballos—on dozens of charges of extortion, assault and murder.” So one wonders if he got a lighter sentence due to the information he was providing to the Colombian government.

On the civil side, Kaleil is still being sued by investors in a hotel project who accuse him of stealing $5.4 million and in May 2021, a similar suit was filed in U.S. federal court, asking for $6 million.

For a history of what went on at KIT Digital, you can read my post here from 2013, “Insiders Detail Accounting Irregularities At KIT Digital, Rumors Of A Possible SEC Fraud Investigation”.


Streaming Services Evaluating Their Carbon Footprint, as Consumers Demand Net-Zero Targets

Right now, almost anyone has access to some sort of video streaming platform that offers the content they value at a satisfactory video quality level, most of the time. But the novelty factor has long worn off and most of the technical improvements are now taken for granted. Of course viewers are increasingly demanding in terms of video quality and absence of buffering, and losing a percentage of viewership due to poor quality means more lost profits than before, but consumers are starting to care about more than just the basics.

Just like in many other industries (think of the car or fashion industries), consumer demands – especially for Generation Z – are now moving beyond “directly observable” features, and sustainability is steadily climbing the pecking order of their concerns. To date, this concern has mostly applied to physical goods, not digital ones, but I wonder whether this may be a blind spot for many in our industry. Remember how many people thought consumers would not care much about where and how their shoes were made? Some large footwear companies sustained heavy losses due to that wrong assumption.

Video streaming businesses should be quick to acknowledge that, whether they like it or not, and whether they believe in global warming or not, they have to have a plan to reach the goal of net-zero emissions. The relevance of this to financial markets, and customer concerns about sustainable practices, are here to stay and growing. About half of new capital issues in financial markets are being linked to ESG (Environmental, Social and Governance) targets. Sustainability consistently ranks among the top five concerns in every survey of Generation Z consumers when it comes to physical goods, and one could argue it’s only a matter of time before this applies to digital services as well.

Back in 2018 I posted that the growth in demand for video streaming had created a capacity gap and that building more and more data centers, then stacking them with servers, was not a sustainable solution. Likewise, encoding and compression technology has been plagued by diminishing gains for some time: with each new generation of codec, the increase in compute power required is far greater than the compression efficiency benefit delivered. Combine that with the exponential growth of video services, the move from SD to HD to 4K, the increase in bit depth for HDR, and the dawn of immersive media, and you have a recipe for everything-but-net-zero.

So, what can be done to mitigate the carbon footprint of an activity that is growing by around 40% per year and promises to transmit more pixels at higher bitrates, crunched with more power-hungry codecs? Recently Netflix pledged to be carbon neutral by 2022, while media companies like Sky have committed to become net zero carbon by 2030. A commonly adopted framework is “Reduce – Retain – Remove”. While many companies accept that they have a duty to “clean up the mess” after polluting, I believe the biggest impact lies in reducing emissions in the first place.

Netflix, on the “Reduce” part of their pledge, aims to cut emissions by 45% by 2030, and others will surely follow with similar targets. The question is how they can get there. Digital services are starting to review their technology choices to factor in what can be done to reduce emissions. At the forefront of this should be video compression, which typically drives the two most energy-intensive processes in a video delivery workflow: transcoding and delivery.

The trade-off with the latest video compression codecs is that while they increase compression efficiency and reduce energy costs in data transmission (sadly, only for the small fraction of new devices compatible with them), their much higher compute requirements increase the energy used for encoding. So the net balance in terms of sustainability is not a slam dunk, especially for operators that deliver video to consumer-owned-and-managed devices such as mobile devices.

One notable option able to improve both video quality and sustainability is MPEG-5 LCEVC, the low-complexity, codec-agnostic enhancement recently standardized by MPEG. LCEVC increases the speed at which encoding is done by up to 3x, thereby decreasing electricity consumption in the data center. At the same time, it reduces transmission requirements, and does so immediately for a broad portion of the audience, thanks to the possibility of deploying LCEVC to a large number of existing devices, notably all mobile devices. With some help from the main ecosystem players, LCEVC device coverage may become nearly universal very rapidly.

LCEVC is just one of the available technologies with a so-called “negative green premium”: good for the business and good for the environment. Sustainability-enhancing technologies, which may earlier have been fighting for attention among a long list of second-priority profit-optimization interventions, may soon bubble up in priority. The need for sustainability intervention is real and will only grow in the next few years, so all available solutions should be brought into play. Netflix said it best in their 2020 ESG report: “If we are to succeed in entertaining the world, we need a habitable, stable world to entertain.”

Real-World Use Cases for Edge Computing Explained: A/B Testing, Personalization and Privacy

In a previous blog post, [Unpacking the Edge Compute Hype: What It Really Is and Why It’s Important], I discussed what edge computing is—and what it is not. Edge computing offers the ability to run applications closer to users, dramatically reducing latency and network congestion and providing a better, more consistent user experience. Growing consumer demand for personalized, high-touch experiences is driving the need to move application functionality to the edge. But that doesn’t mean edge compute is right for every use case.

There are some notable limitations and challenges to be aware of, yet many industry analysts are predicting that every type of workload will move to the edge, which is not accurate. Edge compute requires a microservices architecture that doesn’t rely on monolithic code. The edge is also a new destination for code, so best practices and operational standards are not yet well defined or well understood.

Edge compute also presents some unique challenges around performance, security and reliability. Many microservices require response times in the tens of milliseconds, requiring extremely low latency. Yet providing sophisticated user personalization consumes compute cycles, potentially impacting performance. With edge computing services, there is a trade-off between performance and personalization.

Microservices also rely heavily on APIs, which are a common attack vector for cybercriminals, so protecting API endpoints is critical and easier said than done, given the vast number of APIs. Reliability can be a challenge, given the “spiky” nature of edge applications due to variations in user traffic, especially during large online events. Given these realities, which functions are the most likely candidates for edge compute in the near term? I think the best use cases fall into four categories.

A/B Testing
This use case involves implementing logic to support marketing campaigns by routing traffic based on request characteristics and collecting data on the results. This enables companies to perform multivariate testing of offers and other elements of the user experience, refining their appeal. This type of experimental decision logic is typically implemented at the origin, requiring a trip to the origin in order to make the A/B decisions on which content to serve to each user. This round-trip adds latency that decreases page performance for the request. It also adds traffic to the origin, increasing congestion and requiring additional infrastructure to handle the traffic.

Placing the logic that governs A/B testing at the edge results in faster page performance and decreased traffic to origin. Serverless compute resources at the edge determine which content to deliver based on the inbound request. Segment information can be stored in a JavaScript bundle or in a key-value store, with content served from the cache. This decreases page load time and reduces the load on the origin infrastructure, yielding a better user experience.
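To make this concrete, below is a minimal sketch of what that edge routing logic could look like. It assumes a Workers-style JavaScript runtime that dispatches the standard Service Worker “fetch” event; the cookie name, the 50/50 split and the variant path are all hypothetical, and a real deployment would read its segment definitions from the JavaScript bundle or key-value store mentioned above.

```javascript
// Minimal A/B routing sketch for a Workers-style edge runtime.
// Assumes the platform dispatches a Service Worker style "fetch" event.
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const cookies = request.headers.get('Cookie') || '';

  // Reuse an existing assignment so a returning user always sees the same variant.
  let variant = (cookies.match(/ab_variant=(A|B)/) || [])[1];
  if (!variant) {
    // New user: assign a variant with a simple 50/50 split.
    variant = Math.random() < 0.5 ? 'A' : 'B';
  }

  // Rewrite the request to the variant-specific content, which can be served
  // from cache instead of making a round trip to origin for the decision.
  const url = new URL(request.url);
  if (variant === 'B') {
    url.pathname = '/variant-b' + url.pathname;
  }
  const response = await fetch(new Request(url.toString(), request));

  // Persist the assignment so subsequent requests skip the coin flip.
  const result = new Response(response.body, response);
  result.headers.append('Set-Cookie', `ab_variant=${variant}; Path=/; Max-Age=2592000`);
  return result;
}
```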

Personalization
Companies are continually seeking to deliver more personalized user experiences to increase customer engagement and loyalty in order to drive profitability. Again, the functions of identifying the user and determining which content to present typically reside at the origin. This usually means personalized content is uncacheable, resulting in low offload and a negative impact on performance. Instead, a serverless edge compute service can be used to detect the characteristics of inbound requests, rapidly identifying unique users and retrieving personalized content. This logic can be written in JavaScript at the edge, and the personalized content can be stored in a JavaScript bundle or in a key-value store at the edge. Performing this logic at the edge provides highly personalized user experiences while increasing offload, enabling a faster, more consistent experience.
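As a rough sketch of how that might look in practice, the edge function below identifies the user from a session cookie and looks up their segment in an edge key-value store, again assuming a Workers-style JavaScript runtime. The SEGMENTS binding, cookie name and key layout are hypothetical, not any particular vendor’s API.

```javascript
// Personalization sketch for a Workers-style edge runtime.
// SEGMENTS is assumed to be a key-value store bound to this edge function;
// the binding name and the data layout are hypothetical.
addEventListener('fetch', (event) => {
  event.respondWith(personalize(event.request));
});

async function personalize(request) {
  // Identify the user from a session cookie set earlier in the flow.
  const cookies = request.headers.get('Cookie') || '';
  const userId = (cookies.match(/session_id=([\w-]+)/) || [])[1];

  // Look up the user's segment at the edge instead of calling the origin.
  const segment = userId ? await SEGMENTS.get(`segment:${userId}`) : null;

  // Request the segment-specific version of the page. Because the cache key
  // varies only by segment, not by individual user, the content stays cacheable.
  const url = new URL(request.url);
  url.searchParams.set('segment', segment || 'default');
  return fetch(new Request(url.toString(), request));
}
```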

Privacy Compliance
Businesses are under growing pressures to safeguard their customers’ privacy and comply with an array of regulations, including GDPR, CCPA, APPI, and others, to avoid penalties. Compliance is particularly challenging for data over which companies may have no control. One important aspect of compliance is tracking consent data. Many organizations have turned to the Transparency and Consent Framework (TCF 2.0) developed by the Interactive Advertising Bureau (IAB) as an industry standard for sending and verifying user consent.

Deploying this functionality as a microservice at the edge makes a lot of sense. When the user consents to tracking, state-tracking cookies are added to the session that enable a personalized user experience. If the user does not consent, the cookie is discarded and the user has a more generic experience that does not involve personal information. Performing these functions at the edge improves offload and enables cacheability, allowing extremely rapid lookups. This improves the user experience while helping ensure privacy compliance.
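A minimal sketch of that consent gate, under the same Workers-style runtime assumption, might look like the following. The consent cookie name is hypothetical, and a production deployment would parse a real TCF 2.0 consent string rather than a simple flag.

```javascript
// Consent-gating sketch for a Workers-style edge runtime.
// The "consent=granted" cookie is a stand-in for a real TCF 2.0 consent string.
addEventListener('fetch', (event) => {
  event.respondWith(applyConsent(event.request));
});

async function applyConsent(request) {
  const cookies = request.headers.get('Cookie') || '';
  const hasConsent = /consent=granted/.test(cookies);

  if (!hasConsent) {
    // No consent: strip state-tracking cookies before the request goes any
    // further, so the user gets the generic, fully cacheable experience.
    const headers = new Headers(request.headers);
    headers.delete('Cookie');
    return fetch(new Request(request, { headers }));
  }

  // Consent given: pass the request through so downstream logic can use
  // state-tracking cookies to personalize the experience.
  return fetch(request);
}
```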

Third-Party Services
Many companies offer “productized” services designed to address specific, high-value needs. For example, the A/B testing discussed earlier is often implemented using such a third-party service in conjunction with core marketing campaign management applications. These third-party services are often tangential to the user’s request flow. When implemented in the critical path of the request flow, they add latency that can affect performance. Moreover, scale and reliability are beyond your control, which means the user experience is too. Now imagine this third-party code is running natively on the same serverless edge platform handling the user’s originating request. Because the code is local, latency is reduced. And the code is now able to scale to meet changing traffic volumes, improving reliability.

One recent example of this was the partnership between Akamai and the Queue-It virtual waiting room service. The service allows online customers to retain their place in line, while providing a positive waiting experience and reducing the risk of a website crash due to sudden spikes in volume. The partnership was focused specifically on providing an edge-based virtual waiting room solution to handle traffic during the rush to sign up for COVID vaccinations. The same approach could be used for any online event where traffic spikes are expected, such as ticket reservations for a sought-after concert or theater event, now that these venues are poised to open back up.
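As a simplified illustration of the idea (not how Queue-It or Akamai actually implement it), an edge function could gate admission with a pass cookie and a flag held in an edge key-value store. The QUEUE_STATE binding and cookie name below are hypothetical.

```javascript
// Virtual waiting room sketch for a Workers-style edge runtime.
// QUEUE_STATE is assumed to be an edge key-value store holding a simple
// "open" / "closed" admission flag set by a separate capacity controller.
addEventListener('fetch', (event) => {
  event.respondWith(gate(event.request));
});

async function gate(request) {
  // Users who already hold a pass go straight through to the origin.
  const cookies = request.headers.get('Cookie') || '';
  if (/queue_pass=1/.test(cookies)) {
    return fetch(request);
  }

  // Check at the edge whether the site is currently admitting new visitors.
  const admission = await QUEUE_STATE.get('admission');
  if (admission === 'open') {
    const response = await fetch(request);
    const result = new Response(response.body, response);
    result.headers.append('Set-Cookie', 'queue_pass=1; Path=/; Max-Age=900');
    return result;
  }

  // Otherwise serve a lightweight waiting page directly from the edge
  // instead of letting the traffic spike reach the origin.
  const waitingPage =
    '<meta http-equiv="refresh" content="30"><h1>You are in line.</h1>' +
    '<p>This page will refresh automatically.</p>';
  return new Response(waitingPage, {
    status: 200,
    headers: { 'Content-Type': 'text/html' },
  });
}
```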

Conclusion
These examples highlight how important it is to understand and think carefully about what functions make sense to run at the edge. It’s true that some of these use cases may be met by traditional centralized infrastructures. But consider the reduction in overhead, the speed and efficiency of updating functionality, and the performance advantages gained by executing them at the edge. These benefit service providers and users alike. Just as selecting the right applications for edge compute is critical, so is working with the right edge provider. In this regard, proximity matters.

Generally speaking, the closer edge compute resources are to the user, the better. Beware of service providers running code in more centralized nodes that they call “the edge.” And be sure they can deliver the performance, reliability and security needed to meet your service objectives, based on the methodology you choose, while effectively managing risk.

The edge compute industry and the market for these services is an evolving landscape that’s only just getting started. But there is a growing list of use cases that can benefit now from edge compute deployed in a thoughtful way. We should expect to see more use cases in the next 18 months as edge computing adoption continues and companies look at ways to move logic and intelligence to the edge.

Streaming Summit at NAB Show Returns, Call For Speakers Now Open

It’s back! I am happy to announce the return of the NAB Show Streaming Summit, taking place October 11-12 in Las Vegas. The call for speakers is now open and lead gen opportunities are available. The show will be a hybrid event this year, with both in-person and remote presentations. See the website for all the details or contact me with your ideas on how you want to be involved.

The topics covered will be created based on the submissions sent in, but the show covers both business and technology topics including: bundling of content; codecs; transcoding; live streaming; video advertising; packaging and playback; monetization of video; cloud-based workflows; direct-to-consumer models; the video ad stack and other related topics. The Summit does not cover topics pertaining to video editing, pre/post production, audio-only applications, content scripts and talent, content rights and contracts, or video production hardware.

Please reach out to me at (917) 523-4562 or via email at any time if you have questions about the submission process or want to discuss an idea before you submit. I always prefer speaking directly to people about their ideas so I can help tailor your submission to what works best. Interested in moderating a session? Please contact me ASAP!

Apple Using Akamai, Fastly, Cloudflare For Their New iCloud Private Relay Feature

[Updated October 18, 2021: Apple’s iCloud Private Relay feature, which was being powered during the beta by Akamai, Fastly and Cloudflare, will officially launch with iOS 15 in “beta”. It will no longer be enabled by default, due to incompatibility issues with some websites.]

On Monday, Apple announced some new privacy features in iCloud, one of which they are calling Private Relay. The way it works is that when you go to a website using Safari, iCloud Private Relay takes your IP address to connect you to the website and then encrypts the URL so that app developers, and even Apple, don’t know what website you are visiting. The IP and encrypted URL then travels to an intermediary relay station run by what Apple calls a “trusted partner”. In a media interview published yesterday, Apple would not say who the trusted partners are but I can confirm, based on public details (as shown below; Akamai on left, Fastly on the right), that Akamai, Fastly and Cloudflare are being used.

On Fastly’s Q1 earnings call, the company said they expect revenue growth to be flat quarter-over-quarter going into Q2, but that revenue growth would accelerate in the second half of this year. The company also increased their revenue guidance range to $380 million to $390 million, up from $375 million to $380 million. Based on the guidance numbers, Fastly would be looking at a pretty large ramp of around 15% sequential growth in the third and fourth quarters. Fastly didn’t give any indication of why they thought revenue might ramp so quickly, but did say there are “a lot of really important opportunities that are coming our way.” By itself, this new traffic generated from Apple isn’t that large when it comes to overall revenue and is being shared amongst three providers. This news comes out at an interesting time as this morning, Fastly had a major outage on their network that lasted about an hour.

Rebuttal to FCC Commissioner: OTT, Cloud and Gaming Services Should Not Pay for Broadband Buildout

Brendan Carr, commissioner of the Federal Communications Commission (FCC), published an op-ed on Newsweek entitled “Ending Big Tech’s Free Ride.” In it, he suggests that companies such as Facebook, Apple, Amazon, Netflix, Microsoft, Google and others should pay a tax for the build-out of broadband networks to reach every American. In his post he blames streaming OTT services, as well as gaming services like Xbox and cloud services like AWS, for the volume of traffic on the Internet. There are a lot of factual problems with his post from both a business and technical standpoint, which is always one of the main problems when regulators get involved in topics like this. They don’t focus on the facts of the case but rather their “opinions” disguised as facts. The Commissioner references a third-party post-doctoral paper as the basis for his argument, a paper which contains many factual errors when it comes to numbers disclosed by public companies, some of which I highlight below.

The federal government currently collects roughly $9 billion a year through a tax on traditional telephone services—both wireless and wireline. That pot of money, known as the Universal Service Fund, is used to support internet builds in rural areas. The Commissioner suggests that consumers should not have to pay that tax on their phone bill for the buildout of broadband and that the tax should be paid by large tech companies instead. He says that tech companies have been getting a “free ride” and have “avoided paying their fair share.” He writes that “Facebook, Apple, Amazon, Netflix and Google generated nearly $1 trillion in revenues in 2020 alone,” saying it “would take just 0.009 percent of those revenues” to pay for the tax. The Commissioner is ignoring the fact that in 2020, Apple made 60% of their revenue overseas and that much of it comes from hardware, not online services. Apple’s “services” revenue, as the company defines it, made up 19% of their total 2020 revenue. So that $1 trillion number is much, much lower if you’re counting revenue from actual online services that use broadband to deliver content.

If the Commissioner wants to tax a company that makes hardware, why isn’t Ford Motor Company on the list? They make physical products but also have a “mobility” division that relies on broadband infrastructure for their range of smart city services. Without that broadband infrastructure, Ford would not be able to sell mobility services to cities or generate any revenue for their mobility division. You could extend this notion to all kinds of companies that make revenue from physical goods, or to commerce companies like eBay, Etsy, Target and others. Yet the Commissioner specifically calls out video streaming as the problem and references a paper written in March of this year as his evidence. The problem is that the paper is full of factually wrong numbers and definitions, and can’t even get the pricing of streaming services right, something the Commissioner clearly hasn’t noticed.

The paper says YouTube is “on track to earn more than $6 billion in advertising revenue for 2020”. No, YouTube generated $19.7 billion in revenue in 2020. The paper also says that Hulu’s live service costs $4.99 a month, when in actuality it costs $65 a month. It also says that the on-demand version of Hulu costs $11.99 a month, when it costs $5.99 a month. There are many instances of numbers like this in the report that can’t be debated and are simply wrong. Full stop. The authors say the goal of the paper is to look at the “challenge of four rural broadband providers operating fiber to the home networks to recover the middle mile network costs of streaming video entertainment.” They say that “subscribers pay about $25 per month subscriber to video streaming services to Netflix, YouTube, Amazon Prime, Disney+, and Microsoft.” That’s not accurate. YouTube is free. If they mean YouTube TV, that costs $65 a month. The paper also leans on words like “presumed” and “assumptions” when making its arguments, which means those arguments aren’t based on facts.

The paper also points out that the data and methodology used to reach its conclusions “has limitations”, since traffic is measured “differently” amongst broadband providers. So only a slice of the overall data is being used in the report, and the way it was collected isn’t consistent amongst all the providers. We’re only seeing a small window into the data being used in the report, yet the Commissioner is referencing this paper as his “evidence”. The paper also references industry terms from as far back as 2012, saying they have been “adapted” to today, which is always a red flag for accuracy.

The paper also incorrectly states that “The video streaming entertainment providers do not contribute to middle or last mile network costs. The caching services provided by Netflix and YouTube are exclusionary to the proprietary services of these platforms and entail additional costs for rural broadband providers to participate.” Netflix and others have been putting caches inside ISP networks for FREE, which saves the ISP money on transit. Apple will also work with ISPs via their Apple Edge Cache program. For anyone to suggest that big tech companies don’t spend money to build out infrastructure for the consumers’ benefit is simply false. Some ISPs choose not to work with content companies offering physical or virtual caches, but that’s based on a business decision they have made on their own. In addition, when a consumer signs up for a connection to the Internet from an ISP, the ISP is in the business of adding capacity to support whatever content the consumer wants to stream. That is the ISP’s business, and there is no valid argument that an ISP should not have to spend money to support the user.

The paper states that, “Rural broadband providers generally operate at close to breakeven with little to no profit margin. This contrasts with the double-digit profit margins of the Big Streamers.” Disney’s direct-to-consumer streaming division, which includes Disney+, Hulu, ESPN+ and Hotstar, lost $466 million in Q1 of this year. What “double-digit profit margins” is the paper referencing? Again, they don’t know the numbers. The paper also gets wrong many of its explanations of what a CDN is, how it works, and how companies like Netflix connect their network to an ISP like Comcast. The paper also shows the logos of Hulu and Disney+ on a chart listing them under the Internet “backbone” category, when of course the parent owner of those services, The Walt Disney Company, doesn’t own or operate a backbone of any kind. The paper argues that since rural ISPs have no scale, they can’t launch “streaming services of their own” like AT&T has. Of course, this is 100% false, and there are many third-party companies in the market that have packaged together content ready to go that any ISP can re-sell as a bundle to their subscribers, no matter how many subscribers they have.

According to a 2020 report from the Government Accountability Office (GAO), the FCC’s number one challenge in targeting and identifying unserved areas for broadband deployment was the accuracy of the FCC’s own broadband deployment data. Congress recently provided the FCC with $98 million to fund more precise and granular maps. You read that right, the FCC was given $98 million to create maps. In March of 2020, Acting FCC Chairwoman Jessica Rosenworcel said these maps could be produced in “a few months,” but that estimate has now been changed to 2022. Some Senators have taken notice of the delay and have demanded answers from the FCC.

It’s easy to suggest that someone else should pay a tax without offering any details on who exactly it would apply to, how much it would be, what classifications would be included or omitted, which services would or would not fall under the rule, how much would need to be collected and over what period of time. But this is exactly what Commissioner Carr has done by calling out companies, by name, that he thinks should pay a tax, all while providing no details or proposal and referencing a paper filled with factual errors. I have contacted the Commissioner’s office and offered him an opportunity to come to the next Streaming Summit at NAB Show, October 11-12, and debate this topic with me in person. If accepted, I will only focus on the facts, not opinions.

Unpacking the Edge Compute Hype: What It Really Is and Why It’s Important


The tech industry has always been a prolific producer of hype and right now, no topic is mentioned more generically than that of “edge” and “edge compute”. Everywhere you turn these days, vendors are promoting their “edge solution”, with almost no definition, no real-world use cases, no metrics around deployments at scale and a lack of details on how much revenue is being generated. Making matters worse, some industry analysts are publishing reports saying the size of the “edge” market is already in the billions. These reports have no real methodology behind the numbers, don’t define what services they are covering, and assume unrealistic growth and use cases. It’s why the moment edge compute use cases are mentioned, people always use the examples of IoT, connected cars and augmented reality.

Part of the confusion in the market is due to rampant “edge-washing”, which is vendors seeking to rebrand their existing platforms as edge solutions. Think of how some cloud service providers call their points of presence the edge, or how CDN platforms are marketed as edge platforms when, in reality, the traditional CDN use cases are not taking advantage of any edge compute functions. You also see some mobile communications providers referring to their cell towers as the edge, and even a few cloud-based encoding vendors are now using the word “edge” in their services.

Growing interest among the financial community in anything edge-related is helping fuel this phenomenon, with very little understanding of what it all means, or more importantly, doesn’t mean. Look at the valuation trends for “edge” or “edge computing” vendors and you’ll see there is plenty of incentive for companies to brand themselves as an edge solution provider. This confusion makes it difficult to separate functional fact from marketing fiction. To help dispel the confusion, I’m going to be writing a lot of blog posts this year around edge and edge compute topics with the goal of separating facts from fiction.

The biggest problem is that many vendors are using the phrases “edge” and “edge compute” interchangeably, and they are not the same thing. Put simply, the edge is a location, the place in the network that is closest to where the end user or device is. We all know this term, and Akamai has been using it for a long time to reference a physical location in their network. Edge computing refers to a compute model where application workloads occur at an edge location, where logic and intelligence are needed. It’s a distributed approach that shifts the computing closer to the user or device being used. This contrasts with the more common scenario where applications are run in a centralized data center or in the cloud, which is really just a remote data center usually run by a third party. Edge compute is a service; the “edge” isn’t. You can’t buy “edge”; you are buying CDN services that simply leverage an edge-based network architecture to perform work at the distributed points of presence closest to where the digital and physical intersect. This excludes basic caching and forwarding CDN workflows.

When you are deploying an application, the traditional approach would be to host that application on servers in your own data center. More recently, it is likely you would instead choose to host the application in the cloud, with a cloud service provider like Amazon Web Services, Microsoft Azure or the Google Cloud Platform. While cloud service providers do offer regional PoPs, most organizations typically still centralize in a single or small number of regions.

But what if your application serves users in New York, Rome, Tokyo, Guangzhou, Rio de Janeiro, and points in between? The end-user journey to your content begins on the network of their ISP or mobile service provider, then continues over the Internet to whichever cloud PoP or data center the application is running on, which may be half a world away. From an architectural viewpoint, you have to think of all of this as your application infrastructure, and many times the application itself is running far, far away from those users. The idea and value of edge computing turns this around. It pushes the application closer to the users, offering the potential to reduce latency and network congestion, and to deliver a better user experience.

Computing infrastructure has really evolved over the years. It began with “bare metal,” physical servers running a single application. Then virtualization came into play, using software to emulate multiple virtual machines hosting multiple operating systems and applications on a single physical server. Next came containers, introducing a layer that isolates the application from the operating system, allowing applications to be easily portable across different environments while ensuring uniform operation. All of these computing approaches can be employed in a data center or in the cloud.

In recent years, a new computing alternative has emerged called serverless. This is a zero-management computing environment where an organization can run applications without up-front capital expense and without having to manage the infrastructure. While it is used in the cloud (and could be in a data center—though this would defeat the “zero management” benefit), serverless computing is ideal for running applications at the edge. Of course, where this computing occurs matters when delivering streaming media. Each computing location, on-premises, in the cloud and at the edge, has its pros and cons.

  • On-premises computing, such as in an enterprise data center, offers full control over the infrastructure, including the storage and security of data. But it requires substantial capital expense and costly management. It also means you may need reserve server capacity to handle spikes in demand, capacity that sits idle most of the time, which is an inefficient use of resources. And an on-premises infrastructure will struggle to deliver low-latency access to users who may be halfway around the world.
  • Centralized cloud-based computing eliminates the capital expense and reduces the management overhead, because there are no physical servers to maintain. Plus, it offers the flexibility to scale capacity quickly and efficiently to meet changing workload demands. However, since most organizations centralize their cloud deployments to a single region, this can limit performance and create latency issues.
  • Edge computing offers all the advantages of cloud-based computing plus additional benefits. Application logic executes closer to the end user or device via a globally distributed infrastructure. This dramatically reduces latency and avoids network congestion, with the goal of providing an enhanced and consistent experience for all users.

There is a trade-off with edge computing, however. The distributed nature of the edge translates into a lower concentration of computing capacity in any one location. This presents limitations for what types of workloads can run effectively at the edge. You’re not going to be running your enterprise ERP or CRM application in a cell tower, since there is no business or performance benefit. And this leads to the biggest unknown in the market today, that being, which application use cases will best leverage edge compute resources? As an industry, we’re still finding that out.

From a customer use case and deployment standpoint, the edge computing market is so small today that both Akamai and Fastly have told Wall Street that their edge compute services won’t generate significant revenue in the near-term. With regards to their edge compute services, during their Q1 earnings call, Fastly’s CFO said, “2021 is much more just learning what are the use cases, what are the verticals that we can use to land as we lean into 2022 and beyond.” Akamai, which gave out a lot of CAGR numbers at their investor day in March, said they are targeting revenue from “edge applications” to grow 30% in 3-5 years, inside their larger “edge” business, which has expected overall growth of 2-5% in 3-5 years.

Analysts that are calling CDN vendors “edge computing-based CDNs” don’t understand that most CDN services being offered are not leveraging any “edge compute” services inside the network. You don’t need “edge compute” to deliver video streaming, which, as an example, made up 57% of all the bits Akamai delivered in 2020 via their CDN services, or what they call edge delivery. Akamai accurately defines the video streaming they deliver as “edge delivery”, not “edge compute”. Yet some analysts are taking the proper terminology vendors are using and swapping it out with their own incorrect terms, which only further adds to the confusion in the market.

In simple terms, edge compute is all about moving logic and intelligence to the edge. Not all services or content need to have an edge compute component, or be stored at or delivered from the edge, so we’ll have to wait to see which applications customers use it for. The goal with edge compute isn’t just about improving the user experience but also about having a way to measure the impact on the business, with defined KPIs. This isn’t well defined today, but it’s coming over the next few years as we see more use cases and adoption.