Real-World Use Cases for Edge Computing Explained: A/B Testing, Personalization and Privacy
In a previous blog post, [Unpacking the Edge Compute Hype: What It Really Is and Why It’s Important], I discussed what edge computing is—and what it is not. Edge computing offers the ability to run applications closer to users, dramatically reducing latency and network congestion and providing a better, more consistent user experience. Growing consumer demand for personalized, high-touch experiences is driving the need to move application functionality to the edge. But that doesn’t mean edge compute is right for every use case.
There are some notable limitations and challenges to be aware of. Many industry analysts are predicting that every type of workload will move to the edge, but that is not accurate. Edge compute requires a microservices architecture that doesn’t rely on monolithic code. And because the edge is a new destination for code, best practices and operational standards are not yet well defined or well understood.
Edge compute also presents some unique challenges around performance, security and reliability. Many microservices require response times in the tens of milliseconds, which demands extremely low latency. Yet providing sophisticated user personalization consumes compute cycles, potentially impacting performance. With edge computing services, there is a trade-off between performance and personalization.
Microservices also rely heavily on APIs, which are a common attack vector for cybercriminals, so protecting API endpoints is critical and is easier said than done, given the vast number of APIs. Reliability can be a challenge, given the “spiky” nature of edge applications due to variations in user traffic, especially during large online events that drive up the volume of traffic. Given these realities, which functions are the most likely candidates for edge compute in the near term? I think the best use cases fall into four categories.
The first is A/B testing: implementing logic to support marketing campaigns by routing traffic based on request characteristics and collecting data on the results. This enables companies to perform multivariate testing of offers and other elements of the user experience, refining their appeal. This type of experimental decision logic is typically implemented at the origin, requiring a trip to the origin to decide which content to serve to each user. This round trip adds latency that decreases page performance for the request. It also adds traffic to the origin, increasing congestion and requiring additional infrastructure to handle the load.
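Moved to the edge, that decision can be made locally with no origin round trip. Here is a minimal sketch of what edge-side A/B bucketing might look like; the function names and the cookie-derived user ID are illustrative assumptions, not any specific edge platform’s API:

```typescript
// Deterministic 32-bit FNV-1a hash, so the same user ID always lands in
// the same bucket on every edge node -- no origin trip, no shared state.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash >>> 0;
}

interface Variant {
  name: string;
  weight: number; // relative share of traffic
}

// Map a user ID to one of the variants according to traffic weights.
function assignVariant(userId: string, variants: Variant[]): string {
  const total = variants.reduce((sum, v) => sum + v.weight, 0);
  let point = fnv1a(userId) % total;
  for (const v of variants) {
    if (point < v.weight) return v.name;
    point -= v.weight;
  }
  return variants[variants.length - 1].name; // fallback, not normally reached
}

// Example: a 50/50 split between the control and a test offer.
const variants: Variant[] = [
  { name: "control", weight: 50 },
  { name: "offer-b", weight: 50 },
];
const bucket = assignVariant("user-1234", variants);
```

Because the assignment is a pure function of the user ID, every edge node makes the same decision for the same user, which is what keeps the experiment consistent without coordinating through the origin.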
Businesses are under growing pressures to safeguard their customers’ privacy and comply with an array of regulations, including GDPR, CCPA, APPI, and others, to avoid penalties. Compliance is particularly challenging for data over which companies may have no control. One important aspect of compliance is tracking consent data. Many organizations have turned to the Transparency and Consent Framework (TCF 2.0) developed by the Interactive Advertising Bureau (IAB) as an industry standard for sending and verifying user consent.
Deploying this functionality as a microservice at the edge makes a lot of sense. When the user consents to tracking, state-tracking cookies are added to the session that enable a personalized user experience. If the user does not consent, the cookie is discarded and the user has a more generic experience that does not involve personal information. Performing these functions at the edge improves offload and enables cacheability, allowing extremely rapid lookups. This improves the user experience while helping ensure privacy compliance.
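The consent-gating logic described above can be sketched roughly as follows. This is a simplified illustration: real TCF 2.0 consent strings are base64url-encoded bit fields that require a proper decoder, and the cookie names here are assumptions, so treat this as the shape of the decision rather than a compliant implementation:

```typescript
// What the edge decides to do with the session's cookies.
interface SessionDecision {
  setCookies: string[];   // cookies to add to the response
  dropCookies: string[];  // cookies to strip before caching/forwarding
  personalized: boolean;
}

// Decide at the edge whether the session keeps its personalization state,
// based on the user's recorded consent.
function applyConsent(hasConsent: boolean, trackingCookie: string): SessionDecision {
  if (hasConsent) {
    // Consent given: the state-tracking cookie stays, enabling a
    // personalized experience.
    return { setCookies: [trackingCookie], dropCookies: [], personalized: true };
  }
  // No consent: discard the cookie and serve the generic, cacheable
  // experience that involves no personal information.
  return { setCookies: [], dropCookies: [trackingCookie], personalized: false };
}

const decision = applyConsent(false, "session_prefs=abc123");
// decision.personalized is false and the cookie is queued for removal.
```

Keeping this check at the edge means the consent lookup happens before the request ever reaches the origin, which is what enables the offload and cacheability gains described above.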
Many companies offer “productized” services designed to address specific, high-value needs. For example, the A/B testing discussed earlier is often implemented using such a third-party service in conjunction with core marketing campaign management applications. These third-party services are often tangential to the user’s request flow. When implemented in the critical path of the request flow, they add latency that can affect performance. Moreover, scale and reliability are beyond your control, which means the user experience is too. Now imagine this third-party code is running natively on the same serverless edge platform handling the user’s originating request. Because the code is local, latency is reduced. And the code is now able to scale to meet changing traffic volumes, improving reliability.
One recent example of this was the partnership between Akamai and the Queue-It virtual waiting room service. The service allows online customers to retain their place in line, while providing a positive waiting experience and reducing the risk of a website crash due to sudden spikes in volume. The partnership was focused specifically on providing an edge-based virtual waiting room solution to handle traffic during the rush to sign up for COVID vaccinations. The same approach could be used for any online event where traffic spikes are expected, such as ticket reservations for a sought-after concert or theater event, now that these venues are poised to open back up.
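To make the waiting-room idea concrete, here is a toy sketch of admission logic. This is not how Queue-It’s product actually works; it simply illustrates the core mechanic of admitting users up to a protected capacity and handing everyone else a position in line:

```typescript
// Toy waiting room: admit users while capacity remains at the origin,
// queue everyone else, and let the next person in as admitted users leave.
class WaitingRoom {
  private admitted = 0;
  private queue: string[] = [];

  constructor(private capacity: number) {}

  // Returns either admission or a 1-based position in line to show the user.
  enter(userId: string): { admitted: boolean; position?: number } {
    if (this.admitted < this.capacity) {
      this.admitted++;
      return { admitted: true };
    }
    this.queue.push(userId);
    return { admitted: false, position: this.queue.length };
  }

  // When an admitted user finishes, the next queued user is let through.
  leave(): string | undefined {
    this.admitted = Math.max(0, this.admitted - 1);
    const next = this.queue.shift();
    if (next !== undefined) this.admitted++;
    return next;
  }
}

const room = new WaitingRoom(2);
room.enter("alice"); // admitted
room.enter("bob");   // admitted
const carol = room.enter("carol"); // queued at position 1
```

A production service would have to make this state consistent across edge nodes and resilient to users abandoning the queue, which is exactly the hard part such productized offerings solve.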
These examples highlight how important it is to understand and think carefully about what functions make sense to run at the edge. It’s true that some of these use cases may be met by traditional centralized infrastructures. But consider the reduction in overhead, the speed and efficiency of updating functionality, and the performance advantages gained by executing them at the edge. These benefit service providers and users alike. Just as selecting the right applications for edge compute is critical, so is working with the right edge provider. In this regard, proximity matters.
Generally speaking, the closer edge compute resources are to the user, the better. Beware of service providers running code in more centralized nodes that they call “the edge.” And be sure they can deliver the performance, reliability and security needed to meet your service objectives, based on the methodology you choose, while effectively managing risk.
The edge compute industry and market for these services is an evolving landscape that’s only just taking off. But there is a growing list of use cases that can benefit now from edge compute deployed in a thoughtful way. We should expect to see more use cases in the next 18 months as edge computing adoption continues and companies look at ways to move logic and intelligence to the edge.