How the Cloud and the Edge Work Together

A paradigm shift is shaking the world of technology infrastructure. It’s the movement of APIs and services from the cloud to the edge.

Edge computing is the new kid on the block, and the definition of running compute at the edge is still evolving. It can offer dramatic speed and cost efficiencies. So when should something run at the edge versus in the cloud? Here's how the two can work together to deliver new levels of efficiency across multiple business workloads.

What is edge computing?

Edge computing refers to running functions on compute infrastructure that is managed by a global provider and distributed worldwide by design. Unlike traditional cloud computing, where you are essentially leasing a computer or set of computers in a managed data center, edge computing means running the same code across a managed fleet of globally distributed machines.
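To make that model concrete, here is a minimal sketch of an edge function, written in the Cloudflare Workers fetch-handler style as one example (other providers expose similar, but not identical, APIs). The same handler is deployed to every location in the fleet, and each request is served by whichever location sits closest to the caller.

```typescript
// A minimal Workers-style edge function. The identical code runs in
// every point of presence; the platform routes each request to the
// nearest one automatically.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    return new Response(`Hello from the edge, you requested ${url.pathname}`, {
      headers: { "content-type": "text/plain" },
    });
  },
};
```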

Today, edge-computing solutions are often best employed when your application requires little or no state. Examples include serving a website or an authorization API: services that don't need a large database to function. Edge-computing providers are bringing some stateful services to market, including key-value stores and basic storage, but these offerings are still early in availability and scale.
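As a sketch of that stateless pattern, the following Workers-style authorization check validates a bearer token against a globally replicated key-value store. The TOKENS binding is hypothetical, standing in for whichever edge key-value product your provider offers.

```typescript
// Sketch of a stateless authorization API at the edge. TOKENS is a
// hypothetical KV namespace binding; reads are served from a replica
// near the user rather than from a central database.
interface Env {
  TOKENS: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const token = request.headers.get("Authorization")?.replace("Bearer ", "");
    if (!token) return new Response("Missing token", { status: 401 });

    // Edge KV reads are typically eventually consistent, which is an
    // acceptable trade-off for token lookups.
    const userId = await env.TOKENS.get(token);
    return userId
      ? new Response(`Authorized as ${userId}`)
      : new Response("Invalid token", { status: 403 });
  },
};
```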

Where does the cloud remain strong?

Because of the lack of state at the edge, applications requiring large databases or significant storage are still best run in centralized cloud data centers. Database-as-a-service providers and large-scale storage systems such as S3 remain in the cloud.

As edge computing grows in adoption, expect to see storage systems that blend centralized cloud storage with edge-available caches and endpoints. Big-iron databases, however, are likely to stay in the cloud longer, as edge-computing providers continue to focus on scale-out, low-cost compute rather than the high-performance compute and memory that databases demand.
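A minimal sketch of that blended pattern, again in the Workers style: answer from the edge cache when possible and fall back to a centralized cloud origin on a miss. ORIGIN_URL is a hypothetical placeholder for a cloud-hosted API.

```typescript
// Cache-aside at the edge with a centralized cloud origin as the
// source of truth. ORIGIN_URL is a hypothetical placeholder.
const ORIGIN_URL = "https://api.example.com";

export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    const cache = caches.default; // Workers-specific edge cache handle
    const cached = await cache.match(request);
    if (cached) return cached; // hit: served entirely from the edge

    // Miss: fetch from the central cloud, then populate the edge cache
    // in the background so the next nearby request is served locally.
    const response = await fetch(`${ORIGIN_URL}${new URL(request.url).pathname}`);
    const copy = new Response(response.body, response);
    copy.headers.set("Cache-Control", "max-age=60");
    ctx.waitUntil(cache.put(request, copy.clone()));
    return copy;
  },
};
```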

Best of both worlds

At Edgemesh, for example, we operate a number of edge-enabled APIs that provide fast, geographically close services to customers. Before the move to the edge, all of these services lived natively in the cloud, and we operated three major cloud regions: U.S., Europe, and Asia. Our code ran on servers in Iowa, Germany, and Tokyo. This ensured our customers received fast responses, no matter their location.

When APIs run in more than 300 regions, all generally within a few dozen milliseconds of customers, adopting an edge-enabled solution can increase your global reach by 100x in a matter of weeks. That said, core database systems should remain cloud-native. Edge APIs for data collection, for example, can deliver meaningful speedups and cost savings, but the data collected at the edge should funnel back to central cloud locations for analysis. By combining distributed, edge-powered API capture points (and their fast local responses) with the tried-and-true workflow of centralized, cloud-based data analysis, we get the benefits of both architectures and a solution that performs better than either approach alone.
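Here is a sketch of that capture-at-the-edge, analyze-in-the-cloud flow, assuming a Workers-style runtime and a hypothetical central ingestion endpoint (INGEST_URL): acknowledge the event immediately from a nearby edge location, then forward it asynchronously to the cloud.

```typescript
// Fast local acknowledgment at the edge, asynchronous forwarding to a
// central cloud for analysis. INGEST_URL is a hypothetical placeholder.
const INGEST_URL = "https://ingest.example.com/events";

export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("POST only", { status: 405 });
    }

    const event = await request.text();
    // waitUntil lets the nearby response return immediately while the
    // cloud-bound forward completes in the background.
    ctx.waitUntil(fetch(INGEST_URL, { method: "POST", body: event }));
    return new Response("accepted", { status: 202 });
  },
};
```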

Edge economics

Economics drove the adoption, not the technology. In other words, changes in the technology landscape came about because of the economics they offered, not the innovation behind the product. When cloud computing arrived on the scene, the economic advantage was obvious: swap a high up-front investment in your own server and network infrastructure for zero up-front cost. Best of all, on-demand leases of compute power could be paid with a credit card.

When it comes to edge computing, we see similar economic staying power for some applications. Cloud computing companies buy server and data center infrastructure in bulk to resell as capacity, while edge platforms, like content delivery networks (CDNs), use edge computing to extract additional value from their existing infrastructure. That, in turn, lowers the cost of providing compute services to customers.

CDNs are essentially fleets of simple servers that hold copies of data: they look up requested data and return it to the user. Because those retrieve-and-transmit operations need far less CPU than, say, running a full database engine, free CPU cycles are available throughout the day. CDN companies can further monetize that spare capacity with serverless models, which allows for meaningful economic impacts downstream.

Cloud-based bandwidth costs can run roughly 6x higher, while bandwidth is often not even a factor in edge-native pricing. Edgemesh serves roughly 2 billion requests per month around the world, and running edge-native rather than cloud-native translates to monthly cost savings of 72%. That's in addition to dramatically lower management costs (no servers to monitor), rapid global scale (code runs near users automatically), and simplified billing.

All in all? The edge-native model is compelling from a simplicity and economics perspective for startups looking to offer low-cost solutions on a global scale. No servers to manage, no regions to pay for, just on-demand functions running and scaling around the globe, all paid for with a credit card.


Jake Loveless has had a 20-year career in making things go faster, from low-latency trading on Wall Street to large-scale web platforms for the Department of Defense. He is a two-time winner of high-performance computing awards and a frequent contributor to the Association for Computing Machinery. Today, Loveless runs Edgemesh, the global web acceleration company he co-founded with two partners in 2016. Edgemesh helps ecommerce companies across multiple industries and platforms (including headless) deliver 20%-50% faster page loads to billions of users around the globe.