Achieve the Cloud-Edge Continuum Without Burdening Developers

The grand period of hyper-scalers is clearly not coming to an end—but things are going to change quite a bit.

One of the really cool things about the internet is that all computers are fundamentally equal. It's probably one of the things that made me fall in love with it.

When I first got an iPhone around 2008, almost the first thing I did on it was to install Linux and run Drupal. Talk about edge computing! I actually think this is where we are going. The grand period of hyper-scalers is clearly not coming to an end — but things are going to change quite a bit.

Think about it. When you push at least some of the personalization, and even the collaboration, to computers that are really close to the end user, their private information doesn't need to traverse the whole internet. You not only get incomparable performance; respecting the user's privacy becomes easier. Maybe even the default. And complying with frameworks like the GDPR (and even the hyper-local frameworks we see popping up all over the place) becomes much easier too.
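
To make this concrete, here is a minimal sketch of edge-local personalization, assuming a generic Fetch-API-style edge runtime (the interface most edge platforms expose). The readLocalProfile helper and the x-user-id header are hypothetical stand-ins, not any particular platform's API.

```typescript
// Hypothetical sketch: personalize at the edge, so personal data stays
// on a machine close to the user instead of traversing the internet.

interface Profile {
  name: string;
  locale: string;
}

// Assumed helper: reads a profile from storage co-located with this
// edge node. In a real system this would be the platform's local store.
async function readLocalProfile(userId: string): Promise<Profile | null> {
  return null; // placeholder for an edge-local storage lookup
}

async function handleRequest(request: Request): Promise<Response> {
  const userId = request.headers.get("x-user-id");
  const profile = userId ? await readLocalProfile(userId) : null;

  if (profile) {
    // Personalized entirely at the edge: the origin never sees the user.
    return new Response(`Hello, ${profile.name}!`, {
      headers: { "content-language": profile.locale },
    });
  }

  // No local profile: fall back to the shared, anonymous origin response.
  return fetch("https://origin.example.com/");
}
```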

It also changes the picture quite a bit regarding environmental impact. A huge share of the carbon impact of running web applications happens at the network level rather than at the origins; in our models, in some cases, it can be as high as 90%.

In the same way, doing a lot of our security at the edge layer has proven to be effective. So much of the attack volume never hits any of the inner machines.
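
As an illustration of that filtering (a deliberately crude sketch, not a real WAF), an edge handler can drop obviously hostile requests before they ever travel toward the origin:

```typescript
// Illustrative sketch of edge-side filtering: absorb obvious attack
// traffic at the edge so it never reaches the inner machines.

const BLOCKED_PATHS = new Set(["/wp-login.php", "/.env"]); // common scanner targets

function isSuspicious(request: Request): boolean {
  const url = new URL(request.url);
  if (BLOCKED_PATHS.has(url.pathname)) return true;
  // Crude example rule: most legitimate clients send a User-Agent.
  if (!request.headers.get("user-agent")) return true;
  return false;
}

async function handle(request: Request): Promise<Response> {
  if (isSuspicious(request)) {
    return new Response("Forbidden", { status: 403 }); // dropped at the edge
  }
  return fetch(request); // only legitimate traffic travels to the origin
}
```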

The flip side is that none of this happens trivially. Computers may all be equal on the Internet, but the software they run is not. Performance, privacy, and carbon impact are not the only constraints. Sometimes you must have strong consistency, and saved data has to be reliably available.

We always say that there aren’t really many useful stateless services. Most services that have value change something in the world. An ecommerce transaction. A comment on a design. And when you change the world it all becomes much trickier.

Consistency is not impossible to achieve in a distributed, peer-to-peer edge scenario … but it is far more expensive, considerably slower, and often carries a huge environmental impact (think Web3).

When we say computers are equal, it's evidently a huge simplification. They differ in how much raw power they have, in their concurrency capabilities, and most importantly in their throughput and latency to other computers. You can only load balance efficiently if the computer taking on the load is nearby. So it's really a complex set of tradeoffs. I see a lot of fluff going around, with magical promises that don't consider those tradeoffs. Can you run a relational database that is “geo-replicated” at the edge? Yes. But should you? In most cases, the answer is no. It is going to be brittle as hell, and you'll pay for it in a couple of years (or months!).
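
To sketch why "nearby" matters for load balancing, here is a toy latency-aware balancer: it only offloads work to a peer whose round-trip time is below a threshold, and otherwise keeps the work local. The threshold and the numbers are made-up illustrations.

```typescript
// Toy sketch of latency-aware load balancing: offloading only pays off
// when the peer taking on the load is a short ping away.

interface Peer {
  id: string;
  rttMs: number; // measured round-trip time to this peer
  load: number;  // current utilization, 0..1
}

const MAX_USEFUL_RTT_MS = 10; // beyond this, shipping the work costs more than it saves

function pickPeer(peers: Peer[]): Peer | null {
  const nearby = peers.filter((p) => p.rttMs <= MAX_USEFUL_RTT_MS);
  if (nearby.length === 0) return null; // nobody close enough: keep the work local
  // Among nearby peers, pick the least loaded one.
  return nearby.reduce((best, p) => (p.load < best.load ? p : best));
}

// Example: the far peer is ignored even though it is completely idle.
console.log(
  pickPeer([
    { id: "same-rack", rttMs: 0.3, load: 0.7 },
    { id: "same-city", rttMs: 4, load: 0.2 },
    { id: "across-ocean", rttMs: 120, load: 0.0 },
  ])
); // -> the "same-city" peer
```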

You shouldn't expect magic, and you can't rewrite the whole software stack we have created over the last few decades to just “project everything to the edge.” When people complain about “cold starts” on serverless functions, while what they actually deploy is a huge Java monolith that does a thousand queries to a relational database on every call … there is simply an impedance mismatch here.
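
A back-of-the-envelope calculation makes the mismatch obvious. If every query is a sequential round trip to the database, network latency multiplies; the figures below are illustrative assumptions, not measurements.

```typescript
// Sequential database round trips multiply network latency: this is the
// impedance mismatch in "projecting" a chatty monolith to the edge.

function totalLatencyMs(queries: number, rttMs: number): number {
  return queries * rttMs; // each sequential query pays the full round trip
}

const QUERIES_PER_REQUEST = 1000; // the "thousand queries" monolith above

console.log(totalLatencyMs(QUERIES_PER_REQUEST, 0.5)); //    500 ms with the database in the same rack
console.log(totalLatencyMs(QUERIES_PER_REQUEST, 80));  // 80,000 ms with the database an ocean away
```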

Part of our mission is figuring this out at the infrastructure level so that we can coordinate the application clusters. This allows us to project what can be projected closer to the end customer, while keeping services that want to be close to each other a very short ping away. Customers don't need to rewrite everything, and there are no blockchains and no incredibly brittle, complex machinery involved.

This is harder to do than it sounds. But I think this also confirms our approach to infrastructure orchestration and a lot of the intuitions that we had early on. We are in a unique position because we always refused the “patchwork approach” to running containerized applications. Because we control (and see) the whole thing from the storage layer through network dependencies — and the dependencies of each service — we can understand the constraints. This includes what we can push further away from the center and what must stay closely knit. And we’ve been looking at what can be done in terms of “just right consistency” at the actual edge quite a bit.
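
To give one flavor of what "just right consistency" can mean (a generic textbook structure, not a description of our actual machinery): a grow-only counter CRDT lets edge replicas accept writes independently and still converge, with no coordination at all.

```typescript
// Generic sketch of a grow-only counter CRDT: replicas at different edge
// locations increment independently and converge when they merge state.

type NodeId = string;

class GCounter {
  private counts = new Map<NodeId, number>();

  constructor(private readonly self: NodeId) {}

  increment(by = 1): void {
    this.counts.set(this.self, (this.counts.get(this.self) ?? 0) + by);
  }

  value(): number {
    let total = 0;
    for (const n of this.counts.values()) total += n;
    return total;
  }

  // Merging takes the per-node maximum, so the result is the same no
  // matter the order in which edge nodes exchange state.
  merge(other: GCounter): void {
    for (const [node, n] of other.counts) {
      this.counts.set(node, Math.max(this.counts.get(node) ?? 0, n));
    }
  }
}

// Two replicas diverge, then converge after merging.
const paris = new GCounter("paris");
const tokyo = new GCounter("tokyo");
paris.increment(3);
tokyo.increment(2);
paris.merge(tokyo);
console.log(paris.value()); // 5
```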

At the end of the day, from my standpoint, it's not really a question of “what you should do at the edge vs. what you should do at the origin.” It's more of a continuum question, and more of a question from the developer's standpoint. How can we make sure the optimum is achieved without burdening the app developer? And more importantly, how can we solve as much of this as possible, transparently, within the infrastructure?



Ori Pekelman is a technologist with a taste for business. A polyglot developer and software architect, he also worked as a teacher in a design school and had a previous career as a journalist. His job is to translate an incredibly ambitious roadmap that stretches years into the future into a concrete product offering that is not only efficient but also enjoyable to its users.

Over the years he has helped build a number of very successful startups, including aSmallWorld, AF83, Commerce Guys, and now Platform.sh. He is an open-source advocate, and a promoter and organizer of tech meetups such as ParisDataGeeks, the largest regular big-data gathering in Europe.