Publishing a web app to production requires provisioning and managing the computing resources it runs on. This usually means buying or renting servers that we must install, maintain, and update regularly. It is our responsibility to keep those servers secure from attacks, and as our application’s usage changes, we need to scale our servers up or down, either horizontally by changing the number of machines or vertically by adding more powerful ones.
The process of handling our own computing needs can quickly become inefficient, costly, and extremely risky. When we manage everything on our own, we run the risk of under- or over-provisioning, which leads to poor performance and wasted money, respectively. There are a variety of security risks we need to be acutely aware of, and we must make sure we have highly trained personnel to handle the whole process.
Small companies and individual developers can find this very challenging. Furthermore, it distracts us from what we should actually be focusing on: building and maintaining our application. Although companies with large infrastructure teams may be able to handle this internally, the levels of coordination required may slow down development.
It is in this context that many businesses have turned to cloud computing and so-called serverless deployments.
In serverless computing, our application is hosted on the servers of a cloud provider, such as Amazon Web Services, Azure, or Google Cloud. The provider executes our application, dynamically allocating the resources it needs and charging us only for what we actually use. It is called serverless not because servers aren’t involved, but because they are abstracted away from the development process.
Serverless computing is also known as “Functions as a Service” or “FaaS,” because code is executed as functions. Code runs as stateless functions inside containers and can respond to a wide variety of events, including HTTP requests, queue messages, and scheduled triggers. FaaS offerings from major cloud providers include AWS Lambda, Azure Functions, and Google Cloud Functions.
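To make this concrete, here is a minimal sketch of such a stateless function, written in the style of an AWS Lambda handler behind an API Gateway HTTP endpoint (the event shape and field names are illustrative assumptions):

```typescript
// A minimal stateless FaaS handler (AWS Lambda style).
// The platform invokes this function once per event; no state survives
// between invocations, so everything it needs arrives in the event.
export const handler = async (event: {
  queryStringParameters?: { name?: string };
}) => {
  const name = event.queryStringParameters?.name ?? "world";
  // For an API Gateway HTTP event, the return value becomes the response.
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```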
Serverless computing has revolutionized the way applications are built. In addition to being highly scalable, serverless applications are easier to deploy, use computing resources more efficiently, and run on infrastructure the provider keeps secure. However, serverless is not without its downsides.
Cloud computing, despite its efficient resource allocation, is generally overpriced, making it more expensive than traditional computing. It is also highly centralized: no matter where our users may be, our code is always executed in large data centers at a handful of specific geographical locations, and the distance between users and those data centers causes latency. Lastly, because serverless providers rely on containerized architectures, the first time a container is used it may take several seconds to spin up. The delay caused by this dynamic allocation of resources is referred to as a cold start.
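A small sketch shows why the spin-up delay only hits the first request. In container-based FaaS, module-scope code runs once, when the platform initializes a fresh container; the handler body runs on every invocation. The expensive setup mentioned in the comments is hypothetical:

```typescript
// Module-scope code executes once per container, i.e. during the cold start.
// Heavy setup placed here (DB connections, SDK clients) is paid only then.
const initializedAt = Date.now(); // runs once per fresh container
// const db = await expensiveInit(); // hypothetical heavy setup, cold start only

export const handler = async () => ({
  statusCode: 200,
  body: JSON.stringify({
    // near zero right after a cold start, larger on warm invocations
    containerAgeMs: Date.now() - initializedAt,
  }),
});
```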
In conclusion, despite its benefits, serverless can be excessively expensive, with unpredictable, lengthy cold starts and increased latency.
This is where Cloudflare and edge computing come into play.
Edge computing reduces latency and bandwidth usage by bringing computing closer to the end user. Unlike centralized cloud servers, which are geographically far from the devices they communicate with, the edge of the network is closer to the device, reducing the need to communicate over long distances. As serverless moves away from centralized data centers to the edge, a paradigm shift occurs. With Cloudflare Workers, this new approach to serverless is now readily available.
Cloudflare Workers are distinguished by their truly decentralized nature. A single deployment pushes your code to Cloudflare’s 200+ locations, where it can run instantly. Because Cloudflare’s data centers around the world act as a single network, your application is truly global rather than restricted to a particular data center. Developers no longer have to worry about load balancing and region configuration, which greatly simplifies the process.
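Here is a minimal sketch of what such a Worker looks like, using the module syntax; the `cf` metadata object is part of the Workers runtime, and the greeting text is illustrative:

```typescript
// A minimal Cloudflare Worker (module syntax). Deploy it once and it runs
// in every Cloudflare location; each request is served from the data
// center nearest the user, with no regions or load balancers to configure.
export default {
  async fetch(request: Request): Promise<Response> {
    // request.cf carries Cloudflare-specific metadata; `colo` is the
    // IATA-style code of the data center handling this request.
    const colo = (request as { cf?: { colo?: string } }).cf?.colo ?? "unknown";
    return new Response(`Hello from Cloudflare's ${colo} data center!`);
  },
};
```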
The truly global nature of Cloudflare Workers guarantees the lowest possible latency for end users, since code is executed in the Cloudflare location closest to them. Latency is an important application metric, since lower latency significantly increases user engagement.
Since Cloudflare Workers use the V8 JavaScript engine directly instead of Node.js, they start much faster and consume far fewer resources than other serverless platforms. As far as I know, Cloudflare Workers are the only offering on the market that eliminates cold starts entirely, meaning they require no spin-up time, and this holds in every location in Cloudflare’s global network. As a result, Cloudflare claims to be almost two times faster than Lambda@Edge, and three times faster than AWS Lambda.
With significant gains in latency reduction, zero cold start times, and greater computing speed, one might expect a significant price increase. Although this is certainly true of Amazon’s edge solution, Lambda@Edge, which is approximately three times more expensive than AWS Lambda, Cloudflare chose to move in the opposite direction. As a result, Cloudflare is not only better than its competitors, but cheaper as well, offering edge computing at about a tenth of the cost of Lambda@Edge.
Cloudflare’s pricing model is extremely simple, based mostly on a single charge per request. AWS, on the other hand, has hundreds of billable items that can combine into thousands of different pricing configurations, making bills nearly impossible to understand or predict. As an added bonus, Cloudflare’s monitoring lets you view your billing live as each request is received, whereas AWS billing data can take days or weeks to update, sometimes resulting in very unpleasant surprises.
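To see how simple per-request billing can be, here is a hypothetical back-of-the-envelope estimate; the rate and included-request figures below are illustrative assumptions, not quoted prices:

```typescript
// Hypothetical per-request pricing estimate. Both constants are
// illustrative assumptions, not real quotes from any provider.
const RATE_PER_MILLION = 0.5; // assumed USD per million extra requests
const INCLUDED_REQUESTS = 10_000_000; // assumed requests bundled with the plan

function estimateMonthlyCost(requests: number): number {
  const billable = Math.max(0, requests - INCLUDED_REQUESTS);
  return (billable / 1_000_000) * RATE_PER_MILLION;
}

// 50M requests/month -> 40M billable -> $20 on top of the base plan.
console.log(estimateMonthlyCost(50_000_000)); // 20
```

With a model like this, predicting next month’s bill is a one-line calculation; the contrast with a bill assembled from hundreds of billable items is the whole point.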
As serverless becomes more democratized, edge applications will become the norm. In today’s competitive business environment, edge applications are more resilient than centralized ones. By building at the edge, you get infrastructure that is simpler, cheaper, and more responsive to your needs. And when you have less infrastructure to worry about, you can spend more time on what matters most: your product.