Edge caching refers to the practice of placing intermediate storage between traditional or hyperscale data centers and the end users accessing a resource. This article presents a point of view on how to leverage edge caching technology for enhanced CI/CD pipelines.
This article was originally published by Cloudsmith.
At Cloudsmith we are proud to be cloud-native. We’re building a universal cache that really delivers in terms of accelerating software development and distribution, and we don’t believe that is possible with conventional on-prem solutions.
These days multiple teams in locations all over the globe are absolutely typical in software development. Those teams and individuals need a single source of truth and they need consistent, reliable delivery wherever they are. And for us, it feels obvious that the cloud is the only sensible way to deliver that type of service.
Thankfully our customers agree. They trust us enough to put Cloudsmith at the heart of their CI and deployment pipelines. In that context, performance – how long it takes to fetch a package – is vitally important. When hundreds or thousands of packages need to be integrated into a build, small delays quickly compound into critical and irritating eternities.
We don’t want that to happen to our customers, and we don’t believe they should have to sacrifice performance. We are always working as hard as possible to make cloud-native package management as fast as possible.
To that end, we’ve been busy over the last week rolling out new product features that make the cloud feel and perform as if it were right there in the building with you – specifically geographic storage and edge caching.
Many of our customers want control over exactly where the packages they use are stored. This might be for regulatory or compliance reasons, or simply a desire to keep intellectual property within a specific jurisdiction.
Thanks to our cloud-native approach and our custom storage options, they can do just that, and choose precisely where packages within the Cloudsmith infrastructure are stored.
Of course storing packages closer to where your services and teams operate can also provide significant performance benefits and lower latency in many cases – and particularly for cold fetches. That makes things fast, but at Cloudsmith ‘fast’ isn’t good enough. As mentioned above we want the cloud to feel like home – and the modern DevOps processes we integrate with demand it.
That’s why we’ve gone one step further in terms of both control and performance.
The closer your packages are to your teams and processes, the better. It’s as simple as that. And no amount of optimization in our code (and there is plenty in there) can defeat the laws of physics. As a result, requests that have to cross half of the planet are always going to take hundreds of milliseconds no matter what you do.
Cloudsmith addresses this issue by supporting the caching of artifacts and packages as close to the team as possible (and for multiple teams in multiple locations simultaneously). Using Lambda@Edge, an Amazon service that combines the CloudFront CDN with Lambda’s serverless processing capabilities, we shave off a large chunk of latency by moving critical computation closer to our customers and removing the need for requests to cross the world in many cases.
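To make the mechanics concrete, here is a minimal sketch of what a Lambda@Edge function looks like – this is not Cloudsmith’s actual implementation, just an illustrative origin-response handler that stamps a Cache-Control header on each artifact so CloudFront’s edge locations can keep a copy near the user. The file-extension rules and TTL values are invented for the example.

```python
# Illustrative Lambda@Edge origin-response handler (Python runtime).
# CloudFront invokes it at the edge after fetching from the origin,
# letting us decide per-request how long the edge may cache the object.

CACHE_RULES = [
    # (predicate on the request URI, Cache-Control value) - examples only
    (lambda uri: uri.endswith((".deb", ".rpm", ".whl", ".tgz")),
     "public, max-age=86400"),   # versioned artifacts: cache for a day
    (lambda uri: uri.endswith(("Packages.gz", "index.json")),
     "public, max-age=60"),      # mutable repo metadata: one minute
]
DEFAULT_POLICY = "public, max-age=3600"

def cache_control_for(uri):
    """Pick a Cache-Control policy from the first matching rule."""
    for matches, policy in CACHE_RULES:
        if matches(uri):
            return policy
    return DEFAULT_POLICY

def handler(event, context):
    # Origin-response events carry both the request and the response.
    record = event["Records"][0]["cf"]
    response = record["response"]
    policy = cache_control_for(record["request"]["uri"])
    response["headers"]["cache-control"] = [
        {"key": "Cache-Control", "value": policy}
    ]
    return response
```

Because the function runs in CloudFront’s edge locations rather than a single region, the header decision – and, on a cache hit, the entire response – never has to travel back to the origin.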
To repeat: that can really make a difference. We have a number of customers in Australia, and even before we take application latency into account we’re dealing with a round-trip time of roughly 280 ms between Ireland and Oz, on even the most optimal of network links.
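That figure isn’t pessimism – it’s close to what physics allows. A quick back-of-the-envelope check, assuming light in optical fibre travels at roughly c/1.5 and using approximate coordinates for Dublin and Sydney, puts the best-case round trip at around 170 ms before any routing detours, queuing, or application work:

```python
import math

# Light in optical fibre travels at roughly c / 1.5 (refractive index ~1.5).
FIBRE_KM_PER_S = 299_792 / 1.5  # about 200,000 km/s

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth (radius ~6371 km)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def min_rtt_ms(lat1, lon1, lat2, lon2):
    # Best-case round trip: twice the great-circle distance at fibre speed,
    # ignoring routing detours, queuing and processing delays.
    return 2 * haversine_km(lat1, lon1, lat2, lon2) / FIBRE_KM_PER_S * 1000

# Dublin (53.35, -6.26) to Sydney (-33.87, 151.21): ~17,200 km one way,
# so the theoretical floor is already well over 150 ms round trip.
floor_ms = min_rtt_ms(53.35, -6.26, -33.87, 151.21)
```

Real traffic follows cable routes rather than great circles, which is how a ~170 ms theoretical floor becomes a ~280 ms observed RTT – and why no amount of code optimization substitutes for moving the cache closer.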
What’s more, we now give customers the ability to configure edge caching settings for themselves. In other words, we give full control over how long packages are cached, and allow different package types to be cached for different periods of time – and those rules are entirely flexible.
Time-to-live (TTL) is entirely customizable: packages that change infrequently can be stored in the cache for extended periods (with consequent performance improvements), whereas frequently updated packages can be refreshed more often to ensure the cache is always serving the latest and correct version.
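As a sketch of what such configurable rules might look like – the package kinds, names, and durations below are purely illustrative, not Cloudsmith’s actual rule set – per-format TTLs reduce to a simple lookup that the edge consults on each request:

```python
# Hypothetical per-format edge-cache TTL rules (seconds). The idea:
# immutable releases cache for a long time, mutable things stay fresh.
TTL_RULES = {
    "maven-release":  7 * 24 * 3600,  # immutable releases: a full week
    "maven-snapshot": 120,            # snapshots change often: two minutes
    "npm":            24 * 3600,      # one day
    "repo-index":     60,             # repository metadata must stay fresh
}
DEFAULT_TTL = 3600  # one hour for anything without an explicit rule

def edge_ttl(package_kind: str) -> int:
    """Return the cache TTL (seconds) for a given package kind."""
    return TTL_RULES.get(package_kind, DEFAULT_TTL)
```

The design choice worth noting is the asymmetry: a stale release artifact is harmless (its contents never change once published), so long TTLs are free performance, while a stale index can hide new versions, so it gets seconds rather than days.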
Ultimately, it’s about both performance and control. With the ability to define edge caching rules, our customers control how long assets stay close to their users, all across the world. Edge-cached assets are immediately available for use, with reduced latency, better throughput, and high availability.