Who Guards the Guardian? Rethinking Vendor Reliance in Modern Cloud Architectures

  • Writer: Reuben Sant
  • Nov 19
  • 4 min read

Guest article by Reuben Sant, iSupport. Reuben is the Director and lead software engineer at iSupport Ltd., a seasoned technologist with a track record of delivering mission-critical projects across various sectors. With years of hands-on experience building, scaling, and rescuing complex systems, he joins us today to share his insights on the realities of modern cloud architectures and the hidden risks many organisations overlook.


When something on the internet breaks, we tend to assume it is our fault. A misconfiguration. A DNS mistake. A server under pressure. But over the years, outages have increasingly reminded us that even when your systems are healthy, your digital services can still go dark, simply because your upstream guardian fails.


We trust platforms like Cloudflare, major cloud providers, CDNs, and serverless runtimes to shield our applications from threats and scale our services globally. And they do. But this raises a difficult question:


If Cloudflare is my guardian… who guards the guardian?

It’s a question that matters, because beneath the convenience and performance benefits lies a systemic issue that many organisations have accidentally baked into their architectures.


The deeper issue: all eggs in one basket by design

Many businesses, especially in their early digital journey, make an architectural choice that feels logical at the time:


  • Place everything on one cloud

  • Build heavily on one vendor’s serverless ecosystem

  • Rely on one CDN, one DNS provider, one routing layer

  • Centralise everything for simplicity


This is easy, fast, and efficient — until it isn’t.

Because the moment your single provider suffers an outage, your entire organisation suffers one too. Even if your origin servers are healthy. Even if your code is stable. Even if your infrastructure is fine. You are only as resilient as the most fragile layer in your vendor chain. And in many cases, that layer is outside your control.


Key principle: aim to be vendor-neutral where it matters most

This does not mean abandoning the cloud, rejecting CDNs, or avoiding serverless. These are powerful tools. The risk lies not in the technology, but in over-coupling yourself to one provider’s proprietary stack.


Where resilience matters most, design around portability, not dependency.

  • Prefer open protocols over closed ecosystems

  • Choose portable runtimes that run on multiple providers

  • Use tools that support multi-cloud or cloud-agnostic deployments

  • Keep critical logic close to your codebase — not trapped in someone else’s “magic layer”


Vendor services should help you — not trap you.
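One way to keep critical logic close to your codebase is to write the core as plain, SDK-free functions and let each vendor integration be a thin, replaceable adapter. The sketch below is illustrative only: the event shapes and adapter names are hypothetical, not any provider's real API.

```python
# Portable core logic: plain Python, no vendor SDK imports.
def handle_request(path: str, params: dict) -> dict:
    """Business logic that knows nothing about any provider."""
    if path == "/greet":
        return {"status": 200, "body": f"Hello, {params.get('name', 'world')}!"}
    return {"status": 404, "body": "not found"}


# Thin, disposable adapters translate each vendor's event shape into the
# neutral (path, params) call. These event shapes are illustrative.
def lambda_style_adapter(event: dict) -> dict:
    return handle_request(event.get("rawPath", "/"),
                          event.get("queryStringParameters") or {})


def worker_style_adapter(url_path: str, query: dict) -> dict:
    return handle_request(url_path, query)
```

If a provider disappears, only the few-line adapter needs rewriting; the core logic moves untouched.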

Practical self-check questions

Here are a few simple but powerful questions every organisation should ask itself:


  1. Can I export my data and import it somewhere else without drama? If the answer is no, you are already locked in.

  2. How much business logic lives inside vendor-specific “Workers”, “Functions”, or “Lambdas”? Does your application rely on features that don’t exist outside your provider’s ecosystem?

  3. If my main provider had a major outage today, what could I realistically do? Can you bypass the proxy and go directly to origin? Switch DNS or CDN quickly? Fail over to another region or cloud?

  4. Am I using serverless as a tool, or has it become a trap? Serverless can accelerate development, but only when used with awareness. Unchecked, it becomes a dependency that blocks resilience.
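The first self-check question can even be automated as a smoke test: export your data to an open format, re-import it, and verify nothing was lost. A minimal sketch, using JSON Lines as the neutral format (the record shapes are hypothetical):

```python
import json


def export_records(records: list[dict]) -> str:
    """Dump records to a vendor-neutral format (JSON Lines)."""
    return "\n".join(json.dumps(r, sort_keys=True) for r in records)


def import_records(payload: str) -> list[dict]:
    """Re-load the export; any store that speaks JSON can ingest this."""
    return [json.loads(line) for line in payload.splitlines() if line.strip()]


def round_trip_ok(records: list[dict]) -> bool:
    # If this check fails, the export path is lossy and you are
    # closer to lock-in than you think.
    return import_records(export_records(records)) == records
```

Running a round-trip check like this against a sample of production data, on a schedule, turns "can I leave?" from a hopeful assumption into a verified fact.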


The main takeaway

The cloud is not the enemy. Serverless is not the enemy. Vendor platforms offer extraordinary power: scale, reach, performance, automation.


The real risk is when we tie our resilience, continuity, and architectural destiny to one provider.

Good engineering in 2025 (and beyond) means:


  1. Building vendor-neutral foundations - Modern platforms are powerful, but they become dangerous when they become irreplaceable. Vendor-neutral foundations mean using open standards, supported protocols, and technologies that don’t lock you in. It means choosing databases, runtimes, deployment tools, and security layers that work across multiple providers, not just one. The goal is simple: your architecture should continue functioning or be easily rehomed, even if one vendor disappears, suffers an outage, or changes their pricing model overnight.


  2. Designing for portability - Portability is resilience in disguise. Whether running containers, functions, or full applications, portability ensures your workloads can shift between regions, clouds, or providers without major rewrites. This avoids the silent trap where “serverless convenience” slowly evolves into “unmovable dependency”.


  3. Keeping critical logic close to your own stack - It’s tempting to push more and more business logic into proprietary services: Cloudflare Workers, AWS Lambda, Firebase Functions, or bespoke vendor-specific pipelines. But when core logic lives inside your provider’s ecosystem, it becomes difficult (and expensive) to move. Good engineering means keeping your essential logic within your own codebase, under your control, where you control how and where it runs. Let vendors provide acceleration, not own your architecture.


  4. Maintaining the original idea of the internet: the ability to route around failure - The internet was designed with one philosophy: if a path breaks, find another. Modern architectures often forget this principle by funnelling everything through one gatekeeper. True resilience brings back the original spirit: multiple paths, multiple providers, failover options, and the ability to maintain service even when one part of the ecosystem collapses. Your architecture should not rely on a single point of failure, regardless of how big or reputable that point may be.
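The "route around failure" principle can be sketched as a small failover loop: try each provider path in order and return the first healthy response. The path names here are illustrative; in practice each callable would wrap a real client for a CDN, a secondary provider, or a direct-to-origin connection.

```python
def fetch_with_failover(fetchers):
    """Try each provider path in order; return the first healthy response.

    `fetchers` is an ordered list of zero-argument callables, e.g.
    [via_cdn, via_secondary_cdn, direct_to_origin] (names illustrative).
    """
    errors = []
    for fetch in fetchers:
        try:
            return fetch()
        except Exception as exc:  # a real system would catch narrower errors
            errors.append(exc)
    raise RuntimeError(f"all {len(errors)} paths failed: {errors}")
```

The point is not the loop itself but what it requires of your architecture: a second path has to exist before you can fall back to it.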


Because resilience isn’t just about protecting your origin. It’s about ensuring you are never trapped by your guardian’s weaknesses.
