Deploying AWS Applications with Cloudflare Workers and Cloud Connector

A comprehensive guide on using Cloudflare as a security and CDN layer for AWS-hosted applications, with edge-level API key injection

When building web applications that need both security and performance, we often reach for AWS services like CloudFront and WAF. However, Cloudflare offers a compelling alternative for this layer of your architecture. In this post, we'll explore how to use Cloudflare Workers and Cloud Connector to create a secure, globally distributed frontend for AWS-hosted applications.

The challenge we're solving is straightforward: we need to protect our API keys from being exposed in the browser while serving our static assets through a global CDN. Traditional AWS solutions involve CloudFront distributions, Lambda@Edge functions, and WAF rules. While these work well, Cloudflare offers a different approach while keeping your core application logic in AWS.

Understanding the Architecture

Our architecture leverages Cloudflare's global network as the entry point for all traffic, with Workers providing edge compute capabilities for security and routing. Here's how the pieces fit together:

User Browser → Cloudflare CDN/WAF → Cloudflare Worker
  ├─ API calls     → API Gateway → Lambda Functions
  └─ Static assets → Cloud Connector → S3 Static Website

When a user makes a request, it first hits Cloudflare's edge network. The Worker examines the request path and makes a routing decision. For API calls, it injects the API key and forwards the request to API Gateway. For static assets, it passes the request through to Cloud Connector, which retrieves content from S3. This separation of concerns keeps our architecture clean and secure.
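The routing decision itself reduces to a predicate on the request path. A minimal sketch (the target names are illustrative labels, not Cloudflare APIs):

```javascript
// Minimal sketch of the edge routing decision described above
function routeFor(pathname) {
  // API traffic gets the key injected and goes to API Gateway;
  // everything else falls through to Cloud Connector and S3
  return pathname.startsWith("/api/") ? "api-gateway" : "cloud-connector";
}

console.log(routeFor("/api/analyze")); // "api-gateway"
console.log(routeFor("/index.html")); // "cloud-connector"
```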

Building the AWS Infrastructure

Let's start by setting up the AWS side of our infrastructure. We need an S3 bucket configured for static website hosting, but with a crucial security requirement: it should only accept requests from Cloudflare's IP addresses.

Creating the S3 Static Website Construct

Our CDK construct creates an S3 bucket with the necessary configuration for static website hosting while restricting access to Cloudflare IPs:

// s3-static-website-construct.ts
import * as path from "path";
import { RemovalPolicy } from "aws-cdk-lib";
import { AnyPrincipal, Effect, PolicyStatement } from "aws-cdk-lib/aws-iam";
import {
  BlockPublicAccess,
  Bucket,
  BucketEncryption,
} from "aws-cdk-lib/aws-s3";
import { BucketDeployment, Source } from "aws-cdk-lib/aws-s3-deployment";
import { Construct } from "constructs";

export class S3StaticWebsiteConstruct extends Construct {
  public readonly websiteBucket: Bucket;
  public readonly websiteUrl: string;
  public readonly websiteDomainName: string;
 
  private readonly CLOUDFLARE_IPS = [
    "173.245.48.0/20",
    "103.21.244.0/22",
    "103.22.200.0/22",
    "103.31.4.0/22",
    "141.101.64.0/18",
    "108.162.192.0/18",
    "190.93.240.0/20",
    "188.114.96.0/20",
    "197.234.240.0/22",
    "198.41.128.0/17",
    "162.158.0.0/15",
    "104.16.0.0/13",
    "104.24.0.0/14",
    "172.64.0.0/13",
    "131.0.72.0/22",
  ];
 
  constructor(scope: Construct, id: string) {
    super(scope, id);
 
    this.websiteBucket = new Bucket(this, "WebsiteBucket", {
      websiteIndexDocument: "index.html",
      websiteErrorDocument: "index.html",
      encryption: BucketEncryption.S3_MANAGED,
      enforceSSL: false, // Required for static website endpoint
      blockPublicAccess: new BlockPublicAccess({
        blockPublicAcls: true,
        blockPublicPolicy: false,
        ignorePublicAcls: true,
        restrictPublicBuckets: false,
      }),
      removalPolicy: RemovalPolicy.DESTROY,
      autoDeleteObjects: true,
    });
 
    // Restrict access to Cloudflare IPs
    this.websiteBucket.addToResourcePolicy(
      new PolicyStatement({
        sid: "AllowCloudflareIPs",
        effect: Effect.ALLOW,
        principals: [new AnyPrincipal()],
        actions: ["s3:GetObject"],
        resources: [`${this.websiteBucket.bucketArn}/*`],
        conditions: {
          IpAddress: {
            "aws:SourceIp": this.CLOUDFLARE_IPS,
          },
        },
      }),
    );
 
    // Deploy website files
    new BucketDeployment(this, "DeployWebsite", {
      sources: [Source.asset(path.join(__dirname, "../../../frontend/dist"))],
      destinationBucket: this.websiteBucket,
    });
 
    this.websiteUrl = this.websiteBucket.bucketWebsiteUrl;
    this.websiteDomainName = this.websiteBucket.bucketWebsiteDomainName;
  }
}

The key aspect of this configuration is the IP restriction. We're telling S3 to accept requests only from Cloudflare's IP addresses, which prevents direct access to the bucket while still allowing Cloudflare's Cloud Connector to retrieve content. The bucket is configured with static website hosting enabled, which provides the HTTP endpoint that Cloud Connector needs. This plays the same role as restricting a CloudFront origin, with Cloudflare standing in for CloudFront.
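The rendered bucket policy looks roughly like this (abridged; the bucket ARN is illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudflareIPs",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-app-production/*",
      "Condition": {
        "IpAddress": { "aws:SourceIp": ["173.245.48.0/20", "..."] }
      }
    }
  ]
}
```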

Setting Up the API Gateway

Next, we need to create an API Gateway that will handle our backend API calls. This API will be protected by API key authentication, with the key being injected by our Cloudflare Worker:

// api-construct.ts
import {
  ApiKeySourceType,
  Cors,
  LambdaIntegration,
  RestApi,
} from "aws-cdk-lib/aws-apigateway";
import { IFunction } from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";

export interface ApiConstructProps {
  environment: string;
  analyzeLambda: IFunction;
}

export class ApiConstruct extends Construct {
  public readonly api: RestApi;
  public readonly apiKey: string;
 
  constructor(scope: Construct, id: string, props: ApiConstructProps) {
    super(scope, id);
 
    this.api = new RestApi(this, "Api", {
      restApiName: "my-app-api",
      description: "API for my application",
      deployOptions: {
        stageName: props.environment,
      },
      defaultCorsPreflightOptions: {
        allowOrigins: Cors.ALL_ORIGINS,
        allowMethods: Cors.ALL_METHODS,
        allowHeaders: ["Content-Type", "X-Api-Key"],
      },
      apiKeySourceType: ApiKeySourceType.HEADER,
    });
 
    // Create API key
    const apiKey = this.api.addApiKey("ApiKey", {
      apiKeyName: `my-app-api-key-${props.environment}`,
    });
 
    // Create usage plan
    const usagePlan = this.api.addUsagePlan("UsagePlan", {
      name: `my-app-usage-plan-${props.environment}`,
      throttle: {
        rateLimit: 100,
        burstLimit: 200,
      },
    });
 
    usagePlan.addApiKey(apiKey);
    usagePlan.addApiStage({
      stage: this.api.deploymentStage,
    });
 
    // Note: keyId is the API key's ID, not its secret value. CDK does not
    // expose the value directly; retrieve it with
    // `aws apigateway get-api-key --api-key <keyId> --include-value`
    this.apiKey = apiKey.keyId;
 
    // Add Lambda integrations
    const analyzeResource = this.api.root.addResource("analyze");
    analyzeResource.addMethod(
      "POST",
      new LambdaIntegration(props.analyzeLambda),
      {
        apiKeyRequired: true,
      },
    );
  }
}

The API Gateway is configured to require an API key for all requests. This is an important security component that deserves deeper explanation.

Understanding API Key Security

When you create an API Gateway, you can protect it with various authentication methods. API keys are one of the simplest approaches - they're essentially passwords that clients must include with their requests. However, this creates a fundamental problem for frontend applications: where do you store the API key?

If you embed the API key in your JavaScript code, it becomes visible to anyone who views your source code. Even if you try to hide it in environment variables or build-time configurations, it's still exposed in the final bundle. This makes API keys unsuitable for direct use in frontend applications.
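To make the exposure concrete: anything compiled into the bundle can be read straight back out of the shipped JavaScript. The value below is made up, but the principle holds for any build-time secret:

```javascript
// A "hidden" key baked in at build time still ships to every browser
// (the key below is illustrative, not a real credential)
const bundledConfig = { apiKey: "example-key-123" };

// Anyone can recover it by inspecting the served bundle text
const bundleText = JSON.stringify(bundledConfig);
console.log(bundleText.includes("example-key-123")); // true
```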

Our Cloudflare Worker solves this problem by acting as a secure proxy. The API key is stored as a Worker secret, which is encrypted at rest and only accessible to your Worker code. When a request comes from your frontend, the Worker adds the API key header before forwarding to API Gateway:

// The critical security transformation
headers.set("x-api-key", env.API_KEY);

From your frontend's perspective, it's making a simple fetch request to /api/analyze. The authentication happens transparently at the edge, where client code can't access or tamper with it. This approach provides the security of API key authentication without the exposure risk of client-side storage, and it ensures that only requests routed through our Cloudflare Worker are accepted.
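Because `Headers` is a web standard, the transformation can be sketched outside the Workers runtime too. The secret value here is illustrative; in production it comes from an encrypted Worker secret:

```javascript
// Sketch of the edge-side header transformation (illustrative secret value)
const env = { API_KEY: "example-key" }; // in production: a Worker secret

const browserHeaders = new Headers({ "content-type": "application/json" });
const forwarded = new Headers(browserHeaders);
forwarded.set("x-api-key", env.API_KEY); // injected only at the edge
forwarded.delete("origin"); // scrubbed before reaching API Gateway

console.log(forwarded.get("x-api-key")); // "example-key"
console.log(browserHeaders.has("x-api-key")); // false
```

The browser's own headers never contain the key; only the forwarded copy does.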

Creating the Cloudflare Worker

Now we come to the heart of our security implementation: the Cloudflare Worker. This Worker runs at the edge and handles all incoming requests, making routing decisions and injecting security headers.

Understanding the Worker Logic

The Worker needs to handle two types of requests: API calls that need authentication and static asset requests that should pass through to S3. Here's the complete implementation:

// worker.js
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    console.log("Worker received request:", request.method, url.pathname);
 
    // Handle API routes
    if (url.pathname.startsWith("/api/")) {
      console.log("Processing API request to:", url.pathname);
 
      // Validate origin for security
      const origin = request.headers.get("origin");
      const allowedOrigins = [
        `https://${env.ALLOWED_DOMAIN}`,
        `http://${env.ALLOWED_DOMAIN}`, // For local development
        env.ALLOWED_DOMAIN,
      ];
 
      if (
        origin &&
        !allowedOrigins.some(
          (allowed) => origin === allowed || origin.includes(allowed),
        )
      ) {
        return new Response("Forbidden", {
          status: 403,
          headers: { "Content-Type": "text/plain" },
        });
      }
 
      // Clone the request to modify headers
      const headers = new Headers(request.headers);
 
      // Inject API key - this is the critical security step
      headers.set("x-api-key", env.API_KEY);
 
      // Remove origin/referer headers before forwarding
      headers.delete("origin");
      headers.delete("referer");
 
      // Construct API Gateway URL with stage name
      const apiUrl = `https://${env.API_GATEWAY_HOST}/${env.STAGE}${url.pathname}`;
      console.log("Forwarding to API Gateway:", apiUrl);
 
      const apiRequest = new Request(apiUrl, {
        method: request.method,
        headers: headers,
        body: request.body,
        redirect: "follow",
      });
 
      try {
        const response = await fetch(apiRequest);
        console.log("API Gateway response status:", response.status);
 
        // Clone response to modify headers
        const modifiedResponse = new Response(response.body, response);
 
        // Add CORS headers
        modifiedResponse.headers.set(
          "Access-Control-Allow-Origin",
          origin || "*",
        );
        modifiedResponse.headers.set(
          "Access-Control-Allow-Methods",
          "GET, POST, PUT, DELETE, OPTIONS",
        );
        modifiedResponse.headers.set(
          "Access-Control-Allow-Headers",
          "Content-Type",
        );
 
        return modifiedResponse;
      } catch (error) {
        console.error("API Gateway request failed:", error);
        return new Response(
          JSON.stringify({
            error: "Internal Server Error",
            message: error.message,
          }),
          {
            status: 500,
            headers: {
              "Content-Type": "application/json",
              "Access-Control-Allow-Origin": origin || "*",
            },
          },
        );
      }
    }
 
    // Handle OPTIONS preflight requests
    if (request.method === "OPTIONS") {
      return new Response(null, {
        status: 204,
        headers: {
          "Access-Control-Allow-Origin": "*",
          "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS",
          "Access-Control-Allow-Headers": "Content-Type",
          "Access-Control-Max-Age": "86400",
        },
      });
    }
 
    // For all other routes, pass through to origin (S3 via Cloud Connector)
    return fetch(request);
  },
};

The Worker's logic is straightforward but powerful. For API requests, it validates the origin, injects the API key, and forwards the request to API Gateway. For static assets, it simply passes the request through, allowing Cloud Connector to handle the S3 retrieval.
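One caveat: the `origin.includes(allowed)` fallback in the Worker's allowlist check is permissive, since any origin that merely contains the allowed domain as a substring would pass. A stricter variant compares fully-qualified origins exactly (domain value is illustrative):

```javascript
// Stricter origin validation: exact match against fully-qualified origins
function isAllowedOrigin(origin, allowedDomain) {
  return [`https://${allowedDomain}`, `http://${allowedDomain}`].includes(origin);
}

console.log(isAllowedOrigin("https://yourdomain.com", "yourdomain.com")); // true
console.log(isAllowedOrigin("https://yourdomain.com.evil.example", "yourdomain.com")); // false
```

Note that the substring check would accept the second origin, since it contains "yourdomain.com".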

Configuring the Worker

The Worker configuration is managed through a wrangler.toml file that defines environments and routing:

# wrangler.toml
name = "my-app-worker"
main = "worker.js"
compatibility_date = "2024-01-01"
 
# Note: TOML inline tables must fit on a single line
[env.development]
vars = { ALLOWED_DOMAIN = "localhost:5173", STAGE = "development" }
 
[env.production]
vars = { ALLOWED_DOMAIN = "yourdomain.com", STAGE = "production" }
 
[[env.production.routes]]
pattern = "yourdomain.com/*"
zone_name = "yourdomain.com"

Deploying the Complete Solution

Now that we have all the pieces, let's walk through the complete deployment process. This involves deploying AWS infrastructure, configuring Cloudflare services, and connecting everything together.

Step 1: Deploy the AWS Infrastructure

First, we deploy our CDK stack to create the S3 bucket, API Gateway, and Lambda functions:

cd infra
pnpm install
pnpm build
 
# Deploy to production
npx cdk deploy MyAppCloudflareStack-production \
  --context environment=production

The CDK deployment will output several important values that we'll need for the next steps:

Outputs:
MyAppCloudflareStack-production.ApiUrl = https://abc123.execute-api.us-east-1.amazonaws.com/production
MyAppCloudflareStack-production.ApiKey = def456
MyAppCloudflareStack-production.WebsiteUrl = http://my-app-production.s3-website-us-east-1.amazonaws.com

Make note of these values - we'll need them for configuring Cloudflare. Keep in mind that the ApiKey output is the key's ID, not its secret value.

Step 2: Build and Deploy the Frontend

The frontend deployment is handled automatically by our CDK stack through the BucketDeployment construct. When you deploy the CDK stack, it builds and uploads your frontend files to S3:

// Inside the S3StaticWebsiteConstruct
new BucketDeployment(this, "DeployWebsite", {
  sources: [Source.asset(path.join(__dirname, "../../../frontend/dist"))],
  destinationBucket: this.websiteBucket,
});

This construct looks for built files in the frontend/dist directory and uploads them to S3 during the CDK deployment process.

Step 3: Configure Cloudflare DNS

Log into your Cloudflare dashboard and navigate to your domain's DNS settings. We need to create an A record that points to a dummy IP address. This might seem strange, but it's how we tell Cloudflare to proxy requests through their network:

Type: A
Name: @ (or your subdomain)
Content: 192.0.2.1
Proxy status: Proxied (orange cloud)
TTL: Auto

The IP address 192.0.2.1 comes from the TEST-NET-1 documentation range (RFC 5737) and will never route anywhere. Since we're using Cloudflare's proxy, the actual IP doesn't matter - Cloud Connector will handle the routing to S3.

Step 4: Set Up Cloud Connector

Cloud Connector is Cloudflare's service for connecting to cloud storage providers. In the Cloudflare dashboard:

  1. Navigate to Rules → Cloud Connector
  2. Click "Create Cloud Connector"
  3. Select "Amazon S3"
  4. Configure the connector:
    • Name: my-app-s3
    • Bucket URL: Use the S3 website URL from your CDK output
    • When incoming requests match:
      • Field: Hostname
      • Operator: equals
      • Value: yourdomain.com

Cloud Connector will now intercept requests to your domain and fetch content from S3.

Step 5: Deploy the Cloudflare Worker

Before deploying the Worker, we need to set up the secret environment variables:

cd cloudflare
 
# Set the API key secret
wrangler secret put API_KEY --env production
# When prompted, paste the API key VALUE. The CDK output is the key ID;
# fetch the value with:
#   aws apigateway get-api-key --api-key <keyId> --include-value
 
# Set the API Gateway host
wrangler secret put API_GATEWAY_HOST --env production
# Enter just the host part: abc123.execute-api.us-east-1.amazonaws.com
 
# Deploy the Worker
pnpm deploy:prod

The Worker is now deployed and ready to handle requests. The final step is to ensure it's connected to your domain's route.

Step 6: Configure Transform Rules for SPA Routing

Single Page Applications need special handling for client-side routing. When a user navigates directly to a route like /dashboard, we need to serve the index.html file and let the JavaScript router take over.

In the Cloudflare dashboard:

  1. Go to Rules → Transform Rules → Rewrite URL
  2. Create a new rule:
    • Rule name: "SPA Routing"
    • When incoming requests match:
      • Custom filter expression
    • Enter this expression:
      not starts_with(http.request.uri.path, "/api/") and
      not (http.request.uri.path matches "\\.(js|css|png|jpg|jpeg|svg|ico|webp|woff2?)$")
    • Then:
      • Rewrite to: Static
      • Path: /index.html
      • Preserve query string: Yes

This rule ensures that any request that isn't for an API endpoint or a static asset gets rewritten to serve index.html, enabling client-side routing. Note that the regex `matches` operator requires a Business plan or higher; on lower plans you can approximate the asset check with a series of `ends_with` conditions.
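The rule's decision logic, expressed as a plain function for clarity (the extension list mirrors the rule above):

```javascript
// Sketch of the SPA rewrite decision encoded by the Transform Rule
const assetPattern = /\.(js|css|png|jpg|jpeg|svg|ico|webp|woff2?)$/;

function rewriteTarget(pathname) {
  if (pathname.startsWith("/api/")) return pathname; // API passes through
  if (assetPattern.test(pathname)) return pathname; // assets pass through
  return "/index.html"; // everything else serves the SPA shell
}

console.log(rewriteTarget("/dashboard")); // "/index.html"
console.log(rewriteTarget("/assets/app.js")); // "/assets/app.js"
```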

Cloudflare vs AWS Native: A Realistic Comparison

Now that we've implemented this architecture, let's examine when you might choose Cloudflare over AWS's native CloudFront and WAF solutions. Both approaches can achieve the same security and performance goals, but they differ in complexity, cost, and operational considerations.

Cost Analysis

The cost comparison depends heavily on your traffic patterns and requirements:

Cloudflare Pricing (at publish):

  • CDN: $0.05 per GB data transfer
  • Workers: $0.50 per million CPU requests
  • Basic WAF: Free tier available, Pro at $20/month, Business at $200/month
  • DDoS Protection: Included in all paid plans

AWS Pricing (at publish):

  • CloudFront: $0.085 per GB (first 10TB), $0.0075 per 10,000 requests
  • Lambda@Edge: $1.84 per million requests, plus duration charges
  • WAF: ~$20-30/month for basic rules, Shield Advanced $3,000/month for enterprise DDoS protection
  • Free tier: 1TB data transfer per month

Real-world scenario: For a small application serving 1TB/month with 1 million API requests:

  • Cloudflare: ~$50-70/month (Pro plan + Workers)
  • AWS: ~$85-100/month (CloudFront + Lambda@Edge + basic WAF)

For larger applications, AWS's tiered pricing can become more cost-effective, while Cloudflare's predictable pricing benefits smaller deployments.

Infrastructure Complexity Trade-offs

Cloudflare's Hidden Complexity: While Cloudflare Workers appear simpler, this approach actually introduces operational complexity that's often overlooked. You lose the infrastructure-as-code benefits of CDK, requiring manual dashboard configuration for DNS, Cloud Connector, Transform Rules, and Worker deployment. This creates a hybrid infrastructure that's partially managed by code and partially by manual configuration.

AWS's Integration Advantage: The AWS approach, while requiring more initial setup, can be entirely managed through CDK. Your entire infrastructure becomes version-controlled, repeatable, and auditable. Teams already invested in AWS tooling benefit from unified monitoring, logging, and billing.

Security Equivalence

Both approaches achieve identical security outcomes. The core insight - protecting API keys by injecting them at the edge - can be implemented equally well with either platform:

  • Cloudflare Workers: Store API keys as Worker secrets, inject in edge function
  • Lambda@Edge: Store API keys as SSM parameters or Secrets Manager, inject in Lambda function

There's no inherent security advantage to either approach. The choice is about implementation preference, not security capability.

Performance Considerations

Cloudflare Workers Performance:

  • Cold starts: ~5ms (V8 isolates)
  • Global edge network: 300+ locations
  • Memory limit: 128MB
  • CPU timeout: 30 seconds

AWS Lambda@Edge Performance:

  • Cold starts: 100ms-1s (container-based)
  • Edge locations: Via CloudFront's network
  • Configurable memory: 128MB-10GB
  • Flexible timeout and language support

Workers excel at simple, fast operations, while Lambda@Edge handles complex computations better.

When to Choose Each Approach

Choose Cloudflare When:

  • Cost predictability is crucial (especially for smaller applications)
  • You need DDoS protection without enterprise-level AWS Shield costs
  • Your edge logic is simple and JavaScript-friendly
  • You're comfortable with hybrid infrastructure management

Choose AWS When:

  • You require infrastructure-as-code for all components
  • Your application already uses extensive AWS services
  • You need complex edge computations or non-JavaScript runtimes
  • Compliance requires single-vendor solutions
  • You want unified monitoring and billing

The Real Trade-off

The fundamental trade-off isn't about which platform is "better" - both are excellent. It's about whether you prioritize cost savings and performance (Cloudflare) or infrastructure consistency and deep AWS integration (native AWS services).

For teams already committed to CDK and AWS tooling, introducing Cloudflare creates operational overhead that may outweigh its benefits. For cost-conscious projects or those prioritizing global performance, Cloudflare's advantages can be compelling.

Security Implementation Deep Dive

Both architectures achieve security through edge-level credential injection, but the implementation details differ slightly.

API Key Protection Strategy

The core security principle remains identical across both platforms: sensitive credentials are stored securely at the edge and injected into requests before they reach your API. This prevents client-side exposure while maintaining the simplicity of unauthenticated frontend requests.

Cloudflare Implementation:

// API key stored as encrypted Worker secret
headers.set("x-api-key", env.API_KEY);

AWS Lambda@Edge Equivalent:

// API key retrieved from AWS Systems Manager (SDK v2 style); create the
// client at module scope and consider caching the value between invocations
const AWS = require("aws-sdk");
const ssm = new AWS.SSM();

const apiKey = await ssm
  .getParameter({ Name: "/api/key", WithDecryption: true })
  .promise();
headers["x-api-key"] = apiKey.Parameter.Value;

Origin Validation and Access Control

Both platforms implement origin validation to prevent unauthorized API access. The Cloudflare Worker checks origin headers against an allowlist, while Lambda@Edge can implement identical logic. Similarly, S3 access restriction works the same way - whether limiting access to Cloudflare IPs or CloudFront's IP ranges.

The security outcomes are equivalent; the difference lies in implementation details and operational preferences.

Conclusion

Cloudflare Workers and Cloud Connector offer a viable alternative to AWS's native CloudFront and WAF solutions for securing web applications. This approach demonstrates how edge computing can solve the fundamental challenge of API key protection in frontend applications, though the same security outcomes can be achieved with either platform.

The architecture we've explored excels in specific scenarios: cost-sensitive applications that benefit from Cloudflare's predictable pricing, projects requiring DDoS protection without enterprise-level AWS costs, and teams comfortable with hybrid infrastructure management. The combination of Workers for API security and Cloud Connector for static assets provides a functional separation of concerns.

However, this approach introduces operational complexity that shouldn't be overlooked. The lack of CDK support means infrastructure management becomes split between code and manual configuration, which may not align with teams already invested in infrastructure-as-code practices.

The choice between Cloudflare and AWS native solutions isn't about which is objectively better - both platforms can deliver secure, performant applications. Instead, the decision hinges on your specific requirements: cost constraints, infrastructure consistency needs, team expertise, and organizational preferences around vendor diversity.

For teams prioritizing cost optimization and willing to manage hybrid infrastructure, Cloudflare's approach offers clear advantages. For organizations requiring full infrastructure-as-code control or deep AWS integration, native AWS solutions remain the more suitable choice. The key is honestly evaluating your constraints and choosing the approach that best fits your operational model.