From Duplication to Unification: How Cloudflare Workers Helped Us Centralize Shared Logic

Problem

Our frontend repository contained a crucial piece of code: the logic that transforms frontend-specific data stored in IndexedDB into its backend counterpart. The data itself, which ranges from a few kilobytes to several megabytes, is a story for another day.

Then, as fate would have it, our backend team needed the same transformation logic in their microservices. Not an insurmountable problem, but solving it led us to the intriguing technology we are going to talk about in this blog.

What would be an ideal solution?

The ideal solution for our problem would be building a service that meets the following criteria:

  • Seamlessly integrates with existing codebases, minimizing the need for extensive restructuring or managing multiple repositories.

  • Easily maintainable and automatable for deployment pipelines.

  • Effortlessly scales to accommodate fluctuations in demand.

  • Ensures low latency for optimal performance.

  • Cost-effective.

Solutions we were exploring

Let's walk through the solutions we considered:

  1. Shared Library:

    Initially, publishing the code as an NPM library seemed straightforward. However, our backend is built on Django, so a JavaScript package could not be consumed directly. Workarounds such as wrappers or transpilers existed, but they introduced unnecessary complexity. Automating maintenance and publication via a CI/CD pipeline could mitigate some of the overhead, but the compatibility issues remained.

  2. SSR was out of the question!

    Implementing Server-Side Rendering (SSR) would demand extensive alterations to our existing frontend codebase. This approach would introduce complexity and stretch the development timeline, making it impractical for our scenario.

  3. Serverless Functions

    Serverless functions like AWS Lambda seemed like the best fit for our use case. The code could live in our frontend repository and be redeployed automatically whenever it changed. Serverless functions offered several benefits:

    • Eliminates the need to provision and manage servers ourselves, freeing up our team's bandwidth for development and reducing infrastructure overhead.

    • Automatically scales based on demand.

    • Cost-efficient, especially for tasks that are not constantly running.

    • Cloud providers also offer the option to deploy functions at the edge, for ultra-low latency in regions around the world.

However, we had to reckon with the issue of cold starts. A “cold start” refers to the duration required to initialize and execute a fresh instance of a serverless function. The delay caused by cold starts, especially in scenarios with infrequent invocations, could impact user experience.

Enter Cloudflare Workers

Cloudflare Workers emerged as our ultimate solution. This edge service boasts 0ms cold starts, which means our backend microservices can access the transformation code almost instantaneously. This remarkable capability is possible because Cloudflare runs Workers on V8 isolates rather than containers, avoiding the startup delays typical of other serverless platforms. Read more on how Cloudflare Workers eliminates cold starts.

Moreover, Cloudflare Workers come with a cost-effective pricing model, costing roughly a tenth of comparable serverless solutions. With no servers to manage, they gave us everything the other serverless options did while letting us devote more time to innovation.

Writing Code with Cloudflare Workers

With the command-line interface, wrangler, we can create, test, and deploy our Cloudflare Workers project.

To initiate your Worker project, simply run npx wrangler init. This command prompts you to configure your project setup as per your requirements.
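
Once the prompts complete, the generated wrangler.toml file holds your project's configuration. Here is a minimal sketch of what it might contain (the name and date below are placeholders):

# wrangler.toml (placeholder values)
name = "data-transformer"         # hypothetical Worker name
main = "src/index.ts"             # the Worker's entry point
compatibility_date = "2024-01-01" # pins the runtime's behavior to a date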

Your src/index.js file (or src/index.ts if you've chosen TypeScript) contains the primary fetch event handler, where you specify how your Worker handles incoming requests.

export default {
  // Runs for every HTTP request that reaches the Worker
  async fetch(
    request: Request,
    env: Env,
    ctx: ExecutionContext
  ): Promise<Response> {
    return new Response("Hello World!");
  },
};

Managing multiple routes can become overwhelming and complex. To simplify this process and maintain clean code, consider utilizing third-party libraries such as itty-router.

import type { IRequest } from "itty-router";
import { Router, json, error, createCors } from "itty-router";

const router = Router({ base: "/api" });

const { preflight, corsify } = createCors({
  origins: ["*"],
  methods: ["POST", "OPTIONS"]
});

// Placeholder controller, standing in for your actual transformation logic
const yourTransformController = async (req: IRequest) => {
  const payload = await req.json();
  return { transformed: payload };
};

router
  // embedding preflight upstream to handle all OPTIONS requests
  .all("*", preflight)
  .post("/transform/", yourTransformController)
  .all("*", () => error(404, "Invalid endpoint"));

export default {
  fetch: (req: IRequest) =>
    router
      .handle(req)
      .then(json)
      .catch(error)
      // corsify all Responses (including errors)
      .then(corsify)
};

Deploying your code is a breeze with npx wrangler deploy.

Voilà! Your code is now distributed to over 200 cities worldwide, with 0ms cold starts and lightning-fast performance.
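
Any HTTP client can now hit the new endpoint. Here's a quick sanity check in TypeScript, assuming a hypothetical URL (real ones follow the <worker-name>.<account-subdomain>.workers.dev pattern):

// Hypothetical deployment URL and payload, for illustration only
const WORKER_URL = "https://data-transformer.example.workers.dev/api/transform/";

async function transformOnEdge(frontendRecord: unknown): Promise<unknown> {
  const res = await fetch(WORKER_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(frontendRecord),
  });
  if (!res.ok) throw new Error(`Transform failed with status ${res.status}`);
  return res.json();
}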

Integrating into the CI/CD Pipeline

You can stick to the traditional route and use your trusty CI/CD pipeline, simply adding a wrangler deploy command to it. And if your repository is hosted on GitHub, Cloudflare makes it even easier with their GitHub Action, cloudflare/wrangler-action@v3.

Here's a sample deployment workflow for deploying your Cloudflare Workers code:

name: Deploy to Cloudflare Workers
on:
  push:
    branches:
      - development
    paths:
      # Only run the workflow when changes are made to your code
      - "src/parser/*"

jobs:
  deploy:
    runs-on: ubuntu-latest
    name: CF Deployment
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Deploy to Cloudflare Workers
        id: deploy
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
          workingDirectory: "src/parser"
      - name: Print Deployment URL
        env:
          DEPLOYMENT_URL: ${{ steps.deploy.outputs.deployment-url }}
        run: echo $DEPLOYMENT_URL

What else can you do with Cloudflare Workers?

Cloudflare Workers offer a plethora of possibilities beyond regular serverless computing, opening doors to innovative projects and functionalities.

You can utilize Workers to create cron jobs, intercept and modify requests, or act as robust proxies. Here are some fun examples.
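
For instance, turning a Worker into a cron job is a matter of exporting a scheduled handler alongside (or instead of) fetch. A minimal sketch, assuming the schedule is configured under [triggers] in wrangler.toml:

// Runs on the cron expressions defined in wrangler.toml, e.g.:
// [triggers]
// crons = ["0 3 * * *"]   (every day at 03:00 UTC)
export default {
  async scheduled(
    controller: ScheduledController,
    env: Env,
    ctx: ExecutionContext
  ): Promise<void> {
    // Hypothetical housekeeping task
    console.log(`Cron fired at ${new Date(controller.scheduledTime).toISOString()}`);
  },
};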

Cloudflare's supplementary tools offer impressive capabilities, including:

  • With the ability to connect to both SQL and NoSQL databases, alongside Cloudflare's native serverless database D1, Workers can significantly reduce the need for traditional backend services in many use cases.

  • Its object storage service, R2 (R2-D1! Nice!), efficiently stores large volumes of unstructured data without incurring costly egress bandwidth fees.

  • Cloudflare Workers unlock a wealth of potential with their on-edge KV storage, strategically replicated across data centers worldwide, enabling dynamic, high-performing, low-latency APIs (see the sketch after this list).

  • For those seeking strongly consistent storage, Cloudflare provides Durable Objects, which guarantee read consistency, albeit at a slightly slower pace than KV storage.
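
To give a taste of the KV API, here is a minimal sketch of a Worker that caches computed values in a KV namespace (MY_KV is a hypothetical binding declared in wrangler.toml):

interface Env {
  // Hypothetical binding, declared in wrangler.toml under [[kv_namespaces]]
  MY_KV: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname;

    // Reads are served from the nearest data center
    const cached = await env.MY_KV.get(key);
    if (cached !== null) return new Response(cached);

    const value = `computed at ${new Date().toISOString()}`;
    // Writes propagate globally and are eventually consistent
    await env.MY_KV.put(key, value, { expirationTtl: 3600 });
    return new Response(value);
  },
};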

The possibilities are limitless: from multiplayer games built on Durable Objects and WebSockets to sophisticated real-time applications, Cloudflare pushes the boundaries of serverless computing to new heights.
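
As a flavor of what Durable Objects look like, here is a minimal sketch of a strongly consistent counter (the class name is hypothetical, and its binding would live in wrangler.toml under [durable_objects]):

export class Counter {
  constructor(private state: DurableObjectState) {}

  // Each object instance processes requests one at a time,
  // so the increment below is strongly consistent without locks.
  async fetch(request: Request): Promise<Response> {
    const count = ((await this.state.storage.get<number>("count")) ?? 0) + 1;
    await this.state.storage.put("count", count);
    return new Response(String(count));
  }
}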

We are hiring!

If solving challenging problems at scale in a fully remote team interests you, head to our careers page and apply for the position of your liking!