System Design · 8 min read · February 2025

How I Built Linkite — Architecture of a Production URL Shortener

I built Linkite because every existing shortener I tried either lacked analytics, had sketchy security, or didn't let me control rate limits. So I decided to build one myself — and document every decision along the way.

The ID generation problem

The first thing you have to solve in a URL shortener is: how do you generate short IDs that don't collide? The naive approach — random strings — works until it doesn't. At 10k links, you start thinking about collision probability more seriously.

I went with a base-62 encoder (nanoid under the hood) with 7-character IDs. That gives you about 3.5 trillion unique combinations — more than enough for anything I'll ever build. If the generated ID is already taken, I just regenerate. Simple.

```javascript
import { customAlphabet } from 'nanoid';

// nanoid's default alphabet has 64 symbols (it includes '-' and '_');
// customAlphabet restricts it to the 62 alphanumerics for true base-62 IDs
const BASE62 = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz';
const nanoid = customAlphabet(BASE62, 7);

async function generateUniqueId(db) {
  let id;
  let attempts = 0;

  do {
    id = nanoid();   // 7 chars, base-62
    attempts++;
    if (attempts > 5) throw new Error('ID generation failed');
  } while (await db.links.findOne({ shortId: id }));

  return id;
}
```

Three-stage URL security

This is the part I spent the most time on. A URL shortener is basically a redirect machine — if you don't validate inputs properly, you're just helping bad actors hide malicious links. I built three consecutive checks that every URL has to pass before it gets stored.

💡 The three checks add ~200ms to link creation. That's fine — link creation is a write-once operation, and no user minds a 200ms wait when they're shortening a URL.
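In outline, a three-stage validation pipeline might look like the sketch below. The specific stages here — URL parsing, scheme/credential rules, and a host blocklist — are illustrative stand-ins, not Linkite's actual checks, and `BLOCKED_HOSTS` stands in for what would be a real reputation service in production:

```javascript
// Hypothetical three-stage URL validation (stages are assumptions, not Linkite's)
const BLOCKED_HOSTS = new Set(['evil.example']); // stand-in for a threat feed

async function validateUrl(input) {
  // Stage 1: must parse as a URL at all
  let url;
  try {
    url = new URL(input);
  } catch {
    return { ok: false, reason: 'malformed URL' };
  }

  // Stage 2: only http(s), and no embedded credentials
  if (!['http:', 'https:'].includes(url.protocol)) {
    return { ok: false, reason: 'unsupported scheme' };
  }
  if (url.username || url.password) {
    return { ok: false, reason: 'embedded credentials' };
  }

  // Stage 3: blocklist / reputation lookup (a remote call in a real system,
  // which is where most of that ~200ms would go)
  if (BLOCKED_HOSTS.has(url.hostname)) {
    return { ok: false, reason: 'blocked host' };
  }

  return { ok: true, url: url.href };
}
```

Running the stages in order means the cheap syntactic checks reject garbage before the expensive reputation lookup ever fires.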

Redis for hot-path caching

The read path (redirecting short → long URL) needs to be fast. I'm not going to MongoDB on every redirect — that's too slow and too expensive at any real scale.

Redis sits in front of MongoDB. On every redirect, I check Redis first. Cache hit: ~2ms. Cache miss: go to Mongo, fetch, write back to Redis with a 24-hour TTL. Popular links stay warm in Redis indefinitely because they keep getting hit before TTL expires.

```javascript
async function resolveShortUrl(shortId) {
  // check Redis first — should be a hit for any active link
  const cached = await redis.get(`link:${shortId}`);
  if (cached) return JSON.parse(cached);

  // miss: fall back to MongoDB
  const link = await db.links.findOne({ shortId });
  if (!link) return null;

  // write back to Redis so the next request is fast
  await redis.setex(`link:${shortId}`, 86400, JSON.stringify(link));
  return link;
}
```

Real-time analytics

Every click writes an event to a lightweight queue: timestamp, IP, user-agent, referrer. A background worker processes the queue and aggregates stats into the analytics collection. The dashboard polls every 30 seconds for live data. Nothing fancy — but it works reliably and doesn't slow down the redirect path.
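A minimal sketch of that enqueue-then-aggregate split is below. The event shape and the aggregation format are assumptions for illustration, not Linkite's actual schema:

```javascript
// Click events pile up here; the redirect path only does a cheap push
const queue = [];

function recordClick(shortId, req) {
  queue.push({
    shortId,
    ts: Date.now(),
    ip: req.ip,
    ua: req.userAgent,
    referrer: req.referrer,
  });
}

// Background worker: drain raw events into per-link aggregates.
// In the real pipeline this would upsert into the analytics collection.
function drainQueue(stats = {}) {
  while (queue.length > 0) {
    const event = queue.shift();
    const entry = (stats[event.shortId] ??= { clicks: 0, referrers: {} });
    entry.clicks++;
    if (event.referrer) {
      entry.referrers[event.referrer] = (entry.referrers[event.referrer] || 0) + 1;
    }
  }
  return stats;
}
```

The point of the split is that the redirect handler never waits on analytics — a push onto the queue is all it pays.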

The browser extension

I shipped a browser extension in plain HTML/CSS/JS (no framework). It reads the current tab URL via the Chrome extension API, fires a POST to the Linkite API, and shows you the short link with a one-click copy button. Total size: 14kb. I deliberately kept it framework-free to keep the install footprint small.
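The core of the popup script is small enough to sketch here. The endpoint path and response shape are my assumptions, not the real Linkite API:

```javascript
// popup.js — hypothetical sketch; API base URL and { shortUrl } response
// shape are assumptions
async function shortenUrl(longUrl, apiBase = 'https://linkite.example/api') {
  const res = await fetch(`${apiBase}/links`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ url: longUrl }),
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  const { shortUrl } = await res.json();
  return shortUrl;
}

// In the extension, the current tab URL comes from the Chrome tabs API:
// chrome.tabs.query({ active: true, currentWindow: true }, async ([tab]) => {
//   const short = await shortenUrl(tab.url);
//   document.getElementById('result').textContent = short;
//   document.getElementById('copy').onclick =
//     () => navigator.clipboard.writeText(short);
// });
```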

What I'd do differently

If I were starting over, I'd use a proper background job queue (BullMQ or similar) for the analytics pipeline instead of the DIY queue I built. I'd also add better rate limiting on link creation per IP to prevent abuse. The core architecture, though — I'm happy with it.

by Rahul Chowdhury