How I Built Linkite — Architecture of a Production URL Shortener
I built Linkite because every existing shortener I tried either lacked analytics, had sketchy security, or didn't let me control rate limits. So I decided to build one myself — and document every decision along the way.
The ID generation problem
The first thing you have to solve in a URL shortener is: how do you generate short IDs that don't collide? The naive approach — random strings — works until it doesn't. At 10k links, you start thinking about collision probability more seriously.
I went with a base-62 encoder (nanoid under the hood) with 7-character IDs. That gives you about 3.5 trillion unique combinations — more than enough for anything I'll ever build. If the generated ID is already taken, I just regenerate. Simple.
Three-stage URL security
This is the part I spent the most time on. A URL shortener is basically a redirect machine — if you don't validate inputs properly, you're just helping bad actors hide malicious links. I built three consecutive checks before any URL gets stored:
- Stage 1 — HTTPS check: reject any URL that isn't HTTPS. HTTP links in 2025 are a red flag anyway.
- Stage 2 — Google Safe Browsing API: check against Google's threat database. Takes ~100ms but catches a huge surface area of malware, phishing, and social engineering links.
- Stage 3 — Live GET request: actually fetch the URL and follow redirects. If the final destination is suspicious or the request times out, reject. This catches chains of redirects that lead somewhere sketchy.
💡 The three checks add ~200ms to link creation. That's fine — link creation is a write-once operation, and no user minds a 200ms wait when they're shortening a URL.
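The three stages compose naturally into a single pipeline. In this sketch, `checkSafeBrowsing` and `fetchFinal` are injected stand-ins (not Linkite's actual function names) so the stages can be exercised without network access; the real versions would call Google's Safe Browsing API and perform the live GET:

```javascript
// Three-stage URL validation. Returns { ok, reason } so callers can
// surface why a URL was rejected.
async function validateUrl(url, { checkSafeBrowsing, fetchFinal }) {
  // Stage 1: HTTPS only.
  let parsed;
  try {
    parsed = new URL(url);
  } catch {
    return { ok: false, reason: "malformed" };
  }
  if (parsed.protocol !== "https:") return { ok: false, reason: "not-https" };

  // Stage 2: threat-database lookup (~100ms in production).
  if (await checkSafeBrowsing(url)) return { ok: false, reason: "flagged" };

  // Stage 3: live GET, following redirects to the final destination,
  // then re-check where the chain actually lands.
  try {
    const finalUrl = await fetchFinal(url);
    if (await checkSafeBrowsing(finalUrl)) {
      return { ok: false, reason: "redirect-flagged" };
    }
  } catch {
    return { ok: false, reason: "unreachable" };
  }
  return { ok: true };
}
```

Ordering matters: the free check (protocol) runs first, the ~100ms API call second, and the slowest check (live fetch) last, so obviously bad URLs fail fast.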
Redis for hot-path caching
The read path (redirecting short → long URL) needs to be fast. I'm not hitting MongoDB on every redirect — that's too slow and too expensive at any real scale.
Redis sits in front of MongoDB. On every redirect, I check Redis first. Cache hit: ~2ms. Cache miss: go to Mongo, fetch, write back to Redis with a 24-hour TTL. Popular links stay warm in Redis indefinitely because they keep getting hit before TTL expires.
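This is the classic cache-aside pattern. Here's a minimal sketch where a `Map` with expiry timestamps stands in for Redis (in production the same logic maps onto Redis `GET` / `SET` with `EX`), and `loadFromDb` is a placeholder for the MongoDB lookup:

```javascript
const TTL_MS = 24 * 60 * 60 * 1000; // 24-hour TTL, matching the Redis config

// Build a resolver that checks the cache first and falls back to the DB.
function makeResolver(loadFromDb, cache = new Map()) {
  return async function resolve(shortId) {
    const hit = cache.get(shortId);
    if (hit && hit.expires > Date.now()) return hit.url; // cache hit: ~2ms path
    const url = await loadFromDb(shortId); // cache miss: go to Mongo
    if (url) cache.set(shortId, { url, expires: Date.now() + TTL_MS });
    return url;
  };
}
```

Every hit on a cached link happens before its entry is re-fetched from Mongo, which is why popular links effectively never leave the cache.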
Real-time analytics
Every click writes an event to a lightweight queue: timestamp, IP, user-agent, referrer. A background worker processes the queue and aggregates stats into the analytics collection. The dashboard polls every 30 seconds for live data. Nothing fancy — but it works reliably and doesn't slow down the redirect path.
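The queue-and-aggregate split can be sketched in a few lines. The names here (`ClickQueue`, `aggregate`) are illustrative, not Linkite's actual API — the point is that the redirect path only does a cheap `push`, and all the counting happens off the hot path:

```javascript
// Redirects push raw events; a background worker drains and aggregates them.
class ClickQueue {
  constructor() {
    this.events = [];
  }
  push(event) {
    this.events.push(event); // called on every redirect: O(1), no I/O
  }
  drain() {
    return this.events.splice(0); // worker takes all pending events at once
  }
}

// Fold a batch of events into per-link aggregate stats.
function aggregate(stats, events) {
  for (const { shortId, referrer } of events) {
    const s = stats[shortId] ?? (stats[shortId] = { clicks: 0, referrers: {} });
    s.clicks++;
    if (referrer) s.referrers[referrer] = (s.referrers[referrer] ?? 0) + 1;
  }
  return stats;
}
```

In production the aggregated `stats` object would be upserted into the analytics collection, which is what the dashboard's 30-second poll reads.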
The browser extension
I shipped a browser extension in plain HTML/CSS/JS (no framework). It reads the current tab URL via the Chrome extension API, fires a POST to the Linkite API, and shows you the short link with a one-click copy button. Total size: 14 KB. I deliberately kept it framework-free to keep the install footprint small.
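A plausible Manifest V3 config for that setup looks like the fragment below — note the host URL and file names are placeholders, not the real extension's values. `activeTab` is what lets the popup read the current tab's URL without broad permissions:

```json
{
  "manifest_version": 3,
  "name": "Linkite",
  "version": "1.0",
  "action": { "default_popup": "popup.html" },
  "permissions": ["activeTab"],
  "host_permissions": ["https://linkite.example/*"]
}
```

With no framework, `popup.html` plus a small script doing `chrome.tabs.query` and a `fetch` POST is the entire extension, which is how the footprint stays around 14 KB.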
What I'd do differently
If I were starting over, I'd use a proper background job queue (BullMQ or similar) for the analytics pipeline instead of the DIY queue I built. I'd also add better rate limiting on link creation per IP to prevent abuse. The core architecture, though — I'm happy with it.