March 5, 2026 | Crawlstack Engineering

LocalStorage vs. IndexedDB vs. SQLite (WASM): Which Storage Engine Should You Use in 2026?

Picking the wrong database for your browser extension can kill your app's performance — or your users' patience. We benchmark all three major options with real numbers, so you can make the right call before you start building.

Storage is the architectural decision that defines a browser extension's ceiling. Choose wrong, and you'll hit a wall at 10,000 records, spend weeks optimizing around the wrong tool, or — worst of all — ship something that freezes your users' browsers during a background sync.

We've tested all three major options extensively while building Crawlstack. Here's what actually matters.

The Contenders

1. LocalStorage: Simple, Synchronous, and Severely Limited

LocalStorage is the first thing most developers reach for. It's available everywhere, the API fits in a tweet, and it just works. For anything beyond configuration flags, though, it's the wrong tool — full stop.

Why it fails for real apps:

The core problem is that LocalStorage is synchronous. Every read and write blocks the main thread. In a web app, "blocking the main thread" means frozen UI, dropped animations, and unresponsive interactions. Even a modest 500-record read can cause a noticeable stutter.

On top of that, LocalStorage is capped at roughly 5MB per origin and stores only strings — meaning every object must be serialized and deserialized manually.
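The string-only constraint means nearly every codebase ends up with a small JSON wrapper. A minimal sketch (the function and key names here are ours, not a standard API):

```javascript
// Minimal JSON wrapper over any Storage-like object (setItem/getItem/removeItem).
// LocalStorage stores only strings, so every value round-trips through JSON.
function makeJsonStore(storage) {
  return {
    set(key, value) {
      // Synchronous write — this is the call that blocks the main thread.
      storage.setItem(key, JSON.stringify(value));
    },
    get(key, fallback = null) {
      const raw = storage.getItem(key);
      return raw === null ? fallback : JSON.parse(raw);
    },
    remove(key) {
      storage.removeItem(key);
    },
  };
}

// In a browser: const prefs = makeJsonStore(localStorage);
// prefs.set('theme', { dark: true });
```

Fine for a handful of preferences; the serialization cost and the synchronous write are exactly what makes it collapse at scale.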

  • Best for: Feature flags, user preferences, a handful of config values
  • Avoid if: You're storing more than a few hundred records, or you care about UI responsiveness
  • Typical insert speed: ~100 records/second (before the browser starts freezing)

2. IndexedDB: Powerful in Theory, Painful in Practice

IndexedDB is the browser's "proper" database — asynchronous, transactional, capable of holding gigabytes of structured data. It sounds perfect. The reality is more complicated.

A Brief History: The Sin of Killing WebSQL

To understand IndexedDB's design, you need to know about WebSQL — the browser storage API it replaced.

Around 2009–2010, browsers shipped WebSQL, a clean SQLite interface that let developers write real SQL queries. It was fast, familiar to anyone who'd used a database before, and genuinely pleasant to work with.

The W3C killed it. The official reason: "all implementations relied on the same backend (SQLite)," violating a W3C rule requiring multiple independent implementations. No other browser vendor had written an alternative SQL engine, so rather than encourage that work, the committee deprecated WebSQL and gave the world IndexedDB instead.

IndexedDB is what happens when a standards committee designs an API by specification rather than by usability. The cursor-based query model, verbose transaction management, and complete absence of SQL make even simple operations feel punishing. Libraries like Dexie.js exist precisely because raw IndexedDB is too painful to use directly.
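To see what "painful" means concretely: every IndexedDB operation returns a request object with success/error callbacks, so the first thing almost every codebase (and every wrapper library) writes is a Promise adapter. A minimal sketch:

```javascript
// IndexedDB speaks in IDBRequest objects with onsuccess/onerror callbacks.
// Wrapping each request in a Promise is step one toward a usable API.
function promisifyRequest(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// In a browser, a single "open the database, read one record" round trip
// still takes three awaits (names here are illustrative):
// const db = await promisifyRequest(indexedDB.open('app', 1));
// const tx = db.transaction('items', 'readonly');
// const item = await promisifyRequest(tx.objectStore('items').get(42));
```

Compare that with a one-line SQL `SELECT` and the appeal of libraries like Dexie.js is obvious.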

Where IndexedDB genuinely works:

  • Asynchronous operations that won't block your UI
  • Storage limits in the gigabytes
  • Decent performance up to around 50,000–100,000 records with proper batching

Where it breaks down:

This is the part that surprises developers who haven't pushed IndexedDB hard. Past roughly 100,000 records, performance degrades noticeably. Past 1,000,000, the entire browser can become sluggish — not just your extension's tab — due to background index maintenance operations.

The worst part: these index updates can run silently after a batch insert, leaving the browser nearly unusable for several minutes with no indication of why. Setting transaction durability to relaxed helps, but doesn't eliminate the problem.
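If you do push IndexedDB hard, batching and relaxed durability are the two levers that matter. A hedged sketch of a chunked bulk insert (the store name, batch size, and helper names are our own; `durability: 'relaxed'` is a Chromium transaction option):

```javascript
// Split an array into fixed-size batches so each transaction stays small.
function chunk(records, size) {
  const out = [];
  for (let i = 0; i < records.length; i += size) {
    out.push(records.slice(i, i + size));
  }
  return out;
}

// Batched IndexedDB inserts: one transaction per chunk keeps memory bounded,
// and durability: 'relaxed' skips the per-commit flush that slows bulk loads.
async function bulkPut(db, storeName, records, batchSize = 1000) {
  for (const batch of chunk(records, batchSize)) {
    await new Promise((resolve, reject) => {
      const tx = db.transaction(storeName, 'readwrite', { durability: 'relaxed' });
      const store = tx.objectStore(storeName);
      for (const record of batch) store.put(record);
      tx.oncomplete = resolve;
      tx.onerror = () => reject(tx.error);
    });
  }
}
```

Even with both levers pulled, the silent index-maintenance stalls described above can still appear after the inserts complete.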

  • Best for: Medium-complexity apps with datasets under ~100k records
  • Avoid if: You're building anything data-intensive, or you need SQL-style querying
  • Typical insert speed: ~1,000 records/second (with batching; degrades at scale)

3. SQLite via WASM + OPFS: The Modern Standard

This is the technology that powers Crawlstack, and the performance difference versus IndexedDB is not subtle.

By running SQLite compiled to WebAssembly and persisting data via the Origin Private File System (OPFS), you get a full relational database engine inside the browser — with ACID transactions, indexes, JOINs, aggregations, and the entire SQL standard.
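Setup is compact. A sketch using the official sqlite3 WASM build, run inside a Web Worker — the schema, file name, and helper below are illustrative, not Crawlstack's actual internals:

```javascript
// Open an OPFS-backed SQLite database (sqlite3InitModule is provided by the
// official sqlite3.js WASM bundle; this must run in a Web Worker).
async function openDb() {
  const sqlite3 = await sqlite3InitModule();
  const db = new sqlite3.oo1.OpfsDb('/crawl.sqlite3');
  db.exec(`
    CREATE TABLE IF NOT EXISTS pages (
      url        TEXT PRIMARY KEY,
      fetched_at INTEGER,
      body       TEXT
    );
    CREATE INDEX IF NOT EXISTS idx_pages_fetched ON pages(fetched_at);
  `);
  return db;
}

// Build a multi-row placeholder string, e.g. "(?,?,?),(?,?,?)", so many
// rows can be inserted per prepared statement during bulk ingestion.
function placeholders(nRows, nCols) {
  const row = '(' + Array(nCols).fill('?').join(',') + ')';
  return Array(nRows).fill(row).join(',');
}
```

From there it's ordinary SQL: prepared statements, indexes, JOINs, aggregations.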

Real-world performance numbers from our testing:

  • ~10,000 records/second sustained insert throughput

  • Sub-millisecond query times on tables with 500,000+ rows (with proper indexing)

  • No background UI freezes — all file I/O runs in a dedicated worker, off the main thread

  • Best for: Any data-intensive application, high-volume ingestion, complex queries

  • Requires: Modern browser with OPFS support (Chrome 102+, Edge 102+; Firefox support is still maturing)

  • Typical insert speed: ~10,000 records/second

Technical Nuances Worth Understanding

1. The COOP/SharedArrayBuffer Problem — and How We Bypass It

Standard SQLite-WASM requires SharedArrayBuffer to perform synchronous I/O in a Web Worker, which in turn requires strict COOP/COEP headers. Many servers don't send these headers, making standard setups brittle in extensions.

Crawlstack uses OpfsSAHPool (Sync Access Handle Pool) — an architecture that achieves high-performance synchronous OPFS access without SharedArrayBuffer, sidestepping the cross-origin isolation requirement entirely.
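Wiring up the SAH pool follows the sqlite3 WASM build's `installOpfsSAHPoolVfs()` API. A sketch — the pool name and file path are our own choices, and the capability check takes `navigator` as a parameter only so it can be exercised outside a browser:

```javascript
// Open a database on the SAH-pool VFS: synchronous OPFS access without
// SharedArrayBuffer, so no COOP/COEP headers are required.
async function openPooledDb(sqlite3) {
  const poolUtil = await sqlite3.installOpfsSAHPoolVfs({ name: 'crawlstack-pool' });
  return new poolUtil.OpfsSAHPoolDb('/crawl.sqlite3');
}

// Cheap feature check before selecting this backend: OPFS is exposed
// through navigator.storage.getDirectory().
function supportsOpfs(nav) {
  return !!(nav && nav.storage && typeof nav.storage.getDirectory === 'function');
}
```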

2. Exclusive File Locking

OPFS sync handles are exclusive: only one context can hold a write lock at a time. The pool-based model manages handle acquisition and release automatically, ensuring data integrity even when multiple workers are active.

3. Firefox Compatibility

Firefox's OPFS implementation currently has limitations that break most SQLite-WASM setups. The OpfsSAHPool architecture is specifically designed to make Firefox support possible as that implementation matures — it's on the roadmap, but Chromium-based browsers (Chrome, Edge, Brave) remain the only fully supported environments today.


Head-to-Head Comparison

Feature | LocalStorage | IndexedDB | SQLite (WASM)
------- | ------------ | --------- | -------------
API Complexity | Trivial | High | Moderate (SQL)
Threading Model | Synchronous (blocks UI) | Asynchronous | Asynchronous
Insert Speed | ~100/s | ~1,000/s | ~10,000/s
Practical Record Limit | ~1,000 | ~100,000 | Millions+
Query Power | Key lookups only | Basic indexes | Full SQL
Data Integrity | None | Transactional | ACID
Storage Limit | ~5MB | Gigabytes | Gigabytes
Browser Support | Universal | Universal | Chrome/Edge/Brave

Why Crawlstack Chose SQLite

For a web crawler, three things matter above all else: speed (can it keep up with the scraper?), query power (can you actually use the data you collect?), and portability (can you move to a larger infrastructure when needed?).

SQLite wins on all three.

But the biggest advantage is what we call the zero-to-scale path. Because Crawlstack is built on the SQLite/libSQL standard, you can start scraping locally with zero configuration, then seamlessly migrate to a self-hosted libSQL instance or a cloud-based Turso database when your dataset outgrows a single machine. Multiple Crawlstack nodes pointing at the same central database gives you a distributed scraping cluster — built on the same technology you started with.
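In practice that migration is a connection-string change, not a rewrite. A hedged sketch assuming the @libsql/client package — the environment-variable names and file path are ours:

```javascript
// Same SQL, different target: a local SQLite file with zero configuration,
// or a remote libSQL/Turso instance once the dataset outgrows one machine.
// import { createClient } from '@libsql/client';

function dbConfig(env) {
  return env.LIBSQL_URL
    ? { url: env.LIBSQL_URL, authToken: env.LIBSQL_AUTH_TOKEN }
    : { url: 'file:crawl.sqlite3' };
}

// const client = createClient(dbConfig(process.env));
// await client.execute('SELECT COUNT(*) FROM pages');
```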

The Bottom Line

If you're building a browser extension that handles more than a few hundred records, skip LocalStorage and strongly consider skipping IndexedDB too. The performance cliff is real, and the query limitations will frustrate you long before you hit it.

Start with SQLite. You'll thank yourself at 100,000 records.

Ready to try it?

Get started with Crawlstack today and experience the future of scraping.

Get Started Free