Everyone Is Wrong About SQLite

Here's a spicy take: SQLite might be the best database choice for your next SaaS project. Yes, really. The same database that powers your browser history could handle your production workload better than that overengineered Postgres cluster you're thinking about.

Before you close this tab in disgust, hear me out. I've watched countless developers dismiss SQLite as a "toy database" while simultaneously struggling with connection pools, replication lag, and $500/month database bills. Meanwhile, companies like Expensify are processing $10 billion in transactions on SQLite.

The problem isn't SQLite. The problem is that we've been thinking about it completely wrong.

The "Lite" Problem

Let's address the elephant in the room: that unfortunate "lite" in the name. It's like naming a sports car "Slow McLaren" – technically accurate in some narrow sense but completely misleading about its actual capabilities.

SQLite isn't "Postgres Lite" or "MySQL for Beginners." It's a different beast entirely. While Postgres is built for client-server architecture with network protocols and connection management, SQLite is an embedded database engine. It's not lighter because it does less – it's lighter because it doesn't need an entire server process, network stack, and authentication system.

Here's what SQLite actually is:

  • The most deployed database engine in the world (billions of instances)
  • Battle-tested in production by companies processing millions of requests
  • Faster than client-server databases for many workloads
  • ACID-compliant with full transaction support
  • Capable of handling databases up to 281TB

But sure, let's keep calling it a toy because of that name.

When SQLite Absolutely Crushes It

Here's where conventional wisdom gets it backwards. SQLite isn't just "good enough" for certain use cases – it's actually the superior choice for many production scenarios:

1. Read-Heavy Workloads

If your app is 95% reads (like most SaaS apps), SQLite will embarrass your Postgres setup. No network roundtrips. No connection overhead. Just direct disk reads with an intelligent page cache. I've seen query times drop from 50ms to 0.5ms just by switching from Postgres to SQLite.

2. Single-Server Deployments

Running everything on one beefy server? SQLite eliminates an entire class of problems:

  • No connection pool exhaustion
  • No network latency
  • No split-brain scenarios
  • No replication lag
  • Backups are literally just copying a file

3. Edge Computing

Deploying to multiple regions? Each edge location can have its own SQLite database. Cloudflare Workers, Fly.io, and similar platforms make this trivial. Your users get sub-10ms response times, and you get a simple architecture.

4. Embedded Analytics

Need to crunch numbers on user data? SQLite can handle complex analytical queries on gigabytes of data. Window functions, CTEs, JSON operations – it's all there. And it's fast because there's no network overhead.
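To make that concrete, here's a small sketch in Python (stdlib `sqlite3`, which bundles SQLite) combining a CTE, JSON extraction, and a window function in a single query. The `events` table and its payload shape are invented for illustration.

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (user_id TEXT, payload TEXT)")
db.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [
        ("alice", json.dumps({"amount": 10})),
        ("alice", json.dumps({"amount": 30})),
        ("bob", json.dumps({"amount": 5})),
    ],
)

# A CTE pulls a value out of each JSON payload, then a window
# function computes a per-user running total -- all inside SQLite.
ANALYTICS = """
WITH amounts AS (
    SELECT user_id, json_extract(payload, '$.amount') AS amount
    FROM events
)
SELECT user_id, amount,
       SUM(amount) OVER (PARTITION BY user_id ORDER BY amount) AS running_total
FROM amounts
"""
rows = list(db.execute(ANALYTICS))
```

Window functions need SQLite 3.25+ and `json_extract` needs the JSON1 functions; both ship in any remotely recent build.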

The Real Limitations (Not What You Think)

Let's be honest about where SQLite struggles, because it's not where most people think:

❌ High Write Concurrency (But It's Not That Simple)

SQLite uses a single-writer model: one write at a time. But here's what the haters don't tell you: since Write-Ahead Logging (WAL) mode arrived back in 2010, SQLite has handled concurrent readers WHILE writing. No more blocking reads during writes.

In WAL mode:

  • Writers don't block readers
  • Readers don't block writers
  • Multiple readers work simultaneously
  • Write performance improved dramatically

```sql
-- Enable WAL mode (do this once)
PRAGMA journal_mode=WAL;
```

With WAL enabled and proper configuration, I've seen SQLite handle 500-1000+ writes/second on modern hardware while serving thousands of concurrent reads. Yes, Postgres can push higher numbers with multiple writers, but ask yourself: is your SaaS really doing more than 1000 writes per second? (Spoiler: it's not.)
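If you're setting the pragma from application code rather than the sqlite3 shell, it's just another statement. Here's a minimal sketch in Python (whose stdlib `sqlite3` module bundles SQLite; the file path is an invented placeholder) showing a useful detail: WAL is a property of the database file itself, so one connection enabling it is enough for every connection that follows.

```python
import os
import sqlite3
import tempfile

# Placeholder path -- WAL mode requires a real file, not :memory:
path = os.path.join(tempfile.mkdtemp(), "app.db")

db = sqlite3.connect(path)
# journal_mode=WAL is persistent: it is recorded in the database
# file, so every future connection opens in WAL mode automatically.
mode = db.execute("PRAGMA journal_mode=WAL").fetchone()[0]

# A second, independent connection sees WAL without asking for it.
db2 = sqlite3.connect(path)
mode2 = db2.execute("PRAGMA journal_mode").fetchone()[0]
```

Both `mode` and `mode2` come back as `"wal"` here, which is exactly why "do this once" is all the setup the earlier snippet needs.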

❌ Multiple Application Servers

Need horizontal scaling across multiple servers hitting the same database? SQLite wasn't built for this. You'll need Postgres or MySQL. (Though solutions like LiteFS and rqlite are changing this game.)

❌ Complex Access Control

SQLite doesn't have users, roles, or row-level security. Your application handles all authorization. This is actually fine for 99% of SaaS apps where you're checking permissions in code anyway.
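In practice, "authorization in the application" usually means a WHERE clause your code always supplies. A sketch in Python (stdlib `sqlite3`; the `notes` table and column names are made up):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, owner TEXT, body TEXT)")
db.executemany(
    "INSERT INTO notes (owner, body) VALUES (?, ?)",
    [("alice", "launch plan"), ("bob", "grocery list")],
)

def notes_for(user: str) -> list:
    # The application, not the database, scopes every query to the
    # caller -- the moral equivalent of row-level security, in code.
    return db.execute(
        "SELECT body FROM notes WHERE owner = ?", (user,)
    ).fetchall()
```

Which is exactly the check most SaaS codebases already perform before the query ever reaches Postgres.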

✅ But Here's What People Get Wrong:

  • "SQLite can't handle concurrent reads" – Wrong. It handles unlimited concurrent reads.
  • "SQLite doesn't support JSON" – Wrong. Full JSON support since 2015.
  • "SQLite can't do full-text search" – Wrong. FTS5 is excellent.
  • "SQLite databases corrupt easily" – Wrong. It's one of the most reliable storage formats ever created.

The Architecture That Changes Everything

Here's how to think about SQLite in production:

Traditional Architecture:

```
App Server → Network → Database Server → Disk
```

SQLite Architecture:

```
App Server → Disk
```

That's it. Your database queries are now function calls. Your "connection" is a file handle. Your backup system is cp database.db backup.db.

This simplicity isn't a limitation – it's a superpower. Every component you remove is one that can't fail, can't be misconfigured, and doesn't need monitoring.

The Operations Dream You Didn't Know You Wanted

Let's talk about something that'll make your DevOps team weep with joy: SQLite operations.

Backups? It's Just a File

```shell
# Your entire backup strategy
cp production.db backup-$(date +%Y%m%d).db

# Or get fancy with point-in-time recovery using Litestream,
# which streams every change to S3 automatically
litestream replicate production.db s3://mybucket/db
```

That's it. No pg_dump. No coordinating replicas. No worrying about backup consistency. With Litestream, you get continuous replication to S3 with point-in-time recovery. Set it up in 5 minutes and forget about it.
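One caveat worth knowing: `cp` on a database that's receiving writes mid-copy can capture a torn file. SQLite's online backup API takes a consistent snapshot while writes continue, and in Python it's a single call. A sketch (the function name and paths are placeholders):

```python
import sqlite3

def snapshot(src_path: str, dest_path: str) -> None:
    """Take a consistent copy of a (possibly live) SQLite database."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dest_path)
    try:
        # Connection.backup() wraps SQLite's online backup API, which
        # is safe to run while other connections write to the source.
        src.backup(dst)
    finally:
        src.close()
        dst.close()
```

For quiesced databases or point-in-time needs, plain `cp` and Litestream remain the simpler tools.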

Restoring? Even Easier

```shell
# Restore from yesterday
cp backup-20250115.db production.db

# Or restore from S3 with Litestream
litestream restore -o production.db s3://mybucket/db
```

Compare this to restoring a 50GB Postgres database. I'll wait.

Testing? Use Real Production Data

```shell
# Clone prod for testing
cp production.db test.db
# Done. Full production dataset. Zero config.
```

No sanitizing connection strings. No managing test database servers. No explaining to finance why you need another RDS instance for staging.

Monitoring? What Monitoring?

  • No connection pool metrics
  • No replication lag
  • No long-running query alerts
  • No vacuum schedules
  • No disk space alerts

Just application metrics that actually matter.

Real Production Patterns

Pattern 1: Write-Through Cache

```typescript
// Instead of complex Redis + Postgres setups
async function getUser(userId: string) {
    return db.get("SELECT * FROM users WHERE id = ?", userId);
    // SQLite *is* your cache, with no network latency
}
```

Pattern 2: Per-Tenant Databases

```typescript
// Each customer gets their own SQLite database
function getCustomerDb(customerId: string) {
    return new Database(`./data/customers/${customerId}.db`);
    // Perfect isolation, easy backups, simple compliance
}
```

Pattern 3: Hybrid Architecture

```typescript
// SQLite for reads, Postgres for writes.
// Stream changes from Postgres to SQLite replicas:
// 99% of queries hit SQLite, 1% hit Postgres.
const read = async (query: string) => sqlite.get(query);
const write = async (query: string) => postgres.execute(query);
```

The Money Shot: When to Use SQLite

Use SQLite when:

  • You're read-heavy (>90% reads)
  • You can fit on one server (machines with 100GB of RAM and 1TB of storage are cheap now)
  • You value simplicity and reliability
  • You want to minimize operational overhead
  • You're building embedded or edge applications
  • You need consistent sub-millisecond query times

Don't use SQLite when:

  • You need high write concurrency (>1000 writes/second sustained)
  • You require multiple writers from different servers
  • You need built-in replication (though LiteFS exists)
  • You need database-level access control

The Plot Twist Nobody Talks About

Here's the dirty secret: most startups using Postgres would be better off with SQLite. They're paying for distributed system complexity they don't need while getting worse performance than a simple SQLite setup would provide.

I've migrated several production systems from Postgres to SQLite. Results:

  • 10x faster queries (no network hop)
  • 90% reduction in operational complexity
  • $500/month saved on managed database costs
  • Backup/restore times dropped from hours to seconds
  • Zero downtime from connection pool issues

Stop Thinking Client-Server

The biggest mental shift is this: stop thinking of SQLite as a database server. It's not. It's a library that happens to implement a SQL database. Once you internalize this, everything clicks.

You wouldn't spin up a separate server for your JSON parser. You wouldn't create a connection pool for your regex engine. So why do it for your database when SQLite can handle your workload as a library?

The Future Is Already Here

Major platforms are betting big on SQLite:

  • Cloudflare's Durable Objects use SQLite
  • Fly.io's LiteFS enables distributed SQLite
  • Turso is building a distributed SQLite platform
  • Even Rails is pushing SQLite as a production default

These aren't toy projects. They're production infrastructure serving billions of requests.

Your Move

Before you default to Postgres for your next project, ask yourself:

  1. Will you really have >1000 sustained writes per second?
  2. Do you actually need multiple application servers writing to the same database?
  3. Is the operational complexity worth it for your use case?

If you answered "no" to any of these, give SQLite a serious look. Not as a stepping stone to a "real" database, but as your production database.

The "lite" in SQLite doesn't mean it's less capable. It means it carries less baggage. And in production, less baggage means more speed, more reliability, and more sleep.

Stop overthinking it. Start with SQLite. You can always add complexity later if you actually need it. Chances are, you won't.

A Note From Someone Who Chose Postgres (And Why)

I build UserJot, a feedback management platform. We use Postgres, and for good reasons: we need multi-server scalability, rely heavily on pgvector for semantic search, use LISTEN/NOTIFY for real-time updates, and run complex analytical queries across millions of data points.

UserJot Dashboard

But I've also built a blogging platform entirely on SQLite. Single binary deployment, Litestream replication to S3, scaled beautifully on a $5/month VPS. It was a joy to operate precisely because SQLite eliminated all the complexity.

The lesson? I chose Postgres for UserJot because our specific needs demanded it. For the blogging platform, SQLite was the obvious choice. Most projects are more like the blogging platform than they are like UserJot.

Choose boring technology, but choose it deliberately.