The "Always Online" assumption is dying. By 2026, users expect apps to work perfectly offline and sync instantly when back online. This shift has turned JSON from a simple "API response format" into a primary "Local Storage format." Here's the stack that actually works.
TL;DR
- Local-first is the new standard: Store the "Source of Truth" locally; treat the cloud as sync/backup
- DuckDB-Wasm + JSON: Run analytical queries on JSON datasets directly in the browser
- The Sync Problem: Use CRDTs (Automerge/Yjs) or sync engines (Replicache, PowerSync) for JSON state
- Sovereign Edge: Store sensitive JSON locally to comply with privacy laws and reduce latency
Why Local-First Matters (2026)
With the maturity of DuckDB-Wasm and SQLite-Wasm, we're no longer limited to simple key-value storage such as localStorage or raw IndexedDB. We're running full analytical engines on JSON blobs inside the user's browser or on edge nodes (Cloudflare Workers, Deno Deploy).
The benefits are compelling:
- Instant reads: No network latency for local data
- Offline capability: Full functionality without connectivity
- Privacy by default: Sensitive data never leaves the device
- Reduced server costs: Less traffic, smaller backends
The Local-First JSON Stack
Storage: The Binary Shift
While the developer sees JSON, the storage layer is increasingly binary for performance:
- SQLite JSONB: Standard on mobile/desktop for storing semi-structured local state
- DuckDB: Used for heavy-duty local analysis of JSON logs or large data exports
import initSqlJs from 'sql.js';
// Initialize SQLite in the browser
const SQL = await initSqlJs({
locateFile: file => `https://sql.js.org/dist/${file}`
});
const db = new SQL.Database();
// Create table with JSONB-style storage
db.run(`
CREATE TABLE user_data (
id TEXT PRIMARY KEY,
data TEXT NOT NULL, -- JSON stored as text
updated_at TEXT DEFAULT (datetime('now'))
)
`);
// Store JSON data locally
function saveUserData(id: string, data: object) {
db.run(
'INSERT OR REPLACE INTO user_data (id, data) VALUES (?, ?)',
[id, JSON.stringify(data)]
);
}
Synchronization: JSON as a Delta
In 2026, we don't sync the "whole file." We sync JSON Patches or CRDT Operations.
- JSON Patch (RFC 6902): Great for simple client-server updates (minimal sketch after this list)
- Automerge / Yjs: The standard for collaborative JSON editing and complex offline sync
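A JSON Patch is just an array of operations describing what changed, so you ship the diff instead of the whole document. A minimal sketch, assuming the fast-json-patch package (any RFC 6902 implementation looks much the same):
import { compare, applyPatch } from 'fast-json-patch';

const before = { todos: [{ id: '1', text: 'Ship sync', done: false }] };
const after = { todos: [{ id: '1', text: 'Ship sync', done: true }] };

// Diff the two states into an RFC 6902 patch, e.g.
// [{ op: 'replace', path: '/todos/0/done', value: true }]
const patch = compare(before, after);

// Ship only the patch; the server applies it to its own copy
const serverState = applyPatch(before, patch).newDocument;
For collaborative editing and complex multi-device offline sync, a CRDT library such as Automerge merges concurrent changes for you: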
import * as Automerge from '@automerge/automerge';
// Initialize a CRDT document
let doc = Automerge.init<{ todos: Array<{ id: string; text: string; done: boolean }> }>();
// Make local changes
doc = Automerge.change(doc, 'Add todo', d => {
d.todos.push({ id: crypto.randomUUID(), text: 'Learn CRDTs', done: false });
});
// Sync with another device
function syncWithPeer(localDoc: typeof doc, remoteChanges: Uint8Array) {
// Apply remote changes - conflicts are automatically resolved
const [newDoc] = Automerge.applyChanges(localDoc, [remoteChanges]);
return newDoc;
}
// Get changes to send to server/peers
const changes = Automerge.getChanges(Automerge.init(), doc);
Architecting for "Sovereign Edge"
"Sovereign Data" means the user owns their data locally. This is a massive security and privacy win.
The Security Pattern
- Local Encryption: Encrypt the local JSON store with a key derived from user auth
- Selective Sync: Only sync non-sensitive metadata or encrypted blobs to the cloud
- Local Validation: Run JSON Schema validation locally to prevent corrupt state (see the Ajv sketch after this list)
- Auth-Bound Keys: Purge or re-key local data if the user logs out
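Local validation is cheap insurance against persisting or syncing corrupt state. A minimal sketch, assuming Ajv as the JSON Schema validator; the schema shown is a hypothetical example:
import Ajv from 'ajv';

const ajv = new Ajv();

// Hypothetical schema for the locally stored user document
const validateUserData = ajv.compile({
  type: 'object',
  properties: {
    id: { type: 'string' },
    todos: { type: 'array', items: { type: 'object' } }
  },
  required: ['id']
});

function assertValidUserData(data: unknown) {
  if (!validateUserData(data)) {
    // Refuse to persist or sync corrupt state
    throw new Error(`Invalid local state: ${ajv.errorsText(validateUserData.errors)}`);
  }
}
On the encryption side, the Web Crypto API can derive a storage key from the user's credentials: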
// Derive encryption key from user credentials
async function deriveStorageKey(password: string, salt: Uint8Array): Promise<CryptoKey> {
const keyMaterial = await crypto.subtle.importKey(
'raw',
new TextEncoder().encode(password),
'PBKDF2',
false,
['deriveKey']
);
return crypto.subtle.deriveKey(
{ name: 'PBKDF2', salt, iterations: 100000, hash: 'SHA-256' },
keyMaterial,
{ name: 'AES-GCM', length: 256 },
false,
['encrypt', 'decrypt']
);
}
// Encrypt JSON before storing locally
async function encryptAndStore(key: CryptoKey, data: object): Promise<ArrayBuffer> {
const iv = crypto.getRandomValues(new Uint8Array(12));
const encoded = new TextEncoder().encode(JSON.stringify(data));
const encrypted = await crypto.subtle.encrypt(
{ name: 'AES-GCM', iv },
key,
encoded
);
// Prepend IV to encrypted data
const result = new Uint8Array(iv.length + encrypted.byteLength);
result.set(iv);
result.set(new Uint8Array(encrypted), iv.length);
return result.buffer;
}
Performance: Handling "Big JSON" in the Browser
If you're storing 100MB of JSON locally, standard JSON.parse() will freeze your UI.
The Rules
- Streaming Parsers: Use libraries that handle JSON as a stream
- Web Workers: Always move JSON storage and heavy querying to a background worker
- IndexedDB as Cache: Use IndexedDB as the persistence layer for your binary chunks (minimal sketch below)
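Caching binary chunks (compressed JSON, Arrow buffers, database pages) in IndexedDB keeps them durable between sessions without holding them in memory. A minimal sketch, assuming the idb wrapper library and a hypothetical 'chunks' object store:
import { openDB } from 'idb';

// Open (or create) a small IndexedDB database with one object store
const cache = await openDB('local-json-cache', 1, {
  upgrade(db) {
    db.createObjectStore('chunks');
  }
});

// Persist a binary chunk under a string key
async function putChunk(key: string, chunk: ArrayBuffer) {
  await cache.put('chunks', chunk, key);
}

// Read it back later (undefined on a cache miss)
async function getChunk(key: string): Promise<ArrayBuffer | undefined> {
  return cache.get('chunks', key);
}
And keep storage and querying off the main thread entirely: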
// main.ts - Main thread
const worker = new Worker(new URL('./json-worker.ts', import.meta.url));
worker.postMessage({ type: 'QUERY', sql: 'SELECT * FROM events WHERE type = ?', params: ['click'] });
worker.onmessage = (e) => {
if (e.data.type === 'RESULT') {
updateUI(e.data.rows);
}
};
// json-worker.ts - Worker thread (doesn't block UI)
import * as duckdb from '@duckdb/duckdb-wasm';
// Connection is opened once during worker startup (see the DuckDB-Wasm setup below)
let conn: duckdb.AsyncDuckDBConnection;
self.onmessage = async (e) => {
  if (e.data.type === 'QUERY') {
    // Bind parameters with a prepared statement instead of interpolating strings
    const stmt = await conn.prepare(e.data.sql);
    const result = await stmt.query(...e.data.params);
    self.postMessage({ type: 'RESULT', rows: result.toArray() });
  }
};
DuckDB-Wasm for Local Analytics
Instead of writing manual JS loops to filter/aggregate local JSON, use DuckDB-Wasm:
import * as duckdb from '@duckdb/duckdb-wasm';
// Load DuckDB in the browser
const JSDELIVR_BUNDLES = duckdb.getJsDelivrBundles();
const bundle = await duckdb.selectBundle(JSDELIVR_BUNDLES);
const worker = await duckdb.createWorker(bundle.mainWorker!);
const logger = new duckdb.ConsoleLogger();
const db = new duckdb.AsyncDuckDB(logger, worker);
await db.instantiate(bundle.mainModule, bundle.pthreadWorker);
// Query JSON data with SQL
const conn = await db.connect();
// Load a JSON file directly (in the browser, 'events.json' must first be
// registered with DuckDB's virtual filesystem, e.g. via db.registerFileText)
await conn.query(`
CREATE TABLE events AS
SELECT * FROM read_json_auto('events.json')
`);
// Run analytical queries locally
const result = await conn.query(`
SELECT
event_type,
COUNT(*) as count,
AVG(duration_ms) as avg_duration
FROM events
WHERE timestamp > '2026-01-01'
GROUP BY event_type
ORDER BY count DESC
`);
console.log(result.toArray());
The "Senior Pro" Checklist: Local-First JSON
- ✅ Background Sync: Implement a robust retry/backoff strategy for JSON state syncing
- ✅ Conflict Resolution: Have a deterministic policy (Last-Writer-Wins vs. CRDT) for when the same JSON key changes on two devices
- ✅ Schema Migration: Local data lives forever. You need a plan to migrate local JSON schemas when your app updates
- ✅ Storage Quotas: Monitor browser storage limits; use compression (Zstd-Wasm) if your JSON payloads are large (see the sketch after this list)
- ✅ Wasm Offloading: Use DuckDB-Wasm for filtering/aggregating local JSON rather than writing manual JS loops
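Quota monitoring needs nothing beyond the standard Storage API. A minimal sketch; the 80% warning threshold is an arbitrary example value:
// Check how much of the origin's storage quota is already used
async function checkStorageBudget() {
  if (!navigator.storage?.estimate) return; // not supported in this browser

  const { usage = 0, quota = 0 } = await navigator.storage.estimate();
  const usedRatio = quota > 0 ? usage / quota : 0;

  if (usedRatio > 0.8) {
    // Getting close to the limit: compress, evict old chunks, or warn the user
    console.warn(`Local storage at ${(usedRatio * 100).toFixed(1)}% of quota`);
  }

  // Optionally ask the browser not to evict this origin's data under pressure
  await navigator.storage.persist?.();
}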
Pitfalls to Avoid
The "Large JSON" UI Freeze
Never block the main thread with JSON.stringify on a huge state object.
// ❌ Don't do this on the main thread
const hugeState = JSON.stringify(appState); // Freezes UI for seconds
// ✅ Do this instead
const worker = new Worker('./stringify-worker.js');
worker.postMessage(appState);
worker.onmessage = (e) => {
const stringified = e.data;
// Now safe to use
};
Sync Loops
Ensure your sync logic doesn't trigger an infinite update loop between client and server. Use vector clocks or version numbers.
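A minimal version-number guard, assuming each record carries a monotonically increasing version field; the names are illustrative, not a specific sync engine's API:
interface VersionedRecord<T> {
  version: number;
  data: T;
}

// Apply a remote update only if it is strictly newer than what we have, so an
// echo of our own write is a no-op and cannot re-trigger another push.
function applyRemote<T>(local: VersionedRecord<T>, remote: VersionedRecord<T>): VersionedRecord<T> {
  if (remote.version <= local.version) return local; // stale or our own echo: ignore
  return remote;
}

// Only push when a *local* edit bumped the version, never in response to applyRemote
function localEdit<T>(local: VersionedRecord<T>, data: T): VersionedRecord<T> {
  return { version: local.version + 1, data };
}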
Auth Drift
Ensure local data is purged or re-keyed if the user logs out or changes permissions.
// On logout: clear sensitive local data
async function handleLogout() {
  // Clear encryption keys from memory
  encryptionKey = null;
  // Clear sensitive tables
  db.run('DELETE FROM user_data');
  db.run('DELETE FROM offline_queue');
  // Clear IndexedDB (deleteDatabase returns an IDBRequest, not a Promise, so wrap it)
  await new Promise<void>((resolve, reject) => {
    const req = indexedDB.deleteDatabase('app-local-storage');
    req.onsuccess = () => resolve();
    req.onerror = () => reject(req.error);
  });
}
Continue Learning
- JSON in Relational Databases – Server-side JSONB patterns
- Big JSON Storage & Compression – Zstd, Parquet, and data lakes
- Handling Large JSON Files – Streaming parsers in Node.js
- JSON Tools – Format and validate JSON online