RxDB is one of those tools that looks deceptively simple in a demo and becomes quite a bit more complex once your app has real state and real users.
At ClarityBoss, we use RxDB as our primary local data layer in the frontend. This is not a thin cache over an API. It is the main place our Vue app reads and writes data during normal usage.
If you are new to RxDB, the official RxDB documentation is worth a read before diving into our approach.
I am splitting this into multiple posts. Part 1 focuses on frontend architecture and ergonomics. Part 2 covers our Go and Postgres backend implementation in detail.
RxDB as the primary local data layer
Unlike a traditional web application backed by REST or GraphQL APIs, our frontend treats RxDB as the local source of truth. The storage layer is configurable (memory is a good starting point, but we use the Dexie.js IndexedDB backend). As users move through the app and create or edit entries, most data is read directly from RxDB, not from ad hoc API calls.
That gives us a few benefits immediately:
- Navigation feels fast because list and detail views have local state available.
- User writes are local-first, with replication handling network realities. In a coffee shop or on an airplane with bad Wi-Fi? No problem; your writes are preserved locally and replicated once a better connection is available.
- UI code can rely on reactive queries instead of manually orchestrating fetches all over the place.
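As an aside, the "reactive queries" point can be made concrete with a dependency-free toy. RxDB provides this pattern via RxJS observables; the `LocalCollection` class below is purely illustrative and not part of our codebase:

```typescript
// Toy local-first store: writes land in local state first, and every
// subscriber is notified immediately -- no fetch round-trip required.
type Listener<T> = (value: T) => void

class LocalCollection<T extends { id: string }> {
  private docs = new Map<string, T>()
  private listeners = new Set<Listener<T[]>>()

  subscribe(listener: Listener<T[]>): () => void {
    this.listeners.add(listener)
    listener([...this.docs.values()]) // emit current state on subscribe
    return () => this.listeners.delete(listener)
  }

  upsert(doc: T): void {
    this.docs.set(doc.id, doc)
    const snapshot = [...this.docs.values()]
    this.listeners.forEach((l) => l(snapshot))
  }
}
```

A view subscribes once and re-renders on every local write, instead of re-fetching after each mutation; replication can then sync those writes in the background.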
This is the initialization shape we use (trimmed for brevity):
const db = await createRxDatabase<Collections>({
  name: `db_${uid}`,
  storage,
  multiInstance: true,
  eventReduce: true,
})
await db.addCollections({
  entries: {
    schema: entrySchema,
    methods: entryMethods,
    migrationStrategies: entryMigrations,
  },
  persons: {
    schema: personSchema,
    methods: personMethods,
    migrationStrategies: personMigrations,
  },
})
// setupReplication is a helper function, which may be covered in a future post
setupReplication(db.entries, 'entry', tokenFunc)
setupReplication(db.persons, 'person', tokenFunc)
A small number of collections, on purpose
We currently replicate two primary collections in the frontend database: entries and persons.
Too few collections and you force awkward denormalization. For example, embedding person objects into every entry would duplicate data and create ownership/update headaches.
Too many collections and your sync model, migrations, and mental model all get noisier than they need to be, and you end up doing manual joins all over the place.
We keep person facts inside the person document itself (name, relationship, emails, attributes, frequency), and entries reference person IDs where needed. That has been simpler to reason about than splitting person details into multiple child collections.
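As a sketch of what that looks like in practice (the field names here are illustrative, not our exact schema), an entry carries person IDs, and the one "join" the model requires is a lookup against a local map:

```typescript
// Hypothetical document shapes for the two-collection model: person
// facts live on the person document; entries reference person IDs.
interface PersonDoc {
  id: string
  name: string
  relationship?: string
}

interface EntryDoc {
  id: string
  text: string
  personIds: string[] // references into the persons collection
}

// Resolve an entry's person references with a manual lookup against
// local data, dropping any IDs that do not resolve.
function resolvePersons(
  entry: EntryDoc,
  personsById: Map<string, PersonDoc>,
): PersonDoc[] {
  return entry.personIds
    .map((id) => personsById.get(id))
    .filter((p): p is PersonDoc => p !== undefined)
}
```

Because both collections are already local, this lookup is cheap, and person edits propagate to every entry view without touching the entry documents.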
Frontend ergonomics and abstractions
We put a little effort into helpers so RxDB feels natural in Vue code. VueUse provides a useObservable helper that was very inspirational, but we needed something a bit more specific.
RxJS and Vue compatibility
One example is wrapping RxJS observables (RxJS is the underlying reactivity library RxDB uses) in a composable so view code stays straightforward:
function useRxDBObservable<E>(
  extractor: (database: DatabaseType) => Observable<E>,
  initialValue: E,
): { value: Readonly<ShallowRef<E>>; loaded: Readonly<ShallowRef<boolean>> } {
  let subscription: Subscription | undefined
  tryOnScopeDispose(() => {
    subscription?.unsubscribe()
    subscription = undefined
  })
  const db = injectDatabase()
  const obsRef = shallowRef(initialValue)
  const loaded = shallowRef(false)
  watchEffect(() => {
    subscription?.unsubscribe()
    subscription = undefined
    if (db.value) {
      const observable = extractor(db.value)
      subscription = observable.subscribe({
        error: (err) => {
          console.error('Error in RxDB observable subscription:', err)
        },
        next: (val: E) => {
          obsRef.value = val
          if (!loaded.value) loaded.value = true
        },
      })
    }
  })
  return { value: shallowReadonly(obsRef), loaded }
}
This can then be used to show a friendly “Loading” screen while the data is not yet available, and it lets the data update reactively when any dependencies change, such as an :id pulled from the route.
const { value: person } = useRxDBObservable(
  (database) => database.persons.findOne({ selector: { id: route.params.id as string } }).$,
  undefined,
)
JSON representation vs. higher level types
In our data model, journal entries have either a date (e.g. '2026-01-01') or a timestamp that represents an exact instant (e.g. '2026-01-01T06:00:00Z'). These are transmitted during replication as strings, but we generally use the DateValue, CalendarDate, and ZonedDateTime types from the @internationalized/date package when manipulating and comparing these values in the frontend.
To do this, we use a postCreate hook for entries to precompute a cached date value used frequently in sorting and comparisons.
function dateValue(entry: Pick<JournalEntry, 'date' | 'timestamp'>): DateValue {
  if (entry.timestamp) return parseAbsoluteToLocal(entry.timestamp)
  if (entry.date) return parseDate(entry.date)
  throw new Error('Invalid journal entry: must have either date or timestamp')
}

db.entries.postCreate((_data, instance) => {
  Object.defineProperty(instance, '_dateValue', {
    value: dateValue(instance),
    writable: false,
    configurable: false,
    enumerable: false,
  })
})
Because RxDB documents are immutable, we can be sure this cached value always reflects the underlying string representation, while being a more useful value type.
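To show why a precomputed, comparable value helps, here is a simplified, timezone-naive sketch of sorting mixed date/timestamp entries. The real code compares @internationalized/date DateValue objects; this sketch approximates the ordering with epoch milliseconds instead:

```typescript
// Simplified sketch: derive a single sortable key from either a
// date-only string or an exact-instant timestamp.
interface EntryLike {
  id: string
  date?: string      // e.g. '2026-01-01'
  timestamp?: string // e.g. '2026-01-01T06:00:00Z'
}

function sortKey(entry: EntryLike): number {
  if (entry.timestamp) return Date.parse(entry.timestamp)
  // Treat date-only entries as midnight UTC for ordering purposes.
  if (entry.date) return Date.parse(`${entry.date}T00:00:00Z`)
  throw new Error('Invalid journal entry: must have either date or timestamp')
}

function sortEntries(entries: EntryLike[]): EntryLike[] {
  return [...entries].sort((a, b) => sortKey(a) - sortKey(b))
}
```

In the real app the key is computed once in postCreate rather than on every comparison, which matters when sorting long lists of entries.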
Replication
We do use full push/pull/stream replication, but I am intentionally keeping backend details out of Part 1. The short version is that frontend collections replicate over HTTP and SSE (server-sent events), and the app remains useful even when connectivity is poor.
In Part 2, I walk through our Go implementation of RxDB replication primitives, how we map checkpoints in Postgres, and what we changed after running this in production.
In Summary
The practical takeaway from our experience with RxDB is pretty simple:
- treat RxDB as the data layer, not just a cache
- keep your collection model intentionally small
- spend time on frontend ergonomics and compatibility with your framework so reactive data is easy to consume
That combination has given us a responsive UI and a much cleaner data flow model than “API call per view” patterns.
Part 2 dives into the backend side: Go, Postgres, pull streams, and how we keep replication reliable.