40 changes: 40 additions & 0 deletions debugging/troubleshooting.mdx
@@ -374,6 +374,46 @@
2. The **initial sync** on a client can take a while in cases where the operations history is large. See [Compacting Buckets](/maintenance-ops/compacting-buckets) to optimize sync performance.
3. You can get big performance gains by using **transactions & batching** as explained in this [blog post](https://www.powersync.com/blog/flutter-database-comparison-sqlite-async-sqflite-objectbox-isar).

### Diagnosing Sync Latency

When a user reports that a write took too long to reach their device, there is no single trace that covers the full path. Instead, isolate each stage of the pipeline to find the bottleneck.

The downstream pipeline (source database to client) has two stages:

1. **Source database to PowerSync Service** (replication).
2. **PowerSync Service to client** (sync session).

A full end-to-end picture also includes the upstream path (client write → your backend API → source database commit), which sits outside PowerSync and is not covered by the diagnostics below.

#### Measuring Downstream Latency (Source to Device)

The most reliable way to measure the downstream pipeline is to put a timestamp in the data itself. When a row is written or updated in your source database, set a column to the current server time (e.g. `updated_at = NOW()`). On the client, compare that timestamp to the time the row arrives in the local database. The difference is the time from the source write being committed to the row being visible on the device.

This does not include the time taken for the client to send a write to your backend, the backend's processing, or the backend's commit to the source database. To measure those, instrument your backend API directly (e.g. log request received and source DB commit timestamps).
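
As a rough sketch of the client-side calculation (the `updated_at` column name and the wiring into your queries are assumptions, not part of the PowerSync API):

```typescript
// Sketch: approximate downstream latency for a row carrying a
// server-side `updated_at` timestamp (column name is an assumption).
// Server and device clocks can differ, so treat the result as an
// estimate unless both are NTP-synced.
function downstreamLatencyMs(
  updatedAt: string, // ISO 8601 timestamp written by the source database
  arrivedAt: Date = new Date() // when the row became visible locally
): number {
  return arrivedAt.getTime() - Date.parse(updatedAt);
}

// Example: a row committed at 14:15:00.000 UTC arriving 1.2s later.
const latencyMs = downstreamLatencyMs(
  '2024-05-01T14:15:00.000Z',
  new Date('2024-05-01T14:15:01.200Z')
);
// latencyMs === 1200
```

Calling this whenever a watched query surfaces new rows, and logging the result, makes user reports easy to match against the per-stage diagnostics below.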

This number covers the downstream pipeline as a whole but does not tell you which stage is slow. Use the per-stage diagnostics below to break it down.

#### Stage 1: Source Database to PowerSync Service

Check the **Replication Lag** chart in the **Metrics** view of the [PowerSync Dashboard](https://dashboard.powersync.com/). This shows whether replication from your source database is keeping up. Replicator logs in the **Logs** view surface any replication errors that would cause delays at this stage.

For a deeper walkthrough of what drives replication lag, how to interpret it for your specific source (Postgres, MongoDB, MySQL, SQL Server), and how to reduce it, see [Replication Lag](/maintenance-ops/replication-lag).

#### Stage 2: PowerSync Service to Client

Service/API logs in the [PowerSync Dashboard](https://dashboard.powersync.com/) record a **Sync stream started** event when a client connects and a **Sync stream complete** event when the session ends. Together they show how many operations were synced, how much data was transferred, and how long the connection stayed open. See [Correlating User Reports to Sync Sessions](/maintenance-ops/monitoring-and-alerting#correlating-user-reports-to-sync-sessions) for the full list of fields on each event.

[Custom metadata](/maintenance-ops/monitoring-and-alerting#custom-metadata-in-sync-logs) attached at `connect()` time is included in both events, so you can also filter by app version, environment, or other context you set.
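
Once exported from the Dashboard, a **Sync stream complete** entry can be reduced to a one-line summary for triage. The field names below follow the event description above; the surrounding JSON shape and the helper itself are assumptions, not a PowerSync API:

```typescript
// Sketch: summarize an exported "Sync stream complete" log entry.
interface SyncStreamComplete {
  user_id: string;
  client_id: string;
  operations_synced: number;
  data_synced_bytes: number;
  stream_ms: number; // session duration
}

function summarize(e: SyncStreamComplete): string {
  const seconds = e.stream_ms / 1000;
  const kib = e.data_synced_bytes / 1024;
  const opsPerSec = seconds > 0 ? e.operations_synced / seconds : 0;
  return (
    `user=${e.user_id} ops=${e.operations_synced} ` +
    `(${opsPerSec.toFixed(1)}/s) data=${kib.toFixed(1)}KiB ` +
    `duration=${seconds.toFixed(1)}s`
  );
}

const line = summarize({
  user_id: 'u1',
  client_id: 'c1',
  operations_synced: 500,
  data_synced_bytes: 102400,
  stream_ms: 10_000,
});
// "user=u1 ops=500 (50.0/s) data=100.0KiB duration=10.0s"
```

A long `stream_ms` with a small `operations_synced` points at an idle-but-healthy connection; a large `data_synced_bytes` over a short session points at a heavy initial or re-sync.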

#### Common Causes of Latency

A few patterns account for most latency reports:

* **Large initial sync**: if your sync rules result in a large dataset, the first sync after connecting will be slow. Inspect bucket sizes and sync state with the [Sync Diagnostics Client](/tools/diagnostics-client).
* **Upload queue blocking downloads**: by default, uploads are processed before downloads, so a backlogged upload queue delays receiving new data. Buckets and streams at [priority 0](/sync/advanced/prioritized-sync) are not blocked by uploads, but come with the trade-off of potential sync inconsistencies.
* **Replication lag on the source database**: high write volume, long-running transactions, bulk updates, or backfills can cause replication to fall behind faster than the service can drain it. See [Replication Lag](/maintenance-ops/replication-lag) for source-specific causes and fixes.

* **Too many buckets per user**: incremental sync overhead scales roughly linearly with the number of buckets per user. See [Too Many Buckets](#too-many-buckets-psync_s2305) above.
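
For the upload-queue case, a priority-0 bucket can be sketched in sync rules roughly as follows. The bucket names and queries are placeholders; check the prioritized-sync docs for the exact syntax supported by your Service version:

```yaml
bucket_definitions:
  # Placeholder: critical data that keeps syncing while uploads are queued.
  critical_alerts:
    priority: 0
    parameters: SELECT request.user_id() as user_id
    data:
      - SELECT * FROM alerts WHERE user_id = bucket.user_id

  # Placeholder: everything else syncs at the default priority and
  # waits behind the upload queue.
  documents:
    parameters: SELECT request.user_id() as user_id
    data:
      - SELECT * FROM documents WHERE owner_id = bucket.user_id
```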

### Web: Logging queries on the performance timeline

Enabling the `debugMode` flag in the [Web SDK](/client-sdks/reference/javascript-web) logs all SQL queries on the Performance timeline in Chrome's Developer Tools (after recording). This can help identify slow-running queries.
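
A minimal sketch of the options shape, assuming the `flags.debugMode` option described above; the database filename is a placeholder and the actual `PowerSyncDatabase` constructor call is shown only as a comment:

```typescript
// Sketch: Web SDK options with debugMode enabled so SQL queries are
// logged to Chrome's Performance timeline. Only `flags.debugMode` is
// taken from the text above; other values are placeholders.
const options = {
  database: { dbFilename: 'app.db' }, // placeholder filename
  flags: {
    debugMode: true, // record queries on the performance timeline
  },
};

// Passed to the SDK roughly like (not executed here):
// const db = new PowerSyncDatabase({ ...options, schema });
```

Leave `debugMode` off in production builds, since the extra instrumentation has its own overhead.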
15 changes: 14 additions & 1 deletion maintenance-ops/monitoring-and-alerting.mdx
@@ -110,6 +110,19 @@ You can manage logs with the following options:

* **Stack Traces**: Option to show or hide stack traces for errors.

### Correlating User Reports to Sync Sessions

When a user reports a sync issue at a specific time, you can find their session in the Service/API logs by filtering on their `user_id`. Two events describe each sync session:

* **Sync stream started**: logged when the client connects. Fields include `user_id`, `client_id`, `app_metadata` (if set), `client_params`, `user_agent`, and `rid` (request id).
* **Sync stream complete**: logged when the session ends. Fields include `user_id`, `client_id`, `app_metadata` (if set), `operations_synced`, `operation_counts` (broken down by `put`, `remove`, `move`, `clear`), `data_synced_bytes`, `data_sent_bytes`, `stream_ms` (session duration), `close_reason`, and `rid`.

Both events share the same `rid`, so you can match the started/complete pair for a single session by filtering on it.

Together these tell you when a user connected, how much data was synced, and how long the connection stayed open. This is useful for investigating reports like "data took a while to appear at 2:15 PM": locate the matching session and inspect its size and duration.
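
The `rid`-based pairing can be sketched as below. The event and field names follow the list above; the exported-log shape is an assumption:

```typescript
// Sketch: group exported sync log events into sessions by their
// shared `rid`. A group with both events is a closed session; a
// group with only a start is still open (or its end wasn't exported).
interface SyncLogEvent {
  event: 'Sync stream started' | 'Sync stream complete';
  rid: string;
  user_id: string;
}

function pairByRid(events: SyncLogEvent[]): Map<string, SyncLogEvent[]> {
  const sessions = new Map<string, SyncLogEvent[]>();
  for (const e of events) {
    const group = sessions.get(e.rid) ?? [];
    group.push(e);
    sessions.set(e.rid, group);
  }
  return sessions;
}

const sessions = pairByRid([
  { event: 'Sync stream started', rid: 'r1', user_id: 'u1' },
  { event: 'Sync stream started', rid: 'r2', user_id: 'u2' },
  { event: 'Sync stream complete', rid: 'r1', user_id: 'u1' },
]);
// sessions.get('r1') holds a complete pair; 'r2' has only a start.
```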

For diagnosing sync latency end-to-end, see [Diagnosing Sync Latency](/debugging/troubleshooting#diagnosing-sync-latency).

## Custom Metadata in Sync Logs

Custom metadata in sync logs allows clients to attach additional context to their PowerSync connection for improved observability and analytics. This metadata appears in the Service/API logs, making it easier to track, debug, and analyze sync behavior across your app. For example, you can tag connections with app version, feature flags, or business context.
@@ -269,7 +282,7 @@ You can specify application metadata when calling `PowerSyncDatabase.connect()`.

### View Custom Metadata in Logs

- Custom metadata appears in the **Service/API logs** section of the [PowerSync Dashboard](https://dashboard.powersync.com/). Navigate to your project and instance, then go to the **Logs** view. The metadata is included in **Sync Stream Started** and **Sync Stream Completed** log entries.
+ Custom metadata appears in the **Service/API logs** section of the [PowerSync Dashboard](https://dashboard.powersync.com/). Navigate to your project and instance, then go to the **Logs** view. The metadata is included in **Sync stream started** and **Sync stream complete** log entries.

<Note>
Make sure the **Metadata** checkbox is enabled in the logs view to see custom metadata in log entries.