Commit 46339f7: Added support for first sync
Parent: 2284480

7 files changed, 280 additions & 59 deletions

docs/internal/schema-migrations.md

Lines changed: 42 additions & 11 deletions
@@ -46,25 +46,56 @@ SELECT cloudsync_alter_apply();
 ```
 
 `cloudsync_alter_apply()` applies the queued migration locally and stores the
-generated payload in `cloudsync_pending_migration`. After that, an authorized
-client uploads it with:
+generated payload in `cloudsync_pending_migration`. On the next
+`cloudsync_network_sync()` or `cloudsync_network_send_changes()`, the network
+layer uploads every pending schema migration before it sends row data:
 
 ```sql
-SELECT cloudsync_network_migration_upload();
 SELECT cloudsync_network_sync();
 ```
 
-The zero-argument upload form uploads the next pending local migration and marks
-it uploaded only after the backend returns valid JSON. The one-argument form is
-still available for custom backends or tests:
+The zero-argument upload form is still available when an application wants to
+publish schema separately from row data. It uploads the next pending local
+migration and marks it uploaded only after the backend returns an accepted
+response. The one-argument form is available for custom backends or tests:
 
 ```sql
 SELECT cloudsync_network_migration_upload(:json_payload);
 ```
 
-While a local migration is pending upload, `cloudsync_network_send_changes()`
-returns an error instead of sending row changes. This prevents data produced
-with a new local schema from reaching a server that has not accepted that schema.
+If a pending migration upload fails, row changes are not sent. This prevents
+data produced with a new local schema from reaching a server that has not
+accepted that schema.
+
+## Initial Schema Sync
+
+The first version of a database is distributed with the same migration protocol
+used for later schema changes. There is no separate bootstrap format.
+
+Client-to-cloud first sync:
+
+1. The client must have `database_id` configured and must sync with an API key
+   that is allowed to initiate schema migrations.
+2. The app defines the first schema with `cloudsync_alter_create_table()`,
+   `cloudsync_alter_add_column()`, `cloudsync_alter_augment_table()`, and any
+   optional commands such as `cloudsync_alter_set_block_lww()`.
+3. `cloudsync_alter_apply()` creates the local tables, records the schema epoch,
+   and stores a pending upload in `cloudsync_pending_migration`.
+4. The app may insert initial data.
+5. `cloudsync_network_sync()` uploads the pending schema migration first. The
+   backend applies it to the cloud database, records it in the per-`database_id`
+   schema log, and only then can row data be uploaded.
+
+Cloud-to-client first sync:
+
+1. The backend applies and records the initial migration for the `database_id`.
+2. A new SQLite client only needs `database_id` and a valid API key.
+3. `cloudsync_network_sync()` detects that the local database has no augmented
+   tables, calls the schema check endpoint, applies the first migration locally,
+   and then continues with normal row download.
+
+An empty client that has no local schema and no server-side migration simply
+returns an empty sync result.
 
 ## Declarative API
 
@@ -259,7 +290,7 @@ Client-originated migration:
 1. Application queues operations with `cloudsync_alter_*`.
 2. Application calls `cloudsync_alter_apply()`.
 3. Extension applies the migration locally and writes `cloudsync_pending_migration`.
-4. Application or `cloudsync_network_sync()` uploads the pending migration.
+4. `cloudsync_network_sync()` uploads the pending migration before row changes; applications may call `cloudsync_network_migration_upload()` explicitly when they want a separate schema publish step.
 5. Backend authorizes the API key, applies the payload to the cloud database,
    records the migration, and returns success.
 6. Client sends row changes after the pending migration is uploaded.
@@ -286,7 +317,7 @@ Empty client first sync:
 - A migration id is idempotent through `cloudsync_migrations`.
 - Explicit hash guards are enforced when present.
 - Raw SQL runs inside the same savepoint as portable operations.
-- Row changes are not uploaded while `cloudsync_pending_migration` contains an unuploaded migration.
+- Row changes are uploaded only after all pending local schema migrations have been accepted by the backend.
 - V2 migrations should be blocked by the backend when stale/offline clients may still upload incompatible old-epoch payloads, unless the backend has an explicit rejection or translation policy.
 
 ## Tests

examples/schema-migrations/README.md

Lines changed: 13 additions & 2 deletions
@@ -21,16 +21,27 @@ SELECT cloudsync_alter_add_column('notes', 'updated_at', 'timestamp', false, '19
 SELECT cloudsync_alter_augment_table('notes', 'CLS', 1);
 SELECT cloudsync_alter_set_block_lww('notes', 'body', char(10));
 SELECT cloudsync_alter_apply();
-SELECT cloudsync_network_migration_upload();
 SELECT cloudsync_network_sync();
 ```
 
-`cloudsync_alter_preview()` can be used before `cloudsync_alter_apply()` to inspect the generated payload. After apply, the payload is saved in `cloudsync_pending_migration`; `cloudsync_network_migration_upload()` uploads the next pending migration and marks it uploaded on success.
+`cloudsync_alter_preview()` can be used before `cloudsync_alter_apply()` to inspect the generated payload. After apply, the payload is saved in `cloudsync_pending_migration`; `cloudsync_network_sync()` uploads pending migrations before it sends row changes. `cloudsync_network_migration_upload()` is still available when schema should be published separately from data.
 
 The backend should authorize the API key, apply the payload to the cloud database, append it to the schema migration log for the `database_id`, and distribute it to other clients through `schema/check` or `schema/download`.
 
 `client-to-server.sql` contains the same flow as an executable SQLite example. `client-to-server-v1.json` is the manual JSON equivalent for backend tests or custom tooling; application code should normally let `cloudsync_alter_apply()` generate that payload.
 
+## Initial Database Sync
+
+The first version of a database uses the same flow. A schema-capable client
+queues the first `createTable` migration, calls `cloudsync_alter_apply()`,
+optionally inserts initial rows, and then calls `cloudsync_network_sync()`.
+The schema upload is accepted by the backend before any row payload is sent.
+
+For cloud-to-client bootstrap, the backend records the initial migration first.
+A new SQLite client with only `database_id` and an API key calls
+`cloudsync_network_sync()`; the client downloads the first schema migration,
+creates/augments the tables, and then downloads data normally.
+
 ## Server-Originated V2 Migration
 
 The backend applies and records a payload such as `server-to-client-v2.json`, then a client can update itself before receiving data:

examples/schema-migrations/client-to-server.sql

Lines changed: 8 additions & 2 deletions
@@ -2,7 +2,7 @@
 --
 -- Run this on an authorized SQLite client. The extension generates the JSON
 -- migration payload, applies it locally, stores it in cloudsync_pending_migration,
--- and uploads it with cloudsync_network_migration_upload().
+-- and uploads it automatically before row data on the next cloudsync_network_sync().
 
 SELECT cloudsync_alter_create_table('notes');
 SELECT cloudsync_alter_add_column('notes', 'id', 'text', false);
@@ -17,5 +17,11 @@ SELECT cloudsync_alter_set_block_lww('notes', 'body', char(10));
 SELECT cloudsync_alter_preview();
 
 SELECT cloudsync_alter_apply();
-SELECT cloudsync_network_migration_upload();
+
+-- Optional initial data can be inserted here. cloudsync_network_sync() uploads
+-- the pending schema migration first and sends row data only after the backend
+-- accepts that schema.
+INSERT INTO notes (id, title, body, updated_at)
+VALUES (cloudsync_uuid(), 'First note', 'Created before first sync', '1970-01-01T00:00:00Z');
+
 SELECT cloudsync_network_sync();

src/network/network.c

Lines changed: 107 additions & 42 deletions
@@ -77,6 +77,7 @@ typedef struct {
 } network_read_data;
 
 static int cloudsync_network_migration_check_internal(sqlite3_context *context, char **result_json, char **err_out);
+static int cloudsync_network_migration_upload_next_pending(sqlite3_context *context, char **result_json, char **err_out);
 static bool network_migration_apply_error(const char *message);
 static void network_result_to_sqlite_error(sqlite3_context *context, NETWORK_RESULT res, const char *default_error_message);
 network_data *cloudsync_network_data(sqlite3_context *context);
@@ -979,71 +980,124 @@ void cloudsync_network_migration_download(sqlite3_context *context, int argc, sq
 }
 
 void cloudsync_network_migration_upload(sqlite3_context *context, int argc, sqlite3_value **argv) {
-    cloudsync_context *data = (cloudsync_context *)sqlite3_user_data(context);
     network_data *netdata = cloudsync_network_data(context);
     if (!netdata || !netdata->schema_upload_endpoint) {
         sqlite3_result_error(context, "Unable to retrieve CloudSync schema migration upload endpoint.", -1);
         return;
     }
 
-    bool from_pending = false;
-    char *pending_id = NULL;
-    char *pending_payload = NULL;
     const char *payload = NULL;
 
     if (argc == 0) {
-        pending_id = cloudsync_pending_migration_next_id(data);
-        if (!pending_id) {
-            sqlite3_result_error(context, "No pending schema migration to upload.", -1);
+        char *result = NULL;
+        int rc = cloudsync_network_migration_upload_next_pending(context, &result, NULL);
+        if (rc == SQLITE_OK && result) sqlite3_result_text(context, result, -1, cloudsync_memory_free);
+        else if (rc == SQLITE_OK) sqlite3_result_text(context, "{\"status\":\"uploaded\"}", -1, SQLITE_TRANSIENT);
+    } else {
+        payload = (const char *)sqlite3_value_text(argv[0]);
+
+        if (!payload || payload[0] == '\0') {
+            sqlite3_result_error(context, "cloudsync_network_migration_upload expects a JSON text payload.", -1);
             return;
         }
-        pending_payload = cloudsync_pending_migration_payload(data, pending_id);
-        if (!pending_payload) {
-            sqlite3_result_error(context, "Unable to load pending schema migration payload.", -1);
-            cloudsync_memory_free(pending_id);
+        if (!json_is_valid_root_object(payload, strlen(payload))) {
+            sqlite3_result_error(context, "cloudsync_network_migration_upload expects a valid JSON object payload.", -1);
             return;
         }
-        payload = pending_payload;
-        from_pending = true;
-    } else {
-        payload = (const char *)sqlite3_value_text(argv[0]);
+
+        NETWORK_RESULT res = network_receive_buffer(netdata, netdata->schema_upload_endpoint, netdata->authentication, true, true, (char *)payload, CLOUDSYNC_HEADER_SQLITECLOUD);
+        if (network_validate_json_response(context, &res, "CloudSync schema migration upload endpoint", NULL)) {
+            char *upload_error = network_migration_upload_error_message(&res);
+            if (res.code == CLOUDSYNC_NETWORK_ERROR) {
+                network_set_sqlite_result(context, &res);
+            } else if (upload_error) {
+                sqlite3_result_error(context, upload_error, -1);
+            } else {
+                network_set_sqlite_result(context, &res);
+            }
+            if (upload_error) cloudsync_memory_free(upload_error);
+        }
+        network_result_cleanup(&res);
     }
+}
 
-    if (!payload || payload[0] == '\0') {
-        sqlite3_result_error(context, "cloudsync_network_migration_upload expects a JSON text payload.", -1);
-        if (pending_id) cloudsync_memory_free(pending_id);
-        if (pending_payload) cloudsync_memory_free(pending_payload);
-        return;
+static int cloudsync_network_migration_upload_next_pending(sqlite3_context *context, char **result_json, char **err_out) {
+    if (result_json) *result_json = NULL;
+    if (err_out) *err_out = NULL;
+
+    cloudsync_context *data = (cloudsync_context *)sqlite3_user_data(context);
+    network_data *netdata = cloudsync_network_data(context);
+    if (!netdata || !netdata->schema_upload_endpoint) {
+        const char *message = "Unable to retrieve CloudSync schema migration upload endpoint.";
+        if (err_out) *err_out = cloudsync_string_dup(message);
+        else sqlite3_result_error(context, message, -1);
+        return SQLITE_ERROR;
     }
-    if (!json_is_valid_root_object(payload, strlen(payload))) {
-        sqlite3_result_error(context, "cloudsync_network_migration_upload expects a valid JSON object payload.", -1);
-        if (pending_id) cloudsync_memory_free(pending_id);
-        if (pending_payload) cloudsync_memory_free(pending_payload);
-        return;
+
+    char *pending_id = cloudsync_pending_migration_next_id(data);
+    if (!pending_id) {
+        const char *message = "No pending schema migration to upload.";
+        if (err_out) *err_out = cloudsync_string_dup(message);
+        else sqlite3_result_error(context, message, -1);
+        return SQLITE_ERROR;
     }
 
-    NETWORK_RESULT res = network_receive_buffer(netdata, netdata->schema_upload_endpoint, netdata->authentication, true, true, (char *)payload, CLOUDSYNC_HEADER_SQLITECLOUD);
-    if (network_validate_json_response(context, &res, "CloudSync schema migration upload endpoint", NULL)) {
-        int rc = DBRES_OK;
-        char *upload_error = network_migration_upload_error_message(&res);
-        if (res.code == CLOUDSYNC_NETWORK_ERROR) {
-            network_set_sqlite_result(context, &res);
-        } else if (upload_error) {
-            sqlite3_result_error(context, upload_error, -1);
+    char *pending_payload = cloudsync_pending_migration_payload(data, pending_id);
+    if (!pending_payload) {
+        const char *message = "Unable to load pending schema migration payload.";
+        if (err_out) *err_out = cloudsync_string_dup(message);
+        else sqlite3_result_error(context, message, -1);
+        cloudsync_memory_free(pending_id);
+        return SQLITE_ERROR;
+    }
+
+    int rc = SQLITE_ERROR;
+    if (!json_is_valid_root_object(pending_payload, strlen(pending_payload))) {
+        const char *message = "cloudsync_network_migration_upload expects a valid JSON object payload.";
+        if (err_out) *err_out = cloudsync_string_dup(message);
+        else sqlite3_result_error(context, message, -1);
+        goto cleanup;
+    }
+
+    NETWORK_RESULT res = network_receive_buffer(netdata, netdata->schema_upload_endpoint, netdata->authentication, true, true, pending_payload, CLOUDSYNC_HEADER_SQLITECLOUD);
+    if (!network_validate_json_response(context, &res, "CloudSync schema migration upload endpoint", err_out)) {
+        network_result_cleanup(&res);
+        goto cleanup;
+    }
+
+    char *upload_error = network_migration_upload_error_message(&res);
+    if (res.code == CLOUDSYNC_NETWORK_ERROR) {
+        if (err_out) *err_out = res.buffer ? cloudsync_string_dup(res.buffer) : cloudsync_string_dup("CloudSync schema migration upload failed.");
+        else network_set_sqlite_result(context, &res);
+    } else if (upload_error) {
+        if (err_out) *err_out = cloudsync_string_dup(upload_error);
+        else sqlite3_result_error(context, upload_error, -1);
+    } else {
+        int db_rc = cloudsync_pending_migration_mark_uploaded(data, pending_id);
+        if (db_rc == DBRES_OK) {
+            if (result_json && res.code == CLOUDSYNC_NETWORK_BUFFER && res.buffer) {
+                *result_json = cloudsync_memory_zeroalloc(res.blen + 1);
+                if (*result_json) memcpy(*result_json, res.buffer, res.blen);
+                else rc = SQLITE_NOMEM;
+            }
+            if (rc != SQLITE_NOMEM) rc = SQLITE_OK;
+            else if (!err_out) sqlite3_result_error_code(context, SQLITE_NOMEM);
+            if (!result_json && !err_out) network_set_sqlite_result(context, &res);
         } else {
-            if (from_pending) rc = cloudsync_pending_migration_mark_uploaded(data, pending_id);
-            if (rc == DBRES_OK) {
-                network_set_sqlite_result(context, &res);
-            } else {
+            if (err_out) *err_out = cloudsync_string_dup(cloudsync_errmsg(data));
+            else {
                 sqlite3_result_error(context, cloudsync_errmsg(data), -1);
-                sqlite3_result_error_code(context, rc);
+                sqlite3_result_error_code(context, db_rc);
             }
         }
-        if (upload_error) cloudsync_memory_free(upload_error);
     }
+    if (upload_error) cloudsync_memory_free(upload_error);
     network_result_cleanup(&res);
+
+cleanup:
     if (pending_id) cloudsync_memory_free(pending_id);
     if (pending_payload) cloudsync_memory_free(pending_payload);
+    return rc;
 }
 
 int network_extract_query_param (const char *query, const char *key, char *output, size_t output_size) {
@@ -1417,9 +1471,20 @@ int cloudsync_network_send_changes_internal (sqlite3_context *context, int argc,
         }
     }
 
-    if (cloudsync_pending_migration_count(data) > 0) {
-        sqlite3_result_error(context, "A pending schema migration must be uploaded before sending row changes.", -1);
-        return SQLITE_ERROR;
+    while (cloudsync_pending_migration_count(data) > 0) {
+        char *migration_result = NULL;
+        char *migration_err = NULL;
+        int mrc = cloudsync_network_migration_upload_next_pending(context, &migration_result, &migration_err);
+        if (migration_result) cloudsync_memory_free(migration_result);
+        if (migration_err) {
+            sqlite3_result_error(context, migration_err, -1);
+            cloudsync_memory_free(migration_err);
+            return SQLITE_ERROR;
+        }
+        if (mrc != SQLITE_OK) {
+            if (mrc == SQLITE_NOMEM) sqlite3_result_error_code(context, SQLITE_NOMEM);
+            return mrc;
+        }
     }
 
     // retrieve payload

test/integration.c

Lines changed: 6 additions & 2 deletions
@@ -894,7 +894,9 @@ static int test_mock_migration_upload_error_keeps_pending(void) {
     if (rc != SQLITE_OK) goto cleanup;
     rc = db_exec(db, "INSERT INTO upload_notes (id) VALUES ('u1');");
     if (rc != SQLITE_OK) goto cleanup;
-    rc = expect_sql_error_contains(db, "SELECT cloudsync_network_send_changes();", "pending schema migration");
+    rc = expect_sql_error_contains(db, "SELECT cloudsync_network_send_changes();", "missing schema api key");
+    if (rc != SQLITE_OK) goto cleanup;
+    rc = db_expect_int(db, "SELECT count(*) FROM cloudsync_pending_migration WHERE uploaded_at IS NULL;", 1);
 
 cleanup:
     if (db) {
@@ -932,7 +934,9 @@ static int test_mock_migration_upload_missing_status_keeps_pending(void) {
     if (rc != SQLITE_OK) goto cleanup;
     rc = db_exec(db, "INSERT INTO upload_missing_status_notes (id) VALUES ('u1');");
     if (rc != SQLITE_OK) goto cleanup;
-    rc = expect_sql_error_contains(db, "SELECT cloudsync_network_send_changes();", "pending schema migration");
+    rc = expect_sql_error_contains(db, "SELECT cloudsync_network_send_changes();", "accepted status");
+    if (rc != SQLITE_OK) goto cleanup;
+    rc = db_expect_int(db, "SELECT count(*) FROM cloudsync_pending_migration WHERE uploaded_at IS NULL;", 1);
 
 cleanup:
     if (db) {
