feat: Prescience — Idiomatic Rust SpiceDB Client (#1)
Conversation
- devenv.nix with Rust toolchain, protobuf, SpiceDB, cargo-nextest
- build.rs with tonic-build for authzed/api v1.45.4 protos
- Proto stubs for external imports (buf/validate, google/api, etc.)
- Approved requirements spec (v4) at specs/requirements/001-rust-spicedb-client.md
Pull request overview
Adds the initial scaffold/spec for the Prescience Rust SpiceDB client, including proto sources + build-time codegen and local dev tooling.
Changes:
- Adds crate scaffolding (`Cargo.toml`, `src/lib.rs`) and `tonic-build`-based proto codegen (`build.rs`)
- Vendors minimal protobuf "stub" files to satisfy upstream imports and pins `authzed/api` via git submodule
- Introduces devenv/direnv configuration and a comprehensive requirements/spec document
Reviewed changes
Copilot reviewed 15 out of 18 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| src/lib.rs | Adds crate-level documentation header |
| specs/requirements/001-rust-spicedb-client.md | Adds the requirements/specification for the client library API and behavior |
| proto/stubs/validate/validate.proto | Adds protobuf validation stubs to satisfy imports during codegen |
| proto/stubs/protoc-gen-openapiv2/options/annotations.proto | Adds OpenAPI annotations stub for import resolution |
| proto/stubs/google/rpc/status.proto | Adds google.rpc.Status stub for import resolution |
| proto/stubs/google/api/http.proto | Adds google.api.HttpRule stub for import resolution |
| proto/stubs/google/api/annotations.proto | Adds HTTP annotations stub for import resolution |
| proto/stubs/buf/validate/validate.proto | Adds buf.validate stubs to satisfy imports during codegen |
| proto/authzed-api | Pins the authzed/api submodule commit used for protobuf sources |
| devenv.yaml | Configures devenv inputs (rust overlay) |
| devenv.nix | Defines local dev environment (Rust toolchain, protoc, nextest, SpiceDB) |
| devenv.lock | Locks devenv/nix inputs for reproducible environments |
| build.rs | Compiles SpiceDB protos via tonic_build against submodule + stubs |
| Cargo.toml | Declares crate metadata, dependencies, and feature flags |
| .gitmodules | Adds proto/authzed-api submodule configuration |
| .gitignore | Ignores Rust/Nix/direnv/editor artifacts |
| .envrc | Loads devenv env via direnv |
```rust
// Rerun if proto files change
for f in proto_files {
    println!("cargo:rerun-if-changed=proto/authzed-api/{}", f);
}
```
build.rs only declares rerun-if-changed for the authzed-api proto list, but codegen also depends on the stub protos under proto/stubs (and the build script itself). As-is, changes to any stub .proto files won’t trigger a rebuild, which can leave generated code out-of-sync. Add cargo:rerun-if-changed entries for the stub directory (or each stub file) and for build.rs.
Suggested change:
```rust
    }
}
// Also rerun if any stub protos or the build script itself change
println!("cargo:rerun-if-changed=proto/stubs");
println!("cargo:rerun-if-changed=build.rs");
```
| } | ||
| } | ||
|
|
||
```rust
// Convenience: panics on Conditional (use only when you know there are no caveats)
```
The comment says is_allowed() “panics on Conditional”, but the earlier requirement (FR-2.1) specifies is_allowed() -> Result<bool, Error> and returning Err(Error::ConditionalPermission { .. }) for Conditional. Update this example comment to reflect the actual contract (returns Err on Conditional) to avoid misleading consumers.
Suggested change:
```rust
// Convenience: returns Result<bool, Error>; Err(Error::ConditionalPermission { .. }) on Conditional
```
| ID | Category | Requirement |
|---|---|---|
| **NFR-1** | **Idiomacy** | Public API follows Rust API guidelines (C-COMMON-TRAITS, C-BUILDER, etc. per [Rust API Guidelines](https://rust-lang.github.io/api-guidelines/)). |
Correct spelling of “Idiomacy”.
Suggested change:

| **NFR-1** | **Idiomaticity** | Public API follows Rust API guidelines (C-COMMON-TRAITS, C-BUILDER, etc. per [Rust API Guidelines](https://rust-lang.github.io/api-guidelines/)). |
Domain types (FR-8, FR-9, FR-10):
- ObjectReference, SubjectReference with validation
- Relationship, RelationshipUpdate, Caveat, ContextValue
- ZedToken with Debug redaction and optional serde
- PermissionResult (Allowed/Denied/Conditional) with is_allowed()
- RelationshipFilter, SubjectFilter, Precondition
- PermissionTree with recursive node structure
- Consistency enum (4 modes)
- LookupResourceResult, LookupSubjectResult, ReadRelationshipResult
- WatchEvent (behind watch feature)

Error handling (FR-7):
- Error enum: Transport, Status, InvalidArgument, Serialization, ConditionalPermission
- is_retryable() for UNAVAILABLE and DEADLINE_EXCEEDED
- from_status() for gRPC error mapping

Client (FR-1):
- Client::new(), Client::builder(), Client::from_channel()
- Bearer token interceptor on all requests
- Clone + Send + Sync (compile-time verified)
- Insecure connection validation (FR-1.3)
- Connect timeout and default timeout support

PermissionsService (FR-2):
- check_permission with consistency and context builders
- write_relationships with preconditions
- delete_relationships with preconditions
- lookup_resources (streaming)
- lookup_subjects (streaming)
- read_relationships (streaming)
- expand_permission_tree

SchemaService (FR-3):
- read_schema -> (String, ZedToken)
- write_schema with empty string validation

WatchService (FR-4, behind watch feature):
- watch with object type filters and after_token resume

All streaming methods return impl Stream<Item = Result<T, Error>>.

37 tests passing (28 unit + 9 doc), zero clippy warnings.
ExperimentalService (FR-5, behind 'experimental' feature):
- bulk_check_permissions: batch check in single round-trip
- bulk_import_relationships: client-streaming import
- bulk_export_relationships: server-streaming export

README with quick start, streaming, feature flags, error handling, and dev setup.

All 37 tests pass, zero clippy warnings.
CI pipeline (7 jobs):
- check: cargo check with default and all features
- fmt: cargo fmt --check
- clippy: cargo clippy -D warnings (default + all features)
- test: cargo test (default + all features)
- doc: cargo doc --all-features with -Dwarnings
- msrv: verify builds on Rust 1.75 (async fn in traits)
- integration: runs against SpiceDB in Docker

Integration tests (all #[ignore], run with --ignored):
- write_and_read_schema
- write_schema_empty_rejected
- write_and_check_permission (Allowed + Denied)
- read_relationships (streaming, count verification)
- lookup_resources (streaming, multi-resource)
- lookup_subjects (streaming, multi-subject)
- delete_relationships (write, verify, delete, verify)
- watch_receives_updates (behind watch feature)
- bulk_check_permissions (behind experimental feature)
Integration tests now run as part of every cargo test invocation. CI test job starts SpiceDB in Docker before running tests. Separate integration job removed — single test job covers everything. Locally: start SpiceDB via 'spicedb-up' (devenv script) before running tests.
Pull request overview
Copilot reviewed 34 out of 37 changed files in this pull request and generated 6 comments.
Comments suppressed due to low confidence (5)
src/types/permission.rs:1
`PermissionResult::Conditional`'s `missing_fields` is private, but the crate docs and README show consumers matching `PermissionResult::Conditional { missing_fields }`. This will fail to compile for downstream crates (and likely fail doctests). Make `missing_fields` public, or provide a public accessor (e.g., `missing_fields(&self) -> Option<&[String]>`) and update docs accordingly.
src/types/permission.rs:1 — `PermissionTreeNode` is public but its variant fields are private, so downstream crates cannot traverse the tree via pattern matching or field access. Make these fields public (or add public accessors / iterator helpers) so consumers can actually inspect the permission tree.
src/types/permission.rs:1 — If the server ever returns more than 2 children for an exclusion node, this code silently drops the extra children (only the first two are used). To avoid producing an incorrect tree, require exactly 2 children (error if `len() != 2`) or explicitly define how additional children should be handled.
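A strict conversion along the lines this comment suggests might look like the following. The node shape and function name here are hypothetical stand-ins (the real `PermissionTreeNode` is richer); the point is the exact-arity check replacing silent truncation.

```rust
// Hypothetical simplified tree node; names are illustrative, not the crate's.
#[derive(Debug, PartialEq)]
enum Node {
    Leaf(&'static str),
    Exclusion { base: Box<Node>, subtract: Box<Node> },
}

// Strict conversion: an exclusion node must have exactly 2 children
// (base minus subtract); anything else is an error instead of being
// silently truncated to the first two.
fn exclusion_from_children(mut children: Vec<Node>) -> Result<Node, String> {
    if children.len() != 2 {
        return Err(format!(
            "exclusion node requires exactly 2 children, got {}",
            children.len()
        ));
    }
    let subtract = Box::new(children.pop().expect("len checked"));
    let base = Box::new(children.pop().expect("len checked"));
    Ok(Node::Exclusion { base, subtract })
}

fn main() {
    assert!(exclusion_from_children(vec![Node::Leaf("a"), Node::Leaf("b")]).is_ok());
    assert!(exclusion_from_children(vec![Node::Leaf("a")]).is_err());
    assert!(
        exclusion_from_children(vec![Node::Leaf("a"), Node::Leaf("b"), Node::Leaf("c")]).is_err()
    );
}
```

Failing loudly on a malformed server response is preferable to returning a tree that looks valid but is missing operands.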
tests/integration.rs:1 — The PR introduces/claims local validation for boundary conditions (notably `write_relationships` rejecting an empty updates vec without a network call), but the integration suite doesn't currently assert that behavior. Add an integration test that calls `c.write_relationships(vec![]).await` and verifies it returns `prescience::Error::InvalidArgument(_)`.
src/types/reference.rs:1 — `SubjectReference::new` currently can't fail (it always returns `Ok`), but it returns `Result<Self, Error>`. Either (a) return `Self` directly to simplify the API, or (b) add actual validation (e.g., reject `Some("")`) so the `Result` is meaningful and consistent with other validated constructors.
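Option (b) could be sketched as below. The struct and helper are simplified stand-ins for the crate's real types; the key behavior is that `Some("")` is rejected so the `Result` return type carries its weight.

```rust
// Illustrative constructor validation mirroring option (b) above: reject
// an explicitly empty relation so the Result return type is meaningful.
// Names follow the review thread; this is a sketch, not the crate's code.
#[derive(Debug)]
struct SubjectReference {
    object_type: String,
    object_id: String,
    relation: Option<String>,
}

fn new_subject(
    object_type: &str,
    object_id: &str,
    relation: Option<&str>,
) -> Result<SubjectReference, String> {
    if object_type.is_empty() || object_id.is_empty() {
        return Err("object_type and object_id must be non-empty".into());
    }
    // Some("") is almost certainly a caller bug: use None for "no relation".
    if relation == Some("") {
        return Err("relation must be non-empty; use None for no relation".into());
    }
    Ok(SubjectReference {
        object_type: object_type.into(),
        object_id: object_id.into(),
        relation: relation.map(String::from),
    })
}

fn main() {
    assert!(new_subject("user", "alice", None).is_ok());
    assert!(new_subject("group", "eng", Some("member")).is_ok());
    assert!(new_subject("user", "alice", Some("")).is_err());
    assert!(new_subject("", "alice", None).is_err());
}
```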
```rust
fn into_future(self) -> Self::IntoFuture {
    Box::pin(async move {
        let req = proto::WriteRelationshipsRequest {
            updates: self.updates,
            optional_preconditions: self.preconditions,
            optional_transaction_metadata: None,
        };
```
write_relationships documents that an empty updates vec must return Err(Error::InvalidArgument(..)) without a network call, but WriteRelationshipsRequest::into_future currently sends the request even when updates is empty. Add a local check before building/sending the proto request (either in Client::write_relationships or here) and return Error::InvalidArgument early.
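The requested early check reduces to a few lines of local validation. The types below are simplified stand-ins so the sketch is self-contained; in the actual crate this guard would live in `Client::write_relationships` or at the top of `into_future`, before the proto request is built.

```rust
// Sketch of the requested local check: fail fast on an empty updates vec
// before any proto request is built or sent. Types are simplified stand-ins.
#[derive(Debug)]
struct RelationshipUpdate;

#[derive(Debug)]
enum Error {
    InvalidArgument(String),
}

fn validate_updates(updates: &[RelationshipUpdate]) -> Result<(), Error> {
    if updates.is_empty() {
        // Returning here means no network call is ever made for an
        // empty write, matching the documented contract.
        return Err(Error::InvalidArgument(
            "write_relationships requires at least one update".into(),
        ));
    }
    Ok(())
}

fn main() {
    assert!(validate_updates(&[]).is_err());
    assert!(validate_updates(&[RelationshipUpdate]).is_ok());
}
```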
```rust
if self.endpoint.starts_with("http://") && !self.insecure {
    // Check if it's a loopback address
    let is_loopback = self.endpoint.contains("://localhost")
        || self.endpoint.contains("://127.0.0.1")
        || self.endpoint.contains("://[::1]");

    if !is_loopback {
        return Err(Error::InvalidArgument(
            format!(
                "insecure connection to non-loopback address '{}' requires \
                 .insecure(true) on the builder. Use https:// for production.",
                self.endpoint
            ),
        ));
    }
```
The loopback allowlist uses substring matching (`contains("://localhost")`), which can be bypassed by hosts like `http://localhost.evil.com:50051`. Parse the URI and validate the authority host exactly (e.g., `localhost`), or validate parsed IPs are loopback, before allowing plaintext without `.insecure(true)`.
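One way to do the exact-host check without pulling in a URI crate is sketched below (std only). This is an illustrative approach under the assumption that only `http://host[:port]/...` endpoints need classifying; the real builder might instead parse via `http::Uri`, which it already has through tonic.

```rust
use std::net::IpAddr;

// Sketch of exact-host loopback validation. Extracts the authority host
// from an http:// endpoint and checks it against "localhost" or a parsed
// loopback IP, so a host like "localhost.evil.com" no longer slips
// through a substring match.
fn is_loopback_endpoint(endpoint: &str) -> bool {
    let Some(rest) = endpoint.strip_prefix("http://") else {
        return false;
    };
    // Authority is everything before the first '/'.
    let authority = rest.split('/').next().unwrap_or("");
    // Split off the port; bracketed IPv6 ([::1]:50051) is handled separately.
    let host = if let Some(stripped) = authority.strip_prefix('[') {
        stripped.split(']').next().unwrap_or("")
    } else {
        authority.rsplit_once(':').map_or(authority, |(h, _)| h)
    };
    if host == "localhost" {
        return true;
    }
    host.parse::<IpAddr>().map_or(false, |ip| ip.is_loopback())
}

fn main() {
    assert!(is_loopback_endpoint("http://localhost:50051"));
    assert!(is_loopback_endpoint("http://127.0.0.1:50051"));
    assert!(is_loopback_endpoint("http://[::1]:50051"));
    assert!(!is_loopback_endpoint("http://localhost.evil.com:50051"));
    assert!(!is_loopback_endpoint("http://10.0.0.5:50051"));
}
```

Parsing with `IpAddr::is_loopback()` also covers the whole `127.0.0.0/8` range rather than only the literal `127.0.0.1`.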
```rust
Status {
    /// The gRPC status code.
    code: tonic::Code,
    /// Human-readable error message from the server.
    message: String,
    /// Decoded SpiceDB-specific error details, if available.
    details: Option<SpiceDbErrorDetails>,
},

/// Local validation failures before a request is sent.
///
/// Examples: empty `object_type`, empty `object_id`, empty schema string,
/// empty relationship update list.
#[error("invalid argument: {0}")]
InvalidArgument(String),

/// Protobuf encode/decode failures. Indicates a bug or proto version mismatch.
#[error("serialization error: {0}")]
Serialization(String),

/// Returned by [`PermissionResult::is_allowed()`](crate::PermissionResult::is_allowed)
/// when the result is `Conditional`. Forces callers to handle the caveated
/// case explicitly.
#[error("conditional permission: missing context fields {missing_fields:?}")]
ConditionalPermission {
    /// The context fields that were missing, preventing full caveat evaluation.
    missing_fields: Vec<String>,
},
```
Both Error::Status { .. } and Error::ConditionalPermission { .. } have private fields, which prevents downstream crates from destructuring/matching on code / missing_fields (despite the requirements/spec examples doing so). Either make these fields pub or provide public accessors for all fields that callers are expected to match on.
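The accessor alternative could look like the sketch below. The enum mirrors the diff above, but uses a stand-in `Code` type so the example is self-contained; in the crate, `code` would be `tonic::Code`.

```rust
// Sketch of public accessors for fields callers need to inspect; the
// enum shape mirrors the error type above, with a stand-in Code type.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Code {
    Unavailable,
    InvalidArgument,
}

#[derive(Debug)]
enum Error {
    Status { code: Code, message: String },
    ConditionalPermission { missing_fields: Vec<String> },
}

impl Error {
    /// The gRPC status code, if this is a server-reported status error.
    fn code(&self) -> Option<Code> {
        match self {
            Error::Status { code, .. } => Some(*code),
            _ => None,
        }
    }

    /// Missing caveat context fields, if this is a conditional-permission error.
    fn missing_fields(&self) -> Option<&[String]> {
        match self {
            Error::ConditionalPermission { missing_fields } => Some(missing_fields),
            _ => None,
        }
    }
}

fn main() {
    let e = Error::Status { code: Code::Unavailable, message: "down".into() };
    assert_eq!(e.code(), Some(Code::Unavailable));
    assert!(e.missing_fields().is_none());

    let c = Error::ConditionalPermission { missing_fields: vec!["ip".into()] };
    assert_eq!(c.missing_fields().unwrap().len(), 1);
    assert!(c.code().is_none());
}
```

Simply marking the fields `pub` is the lower-friction option; accessors are worth it mainly if the field types may change without a semver break.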
```rust
pub struct DeleteRelationshipsRequest<'a> {
    client: &'a Client,
    filter: proto::RelationshipFilter,
    preconditions: Vec<proto::Precondition>,
}
```
The requirements spec added in this PR (FR-2.7 / FR-6 matrix) states DeleteRelationships accepts a Consistency for filter evaluation, but the DeleteRelationshipsRequest builder currently has no consistency field/method and does not send consistency in the proto request. Either implement .consistency(Consistency) on this builder (and plumb it into the request), or update the spec/docs to match the implemented API.
```rust
// We send one relationship per message for simplicity; SpiceDB handles batching internally.
let request_stream = self.stream.map(|rel: Relationship| {
    let proto_rel: proto::Relationship = (&rel).into();
    proto::ImportBulkRelationshipsRequest {
        relationships: vec![proto_rel],
    }
});
```
Sending exactly 1 relationship per streaming message adds avoidable per-message overhead for bulk import. Consider batching into chunks (e.g., N relationships per ImportBulkRelationshipsRequest) to reduce allocations and gRPC framing cost, while still keeping memory bounded.
Suggested change:
```rust
// Batch relationships into chunks to reduce per-message overhead while
// keeping memory usage bounded by a fixed batch size.
let batch_size: usize = 128;
let request_stream = self
    .stream
    .ready_chunks(batch_size)
    .map(|rels: Vec<Relationship>| {
        let relationships: Vec<proto::Relationship> = rels
            .into_iter()
            .map(|rel| (&rel).into())
            .collect();
        proto::ImportBulkRelationshipsRequest { relationships }
    });
```
```rust
pub struct ClientBuilder {
    endpoint: String,
    token: String,
    insecure: bool,
    connect_timeout: Option<Duration>,
    default_timeout: Option<Duration>,
}
```
The requirements spec in this PR (FR-1.2 / D-14) describes a .tls(bool) override on the builder, but ClientBuilder has no TLS override setting or method. If TLS override is part of the committed contract, add a tls: Option<bool> (or similar) and apply it when constructing the Endpoint; otherwise, update the spec to reflect scheme-only TLS behavior.
Review comment fixes:
1. Fix 'Idiomacy' → 'Idiomaticity' typo in spec (NFR-1)
2. Add empty vec validation in write_relationships before network call (FR-10.1)
3. Fix loopback check bypass — parse URI properly instead of substring
matching, preventing hosts like 'localhost.evil.com' from being
treated as loopback
4. Add .consistency() to DeleteRelationshipsRequest — removed after
checking proto (DeleteRelationships doesn't accept consistency);
updated spec to match proto
5. Batch bulk import with chunks of 128 instead of 1-per-message,
using async_stream for proper streaming batches
6. Remove .tls(bool) from spec — TLS is by URI scheme only;
advanced TLS via Client::from_channel()
7. Validate SubjectReference::new rejects Some("") — use None instead
8. Validate exclusion tree node requires exactly 2 children (not >= 2)
9. Add integration test for write_relationships(vec![]) rejection
10. Add unit test for SubjectReference empty relation rejection
38 tests passing (29 unit + 9 doc), zero clippy warnings.
Review Comments Addressed (commit 1406566)

All 9 inline review comments and 5 suppressed comments have been addressed:

Inline Comments
Suppressed Comments
- Run cargo fmt to fix all formatting differences
- Bump MSRV from 1.75 to 1.80 (Cargo.lock v4 not supported by 1.75)
- Add rust-version = '1.80' to Cargo.toml
- MSRV job generates fresh lockfile compatible with older Cargo
- Split test job: unit tests (--lib), doc tests (--doc), integration tests (--test integration) run separately so unit tests don't need SpiceDB and integration tests always have it available
Replace manual Docker setup with testcontainers-rs:
- Define SpiceDbImage implementing testcontainers::Image
- Each test spins up its own isolated SpiceDB container
- Container is automatically cleaned up when test completes
- No external Docker setup needed in CI — just a Docker socket
- Remove Docker run/wait/logs steps from CI workflow
- Add testcontainers 0.27 as dev-dependency

Benefits:
- Full test isolation (each test gets its own SpiceDB)
- No port conflicts between parallel tests
- No manual container lifecycle management
- Same behavior locally and in CI
- SpiceDB serve defaults to no TLS without cert paths; --grpc-no-tls is not a valid flag in latest SpiceDB
- Drop MSRV CI job: transitive deps (getrandom 0.4) require edition 2024 / Rust 1.85+, making lower MSRV enforcement impractical
Instead of each test spawning its own container (10 containers in parallel on CI), use a shared container via tokio::sync::OnceCell. Schema is written once during init. Each test uses unique resource IDs to avoid interference. Also add 5-minute timeout to integration test step to prevent hangs.
The testcontainers log-based wait detects the 'grpc server started serving' message, but the gRPC endpoint may not be fully ready yet. Add a retry loop (up to 30 attempts, 200ms apart) that connects and does a read_schema() health probe before proceeding.
…tests
SpiceDB returns NotFound ('No schema has been defined') when it's
serving but has no schema written yet. The readiness check was treating
this as a failure, causing all integration tests to fail after retries.
Now the readiness probe accepts both Ok and NotFound as indicators
that SpiceDB is up and serving gRPC requests.
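The readiness logic from the two commits above (retry loop plus accepting NotFound) can be sketched generically. This is a synchronous, std-only illustration with a closure standing in for the async `read_schema()` probe; the `Probe` enum and function names are hypothetical.

```rust
use std::thread::sleep;
use std::time::Duration;

// Generic sketch of the readiness loop: retry a health probe up to
// `max_attempts` times, `delay` apart. NotFound ("No schema has been
// defined") counts as healthy, because it proves SpiceDB is serving
// gRPC even though no schema has been written yet.
#[derive(Debug, PartialEq)]
enum Probe {
    Ok,
    NotFound,
    Unavailable,
}

fn wait_until_ready<F: FnMut() -> Probe>(
    mut probe: F,
    max_attempts: u32,
    delay: Duration,
) -> bool {
    for attempt in 0..max_attempts {
        match probe() {
            Probe::Ok | Probe::NotFound => return true,
            Probe::Unavailable => {
                if attempt + 1 < max_attempts {
                    sleep(delay);
                }
            }
        }
    }
    false
}

fn main() {
    // Becomes ready on the 3rd attempt.
    let mut calls = 0;
    let ready = wait_until_ready(
        || {
            calls += 1;
            if calls < 3 { Probe::Unavailable } else { Probe::NotFound }
        },
        30,
        Duration::from_millis(1),
    );
    assert!(ready);
    assert_eq!(calls, 3);

    // Never becomes ready: gives up after max_attempts.
    assert!(!wait_until_ready(|| Probe::Unavailable, 3, Duration::from_millis(1)));
}
```

Treating "serving but empty" as ready is the key detail: a probe that demands `Ok` will spin forever against a fresh container.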
…rors

Each #[tokio::test] creates its own tokio runtime. The tonic Channel created during container initialization is bound to the initializing runtime's executor. When other tests (on different runtimes) tried to use the shared Channel, they got 'transport error' / 'Service was not ready' failures.

Fix: share only the testcontainer (via OnceCell), and create a new Client on the caller's runtime for each test invocation.
Summary
Implements an idiomatic Rust client library for SpiceDB, wrapping the full gRPC API (v1.45.4) with strong types, ergonomic builders, and async-first design.
What is included
Key design decisions
- `PermissionResult` enum (Allowed/Denied/Conditional) — not bool
- `Client: Clone + Send + Sync` for sharing across tokio tasks
- Feature flags: `watch`, `experimental`, `serde`, TLS backends
- Streaming methods return `impl Stream<Item = Result<T, Error>>`
- `Error` enum with `is_retryable()` helper

Spec
See specs/requirements/001-rust-spicedb-client.md for the full approved requirements.

Testing