Embedded Cardano node modules for Scalus: a rollback-aware streaming engine, Cardano network protocols (Node-to-Node and Node-to-Client), ChainStore back-ends (RocksDB), and Mithril-verified snapshot restore.
- `scalus-streaming-core`: rollback-aware `BlockchainStreamProvider` engine, ADTs, chain-sync adapters (JVM + JS).
- `scalus-streaming-fs2`: fs2 flavor (JVM + JS).
- `scalus-streaming-ox`: ox flavor (JVM only).
- `scalus-cardano-network`: Ouroboros N2N (TCP) + N2C (Unix-domain socket) transports: mini-protocol mux, handshakes, keep-alive, chain-sync, local-tx-submission (JVM + JS; the JS stubs raise on connect).
- `scalus-cardano-network-it`: yaci-devkit testcontainers integration tests (JVM only).
- `scalus-chain-store-rocksdb`: RocksDB-backed `ChainStore` (JVM only).
- `scalus-chain-store-mithril`: Mithril-verified snapshot restore via embedded `mithril-client-wasm` on Chicory.
Snapshots of `org.scalus` artifacts are pulled from Sonatype Central Snapshots.
The current pinned Scalus version is set by `scalusVersion` in `build.sbt`.
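To consume those snapshots from another sbt build, a resolver along the following lines is typically needed. This is a sketch: the resolver URL is the standard Sonatype Central snapshots endpoint, and the module name and `scalusVersion` value shown are illustrative assumptions; verify both against your build.

```scala
// build.sbt sketch (assumed module name and version shown for illustration)
val scalusVersion = "0.x.y-SNAPSHOT" // placeholder; use the pinned version

resolvers += "Sonatype Central Snapshots" at
  "https://central.sonatype.com/repository/maven-snapshots/"

libraryDependencies += "org.scalus" %% "scalus-streaming-core" % scalusVersion
```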
```
sbt jvm/compile
sbt jvm/test
```
A streaming provider is configured by two sources: `ChainSyncSource` (where live
chain events come from) and `BackupSource` (where snapshot reads and transaction
submission fall through when the engine can't answer locally).
This is the setup for most public-relay deployments: live chain events come from a
public N2N relay over TCP; historical reads and submission go through Blockfrost.
```scala
StreamProviderConfig(
  appId = "com.example.app",
  cardanoInfo = CardanoInfo.preview,
  chainSync = ChainSyncSource.N2N(host, port, networkMagic),
  backup = BackupSource.Blockfrost(apiKey, BlockfrostNetwork.Preview)
)
```

For deployments that co-locate with a cardano-node over its Unix-domain socket:
live chain events come via N2C ChainSync; submission goes via LocalTxSubmission.
JVM-only (requires Unix-domain sockets).
```scala
val socket = "/var/run/cardano-node/preview.socket"

StreamProviderConfig(
  appId = "com.example.app",
  cardanoInfo = CardanoInfo.preview,
  chainSync = ChainSyncSource.N2C(socket, networkMagic),
  backup = BackupSource.LocalNode(socket, networkMagic)
)
```

`BackupSource.LocalNode` is submit-only until M12 (LocalStateQuery): its read
methods (`findUtxos`, `fetchLatestParams`, `currentSlot`, `getDatum`,
`checkTransaction`) raise `UnsupportedOperationException`. Use the engine's own
state for anything answerable from the rollback buffer + `ChainStore`, or pair
with a `BackupSource.Blockfrost` for read coverage during the M11 window.
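One way to bridge the M11 window is to guard pre-M12 read calls and fall back to a Blockfrost-backed provider. The sketch below is illustrative, not library API: `localProvider`, `blockfrostProvider`, and the exact `findUtxos` signature are assumptions; only the read-method names and the `UnsupportedOperationException` behaviour come from this document.

```scala
// Sketch: route reads to Blockfrost while LocalNode is submit-only (pre-M12).
// `localProvider` / `blockfrostProvider` and the Address/findUtxos shapes are
// hypothetical; adapt to the actual provider API.
def utxosWithFallback(address: Address): Seq[Utxo] =
  try localProvider.findUtxos(address)
  catch
    case _: UnsupportedOperationException =>
      // LocalNode reads raise until M12 ships LocalStateQuery
      blockfrostProvider.findUtxos(address)
```

Once M12 lands, the fallback branch becomes dead code and can be removed along with the Blockfrost dependency.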
When `ChainSyncSource.N2C` and `BackupSource.LocalNode` reference the same
socket path, sharing the connection is a planned optimisation; today each
component opens its own connection.
For deployments that already have a local cardano-node:
- Keep `ChainSyncSource.N2N(...)` running side-by-side with the new N2C provider during the migration window. The engine state is identical between the two sync sources, so you can flip-test without warm-restart loss.
- Once `ChainSyncSource.N2C(socketPath, networkMagic)` is wired and you've verified that subscribers see the expected blocks/rollbacks (e.g. via `subscribeTip()`), retire the N2N config.
- Replace `BackupSource.Blockfrost(...)` with `BackupSource.LocalNode(socketPath, networkMagic)` only after auditing every snapshot-method call site in your application: pre-M12 reads will fail. If you depend on `findUtxos` or `fetchLatestParams`, either keep Blockfrost as the backup or layer an HTTPS read shim until M12 ships LSQ.
- Read deployment health via `provider.backupDiagnostics`: `connectedSinceMillis`, `lastSubmittedHash`, `submitCount`, `rejectCount`. `LocalNodeProvider` implements the trait; Blockfrost returns `None` today.
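Those four fields are enough for a basic health probe. The sketch below assumes `backupDiagnostics` returns an `Option` of the diagnostics trait (consistent with Blockfrost returning `None`); the `Option` wrapping, thresholds, and logging are illustrative assumptions, while the field names come from this document.

```scala
// Sketch: health probe over provider.backupDiagnostics.
// Field names are from the docs; Option wrapping and the 5% reject
// threshold are assumptions for illustration.
provider.backupDiagnostics match
  case Some(d) =>
    val uptimeSecs = (System.currentTimeMillis() - d.connectedSinceMillis) / 1000
    val rejectRatio =
      if d.submitCount == 0 then 0.0
      else d.rejectCount.toDouble / d.submitCount
    println(s"backup up ${uptimeSecs}s, last submit: ${d.lastSubmittedHash}")
    if rejectRatio > 0.05 then
      println(s"warning: ${d.rejectCount}/${d.submitCount} submits rejected")
  case None =>
    println("no diagnostics available (e.g. Blockfrost backup)")
```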