The Boundless Prover Node is a computational proving system that participates in the Boundless decentralized proving market. Provers stake USDC, bid on computational tasks, generate zero-knowledge proofs using GPU acceleration, and earn rewards for successful proof generation.
This guide covers both automated and manual installation methods for Ubuntu 20.04/22.04 systems.
- Boundless Prover Market
- Notes
- Requirements
- Rent GPU
- Automated Setup
- Manual Setup
- Bento (Prover) & Broker Optimizations
- Safe Update or Stop Prover
- Debugging
First, you need to know how the Boundless Prover market actually works, so you understand what you are doing.
- Request Submission: Developers submit computational tasks as "orders" on Boundless, offering ETH/ERC-20 rewards
- Prover Stakes USDC: Provers must deposit USDC as stake before bidding on orders
- Bidding Process: Provers detect orders and submit competitive bids (mcycle_price)
- Order Locking: Winning provers lock orders using staked USDC, committing to prove within the deadline
- Proof Generation: Provers compute and submit proofs using GPU acceleration
- Rewards/Slashing: Valid proofs earn rewards; invalid/late proofs result in stake slashing
- The prover is in a beta phase. Even though this guide aims to cover everything, you may run into some trouble while running it, so you can either wait for the official incentivized testnet (with a more stable network and more updates to this guide) or start experimenting now.
- I advise starting on testnet networks to avoid losing staked funds.
- I will update this GitHub guide constantly, so check back here regularly and follow me on X for new updates.
- CPU - 16 threads, reasonable single core boost performance (>3Ghz)
- Memory - 32 GB
- Disk - 100 GB NVME/SSD
- GPU
- Minimum: one 8GB vRAM GPU
- Recommended to be competitive: 10x GPUs with at least 8 GB vRAM each
- Recommended GPU models are the 4090, 5090 and L4.
- It's best to start testing with a single GPU and tune your configuration later by reading the further sections.
- Supported: Ubuntu 20.04/22.04
- No support: Ubuntu 24.04
- If you are running locally on Windows, install Ubuntu 22 on WSL using this Guide
Recommended GPU Providers
- Vast.ai: SSH-Key needed
For an automated installation and prover management, you can use this script that handles all dependencies, configuration, setup, and prover management automatically.
# Update packages
apt update && apt upgrade -y
# download wget
apt install wget

# Download the installation script
wget https://raw.githubusercontent.com/0xmoei/boundless/main/install_prover.sh -O install_prover.sh
# Make it executable
chmod +x install_prover.sh
# Run the installer
./install_prover.sh

- Installation may take time since we are installing drivers and building big files, so no worries.
- The script will automatically detect your GPU configuration
- You'll be prompted for:
- Network selection (mainnet/testnet)
- RPC URL: Read Get RPC for more details
- Private key (input is hidden)
- Broker config parameters: Visit Broker Optimization to read parameters details
After installation, to run or configure your prover, navigate to the installation directory and run the management script prover.sh:
cd ~/boundless
./prover.sh

The management script provides a menu with:
- Service Management: Start/stop broker, view logs, health checks
- Configuration: Change network, update private key, edit broker config
- Stake Management: Deposit USDC stake, check balance
- Performance Testing: Run benchmarks with order IDs
- Monitoring: Real-time GPU monitoring
The prover.sh script manages all broker configuration (e.g. broker.toml), but to optimize and allocate more RAM and CPU in your compose.yml, you can edit the x-exec-agent-common & gpu_prove_agent sections.
- Re-run your broker after making changes to compose.yml (a minimal restart sketch follows below).
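For example, on the manual setup a restart looks like this (with the automated setup, use the Start/Stop options in the prover.sh menu instead):

# Stop the broker without cleaning volumes, then start it again
# so the new compose.yml limits take effect
just broker down
just broker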
Even if you set up using the automated script, I recommend reading the Manual Setup and Bento (Prover) & Broker Optimizations sections to learn how to optimize your prover.
Here is the step-by-step guide to install and run your prover smoothly, but please pay attention to these notes:
- Read every single word of this guide if you really want to know what you are doing.
- There is a Prover + Broker Optimization section that you need to read after setting up the prover.
apt update && apt upgrade -y
apt install curl iptables build-essential git wget lz4 jq make gcc nano automake autoconf tmux htop nvme-cli libgbm1 pkg-config libssl-dev tar clang bsdmainutils ncdu unzip libleveldb-dev libclang-dev ninja-build -y

git clone https://github.com/boundless-xyz/boundless
cd boundless
git checkout release-0.11

To run a Boundless prover, you'll need the following dependencies:
- Docker compose
- GPU Drivers
- Docker Nvidia Support
- Rust programming language
- Just command runner
- CUDA Toolkit
For a quick set up of Boundless dependencies on Ubuntu 22.04 LTS, you can run:
bash ./scripts/setup.sh

However, we need to install some dependencies manually:

# Execute the command lines one by one
# Install rustup:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
. "$HOME/.cargo/env"
# Update rustup:
rustup update
# Install the Rust Toolchain:
apt update
apt install cargo
# Verify Cargo:
cargo --version
# Install rzup:
curl -L https://risczero.com/install | bash
source ~/.bashrc
# Verify rzup:
rzup --version
# Install RISC Zero Rust Toolchain:
rzup install rust
# Install cargo-risczero:
cargo install cargo-risczero
rzup install cargo-risczero
# Update rustup:
rustup update
# Install Bento-client:
cargo install --git https://github.com/risc0/risc0 bento-client --bin bento_cli
echo 'export PATH="$HOME/.cargo/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
# Verify Bento-client:
bento_cli --version
# Install Boundless CLI:
cargo install --locked boundless-cli
export PATH=$PATH:/root/.cargo/bin
source ~/.bashrc
# Verify boundless-cli:
boundless -h

- To configure your prover, you first need to know your GPU IDs (if you have multiple GPUs), your CPU cores, and your RAM.
- The following tools are also the best way to monitor your hardware during proving.
- If your Nvidia driver and CUDA tools are installed successfully, run the following command to see your GPU status:
nvidia-smi
- You can now monitor the Nvidia driver & CUDA version, GPU utilization & memory usage.
- In the image below, there are four GPUs with IDs 0-3; you'll need these IDs when adding GPUs to your configuration.
- Check your system GPU IDs (e.g. 0 through X):
nvidia-smi -L

To see the status of your CPU and RAM:

lscpu
htop

The best tool for real-time monitoring of your GPUs in a separate terminal while your prover is proving:

nvtop

The default compose.yml file defines all services within the prover.
- The default compose.yml only supports a single GPU and the default CPU/RAM utilization.
- Edit compose.yml with this command:

nano compose.yml

- The current compose.yml is set for 1 GPU by default, so you can skip editing it if you only have one GPU.
- For single-GPU setups, you can instead increase the RAM & CPU of the x-exec-agent-common and gpu_prove_agent0 services in compose.yml to maximize the utilization of your system.
- 4 GPUs: To add more GPUs or modify the CPU and RAM assigned to each GPU, replace the current compose file with my custom compose.yml with 4 custom GPUs.
- More/Less than 4 GPUs: Follow this detailed step-by-step guide to add or remove GPUs relative to the 4 in my custom compose.yml file (see the sketch below for the general pattern).
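If you go beyond the provided examples, the pattern for each additional GPU is essentially a duplicated gpu_prove_agent service with a new name and device ID. A rough sketch for a hypothetical second GPU (the limits and values are illustrative; match the anchors and style of your own compose.yml):

gpu_prove_agent1:
  <<: *agent-common
  runtime: nvidia
  mem_limit: 4G
  cpus: 4
  entrypoint: /app/agent -t prove
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            device_ids: ['1']   # second GPU ID as reported by nvidia-smi -L
            capabilities: [gpu]

If your broker service lists each GPU agent under depends_on, remember to add the new agent there as well.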
- The x-exec-agent-common service in your compose.yml performs the preflight execution of orders to estimate whether the prover can bid on them or not.
- More exec agents will be able to preflight and prove more orders concurrently.
- Whether to increase it from the default value of 2 depends on how many concurrent proofs you want to allow.
- You'll see something like the code below as your x-exec-agent-common service in compose.yml, where you can increase its memory and CPU cores:
x-exec-agent-common: &exec-agent-common
<<: *agent-common
mem_limit: 4G
cpus: 3
environment:
<<: *base-environment
RISC0_KECCAK_PO2: ${RISC0_KECCAK_PO2:-17}
entrypoint: /app/agent -t exec --segment-po2 ${SEGMENT_SIZE:-21}

- The gpu_prove_agent service in your compose.yml handles proving the orders after they are locked, by utilizing your GPUs.
- For single-GPU setups, you can increase performance by increasing the CPU/RAM of the GPU agents.
- The default amount of CPU and RAM per GPU is fine, but if you have a good system spec, you can increase them for each GPU.
- You'll see something like the code below as your gpu_prove_agentX service in compose.yml, where you can increase the memory and CPU cores of each GPU agent:

gpu_prove_agent0:
  <<: *agent-common
  runtime: nvidia
  mem_limit: 4G
  cpus: 4
  entrypoint: /app/agent -t prove
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            device_ids: ['0']
            capabilities: [gpu]

- While the default CPU/RAM for each GPU is enough, on single-GPU setups you can increase them for more efficiency, but don't max them out; always keep some CPU/RAM free for other jobs.
Note: SEGMENT_SIZE of the prover is set to 21 by default in the x-exec-agent-common service, which applies to all GPUs. 21 is only suitable for GPUs with more than 20 GB of vRAM; if you have less vRAM, you have to lower it. Read the Segment Size section of this guide to modify it.
Boundless is comprised of two major components:
- Bento is the local proving infrastructure. Bento takes the locked orders from the Broker, proves them, and returns the result to the Broker.
- Broker interacts with the Boundless market. The Broker can submit or request proofs from the market.
To get started with a test proof on a new proving machine, let's run Bento to benchmark our GPUs:
just bento

- This will spin up bento without the broker.
Check the logs :
just bento logs

Run a test proof:
RUST_LOG=info bento_cli -c 32

- If everything works well, you should see a Job Done! message like the following:
- To check whether all your GPUs are being utilized:
- Increase 32 to 1024/2048/4096
- Open a new terminal with the nvtop command
- Run the test proof and monitor your GPU utilization.
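For example (the 4096 iteration count is just an illustrative value; scale it to your hardware):

# Terminal 1: watch GPU utilization
nvtop

# Terminal 2: run a heavier synthetic proof
RUST_LOG=info bento_cli -c 4096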
Boundless is available on Base Mainnet, Base Sepolia and Ethereum Sepolia.
There are three .env files with the official configurations of each network (.env.base, .env.base-sepolia, .env.eth-sepolia).
- Depending on which network you want to run your prover on, you'll need an RPC endpoint that supports the eth_newBlockFilter event.
- You can search for eth_newBlockFilter in the docs of third-party RPC providers to see whether they support it or not.
RPC providers that I know support eth_newBlockFilter and that I recommend:
- Alchemy:
- Alchemy is the best provider so far
- BlockPi:
- Supports Base Mainnet and Base Sepolia for free; ETH Sepolia is paid, at $49
- Chainstack:
- You have to change the value of lookback_blocks from 300 to 0, because Chainstack's free plan doesn't support eth_getLogs, so you won't be able to check the last 300 blocks for open orders at startup (which I believe is not very important)
- Check the Broker Optimization section to learn how to change the lookback_blocks value in broker.toml
- Run your own RPC node:
- This is actually the best way but costly in terms of needing ~550-650 GB Disk
- Guide for ETH Sepolia
- Quicknode supports eth_newBlockFilter but somehow was NOT compatible with my prover; it blew up my prover.
- In this step I modify .env.base; you can replace it with any of the above (Sepolia networks).
- Currently, Base mainnet has very low order demand, so you may want to go for Base Sepolia by modifying .env.base-sepolia, or ETH Sepolia by modifying .env.eth-sepolia.
- Configure the .env.base file:

nano .env.base

Add the following variables to .env.base:

- export RPC_URL="":
- The RPC has to be between ""
- export PRIVATE_KEY=: Add your EVM wallet private key

- Inject .env.base into the prover:

source .env.base

- After each terminal close, or before any prover startup, you have to run this to inject the network before running the broker or doing Deposit commands (both in the next steps).
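Putting it together, a typical fresh-terminal startup on Base mainnet looks roughly like this:

# Inject the network config into the current shell
source .env.base

# Then start the full proving stack (covered later in this guide)
just broker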
.env.broker is a custom environment file, like the previous .env files but with more options to configure. You can use it instead, but you'll have to refer to the Deployments page to fill in the contract addresses for each network.
- I recommend skipping it, since you may want to switch between networks sometimes, and it's easier to switch among the preserved .env files above.
- Create .env.broker:

cp .env.broker-template .env.broker

- Configure the .env.broker file:

nano .env.broker

Add the following variables to .env.broker:
- export RPC_URL="": To get the Base network RPC URL, use third parties, e.g. Alchemy, or paid ones.
- The RPC has to be between ""
- export PRIVATE_KEY=: Add your EVM wallet private key
- Find the values of the following variables here:
- export BOUNDLESS_MARKET_ADDRESS=
- export SET_VERIFIER_ADDRESS=
- export VERIFIER_ADDRESS= (add it to .env manually)
- export ORDER_STREAM_URL=

- Inject the .env.broker changes into the prover:

source .env.broker

- After each terminal close, you have to run this to inject the network before running the broker or doing Deposit commands (both in the next steps).
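For illustration only, a filled-in .env.broker ends up looking roughly like this; the addresses and URLs below are placeholders, so take the real values for your network from the Deployments page:

export RPC_URL="https://your-rpc-endpoint"   # placeholder; must support eth_newBlockFilter
export PRIVATE_KEY=your_private_key_here      # EVM wallet private key
export BOUNDLESS_MARKET_ADDRESS=0x...         # from the Deployments page
export SET_VERIFIER_ADDRESS=0x...             # from the Deployments page
export VERIFIER_ADDRESS=0x...                 # add manually, from the Deployments page
export ORDER_STREAM_URL="https://..."         # from the Deployments page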
Provers will need to deposit USDC to the Boundless Market contract to use as stake when locking orders.
Note that USDC has a different address on each network. Refer to the Deployments page for the addresses. USDC can be obtained on testnets from the Circle Faucet. You can also bridge USDC.
Add boundless CLI to bash:
source ~/.bashrc
Deposit Stake:
boundless account deposit-stake STAKE_AMOUNT
- Check your stake balance:
boundless account stake-balance
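For example, to stake 10 USDC (an arbitrary example amount; double-check the expected amount format against the official CLI docs):

# Stake an example amount of 10 USDC
boundless account deposit-stake 10

# Verify the deposit
boundless account stake-balance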
You can now start the broker (which runs both bento + broker, i.e. the full proving stack!):
just broker

Check the total proving logs:

just broker logs

Check the broker logs, which contain the most important logs of your order lockings and fulfillments:
docker compose logs -f broker
# For last 100 logs
docker compose logs -fn 100
- You may get stuck at Subscribed to offchain Order stream for a while, but it starts detecting orders soon.
There are many factors to optimize to win the prover competition; you can read the official guide for the broker or prover.
Here I simplified everything with detailed steps:
Larger segment sizes give more proving (bento) performance, but require more GPU vRAM. To pick the right SEGMENT_SIZE for your GPU vRAM, see the official performance optimization page.
- SEGMENT_SIZE in compose.yml, under the x-exec-agent-common service, is 21 by default.
- You can also change the value of SEGMENT_SIZE in .env.broker before running the prover.
- Note: when you set a number for SEGMENT_SIZE in the env or default yml files, it applies that number to every GPU identically.
- You can add the SEGMENT_SIZE variable with its value to the preserved network .envs like .env.base-sepolia, etc., if you are using them.
- If you changed SEGMENT_SIZE in .env.broker, then head back to the network configuration section to use .env.broker as your network configuration (a sketch follows below).
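As a sketch, for a GPU with less vRAM you might lower the segment size in whichever env file you source; the value 20 below is only an example, pick yours from the official performance optimization page:

# Added to the env file you source (e.g. .env.base or .env.broker)
export SEGMENT_SIZE=20   # example value for a smaller-vRAM GPU; verify against the official table

# Re-inject and restart so compose.yml picks it up via ${SEGMENT_SIZE:-21}
source .env.base
just broker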
Install psql:
apt update
apt install postgresql postgresql-client
psql --version

- Recommended: Benchmark by simulating an order ID (make sure Bento is running):

boundless proving benchmark --request-ids <IDS>

- You can use the order IDs listed here
- You can add multiple IDs, comma-separated.
- It's recommended to pick a few requests of varying sizes and programs, biased towards larger proofs, for a more representative benchmark.
- As in the image above, the prover is estimated to handle ~430,000 cycles per second (~430 kHz).
- Set peak_prove_khz in your broker.toml a bit lower than the benchmarked value (explained more in the next step).
You can use the nvtop command in a separate terminal to check your GPU utilization.
- Benchmark using Harness Test
- Optionally, you can benchmark GPUs with an ITERATION_COUNT:
RUST_LOG=info bento_cli -c <ITERATION_COUNT>
<ITERATION_COUNT> is the number of times the synthetic guest is executed. A value of 4096 is a good starting point, however on smaller or less performant hosts, you may want to reduce this to 2048 or 1024 while performing some of your experiments. For functional testing, 32 is sufficient.
- Check the khz & cycles proved in the harness test:
bash scripts/job_status.sh JOB_ID
- Replace JOB_ID with the one prompted to you when running a test.
- You'll get the hz, which has to be divided by 1000 to get khz, plus the cycles it proved.
- If you got a not_found error, it's because you didn't create .env.broker; the script uses the SEGMENT_SIZE value in .env.broker to query your segment size. Run cp .env.broker-template .env.broker to fix it.
- The Broker is one of the containers of the prover; it doesn't do the proving itself, it handles on-chain activity and order management, like locking orders, setting stake bid amounts, etc.
- broker.toml has the settings that configure how your broker interacts on-chain and competes with other provers.
Copy the template to the main config file:
cp broker-template.toml broker.toml

Edit the broker.toml file:

nano broker.toml

- You can see an example of the official broker.toml here
Once your broker is running, before the GPU-based prover gets to work, the broker must compete with other provers to lock in the orders. Here is how to optimize the broker to lock in orders faster than other provers:
- Decreasing mcycle_price tunes your broker to bid at lower prices for proofs.
- Once an order is detected, the broker runs a preflight execution to estimate how many cycles the request needs. As you see in the image, a prover proved orders with millions or thousands of cycles.
- mcycle_price is the prover's price for proving each 1 million cycles. Final price = mcycle_price x megacycles (i.e. cycles / 1,000,000).
- The lower you set mcycle_price, the higher the chance you outpace other provers.
- To get an idea of what mcycle_price other provers are using, find an order in the explorer on your preferred network, go to the order's details page, and look for ETH per Megacycle.
- Increase lockin_priority_gas to spend more gas to outrun other bidders. You might need to first remove the # to uncomment its line, then set the gas. It's denominated in Gwei.
Read more about them in the official docs
- peak_prove_khz: The maximum number of cycles per second (in kHz) your proving backend can operate at.
- You can set peak_prove_khz by following the previous step (Benchmarking Bento).
- max_concurrent_proofs: The maximum number of orders the broker can lock. Increasing it raises the ability to lock more orders, but if your prover cannot prove them within the specified deadline, your staked assets will get slashed.
- When the number of running proving jobs reaches that limit, the system pauses and waits for them to finish instead of locking more orders.
- It's set to 2 by default, and the right value really depends on your GPU and your configuration; you have to test it out if you want to increase it.
- min_deadline: The minimum number of seconds left before an order's deadline for your prover to consider bidding on it.
- Requesters set a deadline for their order; if a prover can't prove within it, it gets slashed.
- By setting the min deadline, your prover won't accept requests with a deadline shorter than that.
- As in the following image of an order in the explorer, the order was fulfilled after the deadline and the prover got slashed because of the delayed delivery.
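To tie these settings together, here is a sketch of the relevant part of broker.toml; every value below is purely illustrative and must be tuned to your own benchmark and risk tolerance (match the exact field formats used in broker-template.toml):

# Illustrative values only
mcycle_price = "0.0000005"        # price per 1M cycles; lower = more competitive bids
peak_prove_khz = 400              # a bit below a ~430 kHz benchmark result
max_concurrent_proofs = 2         # default; raise only after testing your hardware
min_deadline = 300                # skip orders with less than 300 s left before their deadline
# lockin_priority_gas = 0         # uncomment and set (in Gwei) to outbid on gas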
You can run multiple brokers simultaneously with a single Bento client to generate proofs on different networks.
- Your configuration might be different from mine, so you can ask AI chats to adapt it. I'll give you the clues and an example of my current config.
- Generally, you have to make changes in these files: compose.yml, broker.toml, and the .env files (e.g. .env.base-sepolia).
Step 1: Add the broker2 Service:
In the services section, after your existing broker service, add the following broker2 service. This mirrors the original broker configuration but uses a different database and configuration file.
- What do we change in broker to get broker2?
- Name changes to broker2
- source: ./broker2.toml
- broker2-data:/db/
- --db-url updated to 'sqlite:///db/broker2.db'
Step 2: Environment Variables (.env files) for Multi-Broker Setup:
We were using .env files (e.g. .env.base) to set the network. We need to link these .env files to each broker (e.g. broker, broker2) in our compose.yml file, so each broker runs on its specified network at startup.
- Add the following lines after the volumes of each broker service:
env_file:
- .env.base
Step 3: Add the broker2-data Volume:
- At the end of your compose.yml, in the volumes section, add the new volume for broker2 (a minimal sketch follows below):
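It mirrors the volumes list in the full example further below:

volumes:
  redis-data:
  postgres-data:
  minio-data:
  grafana-data:
  broker-data:
  broker2-data:   # new volume for the second broker's database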
For example, here are the broker and broker2 services in my compose.yml, supporting the Base mainnet & ETH Sepolia networks with the above configurations:
broker:
restart: always
depends_on:
- rest_api
- gpu_prove_agent0
- exec_agent0
- exec_agent1
- aux_agent
- snark_agent
- redis
- postgres
profiles: [broker]
build:
context: .
dockerfile: dockerfiles/broker.dockerfile
mem_limit: 2G
cpus: 2
stop_grace_period: 3h
volumes:
- type: bind
source: ./broker.toml
target: /app/broker.toml
- broker-data:/db/
network_mode: host
env_file:
- .env.base
environment:
RUST_LOG: ${RUST_LOG:-info,broker=debug,boundless_market=debug}
entrypoint: /app/broker --db-url 'sqlite:///db/broker.db' --set-verifier-address ${SET_VERIFIER_ADDRESS} --boundless-market-address ${BOUNDLESS_MARKET_ADDRESS} --config-file /app/broker.toml --bento-api-url http://localhost:8081
ulimits:
nofile:
soft: 65535
hard: 65535
broker2:
restart: always
depends_on:
- rest_api
- gpu_prove_agent0
- exec_agent0
- exec_agent1
- aux_agent
- snark_agent
- redis
- postgres
profiles: [broker]
build:
context: .
dockerfile: dockerfiles/broker.dockerfile
mem_limit: 2G
cpus: 2
stop_grace_period: 3h
volumes:
- type: bind
source: ./broker2.toml
target: /app/broker.toml
- broker2-data:/db/
network_mode: host
env_file:
- .env.eth-sepolia
environment:
RUST_LOG: ${RUST_LOG:-info,broker=debug,boundless_market=debug}
entrypoint: /app/broker --db-url 'sqlite:///db/broker2.db' --set-verifier-address ${SET_VERIFIER_ADDRESS} --boundless-market-address ${BOUNDLESS_MARKET_ADDRESS} --config-file /app/broker.toml --bento-api-url http://localhost:8081
ulimits:
nofile:
soft: 65535
hard: 65535
volumes:
redis-data:
postgres-data:
minio-data:
grafana-data:
broker-data:
broker2-data:

Each broker instance requires a separate broker.toml file (e.g., broker.toml, broker2.toml, etc.)
You can create the new broker config file that the second broker will use:
# Copy from an existing broker config file
cp broker.toml broker2.toml
# Or create one from a fresh template
cp broker-template.toml broker2.toml

Then, modify the configuration values for each network, keeping the following in mind:
- The peak_prove_khz setting is shared across all brokers.
- For example, if you have benchmarked your cluster to prove at 500 kHz, the values across all configs should not sum to more than 500 kHz.
- For instance: broker.toml: peak_prove_khz = 250 & broker2.toml: peak_prove_khz = 250
- The max_concurrent_preflights setting limits the number of pricing tasks (preflight executions) a broker can run simultaneously. The total max_concurrent_preflights across all brokers (for all networks) should be less than or equal to the number of exec_agent services in your compose.yml.
- For instance: if you have two exec_agent services (exec_agent0 and exec_agent1), the sum of max_concurrent_preflights across broker and broker2 should not exceed 2.
- max_concurrent_proofs
- Unlike peak_prove_khz, the max_concurrent_proofs setting is specific to each broker and not shared. It controls the maximum number of proof generation tasks a single broker can process simultaneously.
- For instance: with only one GPU, your cluster can typically handle only one proof at a time, as proof generation is GPU-intensive, so you'd better set max_concurrent_proofs = 1.
- lockin_priority_gas: Make sure you configure the Gwei according to each network.
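For illustration only, assuming a ~500 kHz benchmark and two exec agents, the two config files might split the shared budget like this (values are examples, not recommendations):

# broker.toml (e.g. Base mainnet)
peak_prove_khz = 250              # half of the shared ~500 kHz budget
max_concurrent_preflights = 1     # one of the two exec agents
max_concurrent_proofs = 1         # per-broker, not shared

# broker2.toml (e.g. ETH Sepolia)
peak_prove_khz = 250              # the other half of the budget
max_concurrent_preflights = 1     # the other exec agent
max_concurrent_proofs = 1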
Ensure, either through the broker logs or through the indexer page of your prover, that your broker does not have any incomplete locked orders before stopping or updating; otherwise you might get slashed on your staked assets.
- Optionally, to temporarily stop your prover from accepting more order requests, you can set max_concurrent_proofs to 0, wait for locked orders to be fulfilled, then go through the next step to stop the node (a small sketch follows below).
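A minimal sketch of that temporary change in broker.toml (revert it after the update):

# broker.toml: temporarily stop locking new orders before stopping/updating
max_concurrent_proofs = 0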
# Optional, no need if you don't want to upgrade the node's repository
just broker clean
# Or stop the broker without cleaning volumes
just broker down

See releases for the latest tag to use.
git checkout <new_version_tag>
# Example: git checkout v0.10.0

just broker

During the build process of just broker, you might end up with a Too many open files (os error 24) error.
nano /etc/security/limits.conf
- Add:
* soft nofile 65535
* hard nofile 65535
nano /lib/systemd/system/docker.service
- Add or modify the following under the [Service] section:
LimitNOFILE=65535
systemctl daemon-reload
systemctl restart docker
- Now restart your terminal, re-run your network injection command, then run
just broker
Getting tens of locked orders on the prover's explorer
- It's due to RPC issues, check your logs.
- You can increase txn_timeout (e.g. txn_timeout = 45) in the broker.toml file to allow more seconds for transaction confirmations.