Fully Homomorphic Encryption (FHE) allows computations on encrypted data without decrypting it, enabling privacy-preserving computation. Despite its potential, FHE adoption remains limited due to challenges such as:
- High computational overhead
- Parameter tuning complexity
- Tedious ciphertext management
CHEHAB addresses these challenges through a specialized FHE compiler built with the following goals:
- A Domain-Specific Language (DSL) for describing FHE computations.
- Automatic parameter selection and ciphertext maintenance.
- Multiple optimization techniques:
  - Constant Folding (CF)
  - Common Subexpression Elimination (CSE)
  - Term Rewriting System (TRS) for advanced transformations
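To illustrate what the first two passes do, here is a generic sketch (not CHEHAB's actual implementation) over a tiny tuple-based expression IR: constant folding evaluates nodes whose operands are all known at compile time, while CSE assigns a single id to each distinct subexpression so duplicates are computed once.

```python
# Generic sketch of constant folding and CSE on a tiny expression IR
# where a node is ("add"|"mul", lhs, rhs), a leaf is an int constant
# or a str variable name. Illustrative only, not CHEHAB's passes.

def fold(expr):
    """Constant folding: evaluate nodes whose operands are both constants."""
    if isinstance(expr, (int, str)):
        return expr
    op, lhs, rhs = expr
    lhs, rhs = fold(lhs), fold(rhs)
    if isinstance(lhs, int) and isinstance(rhs, int):
        return lhs + rhs if op == "add" else lhs * rhs
    return (op, lhs, rhs)

def cse(expr, seen=None):
    """CSE: map each distinct subexpression to one id; duplicates collapse."""
    if seen is None:
        seen = {}
    if isinstance(expr, (int, str)):
        return seen
    _, lhs, rhs = expr
    cse(lhs, seen)
    cse(rhs, seen)
    seen.setdefault(expr, len(seen))
    return seen
```

For example, `fold(("add", ("mul", 2, 3), "x"))` reduces the constant product to `6`, and `cse` gives the two occurrences of a repeated subterm the same id, so a code generator would emit it once.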
Currently, CHEHAB supports the BFV scheme and targets the Microsoft SEAL library.
Below is a simple example written in CHEHAB's DSL:
```cpp
#include "fheco/fheco.hpp"

using namespace fheco;

void example()
{
  Ciphertext c0("c0");
  Ciphertext c1 = c0 << 1;
  Ciphertext c2 = c0 << 5;
  Ciphertext c3 = c0 << 6;
  Ciphertext c4 = c1 + c0;
  Ciphertext c5 = c2 + c3;
  Ciphertext c6 = c4 + c5;
  c6.set_output("c6");
}
```

Prerequisites:
- GCC and G++ compilers
- CMake
- SEAL library (v4.1.0)
- Rust (for the TRS/e-graph optimizer via egg)
- Python dependencies (for RL optimization)
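To make the DSL example above concrete, here is a plain-Python model of the computation it encodes, assuming `<<` denotes a cyclic left rotation of the batched slot vector (as in BFV batching); the actual encrypted evaluation is of course performed by SEAL.

```python
def rotate(v, k):
    """Cyclic left rotation of a slot vector by k positions."""
    k %= len(v)
    return v[k:] + v[:k]

def example(c0):
    """Plaintext model of the DSL example: c6 = (c0<<1 + c0) + (c0<<5 + c0<<6).
    Assumes << is a cyclic left rotation over the slot vector."""
    c1 = rotate(c0, 1)
    c2 = rotate(c0, 5)
    c3 = rotate(c0, 6)
    c4 = [a + b for a, b in zip(c1, c0)]
    c5 = [a + b for a, b in zip(c2, c3)]
    return [a + b for a, b in zip(c4, c5)]
```

Each output slot i thus holds c0[i] + c0[i+1] + c0[i+5] + c0[i+6] (indices mod the slot count), which is the kind of sliding-window sum that rotation-based FHE kernels typically build.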
This directory contains an `environment.yml` file for setting up the Conda environment:

```bash
conda env create -f environment.yml -n chehabEnv
conda activate chehabEnv
cd RL/pytrs
pip3 install -e .  # install the pytrs package
```

Next, build and install the SEAL library:

```bash
cd /scratch/<your_user_id>/
git clone https://github.com/microsoft/SEAL.git
cd SEAL
git checkout v4.1.1
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release -DSEAL_USE_CXX17=ON -DSEAL_BUILD_TESTS=OFF -DCMAKE_INSTALL_PREFIX="$CONDA_PREFIX" -G Ninja
cmake --build build
cmake --install build
```
Then build CHEHAB itself:

```bash
cd CHEHAB
cmake -S . -B build
cd build
make
```

Repository layout:
- Benchmarks: `benchmarks/<benchmark_name>/`
- Equality saturation framework: `egraphs/`
- Reinforcement learning framework: `RL/`
- Core compiler: `src/`
Most benchmarks use a generator script, `generate_<benchmark>.py`, to:
- Generate random input values
- Run the reference plaintext computation
- Save inputs and outputs in `<benchmark>_io_example.txt`

The goal is to feed the compiled FHE executable the same input values and to check that the decrypted result matches the one obtained by running the Python script on plaintext values.
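A minimal generator might look like the following. This is a hypothetical sketch; the repository's actual `generate_<benchmark>.py` scripts differ in details such as the file format and the computation performed.

```python
import random

def generate_io_example(slot_count, path="example_io_example.txt"):
    """Generate random inputs, compute a reference plaintext output, and
    save both so the FHE executable can reuse the same values.
    Hypothetical sketch: the real scripts' format and kernel may differ."""
    c0 = [random.randrange(0, 100) for _ in range(slot_count)]
    # Reference computation in the clear (elementwise doubling here,
    # as a stand-in for the benchmark's actual kernel).
    c_out = [2 * x for x in c0]
    with open(path, "w") as f:
        f.write(" ".join(map(str, c0)) + "\n")
        f.write(" ".join(map(str, c_out)) + "\n")
    return c0, c_out
```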
A benchmark is run in two phases:
- The first execution triggers our compiler to translate the program written in our DSL into a program using native FHE primitives.
- The second execution corresponds to the concrete homomorphic evaluation.

In the following steps, we use the box blur benchmark as an example.
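As a plaintext point of reference, a box blur replaces each pixel with the sum (or average) of its neighborhood. The sketch below shows a 3x3 sum variant on a row-major image with wrap-around borders; it is illustrative only, and the benchmark's exact kernel and border handling may differ.

```python
def box_blur_3x3(img, h, w):
    """3x3 box blur (neighborhood sum) on a row-major h*w image,
    with wrap-around borders. Illustrative sketch only; the CHEHAB
    benchmark's exact kernel may differ."""
    out = [0] * (h * w)
    for i in range(h):
        for j in range(w):
            s = 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    s += img[((i + di) % h) * w + (j + dj) % w]
            out[i * w + j] = s
    return out
```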
```bash
cd build/benchmarks/box_blur
python3 generate_box_blur.py --slot_count 4
./box_blur 1 4 1 0 1 1 1
```

The general invocation is:

```bash
./<benchmark> <vectorize_code> <slot_count> <optimization_method> <window> <call_quantifier> <cse> <const_folding>
```

| Argument | Description |
|---|---|
| `vectorize_code` | 0/1 - Scalar or vectorized code |
| `slot_count` | Number of input slots |
| `optimization_method` | 0 for e-graph, 1 for RL |
| `window` | Vectorization window size |
| `call_quantifier` | 0/1 - Enable metric collection |
| `cse` | 0/1 - Enable common subexpression elimination |
| `const_folding` | 0/1 - Enable constant folding |
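For scripted sweeps, the positional arguments can be assembled from named options; `build_command` below is a hypothetical helper for this, not part of the CHEHAB repository.

```python
def build_command(benchmark, opts):
    """Assemble the positional argument list in the order the benchmark
    binary expects. Hypothetical helper; not part of CHEHAB itself."""
    order = ["vectorize_code", "slot_count", "optimization_method",
             "window", "call_quantifier", "cse", "const_folding"]
    return [f"./{benchmark}"] + [str(opts[k]) for k in order]
```

Passing this list to something like `subprocess.run` reproduces the box blur invocation shown above.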
- Navigate to the `he` directory, where you can find the generated files used to build the final executable. This step automatically links the generated file with the SEAL library.

```bash
cd he
rm -r build
mkdir build
cmake -S . -B build
cd build
make
```

- Within the `build` directory, you will find the `main` executable that triggers the concrete homomorphic evaluation. You only need to execute the `main` file.

```bash
./main
```

This repository includes Docker/Compose workflows that install Microsoft SEAL, build dependencies, and the Python environment inside the container.
From the repo root:
```bash
docker compose build chehab-main
docker compose run --rm -it chehab-main /bin/bash
```

Inside the container, ensure the Conda environment is active (it is auto-activated for interactive bash, but you can run the following if needed):
```bash
source /opt/conda/etc/profile.d/conda.sh
conda activate chehabEnv
```

Run the benchmark sweep (writes CSV results):

```bash
python run_benchmarks.py
```

Results are written under `results/` (and are available on the host via the bind mount configured in `docker-compose.yml`).
Start the web service and open the browser UI:
```bash
docker compose build chehab-demo
docker compose up chehab-demo
```

Then browse to http://localhost:8000.
After producing a CSV (e.g., `results/results_RL.csv`), generate plots:

```bash
python results/generate_graphs.py --metric exec --csv results/results_RL.csv --label "CHEHAB RL" --output results/exec_time.png
python results/generate_graphs.py --metric compile --csv results/results_RL.csv --label "CHEHAB RL" --output results/compile_time.png
python results/generate_graphs.py --metric noise --csv results/results_RL.csv --label "CHEHAB RL" --output results/noise_budget.png
```