Minimal, runnable pipeline for the paper experiments.
- `main_cpsat.py` – CP-SAT formulation for cut selection
- `run_full_flow.py` – BLIF -> cut enumeration -> CP-SAT -> rebuild
- `cut_enumeration.cpp` (source) + `tools/cut_enumeration` (built binary)
- `rebuild_from_cpsat.cpp` (source) + `tools/rebuild_from_cpsat` (built binary)
- `blif_to_aig.py` – helper to convert BLIF to AIG (from Mockturtle tools)
- `experiments-dac19-flow/run.py` – downstream DAC'19 evaluation flow
- `full_adder/` – sample BLIF + cuts (add if not present)
```sh
python -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
```

Build the two helper binaries from your Mockturtle checkout (paths may vary):
```sh
cmake --build build --target cut_enumeration rebuild_from_cpsat
cp build/examples/cut_enumeration build/examples/rebuild_from_cpsat tools/
```

Run CP-SAT end-to-end on the sample:
```sh
python run_full_flow.py full_adder/full_adder.blif \
    --output-dir out \
    --objective overall \
    --cut-enum-bin tools/cut_enumeration \
    --rebuild-bin tools/rebuild_from_cpsat \
    --stats-csv out/full_adder_stats.csv \
    --summary-csv out/summary_stats.csv
```

Convert the rebuilt BLIF to AIG (needed for the DAC flow):
```sh
python tools/blif_to_aig.py out/full_adder_rebuilt.blif out/full_adder_rebuilt.aig
```

Run the DAC'19 flow on that AIG:
```sh
python experiments-dac19-flow/run.py --input out/full_adder_rebuilt.aig --out-dir out/dac19_results
```

(Alternatively, copy the chosen `.aig` into the benchmarks folder and simply execute `run.py`; it will pick the file up automatically.)
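The three manual steps above (CP-SAT flow, BLIF-to-AIG conversion, DAC'19 run) can also be chained from Python. The sketch below only assembles the command lines; the script names and flags come from the walkthrough above, but the `flow_commands` helper itself is hypothetical and not part of the repo:

```python
import subprocess
from pathlib import Path

def flow_commands(blif, out_dir="out", tools="tools"):
    """Build the three commands from the walkthrough; nothing is executed here."""
    stem = Path(blif).stem
    rebuilt = f"{out_dir}/{stem}_rebuilt.blif"   # naming follows the examples above
    aig = f"{out_dir}/{stem}_rebuilt.aig"
    return [
        ["python", "run_full_flow.py", blif,
         "--output-dir", out_dir, "--objective", "overall",
         "--cut-enum-bin", f"{tools}/cut_enumeration",
         "--rebuild-bin", f"{tools}/rebuild_from_cpsat"],
        ["python", "tools/blif_to_aig.py", rebuilt, aig],
        ["python", "experiments-dac19-flow/run.py",
         "--input", aig, "--out-dir", f"{out_dir}/dac19_results"],
    ]

# To actually run the chain:
#   for cmd in flow_commands("full_adder/full_adder.blif"):
#       subprocess.run(cmd, check=True)
print(flow_commands("full_adder/full_adder.blif")[1])
```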
Process a whole directory of BLIFs in one shot:
```sh
python run_full_flow.py path/to/blif_dir --output-dir out_runs \
    --cut-enum-bin tools/cut_enumeration --rebuild-bin tools/rebuild_from_cpsat
# each BLIF gets cuts / chosen_cuts / rebuilt files + stats in out_runs/
```

Run the DAC'19 flow on all rebuilt AIGs from that batch:
```sh
for f in out_runs/*_rebuilt.blif; do
    base=${f%.blif}
    python tools/blif_to_aig.py "$f" "${base}.aig"
    python experiments-dac19-flow/run.py --input "${base}.aig" --out-dir out_runs/dac19_results
done
```

- Cut size K (controls the maximum cut fan-in for enumeration): add `--cut-size 3` (or 4, 5, etc.) to `run_full_flow.py`. This value is passed directly to the `cut_enumeration` binary.
- Objective choice (`--objective`):
  - `og` = `lambda_inv*inv + lambda_area*area` (default weights below)
  - `inv` = minimize inverter count
  - `area` = minimize area
  - `depth` = minimize depth (requires depth modeling)
  - `overall` = `alpha_depth*depth + beta_area*area + gamma_inv*inv`
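The objective modes above boil down to simple weighted sums. This sketch mirrors that arithmetic as a plain function; the metric names and defaults follow this README, but the real solver composes these as CP-SAT expressions inside `main_cpsat.py`, not as a Python function like this:

```python
# Sketch of the arithmetic behind each --objective mode (illustrative only;
# the actual model builds these terms as CP-SAT linear expressions).
def objective_cost(mode, inv, area, depth,
                   lambda_inv=10, lambda_area=1,
                   alpha_depth=100, beta_area=10, gamma_inv=1):
    if mode == "og":
        return lambda_inv * inv + lambda_area * area
    if mode == "inv":
        return inv
    if mode == "area":
        return area
    if mode == "depth":
        return depth
    if mode == "overall":
        return alpha_depth * depth + beta_area * area + gamma_inv * inv
    raise ValueError(f"unknown objective mode: {mode}")

# Example: a candidate with 3 inverters, 12 area units, depth 4.
print(objective_cost("og", inv=3, area=12, depth=4))       # 10*3 + 1*12 = 42
print(objective_cost("overall", inv=3, area=12, depth=4))  # 100*4 + 10*12 + 1*3 = 523
```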
- Objective weights (for `og` and `overall` modes) live in `main_cpsat.py`, near the bottom of `solve_circuit` (a good place to start experimenting):

  ```python
  lambda_inv = 10
  lambda_area = 1
  alpha_depth = 100
  beta_area = 10
  gamma_inv = 1
  ```

  Adjust and rerun to change the trade-off between depth/area/inverter counts.
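To see how the weights steer the trade-off, here is a tiny worked example with two made-up candidate solutions; the numbers are invented purely for illustration:

```python
# Two hypothetical cut selections: A is shallower, B is smaller.
candidates = {"A": {"depth": 3, "area": 20, "inv": 5},
              "B": {"depth": 5, "area": 12, "inv": 4}}

def overall(m, alpha_depth, beta_area, gamma_inv):
    return alpha_depth * m["depth"] + beta_area * m["area"] + gamma_inv * m["inv"]

# Default weights emphasize depth, so the shallower A wins (505 vs 624)...
default_winner = min(candidates, key=lambda k: overall(candidates[k], 100, 10, 1))
# ...but with depth de-emphasized, the smaller B wins (129 vs 208).
area_winner = min(candidates, key=lambda k: overall(candidates[k], 1, 10, 1))
print(default_winner, area_winner)  # A B
```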
- Depth fixing: pass `--fix-depth N` to the `main_cpsat.py` CLI if you call it directly (or extend `run_full_flow.py` to expose it) to enforce a depth target.
- Output placement: use `--output-dir` (and optionally `--stats-csv` / `--summary-csv`) to keep all generated artifacts inside this repo, e.g., `--output-dir out --stats-csv out/<name>_stats.csv --summary-csv out/summary_stats.csv`. Otherwise, defaults land near the input BLIF and may include absolute paths in CSVs.
- Benchmarks: CP-SAT consumes BLIFs; the DAC'19 flow consumes AIGs. Use `tools/blif_to_aig.py` to convert rebuilt BLIFs before running `experiments-dac19-flow/run.py`. The repo includes the DAC'19 `benchmarks/` AIG set for reference; you can drop your own rebuilt AIGs there or point `--input` at their paths.
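The output-placement behavior above can be sketched as a small path helper. Note the artifact filenames below (`<stem>_cuts.json` etc.) are guesses extrapolated from the examples in this README; check `run_full_flow.py` for the authoritative naming scheme:

```python
from pathlib import Path

def artifact_paths(blif, output_dir=None, stem=None):
    """Sketch of where run_full_flow.py artifacts land (names are assumptions).

    With no --output-dir, files are placed next to the input BLIF;
    --output-stem overrides the base filename.
    """
    blif = Path(blif)
    stem = stem or blif.stem                       # default: BLIF stem
    base = Path(output_dir) if output_dir else blif.parent
    return {
        "cuts": base / f"{stem}_cuts.json",        # hypothetical name
        "chosen": base / f"{stem}_chosen_cuts.json",  # hypothetical name
        "rebuilt": base / f"{stem}_rebuilt.blif",
        "stats": base / f"{stem}_stats.csv",
    }

print(artifact_paths("full_adder/full_adder.blif", output_dir="out")["rebuilt"])
```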
- `--objective {og,inv,area,depth,overall}` – pick the cost function (see above).
- `--cut-size K` – maximum cut size passed to `cut_enumeration`.
- `--output-dir DIR` – base directory for all generated artifacts.
- `--output-stem NAME` – override the base filename (defaults to the BLIF stem).
- `--cuts-json` / `--chosen-json` / `--rebuilt-blif` / `--rebuilt-dir` – override specific artifact paths.
- `--tools-dir DIR` – search directory for binaries; or use `--cut-enum-bin` and `--rebuild-bin` to point at them explicitly.
- `--stats-csv PATH` and `--summary-csv PATH` – control where timing/metric rows are appended.
- `--final-tool` – fixed to `none` (no mapper step in this trimmed setup).
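For orientation, an `argparse` skeleton covering the flags listed above might look like this; the defaults are assumptions for illustration, not copied from `run_full_flow.py`:

```python
import argparse

def build_parser():
    # Mirrors the documented run_full_flow.py flags; defaults are guesses.
    p = argparse.ArgumentParser(description="BLIF -> cuts -> CP-SAT -> rebuild")
    p.add_argument("blif", help="input BLIF file or directory of BLIFs")
    p.add_argument("--objective", default="overall",
                   choices=["og", "inv", "area", "depth", "overall"])
    p.add_argument("--cut-size", type=int, default=3, metavar="K")
    p.add_argument("--output-dir", metavar="DIR")
    p.add_argument("--output-stem", metavar="NAME")
    p.add_argument("--tools-dir", default="tools", metavar="DIR")
    p.add_argument("--cut-enum-bin")
    p.add_argument("--rebuild-bin")
    p.add_argument("--stats-csv", metavar="PATH")
    p.add_argument("--summary-csv", metavar="PATH")
    return p

args = build_parser().parse_args(
    ["full_adder/full_adder.blif", "--objective", "og", "--cut-size", "4"])
print(args.objective, args.cut_size)  # og 4
```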
- `tools/` should contain the binaries `cut_enumeration` and `rebuild_from_cpsat`. Build them from your Mockturtle checkout (e.g., `cmake --build build --target cut_enumeration rebuild_from_cpsat`) and copy the resulting executables from `build/examples/` into `tools/`. Avoid absolute-path symlinks so a fresh clone works anywhere.
- If you prefer to rebuild locally, both source files (`cut_enumeration.cpp`, `rebuild_from_cpsat.cpp`) are included; compile against the Mockturtle headers, e.g.:

  ```sh
  g++ -std=c++17 -O3 -I../Mockturtle-mMIG-main/include \
      -o tools/cut_enumeration cut_enumeration.cpp
  g++ -std=c++17 -O3 -I../Mockturtle-mMIG-main/include \
      -o tools/rebuild_from_cpsat rebuild_from_cpsat.cpp
  ```

- DAC'19 flow prerequisites (in `experiments-dac19-flow/`): install `cirkit==3.0a2.dev5` (`pip install cirkit==3.0a2.dev5`) and ensure the `abc` binary is on your `PATH` (build it from https://github.com/berkeley-abc/abc). Benchmarks (`benchmarks/*.aig`) are already included here; the result folders are historical.
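A quick environment check before launching the DAC'19 flow can save a failed run; the module and binary names (`cirkit`, `abc`) come from the prerequisites above, while the `missing_prereqs` helper itself is just a sketch:

```python
import importlib.util
import shutil

def missing_prereqs(modules=("cirkit",), binaries=("abc",)):
    """Return the prerequisites that are not importable / not on PATH."""
    missing = [m for m in modules if importlib.util.find_spec(m) is None]
    missing += [b for b in binaries if shutil.which(b) is None]
    return missing

gaps = missing_prereqs()
if gaps:
    print("install/build these before running the DAC'19 flow:", gaps)
```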
full_adderis provided for a smoke test. Add your EPFL/other BLIFs to run broader sweeps. - Results directories under
experiments-dac19-floware historical; they aren’t needed for the smoke test. - If you move this repo, update the symlink targets or pass
--cut-enum-bin/--rebuild-binexplicitly.
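Since absolute-path symlinks break on a moved clone, the lookup order implied by the flags (explicit `--cut-enum-bin` / `--rebuild-bin` first, then `--tools-dir`) could be implemented along these lines; this resolver is a hypothetical helper, not code from the repo:

```python
from pathlib import Path

def resolve_binary(name, explicit=None, tools_dir="tools"):
    """Find a helper binary: an explicit flag wins, else look in tools_dir."""
    if explicit:
        p = Path(explicit)
        if p.is_file():
            return p
        raise FileNotFoundError(f"explicit path for {name} does not exist: {explicit}")
    p = Path(tools_dir) / name
    if p.is_file():
        return p
    raise FileNotFoundError(
        f"{name} not found in {tools_dir}/; build it and copy it there, "
        f"or pass an explicit path")

# e.g. resolve_binary("cut_enumeration", explicit=args.cut_enum_bin)
```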