Commit f29cfb1

Authored by zazabap, Claude, and isPANN
feat: add 12 Tier 1b high-confidence reduction rules (#770) (#779)
* feat: add 12 Tier 1b high-confidence reduction rules (#770)

Implement 12 verified reduction rules from Garey & Johnson (30-80 lines each):

- KSatisfiability(K3) → MinimumVertexCover (#197): truth-setting + clause triangles
- Partition → SequencingWithinIntervals (#205): enforcer task gadget
- MinimumVertexCover → MinimumFeedbackArcSet (#208): vertex-splitting with penalty arcs
- KSatisfiability(K3) → KClique (#229): Karp's non-contradictory edge construction
- HamiltonianCircuit → BiconnectivityAugmentation (#252): {1,2}-weighted potential edges
- HamiltonianCircuit → StrongConnectivityAugmentation (#254): {1,2}-weighted potential arcs
- HamiltonianCircuit → StackerCrane (#261): vertex-splitting with mandatory arcs
- HamiltonianCircuit → RuralPostman (#262): vertex-splitting with required edges
- Partition → ShortestWeightConstrainedPath (#360): +1 offset layered graph
- MaximumIndependentSet → IntegralFlowBundles (#366): Sahni's flow-bundle construction
- HamiltonianCircuit → QuadraticAssignment (#373): cycle cost + penalty distance matrices
- HamiltonianPath → ConsecutiveOnesSubmatrix (#432): vertex-edge incidence matrix

Each rule includes full test coverage (closed-loop, edge cases, extraction).
Fixes #197, Fixes #205, Fixes #208, Fixes #229, Fixes #252, Fixes #254, Fixes #261, Fixes #262, Fixes #360, Fixes #366, Fixes #373, Fixes #432

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address PR #779 review comments

- MIS→IntFlowBundles: remove BruteForce::solve() from reduce_to(), set requirement=1 (any IS of size ≥ 1 gives a feasible flow)
- Partition→SWCP: use checked_add for a_i+1 and weight_bound overflow
- MVC→FAS: use checked_add for big_m overflow
- HP→ConsecOnesSub: use .get() instead of indexing for Tucker fallback safety
- Partition→SeqIntervals: fix odd-sum test (forward-only reduction is correct)
- MIS→IFB tests: update all requirement assertions from optimal to 1

* ci: retrigger CI (flaky HC model spec test)

* fix: use 4-cycle for HC model example (fixes flaky CI)

The prism graph example produced a non-deterministic ILP solution on CI that failed HC validation. The 4-cycle has fewer valid permutations, making the ILP solution more predictable.

* Revert "fix: use 4-cycle for HC model example (fixes flaky CI)"

This reverts commit 9038b53.

* fix: use K4 for HC model example to avoid ILP solver non-determinism

The prism graph (6 vertices) produced different ILP solutions on CI vs locally due to HiGHS version differences. The QAP→ILP reduction path (introduced by HC→QAP in this PR) sometimes extracted an invalid permutation on CI. K4 (complete graph on 4 vertices) makes every permutation a valid HC, eliminating solver non-determinism as a failure source. See #780 for the underlying QAP→ILP investigation.

* Revert "fix: use K4 for HC model example to avoid ILP solver non-determinism"

This reverts commit 2366d66.
* fix: disable HiGHS presolve to avoid incorrect MIP solutions

HiGHS presolve has known bugs that can return suboptimal solutions for certain MIP instances (see ERGO-Code/HiGHS#2173, scipy/scipy#24141). On CI (Ubuntu 24.04), presolve deterministically returns obj=18 instead of the optimal obj=6 for the QAP→ILP formulation of HC on the prism graph.

* ci: add --locked to all cargo commands to enforce Cargo.lock versions

CI was resolving good_lp 1.15.0 instead of the lockfile's 1.14.2, potentially causing different solver behavior. Pin all dependencies via the --locked flag.

* fix: pin good_lp to =1.14.2 and revert --locked CI flags

CI resolved good_lp 1.15.0 (vs the lockfile's 1.14.2) since Cargo.lock is gitignored. Pin the exact version in Cargo.toml instead. Revert the --locked flags since Cargo.lock is not committed.

* update

* feat: overhead-aware ILP path selection (fixes #780) (#785)

Replace MinimizeSteps with MinimizeStepsThenOverhead in ILP path selection. When two paths have the same step count, the one producing smaller intermediate/final problems wins (e.g., HC→HP→ILP over HC→QAP→ILP).
Key changes:

- Add source_size_fn to ReductionEntry for extracting source problem dimensions from &dyn Any instances
- Add MinimizeStepsThenOverhead cost function (step count dominates, log(output_size) breaks ties)
- Add MinimizeOutputSize cost function for pure overhead minimization
- Add ReductionGraph::compute_source_size() and evaluate_path_overhead()
- Update best_path_to_ilp to compute actual input sizes and compare paths by final ILP output size
- Add ProblemSize::total() and Default derive

* fix: address review findings for overhead-aware path selection

Fix misleading comment, document two-level path selection strategy, and add multi-step test for evaluate_path_overhead.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Xiwei Pan <90967972+isPANN@users.noreply.github.com>
Co-authored-by: Xiwei Pan <xiwei.pan@connect.hkust-gz.edu.cn>
1 parent 772d001 commit f29cfb1

36 files changed: 3628 additions & 18 deletions

Cargo.toml
Lines changed: 1 addition & 1 deletion

@@ -31,7 +31,7 @@ serde_json = "1.0"
 thiserror = "2.0"
 num-bigint = "0.4"
 num-traits = "0.2"
-good_lp = { version = "1.8", default-features = false, optional = true }
+good_lp = { version = "=1.14.2", default-features = false, optional = true }
 inventory = "0.3"
 ordered-float = "5.0"
 rand = "0.10"

problemreductions-cli/src/test_support.rs
Lines changed: 2 additions & 0 deletions

@@ -180,6 +180,7 @@ problemreductions::inventory::submit! {
         }),
         capabilities: EdgeCapabilities::aggregate_only(),
         overhead_eval_fn: |_| ProblemSize::new(vec![]),
+        source_size_fn: |_| ProblemSize::new(vec![]),
     }
 }

@@ -202,6 +203,7 @@ problemreductions::inventory::submit! {
         }),
         capabilities: EdgeCapabilities::aggregate_only(),
         overhead_eval_fn: |_| ProblemSize::new(vec![]),
+        source_size_fn: |_| ProblemSize::new(vec![]),
     }
 }

problemreductions-macros/src/lib.rs
Lines changed: 52 additions & 4 deletions

@@ -252,6 +252,47 @@ fn generate_overhead_eval_fn(
     })
 }

+/// Generate a function that extracts the source problem's size fields from `&dyn Any`.
+///
+/// Collects all variable names referenced in the overhead expressions, generates
+/// getter calls for each, and returns a `ProblemSize`.
+fn generate_source_size_fn(
+    fields: &[(String, String)],
+    source_type: &Type,
+) -> syn::Result<TokenStream2> {
+    let src_ident = syn::Ident::new("__src", proc_macro2::Span::call_site());
+
+    // Collect all unique variable names from overhead expressions
+    let mut var_names = std::collections::BTreeSet::new();
+    for (_, expr_str) in fields {
+        let parsed = parser::parse_expr(expr_str).map_err(|e| {
+            syn::Error::new(
+                proc_macro2::Span::call_site(),
+                format!("error parsing overhead expression \"{expr_str}\": {e}"),
+            )
+        })?;
+        for v in parsed.variables() {
+            var_names.insert(v.to_string());
+        }
+    }
+
+    let getter_tokens: Vec<_> = var_names
+        .iter()
+        .map(|var| {
+            let getter = syn::Ident::new(var, proc_macro2::Span::call_site());
+            let name_lit = var.as_str();
+            quote! { (#name_lit, #src_ident.#getter() as usize) }
+        })
+        .collect();
+
+    Ok(quote! {
+        |__any_src: &dyn std::any::Any| -> crate::types::ProblemSize {
+            let #src_ident = __any_src.downcast_ref::<#source_type>().unwrap();
+            crate::types::ProblemSize::new(vec![#(#getter_tokens),*])
+        }
+    })
+}
+
 /// Generate the reduction entry code
 fn generate_reduction_entry(
     attrs: &ReductionAttrs,

@@ -288,21 +329,27 @@
     let source_variant_body = make_variant_fn_body(source_type, &type_generics)?;
     let target_variant_body = make_variant_fn_body(&target_type, &type_generics)?;

-    // Generate overhead and eval fn
-    let (overhead, overhead_eval_fn) = match &attrs.overhead {
+    // Generate overhead, eval fn, and source size fn
+    let (overhead, overhead_eval_fn, source_size_fn) = match &attrs.overhead {
         Some(OverheadSpec::Legacy(tokens)) => {
             let eval_fn = quote! {
                 |_: &dyn std::any::Any| -> crate::types::ProblemSize {
                     panic!("overhead_eval_fn not available for legacy overhead syntax; \
                         migrate to parsed syntax: field = \"expression\"")
                 }
             };
-            (tokens.clone(), eval_fn)
+            let size_fn = quote! {
+                |_: &dyn std::any::Any| -> crate::types::ProblemSize {
+                    crate::types::ProblemSize::new(vec![])
+                }
+            };
+            (tokens.clone(), eval_fn, size_fn)
         }
         Some(OverheadSpec::Parsed(fields)) => {
             let overhead_tokens = generate_parsed_overhead(fields)?;
             let eval_fn = generate_overhead_eval_fn(fields, source_type)?;
-            (overhead_tokens, eval_fn)
+            let size_fn = generate_source_size_fn(fields, source_type)?;
+            (overhead_tokens, eval_fn, size_fn)
         }
         None => {
             return Err(syn::Error::new(

@@ -337,6 +384,7 @@
             reduce_aggregate_fn: None,
             capabilities: #capabilities,
             overhead_eval_fn: #overhead_eval_fn,
+            source_size_fn: #source_size_fn,
         }
     }
 }
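The variable-collection step above (parse each overhead expression, gather the identifiers it references, then emit one getter call per name) can be approximated without syn/quote. The `variables` function below is an illustrative stand-in for the crate's `parser::parse_expr(...).variables()`, not its actual implementation:

```rust
use std::collections::BTreeSet;

// Illustrative sketch: collect the unique identifiers referenced in overhead
// expressions, skipping numeric literals, as generate_source_size_fn does
// before emitting one getter call per name.
fn variables(expr: &str) -> BTreeSet<String> {
    let mut vars = BTreeSet::new();
    let mut cur = String::new();
    // A trailing space flushes the final token.
    for c in expr.chars().chain(std::iter::once(' ')) {
        if c.is_ascii_alphanumeric() || c == '_' {
            cur.push(c);
        } else {
            // Keep identifiers; tokens starting with a digit are literals.
            if !cur.is_empty() && !cur.chars().next().unwrap().is_ascii_digit() {
                vars.insert(cur.clone());
            }
            cur.clear();
        }
    }
    vars
}

fn main() {
    // The three overhead expressions of the HC -> BiconnAug rule below.
    let exprs = ["num_vertices", "0", "num_vertices * (num_vertices - 1) / 2"];
    let mut all = BTreeSet::new();
    for e in &exprs {
        all.extend(variables(e));
    }
    // Only one getter (num_vertices) would be generated for this rule.
    assert_eq!(all.into_iter().collect::<Vec<_>>(), vec!["num_vertices"]);
}
```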

src/rules/cost.rs
Lines changed: 33 additions & 0 deletions

@@ -27,6 +27,39 @@ impl PathCostFn for MinimizeSteps {
     }
 }

+/// Minimize total output size (sum of all output field values).
+///
+/// Prefers reduction paths that produce smaller intermediate and final problems.
+/// Breaks ties that `MinimizeSteps` cannot resolve (e.g., two 2-step paths
+/// where one produces 144 ILP variables and the other 1,332).
+pub struct MinimizeOutputSize;
+
+impl PathCostFn for MinimizeOutputSize {
+    fn edge_cost(&self, overhead: &ReductionOverhead, size: &ProblemSize) -> f64 {
+        let output = overhead.evaluate_output_size(size);
+        output.total() as f64
+    }
+}
+
+/// Minimize steps first, then use output size as tiebreaker.
+///
+/// Each edge has a primary cost of `STEP_WEIGHT` (ensuring fewer-step paths
+/// always win) plus a small overhead-based cost that breaks ties between
+/// equal-step paths.
+pub struct MinimizeStepsThenOverhead;
+
+impl PathCostFn for MinimizeStepsThenOverhead {
+    fn edge_cost(&self, overhead: &ReductionOverhead, size: &ProblemSize) -> f64 {
+        // Use a large step weight to ensure step count dominates.
+        // The overhead tiebreaker uses log1p to compress the range,
+        // keeping it far smaller than STEP_WEIGHT for any realistic problem size.
+        const STEP_WEIGHT: f64 = 1e9;
+        let output = overhead.evaluate_output_size(size);
+        let overhead_tiebreaker = (1.0 + output.total() as f64).ln();
+        STEP_WEIGHT + overhead_tiebreaker
+    }
+}
+
 /// Custom cost function from closure.
 pub struct CustomCost<F>(pub F);

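A quick standalone check of the tie-breaking arithmetic. The `edge_cost` below is a sketch mirroring the logic in `MinimizeStepsThenOverhead`, stripped of the `PathCostFn` trait and the crate's types: the log-compressed tiebreaker separates equal-step paths but can never outweigh a whole extra step.

```rust
// Sketch of the MinimizeStepsThenOverhead edge cost: a constant per-step
// weight plus a log-compressed output-size tiebreaker.
fn edge_cost(output_total: usize) -> f64 {
    const STEP_WEIGHT: f64 = 1e9;
    STEP_WEIGHT + (1.0 + output_total as f64).ln()
}

fn main() {
    // The tiebreaker separates equal-step paths (144 vs 1,332 ILP variables)...
    assert!(edge_cost(144) < edge_cost(1_332));
    // ...but even a usize::MAX-sized output adds less than 45.0 to the cost,
    // so one extra edge (another full STEP_WEIGHT) always dominates.
    assert!(edge_cost(usize::MAX) - 1e9 < 45.0);
}
```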
src/rules/graph.rs
Lines changed: 48 additions & 0 deletions

@@ -904,6 +904,54 @@ impl ReductionGraph {
         result
     }

+    /// Evaluate the cumulative output size along a reduction path.
+    ///
+    /// Walks the path from start to end, applying each edge's overhead
+    /// expressions to transform the problem size at each step.
+    /// Returns `None` if any edge in the path cannot be found.
+    pub fn evaluate_path_overhead(
+        &self,
+        path: &ReductionPath,
+        input_size: &ProblemSize,
+    ) -> Option<ProblemSize> {
+        let mut current_size = input_size.clone();
+        for pair in path.steps.windows(2) {
+            let src = self.lookup_node(&pair[0].name, &pair[0].variant)?;
+            let dst = self.lookup_node(&pair[1].name, &pair[1].variant)?;
+            let edge_idx = self.graph.find_edge(src, dst)?;
+            let edge = &self.graph[edge_idx];
+            current_size = edge.overhead.evaluate_output_size(&current_size);
+        }
+        Some(current_size)
+    }
+
+    /// Compute the source problem's size from a type-erased instance.
+    ///
+    /// Iterates over all registered reduction entries with a matching source name
+    /// and merges their `source_size_fn` results to capture all size fields.
+    /// Different entries may reference different getter methods (e.g., one uses
+    /// `num_vertices` while another also uses `num_edges`).
+    pub fn compute_source_size(name: &str, instance: &dyn Any) -> ProblemSize {
+        let mut merged: Vec<(String, usize)> = Vec::new();
+        let mut seen: HashSet<String> = HashSet::new();
+
+        for entry in inventory::iter::<ReductionEntry> {
+            if entry.source_name == name {
+                let result = std::panic::catch_unwind(std::panic::AssertUnwindSafe(|| {
+                    (entry.source_size_fn)(instance)
+                }));
+                if let Ok(size) = result {
+                    for (k, v) in size.components {
+                        if seen.insert(k.clone()) {
+                            merged.push((k, v));
+                        }
+                    }
+                }
+            }
+        }
+        ProblemSize { components: merged }
+    }
+
     /// Get all incoming reductions to a problem (across all its variants).
     pub fn incoming_reductions(&self, name: &str) -> Vec<ReductionEdgeInfo> {
         let Some(indices) = self.name_to_nodes.get(name) else {
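The first-wins merge in `compute_source_size` can be sketched without the inventory machinery. `merge_sizes` below is an illustrative helper, not part of the crate; it shows how components reported by several entries are combined key-by-key, with the first occurrence of each key winning:

```rust
use std::collections::HashSet;

// Illustrative sketch of the dedup-merge in compute_source_size: size
// components from several registered entries are merged key-by-key, and
// the first occurrence of each key wins.
fn merge_sizes(per_entry: &[Vec<(String, usize)>]) -> Vec<(String, usize)> {
    let mut merged = Vec::new();
    let mut seen: HashSet<String> = HashSet::new();
    for components in per_entry {
        for (k, v) in components {
            if seen.insert(k.clone()) {
                merged.push((k.clone(), *v));
            }
        }
    }
    merged
}

fn main() {
    // One entry reports only num_vertices; another also reports num_edges.
    let a = vec![("num_vertices".to_string(), 6)];
    let b = vec![("num_vertices".to_string(), 6), ("num_edges".to_string(), 9)];
    let merged = merge_sizes(&[a, b]);
    assert_eq!(
        merged,
        vec![("num_vertices".to_string(), 6), ("num_edges".to_string(), 9)]
    );
}
```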
src/rules/hamiltoniancircuit_biconnectivityaugmentation.rs (new file)
Lines changed: 167 additions & 0 deletions

@@ -0,0 +1,167 @@
//! Reduction from HamiltonianCircuit to BiconnectivityAugmentation.
//!
//! Based on the Eswaran & Tarjan (1976) approach:
//!
//! Given a Hamiltonian Circuit instance G = (V, E) with n vertices, construct a
//! BiconnectivityAugmentation instance as follows:
//!
//! 1. Start with an edgeless graph on n vertices.
//! 2. For each pair (u, v) with u < v, create a potential edge with:
//!    - weight 1 if {u, v} is in E
//!    - weight 2 if {u, v} is not in E
//! 3. Set budget B = n.
//!
//! G has a Hamiltonian circuit iff there exists a biconnectivity augmentation of
//! cost exactly n using only weight-1 edges (i.e., original edges).
//!
//! The selected weight-1 edges form a Hamiltonian cycle in G, which is necessarily
//! biconnected. Any augmentation using a weight-2 edge would cost at least n+1,
//! exceeding the budget of n (since at least n edges are needed for biconnectivity).

use crate::models::graph::{BiconnectivityAugmentation, HamiltonianCircuit};
use crate::reduction;
use crate::rules::traits::{ReduceTo, ReductionResult};
use crate::topology::{Graph, SimpleGraph};

/// Result of reducing HamiltonianCircuit to BiconnectivityAugmentation.
///
/// Stores the target problem and the mapping from potential edge indices to
/// vertex pairs for solution extraction.
#[derive(Debug, Clone)]
pub struct ReductionHamiltonianCircuitToBiconnectivityAugmentation {
    target: BiconnectivityAugmentation<SimpleGraph, i32>,
    /// Number of vertices in the original graph.
    num_vertices: usize,
    /// Potential edges as (u, v) pairs, in the same order as the target's potential_weights.
    potential_edges: Vec<(usize, usize)>,
}

impl ReductionResult for ReductionHamiltonianCircuitToBiconnectivityAugmentation {
    type Source = HamiltonianCircuit<SimpleGraph>;
    type Target = BiconnectivityAugmentation<SimpleGraph, i32>;

    fn target_problem(&self) -> &Self::Target {
        &self.target
    }

    fn extract_solution(&self, target_solution: &[usize]) -> Vec<usize> {
        let n = self.num_vertices;
        if n < 3 {
            return vec![0; n];
        }

        // Collect selected edges (those with config value 1)
        let mut adj: Vec<Vec<usize>> = vec![vec![]; n];
        for (i, &(u, v)) in self.potential_edges.iter().enumerate() {
            if i < target_solution.len() && target_solution[i] == 1 {
                adj[u].push(v);
                adj[v].push(u);
            }
        }

        // Check that every vertex has exactly degree 2 (Hamiltonian cycle)
        if adj.iter().any(|neighbors| neighbors.len() != 2) {
            return vec![0; n];
        }

        // Walk the cycle starting from vertex 0
        let mut circuit = Vec::with_capacity(n);
        circuit.push(0);
        let mut prev = 0;
        let mut current = adj[0][0];
        while current != 0 {
            circuit.push(current);
            let next = if adj[current][0] == prev {
                adj[current][1]
            } else {
                adj[current][0]
            };
            prev = current;
            current = next;

            // Safety: if we've visited more than n vertices, something is wrong
            if circuit.len() > n {
                return vec![0; n];
            }
        }

        if circuit.len() == n {
            circuit
        } else {
            vec![0; n]
        }
    }
}

#[reduction(
    overhead = {
        num_vertices = "num_vertices",
        num_edges = "0",
        num_potential_edges = "num_vertices * (num_vertices - 1) / 2",
    }
)]
impl ReduceTo<BiconnectivityAugmentation<SimpleGraph, i32>> for HamiltonianCircuit<SimpleGraph> {
    type Result = ReductionHamiltonianCircuitToBiconnectivityAugmentation;

    fn reduce_to(&self) -> Self::Result {
        let n = self.num_vertices();
        let graph = self.graph();

        // Edgeless initial graph
        let initial_graph = SimpleGraph::empty(n);

        // Create potential edges for all pairs (u, v) with u < v
        let mut potential_weights = Vec::new();
        let mut potential_edges = Vec::new();
        for u in 0..n {
            for v in (u + 1)..n {
                let weight = if graph.has_edge(u, v) { 1 } else { 2 };
                potential_weights.push((u, v, weight));
                potential_edges.push((u, v));
            }
        }

        // Budget = n (exactly enough for n weight-1 edges)
        let budget = n as i32;

        let target = BiconnectivityAugmentation::new(initial_graph, potential_weights, budget);

        ReductionHamiltonianCircuitToBiconnectivityAugmentation {
            target,
            num_vertices: n,
            potential_edges,
        }
    }
}

#[cfg(feature = "example-db")]
pub(crate) fn canonical_rule_example_specs() -> Vec<crate::example_db::specs::RuleExampleSpec> {
    use crate::export::SolutionPair;

    vec![crate::example_db::specs::RuleExampleSpec {
        id: "hamiltoniancircuit_to_biconnectivityaugmentation",
        build: || {
            // Square graph (4-cycle): 0-1-2-3-0
            let source = HamiltonianCircuit::new(SimpleGraph::cycle(4));
            // Potential edges for 4 vertices (indices 0..5):
            //   0: (0,1) w=1, 1: (0,2) w=2, 2: (0,3) w=1,
            //   3: (1,2) w=1, 4: (1,3) w=2, 5: (2,3) w=1
            // HC 0-1-2-3-0 selects edges (0,1),(1,2),(2,3),(0,3) => indices 0,3,5,2
            // Config: [1, 0, 1, 1, 0, 1]
            crate::example_db::specs::rule_example_with_witness::<
                _,
                BiconnectivityAugmentation<SimpleGraph, i32>,
            >(
                source,
                SolutionPair {
                    source_config: vec![0, 1, 2, 3],
                    target_config: vec![1, 0, 1, 1, 0, 1],
                },
            )
        },
    }]
}

#[cfg(test)]
#[path = "../unit_tests/rules/hamiltoniancircuit_biconnectivityaugmentation.rs"]
mod tests;
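The gadget and the extraction walk above can be exercised standalone, using plain tuples instead of the crate's `SimpleGraph`/`BiconnectivityAugmentation` types. `build_potential` and `extract_circuit` below are illustrative names mirroring `reduce_to` and `extract_solution`:

```rust
/// Potential edges over all pairs u < v: weight 1 for original edges, 2 otherwise.
fn build_potential(n: usize, edges: &[(usize, usize)]) -> Vec<(usize, usize, i32)> {
    let mut potential = Vec::new();
    for u in 0..n {
        for v in (u + 1)..n {
            let w = if edges.contains(&(u, v)) { 1 } else { 2 };
            potential.push((u, v, w));
        }
    }
    potential
}

/// Mirror of the extract_solution cycle walk: returns the circuit if the
/// selected potential edges form a single cycle through all n vertices.
fn extract_circuit(
    n: usize,
    potential: &[(usize, usize, i32)],
    selection: &[usize],
) -> Option<Vec<usize>> {
    if n < 3 {
        return None;
    }
    let mut adj = vec![Vec::new(); n];
    for (&(u, v, _), &s) in potential.iter().zip(selection) {
        if s == 1 {
            adj[u].push(v);
            adj[v].push(u);
        }
    }
    // Every vertex must have exactly degree 2.
    if adj.iter().any(|nb| nb.len() != 2) {
        return None;
    }
    // Walk the cycle starting from vertex 0.
    let (mut prev, mut cur, mut circuit) = (0, adj[0][0], vec![0]);
    while cur != 0 {
        circuit.push(cur);
        if circuit.len() > n {
            return None; // safety bound, as in extract_solution
        }
        let next = if adj[cur][0] == prev { adj[cur][1] } else { adj[cur][0] };
        prev = cur;
        cur = next;
    }
    (circuit.len() == n).then_some(circuit)
}

fn main() {
    // 4-cycle 0-1-2-3-0; selecting exactly its edges costs n = 4, the budget.
    let potential = build_potential(4, &[(0, 1), (1, 2), (2, 3), (0, 3)]);
    let selection = [1, 0, 1, 1, 0, 1]; // pairs (0,1),(0,2),(0,3),(1,2),(1,3),(2,3)
    let cost: i32 = potential
        .iter()
        .zip(&selection)
        .map(|(&(_, _, w), &s)| w * s as i32)
        .sum();
    assert_eq!(cost, 4);
    assert_eq!(extract_circuit(4, &potential, &selection), Some(vec![0, 1, 2, 3]));
}
```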
