## Problem Summary
The current tropical tensor network MAP decoder implementation works correctly for d=3 but fails for d>=5, because the tree width of the factor graph, and with it the space complexity of exact contraction, becomes prohibitive.
## Current Results
| Distance | Variables | Factors | Tree Width (sc) | Memory Required |
|---|---|---|---|---|
| d=3 | 78 | 102 | 9 | ~4 KB ✓ |
| d=5 | 502 | 622 | 36 | ~550 GB ✗ |
| d=7 | 1558 | 1894 | 80 | ~10¹⁵ GB ✗ |
For d=3, the tropical TN decoder achieves exact agreement with MWPM (differs=0 across all error rates), validating the implementation correctness.
## Root Cause
The factor graph for circuit-level-noise rotated surface codes has a tree width that grows rapidly with distance (roughly quadratically in the measurements above), so the 2^tw memory cost of exact contraction grows exponentially. This is a fundamental limitation of exact MAP inference, not an implementation issue.
Key observations:
- The time dimension (d rounds) creates high connectivity in the factor graph
- Standard variable elimination/contraction ordering cannot reduce tree width below ~36 for d=5
- Slicing only splits the network into disconnected components without reducing per-component tree width
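The memory column in the table follows directly from the tree width: for binary variables the largest intermediate tensor has 2^sc entries, at an assumed 8 bytes each (float64 tropical weights). A quick sanity check:

```python
# Peak memory of exact contraction: the largest intermediate tensor has
# 2**tree_width entries (binary variables); 8 bytes/entry assumes
# float64 tropical weights.
def peak_memory_bytes(tree_width: int, bytes_per_entry: int = 8) -> int:
    return (2 ** tree_width) * bytes_per_entry

for d, sc in [(3, 9), (5, 36), (7, 80)]:
    print(f"d={d}: tree width {sc} -> {peak_memory_bytes(sc) / 1e9:.3g} GB")
```

This reproduces the ~4 KB, ~550 GB, and ~10¹⁵ GB figures in the table above.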
## Potential Solutions: Approximate Tensor Network Contraction
Based on a literature survey, several approximate methods could enable d>=5 decoding:
1. Bond Dimension Truncation (MPS/MPO)
- Reference: Bravyi et al., "Efficient Algorithms for Maximum Likelihood Decoding in the Surface Code" [arXiv:1405.4883]
- Method: Use Matrix Product States to approximate intermediate tensors during contraction
- Complexity: O(n χ³) where χ is bond dimension (approximation parameter)
- Results: Significant improvement over MWPM with χ≥4 for code capacity noise
- Key insight: Uses DMRG-style boundary contraction
2. Sweep Line Contraction
- Reference: Chubb, "General tensor network decoding of 2D Pauli codes" [arXiv:2101.04125]
- Implementation: SweepContractor.jl
- Method: Sweep a line across the 2D tensor network, maintaining bounded bond dimension
- Complexity: O(n log n + n χ³)
- Results: State-of-the-art thresholds, numerically consistent with optimal
3. Belief Propagation on Tensor Networks
- Combine BP messages with tensor structure
- May inherit BP's limitations on loopy graphs
4. Variational Methods
- Optimize tensor network structure variationally
- Potentially slower but more general
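The bond-dimension truncation at the heart of method 1 can be illustrated in isolation: split a two-site tensor with an SVD and keep only the χ largest singular values. This is a minimal numpy sketch over ordinary (sum-product) arithmetic; the function name and return convention are illustrative, not the decoder's actual API.

```python
import numpy as np

def truncate_bond(theta: np.ndarray, chi: int):
    """SVD-based bond truncation: keep the chi largest singular values.

    Returns the truncated factors and the Frobenius-norm error of the
    discarded part (illustrative sketch).
    """
    u, s, vh = np.linalg.svd(theta, full_matrices=False)
    k = min(chi, len(s))
    err = float(np.sqrt(np.sum(s[k:] ** 2)))
    return u[:, :k], s[:k], vh[:k, :], err

rng = np.random.default_rng(0)
theta = rng.standard_normal((16, 16))
u, s, vh, err = truncate_bond(theta, chi=4)
approx = (u * s) @ vh  # best rank-4 approximation of theta
print(np.linalg.norm(theta - approx))  # equals err
```

The discarded singular values quantify the approximation error at each step, which is what makes χ a controllable accuracy knob.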
## Proposed Implementation Plan
### Phase 1: MPS-based Approximate Contraction
Implement MPS boundary contraction following Bravyi et al.:
- Arrange factor graph on a cylinder/strip
- Contract row-by-row using MPO-MPS multiplication
- Truncate to bond dimension χ after each step
- Recover the approximate MAP assignment via backtracking
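The steps above can be sketched as a single MPO-MPS contraction-and-truncate round in plain numpy. This uses the ordinary sum-product semiring (e.g. on log-probabilities) for illustration; the tensor index conventions are assumptions, and the tropical variant would need a different truncation, as discussed under Tropical Semiring Adaptation.

```python
import numpy as np

def apply_mpo(mps, mpo):
    """Contract one row (MPO) into the boundary MPS; bond dims multiply."""
    out = []
    for A, W in zip(mps, mpo):
        # A: (l, p, r), W: (wl, p, q, wr)  ->  (l*wl, q, r*wr)
        T = np.einsum('lpr,wpqx->lwqrx', A, W)
        l, w, q, r, x = T.shape
        out.append(T.reshape(l * w, q, r * x))
    return out

def truncate_mps(mps, chi):
    """Left-to-right SVD sweep capping every bond dimension at chi."""
    mps = list(mps)
    for i in range(len(mps) - 1):
        l, p, r = mps[i].shape
        u, s, vh = np.linalg.svd(mps[i].reshape(l * p, r), full_matrices=False)
        k = min(chi, len(s))
        mps[i] = u[:, :k].reshape(l, p, k)
        carry = s[:k, None] * vh[:k]  # (k, r) weight carried rightward
        mps[i + 1] = np.einsum('kr,rqs->kqs', carry, mps[i + 1])
    return mps

rng = np.random.default_rng(1)
boundary = [rng.standard_normal(s) for s in [(1, 2, 4), (4, 2, 4), (4, 2, 1)]]
row = [rng.standard_normal(s) for s in [(1, 2, 2, 3), (3, 2, 2, 3), (3, 2, 2, 1)]]
boundary = truncate_mps(apply_mpo(boundary, row), chi=4)
print([A.shape for A in boundary])
```

Without the truncation sweep, each row multiplies the boundary bond dimension by the MPO bond dimension, which is exactly the exponential blow-up the exact contraction hits.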
### Phase 2: Sweep Contraction
Port/adapt SweepContractor.jl approach:
- Planarize the tensor network
- Use sweep line algorithm with truncation
- Works for general 2D tensor networks
### Phase 3: Benchmarking
Compare the approximate TN decoder against:
- MWPM baseline
- BP-OSD decoder

Measure threshold degradation as a function of χ.
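A minimal harness for this phase could look like the following; `sample_shot` and the decoder callables are hypothetical placeholders for the real sampling and decoding interfaces.

```python
import numpy as np

def logical_error_rate(decoder, sample_shot, n_shots=10_000, seed=0):
    """Monte Carlo estimate of a decoder's logical error rate.

    `sample_shot(rng)` -> (syndrome, true_logical) and
    `decoder(syndrome)` -> predicted logical; both are placeholders.
    """
    rng = np.random.default_rng(seed)
    fails = sum(
        decoder(syn) != logical
        for syn, logical in (sample_shot(rng) for _ in range(n_shots))
    )
    return fails / n_shots

# Toy check: a noiseless "sampler" paired with a perfect decoder.
perfect = logical_error_rate(lambda s: 0, lambda rng: ((), 0), n_shots=100)
print(perfect)  # 0.0
```

Running this over a grid of χ values per decoder gives the threshold-degradation-vs-χ data directly.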
## Technical Considerations
### For Circuit-Level Noise
The factor graph is 3D (space × time), not 2D. This complicates:
- Boundary contraction ordering
- Sweep line direction selection
- Bond dimension requirements may be higher
### Tropical Semiring Adaptation
Standard MPS truncation uses SVD, which is defined over the ordinary sum-product semiring. For the tropical (max-plus) semiring:
- a tropical analog of SVD truncation is needed, or
- convert to log-probabilities and use a standard SVD approximation
- either way, the exactness of tropical operations may be lost
## Related Issues
- This explains why slicing alone doesn't help: the sliced components retain the same tree width
- Connected component handling was fixed in recent commits but doesn't address core scalability
## References
- Bravyi et al., arXiv:1405.4883 - MPS decoder
- Chubb, arXiv:2101.04125 - Sweep contraction
- Schotte et al., arXiv:2012.04610 - TN for Fibonacci code
/cc @maintainers