42 changes: 42 additions & 0 deletions docs/getting_started.md
@@ -428,6 +428,48 @@ A: Use UAI format if you want to:
- Perform exact inference or marginal probability calculations
- Use tensor network methods for decoding

## Benchmark Results

This section shows the performance benchmarks for the decoders included in BPDecoderPlus.

### Decoder Threshold Comparison

The threshold is the physical error rate below which increasing the code distance reduces the logical error rate. Our benchmarks compare BP and BP+OSD decoders:

![Threshold Plot](images/threshold_plot.png)

The plot shows the logical error rate versus the physical error rate for several code distances; the point where the curves for different distances cross marks the threshold.
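As a rough illustration of how a crossing point can be read off benchmark data, the sketch below interpolates between two logical-error-rate curves. The arrays are synthetic placeholder values chosen only to demonstrate the method; they are not BPDecoderPlus results, and the helper is not part of the package.

```python
import numpy as np

# Synthetic placeholder data: logical error rate vs physical error rate
# for two code distances. Replace with your own benchmark output.
p = np.array([0.06, 0.07, 0.08, 0.09, 0.10, 0.11, 0.12])
pl_d3 = np.array([0.020, 0.032, 0.048, 0.068, 0.092, 0.120, 0.150])
pl_d5 = np.array([0.008, 0.018, 0.038, 0.070, 0.115, 0.170, 0.235])

# Below threshold the larger code wins (pl_d5 < pl_d3); above, it loses.
diff = pl_d5 - pl_d3
k = int(np.argmax(diff > 0))                 # first sample past the crossing
t = diff[k - 1] / (diff[k - 1] - diff[k])    # linear interpolation weight
p_threshold = p[k - 1] + t * (p[k] - p[k - 1])
print(f"Estimated threshold: {p_threshold:.3f}")   # ~0.088 for this toy data
```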

### BP vs BP+OSD Comparison

![Threshold Comparison](images/threshold_comparison.png)

BP+OSD (Ordered Statistics Decoding) significantly improves upon standard BP, especially near the threshold region.

### Decoding Examples

**BP Failure Case:**

![BP Failure Demo](images/bp_failure_demo.png)

This shows a case where standard BP fails to find the correct error pattern.

**OSD Success Case:**

![OSD Success Demo](images/osd_success_demo.png)

The same syndrome decoded successfully with BP+OSD post-processing.

### Benchmark Summary

| Decoder | Threshold (approx.) | Notes |
|---------|---------------------|-------|
| BP (damped) | ~8% | Fast, but limited by graph loops |
| BP+OSD | ~10% | Higher threshold, slightly slower |
| MWPM (reference) | ~10.3% | Gold standard for comparison |

The BP+OSD decoder achieves near-MWPM performance while being more scalable to larger codes.

## Next Steps

1. **Generate your first dataset** using the Quick Start command
Binary file added docs/images/bp_failure_demo.png
Binary file added docs/images/osd_success_demo.png
Binary file added docs/images/threshold_comparison.png
Binary file added docs/images/threshold_plot.png
8 changes: 8 additions & 0 deletions docs/javascripts/mathjax.js
@@ -0,0 +1,8 @@
window.MathJax = {
tex: {
inlineMath: [["\\(", "\\)"]],
displayMath: [["\\[", "\\]"]],
processEscapes: true,
processEnvironments: true
}
};
42 changes: 30 additions & 12 deletions docs/mathematical_description.md
@@ -7,35 +7,53 @@ See https://github.com/TensorBFS/TensorInference.jl for the Julia reference.

### Factor Graph Notation

- Variables are indexed by \(x_i\) with domain size \(d_i\).
- Factors are indexed by \(f\) and connect a subset of variables.
- Each factor has a tensor (potential) \(\phi_f\) defined over its variables.

### Messages

**Factor to variable message:**

\[
\mu_{f \to x}(x) = \sum_{\{y \in \text{ne}(f), y \neq x\}} \phi_f(x, y, \ldots) \prod_{y \neq x} \mu_{y \to f}(y)
\]

**Variable to factor message:**

\[
\mu_{x \to f}(x) = \prod_{g \in \text{ne}(x), g \neq f} \mu_{g \to x}(x)
\]
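To make the two update rules concrete, here is a minimal NumPy sketch for discrete variables. The function names and data layout are illustrative only and do not reflect the BPDecoderPlus implementation; messages are normalized to sum to one, which is a common convention.

```python
import numpy as np

def factor_to_variable(phi, target_axis, incoming):
    """Sum-product update: multiply each incoming variable->factor message onto
    the factor tensor phi along its axis, then sum out every axis except target_axis."""
    msg = phi.astype(float).copy()
    for axis, m in incoming.items():   # axis of that variable in phi -> its message
        shape = [1] * msg.ndim
        shape[axis] = m.size
        msg = msg * m.reshape(shape)
    out = msg.sum(axis=tuple(a for a in range(msg.ndim) if a != target_axis))
    return out / out.sum()

def variable_to_factor(messages_from_other_factors, domain_size=2):
    """Elementwise product of factor->variable messages from every neighbor
    except the target factor (the caller excludes that one)."""
    out = np.ones(domain_size)
    for m in messages_from_other_factors:
        out = out * m
    return out / out.sum()
```

For a factor connecting two binary variables, `factor_to_variable(phi, 0, {1: msg_from_x2})` returns the message sent to the first variable.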

### Damping

To improve stability on loopy graphs, a damping update is applied:

\[
\mu_{\text{new}} = \alpha \cdot \mu_{\text{old}} + (1 - \alpha) \cdot \mu_{\text{candidate}}
\]

where \(\alpha\) is the damping factor (typically between 0 and 1).
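A one-line sketch of the damped update, assuming messages are NumPy arrays; `alpha` plays the role of the damping factor above.

```python
def damp(mu_old, mu_candidate, alpha=0.5):
    """Damped message update: keep a fraction alpha of the previous message."""
    return alpha * mu_old + (1.0 - alpha) * mu_candidate
```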

### Convergence

We use an \(L_1\) difference threshold between consecutive factor-to-variable
messages to determine convergence:

\[
\max_{f,x} \| \mu_{f \to x}^{(t)} - \mu_{f \to x}^{(t-1)} \|_1 < \epsilon
\]
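A sketch of this stopping rule, assuming the factor-to-variable messages are stored in a dictionary keyed by (factor, variable) edge; the layout is illustrative, not the package's internal representation.

```python
import numpy as np

def converged(old_msgs, new_msgs, eps=1e-6):
    """True once the largest L1 change over all factor->variable edges is below eps."""
    return max(np.abs(new_msgs[e] - old_msgs[e]).sum() for e in new_msgs) < eps
```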

### Marginals

After convergence, variable marginals are computed as:

\[
b(x) = \frac{1}{Z} \prod_{f \in \text{ne}(x)} \mu_{f \to x}(x)
\]

The normalization constant \(Z\) is obtained by summing the unnormalized vector:

\[
Z = \sum_x \prod_{f \in \text{ne}(x)} \mu_{f \to x}(x)
\]
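A sketch of the marginal computation, again with illustrative names: multiply the incoming factor-to-variable messages elementwise and divide by their sum, which is the \(Z\) above.

```python
import numpy as np

def marginal(incoming_msgs, domain_size=2):
    """b(x) = (1/Z) * prod_f mu_{f->x}(x), with Z the sum of the unnormalized vector."""
    b = np.ones(domain_size)
    for m in incoming_msgs:       # one message per factor neighboring x
        b = b * m
    Z = b.sum()                   # normalization constant
    return b / Z
```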
6 changes: 6 additions & 0 deletions mkdocs.yml
@@ -47,6 +47,12 @@ markdown_extensions:
- pymdownx.details
- attr_list
- md_in_html
- pymdownx.arithmatex:
generic: true

extra_javascript:
- javascripts/mathjax.js
- https://unpkg.com/mathjax@3/es5/tex-mml-chtml.js

nav:
- Home: index.md