
🆕 Multichannel Image Reading #825

Merged

shaneahmed merged 167 commits into develop from multichannel-reading on Feb 14, 2026

Conversation

@measty
Collaborator

@measty measty commented Jun 21, 2024

Summary

This PR adds **multichannel (e.g., immunofluorescence) image support** across readers, the TileServer, and the Bokeh visualization app. It introduces a new MultichannelToRGB post-processing pipeline that composites N-channel data to RGB, a channel/color selection UI (with enhancement control) in the app, and corresponding TileServer endpoints to sync state. It also hardens TIFF/OME channel-metadata parsing, adds qptiff samples for testing, and refactors WSIPatchDataset to improve input validation and tissue-mask handling. Additional tests cover edge cases across readers, metadata parsing, the UI, and server routes.


Multichannel Image Support, TileServer & Bokeh UI Enhancements, and Reader/Dataset Hardening


Key Changes

✨ Features

  • Post‑processing for multichannel images

    • New tiatoolbox.utils.postproc_defs.MultichannelToRGB class for converting multi‑channel arrays to RGB, with configurable color dict, channel activity/order, and an enhance factor. Includes extensive unit tests.
  • WSIReader API: post_proc

    • WSIReader.open(..., post_proc="auto" | None | callable) now propagates to all reader types.
      • "auto" applies MultichannelToRGB for multiplex TIFF/virtual inputs (returns RGB).
      • None skips post‑processing (returns native channel count).
      • A callable allows custom post‑processing.
  • Bokeh app – Channel/color UI

    • New UI block (channel table, color table with color picker, “Apply Changes” button, and an Enhance slider) to control which channels are visible and their colors. UI auto‑populates from server.
  • TileServer endpoints

    • GET /tileserver/channels → current {channels, active} state.
    • PUT /tileserver/channels → set color map & active channels.
    • PUT /tileserver/enhance → set global enhancement factor.
    • Safe fallbacks when no multichannel post‑proc is present.
  • QPTIFF sample integration

    • Adds multiplex_example.qptiff (+ small variant) to remote samples; fixtures & tests use them end‑to‑end through app and server.
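Putting the TileServer endpoints above into code, a client session might look like the following sketch. The JSON field names (`colors`, `active`, `val`) are illustrative assumptions, not taken from the diff, and the `requests` calls are shown as comments since they need a running server:

```python
import json

# Hypothetical payloads for the channel endpoints described above; the
# exact JSON schema is an assumption for this sketch.
set_channels_body = {
    "colors": {"0": [255, 0, 0], "1": [0, 255, 0], "2": [0, 0, 255]},
    "active": [0, 2],  # display channels 0 and 2, hide channel 1
}
set_enhance_body = {"val": 1.5}

base = "http://127.0.0.1:5000/tileserver"
# With a running TileServer one would issue, e.g. with requests:
#   requests.get(f"{base}/channels")                          # read state
#   requests.put(f"{base}/channels", json=set_channels_body)  # set colors
#   requests.put(f"{base}/enhance", json=set_enhance_body)    # set enhance
print(json.dumps(set_channels_body, indent=2))
```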

🧠 TIFF/OME Metadata & Reader Hardening

  • Robust parsing of OME-XML channel colors/dyes, ScanColorTable, and FilterColors blocks, with sane fallbacks (auto‑generated colors, tolerant of missing/invalid values). Objective power inference falls back to MPP when missing. Extensive edge‑case tests included.
  • Reader selection for TIFF now prefers the most appropriate backend; qptiff supported via TIFFWSIReader. Some paths that previously resolved to OpenSlide may now return TIFFWSIReader.

🧰 Dataset Refactor

  • WSIPatchDataset:
    • Input validation split into _validate_inputs, with clearer errors.
    • Mask creation factored into _setup_mask_reader (now retries with MPP when power is unavailable).
    • Patch filtering moved to _filter_patches.
    • Fixes auto‑mask behavior when only MPP is present.

🖼️ Docs

  • Adds “Multichannel Images” section to the visualization docs explaining channel selection and color overrides in the UI (including performance notes).

🔧 Other

  • Bokeh slide list now includes *.qptiff; minor UI and server startup robustness tweaks; small lint/style fix.

Breaking / Behavior‑Changing Notes

  • Output shape change with default settings: When opening multiplex images, post_proc="auto" (default) returns RGB (3 channels). To obtain raw N‑channel data, callers must pass post_proc=None.
  • Reader selection: Some TIFF files (incl. qptiff/tiled‑tiff) may now open via TIFFWSIReader instead of OpenSlideWSIReader. Tests and assertions updated to accept either where appropriate.
  • Stricter dataset validation: WSIPatchDataset now validates shapes early and raises more precise errors.

Usage Examples

1) Programmatic reading (RGB composite vs raw channels)

```python
from tiatoolbox.wsicore.wsireader import WSIReader

# Auto composite to RGB (default multichannel behavior)
wsi = WSIReader.open("sample.ome.tiff", post_proc="auto")
rgb = wsi.read_rect((0, 0), (100, 100))      # rgb.shape == (100, 100, 3)

# Get native channels (no post-processing)
wsi_raw = WSIReader.open("sample.ome.tiff", post_proc=None)
raw = wsi_raw.read_rect((0, 0), (100, 100))  # raw.shape == (100, 100, N)
```
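A custom callable can implement the same compositing by hand. The sketch below is a hypothetical post-processor (not the toolbox's MultichannelToRGB implementation) that maps an (H, W, N) patch to RGB via a per-channel color matrix:

```python
import numpy as np


def composite_to_rgb(patch: np.ndarray) -> np.ndarray:
    """Composite an (H, W, N) multichannel patch to uint8 RGB.

    Illustrative only: uses a fixed random color per channel; a real
    pipeline would take colors from image metadata or user config.
    """
    n_channels = patch.shape[-1]
    rng = np.random.default_rng(seed=0)  # deterministic demo colors
    colors = rng.random((n_channels, 3))  # (N, 3) RGB weight per channel
    scaled = patch.astype(np.float32) / max(float(patch.max()), 1.0)
    rgb = np.tensordot(scaled, colors, axes=([-1], [0]))  # (H, W, 3)
    return (np.clip(rgb, 0.0, 1.0) * 255).astype(np.uint8)


# The callable then plugs into the API described above:
# wsi = WSIReader.open("sample.ome.tiff", post_proc=composite_to_rgb)
# wsi.read_rect(...) returns the composited RGB patch.
```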

@codecov

codecov Bot commented Jun 21, 2024

Codecov Report

❌ Patch coverage is 99.77629% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 99.41%. Comparing base (78b797e) to head (db012cf).

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| tiatoolbox/visualization/bokeh_app/main.py | 98.07% | 0 Missing and 1 partial ⚠️ |
Additional details and impacted files
```
@@             Coverage Diff             @@
##           develop     #825      +/-   ##
===========================================
+ Coverage    99.37%   99.41%   +0.03%
===========================================
  Files           71       72       +1
  Lines         9175     9540     +365
  Branches      1197     1267      +70
===========================================
+ Hits          9118     9484     +366
+ Misses          31       29       -2
- Partials        26       27       +1
```

☔ View full report in Codecov by Sentry.
@shaneahmed shaneahmed linked an issue Jun 25, 2024 that may be closed by this pull request
@shaneahmed shaneahmed changed the title from "ENH: initial draft multichannel reading" to "🆕 Multichannel Image Reading" on Jun 25, 2024
@measty measty marked this pull request as ready for review February 6, 2026 22:24
Comment thread: tiatoolbox/models/dataset/classification.py
Member

@shaneahmed shaneahmed left a comment


This looks good and I can successfully load the images in TIAViz. However, I am not clear on how to read the channels in Python. Could you add some examples to the docstring?

@shaneahmed shaneahmed merged commit d9f133f into develop Feb 14, 2026
2 of 3 checks passed
@shaneahmed shaneahmed deleted the multichannel-reading branch February 14, 2026 09:32
@shaneahmed shaneahmed mentioned this pull request Mar 11, 2026
shaneahmed added a commit that referenced this pull request Mar 11, 2026
## TIAToolbox v2.0.0 (2026-03-11)

### ✨ Major Updates and Feature Improvements

#### ⚙️ Engine Redesign (PR #578)
TIAToolbox 2.0.0 introduces a completely re-engineered inference engine designed for significant performance, scalability, and memory-efficiency improvements.

#### Key Enhancements
- A modern processing stack built on **Dask** (parallel/distributed execution) and **Zarr** (chunked, out-of-core storage)
- **Standardised output formats** across all engines:
  - Python `dict`
  - **Zarr**
  - **AnnotationStore** (SQLite-backed)
  - **QuPath JSON**
- Cleaner runtime behavior with reduced warning noise and a unified progress bar
- More predictable memory usage through chunked streaming
- Broader test coverage across engine components

### 🗺️ Improved QuPath Support
Enhancements include:

- Better handling of **GeoJSON**
- Support for **multipoint geometries** (#841)
- Improved semantic output helpers:
  - `dict_to_store_semantic_segmentor` (#926)
  - OME-TIFF probability overlays (#929)

### 🔬 New Nucleus Detection Engine
A dedicated nucleus detection pipeline has been added, built on the redesigned engine for improved accuracy and efficient large-scale processing.

#### 🧠 KongNet Model Family
TIAToolbox 2.0.0 introduces **KongNet**, a high-performance architecture that achieved top results across multiple international challenges:

- 🥇 **1st place: MONKEY Challenge (overall detection)**
- 🥇 **1st place: MIDOG (mitosis detection)**
- ⭐ Top-tier performance on **PUMA**

Multiple pretrained variants are available (CoNIC, PanNuke, MONKEY, PUMA, MIDOG), each with standardised IO configurations.

### 🧬 Expanded Foundation Model Support
Additional foundation models are now supported (#906), broadening the range of high-capacity architectures available for feature extraction and downstream tasks.

### 🖼️ SAM Segmentation in TIAViz
TIAViz now integrates Meta’s Segment Anything Model (SAM), enabling:

- Interactive segmentation
- Rapid region extraction
- Exploratory annotation workflows

Simplified SAM usage (#968) streamlines its integration into analysis pipelines.

### 🧩 Enhanced WSIReader & Metadata Handling
Major improvements include:

- More robust cross-vendor **metadata extraction** (#1001)
- **Multichannel image support** (PR #825) for immunofluorescence and non-RGB modalities
- Simplified Windows installation using `openslide-bin` (no manual DLL steps)
- macOS Tileserver fix (#976)
- Improved DICOM reading (#934)

### ☁️ New Cloud-Native Reader: FsspecJSONWSIReader (PR #897)
A new reader supporting **fsspec-compatible filesystems**, enabling seamless access to WSIs stored on:

- S3
- GCS
- Azure
- HPC clusters
- Any fsspec-supported backend

This enables cloud-native and distributed data workflows.
Contributed by @aacic
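The fsspec URL style is uniform across those backends. The sketch below uses the local filesystem so it runs anywhere; in practice the same pattern applies with `s3://`, `gs://`, or `az://` URLs (given the matching filesystem package):

```python
import os
import tempfile

import fsspec

# Any fsspec URL works the same way; swap "file://" for a remote protocol
# such as "s3://" (with s3fs installed) to target cloud storage.
tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "slide_meta.json")

with fsspec.open("file://" + path, "w") as f:
    f.write('{"levels": 3}')

with fsspec.open("file://" + path, "r") as f:
    content = f.read()
print(content)
```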

### 🤗 Pretrained Models Migrated to Hugging Face
All pretrained models and sample assets have been migrated (#945, #983), improving:

- Download reliability
- Versioning and reproducibility
- Caching and CI integration
- Licensing clarity per model family

### 🛡️ Security, Compatibility & Tooling

#### 🔐 Security & Dependency Updates
- Dependency upgrades
- Internal security improvements
- Explicit workflow permissions added (#1021, #1023)

#### 🐍 Python Version Support
- **Dropped:** Python **3.9**
- **Added:** Python **3.13**
- **Supported:** Python 3.10–3.13
- Updated CUDA wheel source to **cu126**

#### 🛠️ Developer Tooling & CI/CD
- Expanded **mypy** type-checking coverage (#912, #931, #935, #951)
- Updated pre-commit hooks and general formatting
- CI uses **CPU-only PyTorch** for faster, more reliable builds (#974, #979)
- Updated pip install workflow (#1013)
- Added new **Python 3.13 Docker images** (#1014, #1019)

### 🧹 Bug Fixes & Stability Improvements
- Fixed multi-GPU behaviour with `torch.compile` (#923)
- Fixed DICOM reading issue (#934)
- Fixed annotation contour handling with holes (#956)
- Fixed consecutive annotation load bug (#927)
- Fixed SCCNN model issues (#970)
- Fixed MapDe `dist_filter` shape issue (#914)
- Improved notebook reliability on Colab (#1026, #1030)
- macOS TileServer issues resolved (#976)

### 🧭 Migration Guide for Users

#### 🔄 Updating from 1.x to 2.0.0

#### Update calls: replace `.predict()` with `.run()`
```python
# Old
results = segmentor.predict(imgs=[...], ioconfig=config)

# New
results = segmentor.run(images=[...], ioconfig=config)
```

#### Use `patch_mode`: replace `mode="patch"` with `patch_mode=True`, and `mode="tile"` or `mode="wsi"` with `patch_mode=False`
```python
# Old
results = segmentor.predict(imgs=[...], mode="patch", ioconfig=config)

# New
results = segmentor.run(images=[...], patch_mode=True, ioconfig=config)
```

```python
# Old
results = segmentor.predict(imgs=[...], mode="wsi", ioconfig=config)

# New
results = segmentor.run(images=[...], patch_mode=False, ioconfig=config)
```

#### Use the new I/O configs
```python
from tiatoolbox.models.engine.io_config import IOSegmentorConfig

config = IOSegmentorConfig(
    patch_input_shape=(256, 256),
    stride_shape=(240, 240),
    input_resolutions=[{"resolution": 0.25, "units": "mpp"}],
    save_resolution={"units": "baseline", "resolution": 1.0}
)
```

#### Specify the output format
```python
results = segmentor.run(
    images=[...],
    ioconfig=ioconfig,
    output_type="zarr",  # or "dict", "annotationstore", "qupath"
    save_dir="outputs/"
)
```

#### Update imports
- `tiatoolbox.typing` → `tiatoolbox.type_hints`

#### Install requirements
- Python **3.10+** required
- On Windows: install OpenSlide via `pip install openslide-bin`

**Full Changelog:** v1.6.0...v2.0.0

---------

Signed-off-by: Shan E Ahmed Raza <13048456+shaneahmed@users.noreply.github.com>
Co-authored-by: measty <20169086+measty@users.noreply.github.com>
Co-authored-by: Jiaqi-Lv <60471431+Jiaqi-Lv@users.noreply.github.com>
Co-authored-by: adamshephard <39619155+adamshephard@users.noreply.github.com>
Co-authored-by: Mostafa Jahanifar <74412979+mostafajahanifar@users.noreply.github.com>
Co-authored-by: John Pocock <John-P@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Yijie Zhu <120978607+YijieZhu15@users.noreply.github.com>
Co-authored-by: Aleksandar Acic <32873451+aacic@users.noreply.github.com>
Co-authored-by: Abdol A <u2271662@live.warwick.ac.uk>
Co-authored-by: Abishek <abishekraj6797@gmail.com>
Co-authored-by: behnazelhaminia <30952176+behnazelhaminia@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Adam Shephard <adam.shephard@warwick.ac.uk>
Co-authored-by: gozdeg <gozdegunesli@gmail.com>
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Co-authored-by: mbasheer04 <78800844+mbasheer04@users.noreply.github.com>
Co-authored-by: vqdang <24943262+vqdang@users.noreply.github.com>
@shaneahmed shaneahmed mentioned this pull request Mar 11, 2026
shaneahmed added a commit that referenced this pull request Mar 12, 2026
🔖 Release 2.0.0 (#1031)

Labels

enhancement New feature or request

Projects

None yet

Development

Successfully merging this pull request may close these issues.

- Add multichannel viewer [ENH]
- Get ValueError: Unsupported axes YX when using OME-TIFF for nuclear segmentation

7 participants