🆕 Multichannel Image Reading #825
Merged
shaneahmed merged 167 commits into develop from multichannel-reading on Feb 14, 2026
Conversation
Codecov Report ❌ Patch coverage is …
Additional details and impacted files:

```diff
@@           Coverage Diff            @@
##           develop     #825   +/-   ##
===========================================
+ Coverage    99.37%   99.41%   +0.03%
===========================================
  Files           71       72       +1
  Lines         9175     9540     +365
  Branches      1197     1267      +70
===========================================
+ Hits          9118     9484     +366
+ Misses          31       29       -2
- Partials        26       27       +1
```

View full report in Codecov by Sentry.
…toolbox into multichannel-reading
…Analytics/tiatoolbox into multichannel-reading
shaneahmed
reviewed
Feb 13, 2026
shaneahmed
reviewed
Feb 13, 2026
shaneahmed
reviewed
Feb 13, 2026
Member
shaneahmed
left a comment
This looks good and I can successfully load the images in TIAViz. However, I am not clear how to read the channels in Python. Could you add some examples to the docstring?
shaneahmed
approved these changes
Feb 14, 2026
shaneahmed
added a commit
that referenced
this pull request
Mar 11, 2026
## TIAToolbox v2.0.0 (2026-03-11)

### ✨ Major Updates and Feature Improvements

#### ⚙️ Engine Redesign (PR #578)

TIAToolbox 2.0.0 introduces a completely re-engineered inference engine designed for significant performance, scalability, and memory-efficiency improvements.

#### Key Enhancements

- A modern processing stack built on **Dask** (parallel/distributed execution) and **Zarr** (chunked, out-of-core storage)
- **Standardised output formats** across all engines:
  - Python `dict`
  - **Zarr**
  - **AnnotationStore** (SQLite-backed)
  - **QuPath JSON**
- Cleaner runtime behavior with reduced warning noise and a unified progress bar
- More predictable memory usage through chunked streaming
- Broader test coverage across engine components

### 🗺️ Improved QuPath Support

Enhancements include:

- Better handling of **GeoJSON**
- Support for **multipoint geometries** (#841)
- Improved semantic output helpers:
  - `dict_to_store_semantic_segmentor` (#926)
  - OME-TIFF probability overlays (#929)

### 🔬 New Nucleus Detection Engine

A dedicated nucleus detection pipeline has been added, built on the redesigned engine for improved accuracy and efficient large-scale processing.

#### 🧠 KongNet Model Family

TIAToolbox 2.0.0 introduces **KongNet**, a high-performance architecture that achieved top results across multiple international challenges:

- 🥇 **1st place: MONKEY Challenge (overall detection)**
- 🥇 **1st place: MIDOG (mitosis detection)**
- ⭐ Top-tier performance on **PUMA**

Multiple pretrained variants are available (CoNIC, PanNuke, MONKEY, PUMA, MIDOG), each with standardised IO configurations.

### 🧬 Expanded Foundation Model Support

Additional foundation models are now supported (#906), broadening the range of high-capacity architectures available for feature extraction and downstream tasks.

### 🖼️ SAM Segmentation in TIAViz

TIAViz now integrates Meta’s Segment Anything Model (SAM), enabling:

- Interactive segmentation
- Rapid region extraction
- Exploratory annotation workflows

Simplified SAM usage (#968) streamlines its integration into analysis pipelines.

### 🧩 Enhanced WSIReader & Metadata Handling

Major improvements include:

- More robust cross-vendor **metadata extraction** (#1001)
- **Multichannel image support** (PR #825) for immunofluorescence and non-RGB modalities
- Simplified Windows installation using `openslide-bin` (no manual DLL steps)
- macOS Tileserver fix (#976)
- Improved DICOM reading (#934)

### ☁️ New Cloud-Native Reader: FsspecJSONWSIReader (PR #897)

A new reader supporting **fsspec-compatible filesystems**, enabling seamless access to WSIs stored on:

- S3
- GCS
- Azure
- HPC clusters
- Any fsspec-supported backend

This enables cloud-native and distributed data workflows. Contributed by @aacic.

### 🤗 Pretrained Models Migrated to Hugging Face

All pretrained models and sample assets have been migrated (#945, #983), improving:

- Download reliability
- Versioning and reproducibility
- Caching and CI integration
- Licensing clarity per model family

### 🛡️ Security, Compatibility & Tooling

#### 🔐 Security & Dependency Updates

- Dependency upgrades
- Internal security improvements
- Explicit workflow permissions added (#1021, #1023)

#### 🐍 Python Version Support

- **Dropped:** Python **3.9**
- **Added:** Python **3.13**
- **Supported:** Python 3.10–3.13
- Updated CUDA wheel source to **cu126**

#### 🛠️ Developer Tooling & CI/CD

- Expanded **mypy** type-checking coverage (#912, #931, #935, #951)
- Updated pre-commit hooks and general formatting
- CI uses **CPU-only PyTorch** for faster, more reliable builds (#974, #979)
- Updated pip install workflow (#1013)
- Added new **Python 3.13 Docker images** (#1014, #1019)

### 🧹 Bug Fixes & Stability Improvements

- Fixed multi-GPU behaviour with `torch.compile` (#923)
- Fixed DICOM reading issue (#934)
- Fixed annotation contour handling with holes (#956)
- Fixed consecutive annotation load bug (#927)
- Fixed SCCNN model issues (#970)
- Fixed MapDe `dist_filter` shape issue (#914)
- Improved notebook reliability on Colab (#1026–#1030)
- macOS TileServer issues resolved (#976)

### 🧭 Migration Guide for Users

#### 🔄 Updating from 1.x to 2.0.0

#### Update calls: replace `.predict()` with `.run()`

```python
# Old
results = segmentor.predict(imgs=[...], ioconfig=config)

# New
results = segmentor.run(images=[...], ioconfig=config)
```

#### Use `patch_mode`: replace `mode="patch"` with `patch_mode=True`, and `mode="tile"` or `mode="wsi"` with `patch_mode=False`

```python
# Old
results = segmentor.predict(imgs=[...], mode="patch", ioconfig=config)

# New
results = segmentor.run(images=[...], patch_mode=True, ioconfig=config)
```

```python
# Old
results = segmentor.predict(imgs=[...], mode="wsi", ioconfig=config)

# New
results = segmentor.run(images=[...], patch_mode=False, ioconfig=config)
```

#### Use the new I/O configs

```python
from tiatoolbox.models.engine.io_config import IOSegmentorConfig

config = IOSegmentorConfig(
    patch_input_shape=(256, 256),
    stride_shape=(240, 240),
    input_resolutions=[{"resolution": 0.25, "units": "mpp"}],
    save_resolution={"units": "baseline", "resolution": 1.0},
)
```

#### Specify the output format

```python
results = segmentor.run(
    images=[...],
    ioconfig=ioconfig,
    output_type="zarr",  # or "dict", "annotationstore", "qupath"
    save_dir="outputs/",
)
```

#### Update imports

- `tiatoolbox.typing` → `tiatoolbox.type_hints`

#### Install requirements

- Python **3.10+** required
- On Windows: install OpenSlide via `pip install openslide-bin`

**Full Changelog:** v1.6.0...v2.0.0

---------

Signed-off-by: Shan E Ahmed Raza <13048456+shaneahmed@users.noreply.github.com>
Co-authored-by: measty <20169086+measty@users.noreply.github.com>
Co-authored-by: Jiaqi-Lv <60471431+Jiaqi-Lv@users.noreply.github.com>
Co-authored-by: adamshephard <39619155+adamshephard@users.noreply.github.com>
Co-authored-by: Mostafa Jahanifar <74412979+mostafajahanifar@users.noreply.github.com>
Co-authored-by: John Pocock <John-P@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Yijie Zhu <120978607+YijieZhu15@users.noreply.github.com>
Co-authored-by: Aleksandar Acic <32873451+aacic@users.noreply.github.com>
Co-authored-by: Abdol A <u2271662@live.warwick.ac.uk>
Co-authored-by: Abishek <abishekraj6797@gmail.com>
Co-authored-by: behnazelhaminia <30952176+behnazelhaminia@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Adam Shephard <adam.shephard@warwick.ac.uk>
Co-authored-by: gozdeg <gozdegunesli@gmail.com>
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Co-authored-by: mbasheer04 <78800844+mbasheer04@users.noreply.github.com>
Co-authored-by: vqdang <24943262+vqdang@users.noreply.github.com>
shaneahmed
added a commit
that referenced
this pull request
Mar 12, 2026
🔖 Release 2.0.0 (#1031)
## Summary
This PR adds **multichannel (e.g., immunofluorescence) image support** across readers, the TileServer, and the Bokeh visualization app. It introduces a new `MultichannelToRGB` post-processing pipeline that composites N-channel data to RGB, a channel/color selection UI (with enhancement control) in the app, and corresponding TileServer endpoints to sync state. It also hardens TIFF/OME channel-metadata parsing, adds qptiff samples for testing, and refactors `WSIPatchDataset` to improve input validation and tissue-mask handling. Additional tests cover edge cases across readers, metadata parsing, UI, and server routes.
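The compositing step can be sketched in plain Python. This is an illustrative approximation only, under assumed conventions (per-channel `(r, g, b)` colors in `[0, 1]`, 8-bit intensities, a global `enhance` multiplier); it is not the actual `MultichannelToRGB` implementation:

```python
# Illustrative sketch: blend N channel intensities into one RGB pixel.
# Assumptions (not from the PR): colors are (r, g, b) tuples in [0, 1],
# intensities are 0-255, and `enhance` is a global brightness multiplier.

def composite_to_rgb(pixel_channels, colors, active, enhance=1.0):
    """Blend one pixel's channel intensities into an 8-bit RGB triple."""
    rgb = [0.0, 0.0, 0.0]
    for value, color, is_active in zip(pixel_channels, colors, active):
        if not is_active:
            continue  # inactive channels are excluded from the composite
        for i in range(3):
            rgb[i] += value * color[i] * enhance
    # Clip the accumulated sums to the valid 8-bit range.
    return tuple(min(255, max(0, round(c))) for c in rgb)


# Two channels: a DAPI-like blue channel and a green marker, both active.
pixel = composite_to_rgb(
    [200, 100],
    colors=[(0.0, 0.0, 1.0), (0.0, 1.0, 0.0)],
    active=[True, True],
)
print(pixel)  # (0, 100, 200)
```

Raising `enhance` brightens all active channels uniformly and saturates at 255, which matches the role of a single global enhancement factor described above.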
## Key Changes

### ✨ Features
#### Post-processing for multichannel images

- New `tiatoolbox.utils.postproc_defs.MultichannelToRGB` class for converting multi-channel arrays to RGB, with a configurable color dict, channel activity/order, and an `enhance` factor. Includes extensive unit tests.

#### WSIReader API: `post_proc`

- `WSIReader.open(..., post_proc="auto" | None | callable)` now propagates to all reader types.
- `"auto"` applies `MultichannelToRGB` for multiplex TIFF/virtual inputs (returns RGB).
- `None` skips post-processing (returns the native channel count).

#### Bokeh app – Channel/color UI
#### TileServer endpoints

- `GET /tileserver/channels` → current `{channels, active}` state.
- `PUT /tileserver/channels` → set color map & active channels.
- `PUT /tileserver/enhance` → set global enhancement factor.

#### QPTIFF sample integration
- Adds `multiplex_example.qptiff` (+ small variant) to remote samples; fixtures & tests use them end-to-end through app and server.

### 🧠 TIFF/OME Metadata & Reader Hardening
- …, `ScanColorTable`, and `FilterColors` blocks, with sane fallbacks (auto-generated colors; tolerant of missing/invalid values). Objective power inference falls back to MPP when missing. Extensive edge-case tests included.
- Some paths that previously resolved to OpenSlide may now return `TIFFWSIReader`.

### 🧰 Dataset Refactor
- `WSIPatchDataset`:
  - `_validate_inputs`, with clearer errors.
  - `_setup_mask_reader` (now retries with MPP when power is unavailable).
  - `_filter_patches`.

### 🖼️ Docs
### 🔧 Other

- `*.qptiff`; minor UI and server-startup robustness tweaks; small lint/style fix.

## Breaking / Behavior-Changing Notes
- `post_proc="auto"` (default) returns RGB (3 channels). To obtain raw N-channel data, callers must pass `post_proc=None`.
- Some inputs now resolve to `TIFFWSIReader` instead of `OpenSlideWSIReader`. Tests and assertions updated to accept either where appropriate.
- `WSIPatchDataset` now validates shapes early and raises more precise errors.

## Usage Examples
### 1) Programmatic reading (RGB composite vs raw channels)
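A sketch of the two modes, based on the `post_proc` behavior described in this PR. The file path is a placeholder, and the exact return shapes depend on the input image:

```python
def read_examples(path="multiplex_example.qptiff"):
    """Read the same region as an RGB composite and as raw channels.

    Sketch based on the post_proc behavior described in this PR;
    the file path is a placeholder.
    """
    from tiatoolbox.wsicore.wsireader import WSIReader

    # Default: post_proc="auto" composites N channels down to RGB
    # (applies MultichannelToRGB for multiplex TIFF/virtual inputs).
    rgb_reader = WSIReader.open(path)
    rgb_region = rgb_reader.read_rect(
        location=(0, 0), size=(512, 512), resolution=0, units="level"
    )
    # rgb_region has 3 channels.

    # post_proc=None skips post-processing and returns the native
    # channel count (e.g. N > 3 for immunofluorescence data).
    raw_reader = WSIReader.open(path, post_proc=None)
    raw_region = raw_reader.read_rect(
        location=(0, 0), size=(512, 512), resolution=0, units="level"
    )
    return rgb_region, raw_region
```

The import is deferred inside the function so the snippet can be pasted into any module without requiring TIAToolbox at import time.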