
[AI] Add neural restore lighttable module for AI denoise and upscale #20523

Open
andriiryzhkov wants to merge 25 commits into darktable-org:master from andriiryzhkov:neural-restore

Conversation

@andriiryzhkov
Contributor

This is the third and improved part of the original PR #20322.

Summary

  • Add a new lighttable/darkroom utility module (neural_restore) that provides AI-based image restoration using ONNX backend models
  • Supports two operations via a tabbed notebook UI: denoise (e.g. NIND UNet) and upscale (e.g. BSRGAN at 2x/4x)
  • Includes an interactive split before/after preview with draggable divider, on-demand generation (click to start), and a detail recovery slider that uses wavelet (DWT) decomposition to recover fine texture lost during denoising
  • Batch processing runs as a background job with progress reporting, cancellation support, tiled inference with overlap to avoid seam artifacts, and memory-aware tile size selection
  • Output is written as 32-bit float TIFF and automatically imported into the library, grouped with the source image
  • Register denoise-nind and upscale-bsrgan models in the AI model registry (ai_models.json)
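The tiled inference with overlap mentioned above can be sketched as follows. This is an illustrative Python sketch, not darktable's C implementation; the `run_model` callback, tile size, and overlap values are placeholders. The idea is to cross-fade overlapping tiles with per-pixel weights so no hard seam survives:

```python
import numpy as np

def blend_weights(h, w, overlap):
    """Per-tile weight map that ramps up linearly inside the overlap
    margin so adjacent tiles cross-fade instead of producing seams."""
    wy = np.minimum(np.arange(h) + 1, np.arange(h)[::-1] + 1)
    wx = np.minimum(np.arange(w) + 1, np.arange(w)[::-1] + 1)
    wy = np.clip(wy / max(overlap, 1), 0.0, 1.0)
    wx = np.clip(wx / max(overlap, 1), 0.0, 1.0)
    return wy[:, None] * wx[None, :]

def tiled_inference(img, run_model, tile=256, overlap=32):
    """Run `run_model` ((H, W, C) -> (H, W, C)) over overlapping tiles
    and blend the results with the weight maps to hide tile seams."""
    H, W, C = img.shape
    acc = np.zeros_like(img, dtype=np.float32)
    wsum = np.zeros((H, W, 1), dtype=np.float32)
    step = tile - overlap
    for y in range(0, H, step):
        for x in range(0, W, step):
            # Clamp the last tile to the image border instead of padding
            y0 = min(y, max(H - tile, 0))
            x0 = min(x, max(W - tile, 0))
            out = run_model(img[y0:y0 + tile, x0:x0 + tile])
            w = blend_weights(out.shape[0], out.shape[1], overlap)[..., None]
            acc[y0:y0 + out.shape[0], x0:x0 + out.shape[1]] += out * w
            wsum[y0:y0 + out.shape[0], x0:x0 + out.shape[1]] += w
    return acc / np.maximum(wsum, 1e-8)
```

With an identity model the blended result reproduces the input exactly, which is a quick way to sanity-check that the weighting introduces no tiling artifacts of its own.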

Details

  • Built only when USE_AI=ON (cmake option, default OFF)
  • Preview thread uses atomic sequence counter for cancellation and is joined before cleanup to avoid use-after-free
  • Pixel pipeline: linear Rec.709 → sRGB before inference, sRGB → linear after; planar NCHW layout for ONNX models
  • Detail recovery: extracts luminance residual (original − denoised), applies per-band wavelet thresholding to separate noise from texture, blends filtered detail back at user-controlled strength
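The detail-recovery step in the last bullet can be illustrated with a simplified sketch. This replaces the per-band DWT thresholding described above with a plain soft-threshold on the residual (one band, no wavelet transform), so it only demonstrates the principle, not the actual module code:

```python
import numpy as np

def recover_detail(original, denoised, strength=0.5, threshold=0.02):
    """Blend filtered detail from the original back into the denoised image.

    residual = original - denoised contains both lost texture and noise;
    soft-thresholding keeps large-amplitude structure (texture) and
    suppresses small-amplitude residual (noise). `strength` plays the
    role of the user-facing slider in [0, 1].
    """
    residual = np.asarray(original) - np.asarray(denoised)
    # Soft threshold: shrink every residual value toward zero by `threshold`
    filtered = np.sign(residual) * np.maximum(np.abs(residual) - threshold, 0.0)
    return denoised + strength * filtered
```

At `strength=0` the output is the pure model result; at `strength=1` all above-threshold residual (presumed texture) is restored.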
(screenshots: denoise and upscale previews)

Fixes: #19310

@esq4
Contributor

esq4 commented Mar 14, 2026

Excellent! I was looking forward to it :)
But one question.
Output is written as 32-bit float TIFF and automatically imported into the library, grouped with the source image
That's very hard on the disk. Have you considered integrating it into the export like in CommReteris/nind-denoise?
With this implementation, the intermediate TIFF can be created in a temporary directory (I even placed it on tmpfs on my Linux) and deleted immediately after the export is complete.
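The temporary-file flow described here could look roughly like this. A minimal sketch under stated assumptions: `export_tiff` and `run_denoise` are hypothetical callbacks standing in for darktable's export and inference steps, which are C code, not Python:

```python
import os
import tempfile

def denoise_via_temp_tiff(image_id, export_tiff, run_denoise, final_path):
    """Write the intermediate TIFF to a temp dir (tmpfs-friendly) and
    delete it as soon as the final result has been produced."""
    with tempfile.TemporaryDirectory() as tmpdir:
        intermediate = os.path.join(tmpdir, f"{image_id}.tif")
        export_tiff(image_id, intermediate)    # hypothetical export step
        run_denoise(intermediate, final_path)  # hypothetical inference step
    # TemporaryDirectory removes the intermediate file on exit
    return final_path
```

Pointing `tmpdir` at tmpfs keeps the large 32-bit intermediate off the disk entirely, as suggested.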

@wpferguson
Member

How do these models fit in with the "acceptable" model conversation?

If we merge a module that requires a model that's not "acceptable", then darktable is "endorsing"/requiring the non-acceptable model.

@andriiryzhkov
Contributor Author

@wpferguson:

How do these models fit in with the "acceptable" model conversation?

These models easily meet the criteria for open-source AI. Please see the details here:

@andriiryzhkov
Contributor Author

@esq4: Good point about disk usage. The current approach (import as grouped image) is intentional — it gives you a denoised/upscaled source that you can further develop in darkroom and compare via grouping. This is conceptually different from export, which is a final output step.

More flexibility would definitely be helpful here. What I think can be added as extra parameters:

  • Choice between 16-bit and 32-bit float TIFF (halves the size when full precision isn't needed)
  • Option to auto-import into the library or not
  • Maybe, configurable output directory (so you can point it to tmpfs or a fast scratch disk)

As for export-time denoising (like nind-denoise) — that's a different use case but a valid one. It could be added to the export module down the road as a complementary feature.

@wpferguson
Member

BSRGAN only meets "open source tooling", and in the limitations it says: "Training datasets Flickr2K/WED/OST do not have explicit open-source licenses".

If I look at some of the other models (SAM) they require data destruction in 3 months. Does that mean I have to destroy my edits? It also says no commercial use, so if I sell one of my images am I in violation?

I'm sorry, but this is a minefield. Somehow we need to decide a quick way to determine if a model is acceptable.

Do we use the OSAID, and if so what MOF? It seems Class I, Open Science, is fine. However Class II, Open Tooling, seems to come with lots of limitations/questions. If we decide to use Class II, how are we going to communicate the limitations to the users? Don't expect them to read, we already know how well that works.

@victoryforce
Collaborator

If I look at some of the other models (SAM) they require data destruction in 3 months.

You are referring to clause 7 of SA-1B DATASET RESEARCH LICENSE. But that is a requirement for the training dataset, not the model. You can use that dataset for your research, for training your own model, but you can't keep it indefinitely. What's the problem?

Does that mean I have to destroy my edits?

Absolutely not. This is a conclusion from your statement above that is not true.

It also says no commercial use, so if I sell one of my images am I in violation?

Also no. The user is using the model and not the training dataset.

I'm sorry, but this is a minefield.

I'm also sorry, but this is NOT a minefield. Wanna see photos of what a minefield actually is? :)

Somehow we need to decide a quick way to determine if a model is acceptable.

Too vague. What is "acceptable"? Why should we decide what is acceptable for users and not the users themselves?

@KarlMagnusLarsson

KarlMagnusLarsson commented Mar 15, 2026

Thank you for this PR. It works for me. The denoise and upscale functions work. I use the nind and bsrgan models from preferences -> AI after downloading them.

If I test this PR on top of the git master branch I get a fallback to CPU and cannot activate NVIDIA CUDA. The denoise and, even more so, the upscale are very heavy on the CPU, and I would say that a GPU is essential (not so for AI masks).

Workaround:
If I test this PR + PR #20522 (comment) (Enable GPU for ONNX runtime on GNU/Linux) then I can run NVIDIA CUDA and GPU. (Linux Debian, NVIDIA QUADRO RTX 4000 8GB, driver 550.163.01, CUDA 12.4).

I found a couple of preliminary issues:

  • The generated 32-bit floating point TIFFs (denoised, upscaled) do not contain a color profile. The normal darktable export module allows picking a color profile and a rendering intent, EDIT: which is then stored in the output file. I can see in the details that we have: "Pixel pipeline: linear Rec.709 → sRGB before inference, sRGB → linear after; planar NCHW layout for ONNX models". EDIT2: I can use the darktable export module and select linear Rec.709, if I want to convert to and embed linear Rec.709 in the output result file.
  • The EXIF data is not carried over from the source into the TIFF.

@piratenpanda
Contributor

piratenpanda commented Mar 15, 2026

I'm getting:

2026-03-15 13:45:21.317621938 [W:onnxruntime:DarktableAI, migraphx_execution_provider.cc:1267 compile_program] Model Compile: Complete migraphx_parse_onnx_buffer: Error: /longer_pathname_so_that_rpms_can_support_packaging_the_debug_info_for_all_os_profiles/src/AMDMIGraphX/src/include/migraphx/op/concat.hpp:98: normalize_compute_shape: CONCAT: all input dimensions should match along axis 2
2026-03-15 13:45:21.328168818 [E:onnxruntime:, sequential_executor.cc:572 ExecuteKernel] Non-zero status code returned while running MGXKernel_graph_main_graph_8823259983746908714_2 node. Name:'MIGraphXExecutionProvider_MGXKernel_graph_main_graph_8823259983746908714_2_2' Status Message: Failed to call function

when running this on my RX 6700 XT with HSA_OVERRIDE_GFX_VERSION=10.3.0

also the bsrgan model won't download, throwing an error about a missing integrity check as it couldn't download the checksum

@jenshannoschwalm
Collaborator

Too vague. What is "acceptable"? Why should we decide what is acceptable for users and not the users themselves?

I guess you have recognized that there is some problem within the dt dev community about exactly this point. And yes, if we as devs decide - we don't want something for whatever reason that may be - it's not a user decision at all.

Maybe minefield was not the perfect wording. But you just should accept that for some of the long time devs the "who did and how a model was made" is absolutely critical. For me personally a razor-sharp yes-or-no.

@piratenpanda
Contributor

piratenpanda commented Mar 15, 2026

CPU seems to work. But I think it should export to the default export folder. And will it in future also be possible to integrate it right into the normal export workflow?

Also without the override and AI set to off in the setting it crashes darktable:

darktable 
2026-03-15 14:09:51.924346434 [W:onnxruntime:DarktableAI, migraphx_execution_provider.cc:167 MIGraphXExecutionProvider] [MIGraphX EP] MIGraphX ENV Override Variables Set:
2026-03-15 14:09:51.968774532 [W:onnxruntime:, session_state.cc:1327 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2026-03-15 14:09:51.968788808 [W:onnxruntime:, session_state.cc:1329 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
2026-03-15 14:09:52.013621785 [W:onnxruntime:DarktableAI, migraphx_execution_provider.cc:1262 compile_program] Model Compile: Begin

rocBLAS error: Cannot read /opt/rocm/lib/../lib/migraphx/lib/../../rocblas/library/TensileLibrary.dat: Datei oder Verzeichnis nicht gefunden for GPU arch : gfx1031
 List of available TensileLibrary Files : 
"/opt/rocm/lib/../lib/migraphx/lib/../../rocblas/library/TensileLibrary_lazy_gfx908.dat"
"/opt/rocm/lib/../lib/migraphx/lib/../../rocblas/library/TensileLibrary_lazy_gfx1101.dat"
"/opt/rocm/lib/../lib/migraphx/lib/../../rocblas/library/TensileLibrary_lazy_gfx1150.dat"
"/opt/rocm/lib/../lib/migraphx/lib/../../rocblas/library/TensileLibrary_lazy_gfx1100.dat"
"/opt/rocm/lib/../lib/migraphx/lib/../../rocblas/library/TensileLibrary_lazy_gfx1201.dat"
"/opt/rocm/lib/../lib/migraphx/lib/../../rocblas/library/TensileLibrary_lazy_gfx1030.dat"
"/opt/rocm/lib/../lib/migraphx/lib/../../rocblas/library/TensileLibrary_lazy_gfx90a.dat"
"/opt/rocm/lib/../lib/migraphx/lib/../../rocblas/library/TensileLibrary_lazy_gfx942.dat"
"/opt/rocm/lib/../lib/migraphx/lib/../../rocblas/library/TensileLibrary_lazy_gfx950.dat"
"/opt/rocm/lib/../lib/migraphx/lib/../../rocblas/library/TensileLibrary_lazy_gfx1102.dat"
"/opt/rocm/lib/../lib/migraphx/lib/../../rocblas/library/TensileLibrary_lazy_gfx1200.dat"
"/opt/rocm/lib/../lib/migraphx/lib/../../rocblas/library/TensileLibrary_lazy_gfx1151.dat"
Abgebrochen                (Speicherabzug geschrieben) darktable

I think it should just disable AI and grey out the checkbox, or only allow CPU

@TurboGit
Member

Note that, as for the AI masking, there are no models installed or downloaded automatically. All this stays in the hands of the end-users, so everyone can decide depending on their own sensibility. I recognize that we have different perceptions about AI and I would never allow any model to be installed and/or delivered by default. Even the default build from source has no AI support; one needs to pass a specific option to enable this.

I really think the current state respects everyone's choice.

@andriiryzhkov
Contributor Author

@piratenpanda: thank you for testing and feedback.

Regarding the error you got: when running neural restore with the MIGraphX provider on an RX 6700 XT, the model compilation fails because MIGraphX can't handle dynamic shapes in concat operations with ORT_ENABLE_ALL optimization. I am thinking about how we can optionally enable additional provider-dependent configs per model.

also the bsrgan model won't download, throwing an error about a missing integrity check as it couldn't download the checksum

That's interesting, it works on my side. Thinking of possible causes: can you double-check that in darktablerc you have the correct repository config?

plugins/ai/repository=darktable-org/darktable-ai

Also without the override and AI set to off in the setting it crashes darktable:
...
I think it should just disable AI and make the checkbox grey or only allow CPU

Agree on disabling AI preferences when AI is disabled. This feature, along with a proper AI actions block when AI is disabled, is implemented in PR #20534. It should also fix the startup crash when AI is disabled.

I would really appreciate your help testing it further.

@wpferguson
Member

I'm also sorry, but this is NOT a minefield. Wanna see photos of what a minefield actually is? :)

😢

Why should we decide what is acceptable for users and not the users themselves?

If the dev specifies a model necessary to run the module, it's not a user choice, it's a dev/darktable choice.

For example

I could take the GIMP lua script and replace gimp with photoshop and then add it to the lua-scripts repository and put the "blame" on the user for running it. The open source community would see that as darktable endorsing/requiring/encouraging the use of photoshop.

But...

Someone could build a script that lets you specify the external executable, such as ext_editor.lua, and you can decide for yourself what executables you want to run even if it's photoshop, capture one, dxo, etc, etc, etc. That is a user choice and darktable has no say in the user's decision.

The ONLY way the model can be a user choice is if darktable has NO SAY in the decision.

@piratenpanda
Contributor

piratenpanda commented Mar 15, 2026

That's interesting. It works on my side. Thinking of possible causes - can you double-check that in darktablerc you have correct repository config?

Indeed, it was still the old one. After changing to the right one, it works fine.

Agree on disabling AI preferences when AI is disabled. This feature along with with proper AI actions block when AI is disabled implemented in PR #20534. It also should fix startup crash when AI is disabled.

With the mentioned PR it does not crash anymore.

@andriiryzhkov
Contributor Author

The generated 32-bit floating point TIFFs (denoised, upscaled) do not contain a color profile.

The EXIF data is not carried over from the source into the TIFF.

Fixed - output TIFF now embeds linear Rec.709 ICC profile and source EXIF.

But I think it should export to the default export folder. And will it in future also be possible to integrate it right into the normal export workflow?

Added a collapsible "output parameters" section: bit depth (8/16/32, default 16), catalog toggle, and output directory with darktable variable support (e.g. $(FILE_FOLDER)/darktable_exported). Core processing is extracted to src/ai/restore.c as a reusable API, ready for export module integration.
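The $(FILE_FOLDER)-style variable support mentioned above follows the pattern of darktable's export-path substitution. A minimal sketch of how such placeholders expand (illustrative only; darktable's real variable engine is C and supports many more variables and modifiers):

```python
import re

def expand_variables(template, variables):
    """Expand $(NAME) placeholders from a dict; unknown names are
    left untouched so the user can see what failed to resolve."""
    def sub(match):
        return str(variables.get(match.group(1), match.group(0)))
    return re.sub(r"\$\((\w+)\)", sub, template)
```

For example, expanding `"$(FILE_FOLDER)/darktable_exported"` with `FILE_FOLDER` bound to the source image's directory yields the output path described above.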

@TurboGit
Member

Windows CI failing with:

C:\Windows\system32\cmd.exe /C "cd . && D:\a\_temp\msys64\ucrt64\bin\cc.exe -Wall -Wno-format -Wshadow -Wtype-limits -Wvla -Wold-style-declaration -Wmaybe-uninitialized -Wno-unknown-pragmas -Wno-error=varargs -Wno-format-truncation -Wno-error=address-of-packed-member -fopenmp -march=native -msse2 -g -mfpmath=sse -O3 -DNDEBUG -O3 -ffast-math -fno-finite-math-only -fexpensive-optimizations  -shared  -Wl,--enable-runtime-pseudo-reloc -o lib\darktable\plugins\lighttable\libneural_restore.dll -Wl,--major-image-version,0,--minor-image-version,0 @CMakeFiles\neural_restore.rsp && C:\Windows\system32\cmd.exe /C "cd /D D:\a\darktable\darktable\build\lib\darktable\plugins\lighttable && D:\a\_temp\msys64\ucrt64\bin\cmake.exe -E make_directory .debug && objcopy --only-keep-debug D:/a/darktable/darktable/build/lib/darktable/plugins/lighttable/libneural_restore.dll D:/a/darktable/darktable/build/lib/darktable/plugins/lighttable/libneural_restore.dll.dbg && objcopy --strip-debug D:/a/darktable/darktable/build/lib/darktable/plugins/lighttable/libneural_restore.dll && objcopy --add-gnu-debuglink=libneural_restore.dll.dbg D:/a/darktable/darktable/build/lib/darktable/plugins/lighttable/libneural_restore.dll""
D:/a/_temp/msys64/ucrt64/bin/../lib/gcc/x86_64-w64-mingw32/15.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: bin/ai/libdarktable_ai.a(restore.c.obj):restore.c:(.text+0xba): undefined reference to `dt_ai_models_get_active_for_task'
D:/a/_temp/msys64/ucrt64/bin/../lib/gcc/x86_64-w64-mingw32/15.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: bin/ai/libdarktable_ai.a(restore.c.obj):restore.c:(.text+0x17b): undefined reference to `dt_ai_models_get_active_for_task'
D:/a/_temp/msys64/ucrt64/bin/../lib/gcc/x86_64-w64-mingw32/15.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: bin/ai/libdarktable_ai.a(restore.c.obj):restore.c:(.text+0x25b): undefined reference to `dt_ai_models_get_active_for_task'
D:/a/_temp/msys64/ucrt64/bin/../lib/gcc/x86_64-w64-mingw32/15.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: bin/ai/libdarktable_ai.a(restore.c.obj):restore.c:(.text+0x43c): undefined reference to `dt_ai_models_get_active_for_task'
D:/a/_temp/msys64/ucrt64/bin/../lib/gcc/x86_64-w64-mingw32/15.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: bin/ai/libdarktable_ai.a(restore.c.obj):restore.c:(.text+0x4bc): undefined reference to `dt_ai_models_get_active_for_task'
D:/a/_temp/msys64/ucrt64/bin/../lib/gcc/x86_64-w64-mingw32/15.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: bin/ai/libdarktable_ai.a(restore.c.obj):restore.c:(.text+0x55f): undefined reference to `dt_get_available_mem'
D:/a/_temp/msys64/ucrt64/bin/../lib/gcc/x86_64-w64-mingw32/15.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: bin/ai/libdarktable_ai.a(restore.c.obj):restore.c:(.text+0x1b96): undefined reference to `dt_alloc_aligned'
D:/a/_temp/msys64/ucrt64/bin/../lib/gcc/x86_64-w64-mingw32/15.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: bin/ai/libdarktable_ai.a(restore.c.obj):restore.c:(.text+0x1f5c): undefined reference to `dwt_denoise'
D:/a/_temp/msys64/ucrt64/bin/../lib/gcc/x86_64-w64-mingw32/15.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: bin/ai/libdarktable_ai.a(restore.c.obj):restore.c:(.text+0x228b): undefined reference to `dwt_denoise'
D:/a/_temp/msys64/ucrt64/bin/../lib/gcc/x86_64-w64-mingw32/15.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: bin/ai/libdarktable_ai.a(restore.c.obj):restore.c:(.text+0x235c): undefined reference to `dwt_denoise'
D:/a/_temp/msys64/ucrt64/bin/../lib/gcc/x86_64-w64-mingw32/15.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: bin/ai/libdarktable_ai.a(restore.c.obj):restore.c:(.text+0x23c8): undefined reference to `dt_alloc_aligned'
D:/a/_temp/msys64/ucrt64/bin/../lib/gcc/x86_64-w64-mingw32/15.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: bin/ai/libdarktable_ai.a(restore.c.obj):restore.c:(.text+0x26b1): undefined reference to `dwt_denoise'
collect2.exe: error: ld returned 1 exit status

@wpferguson
Member

I really think the current state is respecting everyone choice.

Are we going to give the user any information about the model's licensing or are we putting that all on the user? Or are we expecting them to go digging to find out for themselves? If we don't at least point them at the information, how are they supposed to make an informed choice?

Perhaps we include the OSAID rating and the MOF rating with a brief explanation (MOF I - conforms, MOF II - some exceptions, etc.) and links?

@TurboGit
Member

Perhaps we include the OSAID rating and the MOF rating with a brief explanation (MOF I - conforms, MOF 2 - some exceptions, etc) and links?

Probably a good idea, yes!

@TurboGit
Member

But I think it should export to the default export folder.

So we are expecting this as the last edit step, done after all processing has been applied to the image?

@piratenpanda
Contributor

So we are expecting this as the last edit step so done after all processing made to the image?

Currently it was totally unclear to me how to proceed. Now I seem to understand it better.

In RawRefinery I get a nice DNG file I can open and edit, because it creates CFA data at the end. Here I get a TIFF file without any further explanation of a) where it was created and b) the info we need to reimport the folder again. Which is what I gather now from this?

@TurboGit
Member

I suppose we want the created file (denoised or upscaled) to be reimported into lighttable and NOT exported. It should probably be created next to the original file to be part of the same filmstrip.

@andriiryzhkov
Contributor Author

Fixed Windows (MSYS2/MinGW) build failure:

Root cause: restore.c was compiled as part of the darktable_ai static library but calls functions from the main darktable library (common/dwt.c, common/imagebuf.c, common/ai_models.c). On Linux/macOS this works because unresolved symbols in static libraries are resolved at final link time. On Windows with MinGW, the linker is stricter and rejects them.

Fix: moved restore.c/restore.h from src/ai/ to src/common/ai/ and compile them as part of the main lib_darktable target (alongside ai_models.c), not as part of darktable_ai. The darktable_ai library now only contains the ONNX Runtime backend and segmentation code - self-contained with no external darktable dependencies.

PS: I will do similar refactoring for AI mask tool for consistency and better maintainability.

@andriiryzhkov
Contributor Author

@piratenpanda: This module takes a different approach from RawRefinery - it works on processed images, not raw sensor data. The pipeline exports fully developed linear RGB through darktable's processing pipeline, runs AI inference on that, and saves the result.

The typical workflow is:

  1. Make your basic raw edits (white balance, exposure, lens correction, etc.)
  2. Run AI denoise/upscale — the module processes the developed image
  3. Output TIFF is saved to the same folder as the source, automatically imported into the library, and grouped with the original image
  4. Continue editing the TIFF if needed, or use it as the final output

By default, the module handles everything (export, inference, file saving, import, and grouping) so the user just clicks "process" and gets a new image in the filmstrip ready to work with. I welcome ideas on how to improve the UX and make it more obvious what is happening at each step.

As for the RawRefinery approach - it is fundamentally different. RawRefinery operates directly on raw CFA data using models trained on raw sensor noise pairs (e.g. NAFNet trained on RNIND). The denoised output is a valid CFA DNG that can be edited through the full raw processing pipeline. This is a powerful approach, but these raw-domain models are typically camera-sensor-specific and may not generalize across all cameras without calibration. There are several promising open-source raw-domain models (LED, PMRID) that could potentially be integrated as a separate feature in the future - but that would be a different module operating before demosaic, not a replacement for this one.

@KarlMagnusLarsson

KarlMagnusLarsson commented Mar 16, 2026

Fixed - output TIFF now embeds linear Rec.709 ICC profile and source EXIF.

Thanks @andriiryzhkov. Works for me.

Pixel pipeline: linear Rec.709 → sRGB before inference, sRGB → linear after; planar NCHW layout for ONNX models

What does this mean for a wide gamut workflow? Are AI denoise and upscale confined to the Rec.709 (sRGB) gamut? Wide gamut monitors and some print media can do better than that.

I have not tested this, but will AI denoise and upscale clip out-of-gamut colors relative to Rec.709? If it confines output to Rec.709, what is the rendering intent when doing so?

The working profile in darktable is linear Rec2020. Will it be possible to keep a wider gamut when AI denoise and upscale is used, like the selected darktable working profile (default linear Rec2020)?

Rec.709 and sRGB are smart gamuts; I mean they are highly relevant and sufficient for many environments, and it is good to work with a color space that is as small as possible, but flowers, insects, artwork and artificial lights frequently go out of gamut relative to Rec.709 and sRGB.

@TurboGit
Member

@wpferguson : Do you have an idea to where the model licensing should be documented? I would propose to put this at least in the RELEASE_NOTES for 5.6 but maybe you have also another place in mind?

@andriiryzhkov : Can you please add notes about licensing about the different models that can be installed in Darktable? Basically at least all those that are presented by default in the interface but possibly also including all present in darktable-ai?

@KarlMagnusLarsson

KarlMagnusLarsson commented Mar 17, 2026

I have not tested this, but will AI denoise and upscale limit out of gamut colors, relative to Rec.709? If it confines output to Rec.709, what is the rendering intent when doing so?

@andriiryzhkov, I can confirm that AI denoise limits the gamut to Rec.709, which it should, based on the statement in the details: "...Pixel pipeline: linear Rec.709 → sRGB before inference, sRGB → linear after; planar NCHW layout for ONNX models"

I have a wide gamut monitor EIZO CG 279X.

I execute gamut check in darkroom on a CR3 raw file to verify that it is out of gamut relative to sRGB (Rec.709).

I then export the same image twice:

  1. RAW CR3 -> TIFF (Pro Photo RGB). [Out of gamut when checked with darkroom -> gamut check relative to sRGB]
  2. RAW CR3 -> AI denoise (Rec.709) -> TIFF (Pro Photo RGB)

I can verify, inspecting the exported output visually and with darkroom gamut check, that gamut is indeed restricted to Rec.709 (sRGB) when AI denoise is used to produce its TIFF output result. Again, this is expected and works as stated in 'details'.

The default working profile in darktable pipeline is linear Rec2020, which is much bigger than Rec.709 (sRGB) gamut. I use Linear Pro Photo RGB as darktable working profile which is even bigger than Rec2020.

If the typical workflow is:

1: Make your basic raw edits (white balance, exposure, lens correction, etc.)
2: Run AI denoise/upscale — the module processes the developed image
3: Output TIFF is saved to the same folder as the source, automatically imported into the library, and grouped with the original image
4: Continue editing the TIFF if needed, or use it as the final output

Then we restrict the gamut at step 3 to Rec.709 (sRGB), and continued editing towards step 4 has clipped everything out of gamut relative to Rec.709.

It would be very helpful if the AI denoise and AI upscale functions could make use of the same profiles and rendering intents as the export module, or use the same profile as the user-selectable darktable working profile (default linear Rec2020). EDIT: 16-bit output should be used for linear Rec2020 and linear ProPhoto RGB color profiles.
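The gamut restriction described above can be demonstrated numerically. A small sketch using the standard linear Rec.2020 → Rec.709 primary-conversion matrix (this is textbook colorimetry, not darktable code): colors representable in Rec.2020 but not Rec.709 come out with negative components, which clipping then discards:

```python
import numpy as np

# Standard linear Rec.2020 -> Rec.709 primary conversion matrix
REC2020_TO_REC709 = np.array([
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
])

def to_rec709_clipped(rgb2020):
    """Convert a linear Rec.2020 color to Rec.709 and clip to [0, 1],
    discarding anything outside the smaller gamut."""
    rgb709 = REC2020_TO_REC709 @ np.asarray(rgb2020, dtype=np.float64)
    return np.clip(rgb709, 0.0, 1.0)

# A pure Rec.2020 green lands outside Rec.709: negative R and B are clipped
print(to_rec709_clipped([0.0, 1.0, 0.0]))
```

This is exactly the information loss the workflow incurs at step 3: once clipped to Rec.709, the out-of-gamut saturation cannot be recovered in step 4.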

@wpferguson
Member

Do you have an idea to where the model licensing should be documented

First thought is maybe include it with the models and have a license link to open it in a browser in the model preferences?

@andriiryzhkov
Contributor Author

@da-phil : Thank you for reporting the issues.

On the first one: this was a bug with preview image cropping on a wide side panel. Fixed in 52e85fa.

Regarding the second issue, please check the "ONNX Runtime library" setting in the AI tab of preferences. It should be either empty, for the default system library, or the correct path to a custom GPU-accelerated library.

@andriiryzhkov
Contributor Author

@TurboGit :

A question about model upgrade. ... If the new model had a version 1.1, would a proposal for upgrading be proposed? Just checking if upgrade is a currently supported feature.

Good question. Currently, only required upgrades are forced. This works through the "min_version" field in the model registry. For example, if the DT version was upgraded and it requires a newer model than the one currently installed, that model will be disabled with the status "upgrade required".

If it is just a minor update, we don't yet have a mechanism to inform DT that an upgrade is available. I was also thinking about it, but have not come up with a nice solution yet.
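The min_version gating described above can be sketched like this. An illustrative Python sketch, not darktable's actual registry code; the dotted-version comparison and status strings are assumptions based on the behaviour described:

```python
def model_status(installed, min_required):
    """Compare dotted version strings; a model older than the
    registry's min_version is disabled with "upgrade required"."""
    parse = lambda v: tuple(int(x) for x in v.split("."))
    return "ok" if parse(installed) >= parse(min_required) else "upgrade required"
```

A registry entry whose min_version rises with a new darktable release would flip installed-but-outdated models from "ok" to "upgrade required" on the next start.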

@da-phil
Contributor

da-phil commented Mar 30, 2026

@da-phil : Thank you for reporting the issues.

On the first one. This was a bug with preview image cropping on a wide side panel. Fixed in 52e85fa.

Regarding the second issue, please check in AI tab in preferences config "ONNX Runtime library". It should be either empty for default system library, or correct path for custom GPU accelerated library.

I checked it, but it seems that the system installation is not being detected (when clicking on "detect"):

I first tried to select the libonnxruntime.so installed through the Ubuntu 24.04 package repositories (libonnx 1.14.1), which was not accepted by darktable (maybe too old?):


Then I tried the libonnxruntime version that is installed by darktable (/opt/darktable/lib/darktable/libonnxruntime.so), and eventually it worked.


Is it possible to make the logic, which finds the shipped libonnxruntime version, a little bit smarter, by looking through the darktable install folder?

@da-phil
Contributor

da-phil commented Mar 30, 2026

Another question: is it possible to set the amount of denoising with this model? I feel that the default setting is far too invasive for my taste and I'd like to play with the level of denoising to avoid my images turning slightly into AI slop.

@andriiryzhkov
Contributor Author

@da-phil :

I checked it, but it seems that the system installation is not being detected (when clicking on "detect")

If you leave the library path empty (which is the default), DT will use the bundled ONNX Runtime with CPU support. The Detect button checks common locations for the library. If it is not located automatically, you can select it manually.

But pay attention to the packages: libonnx is NOT an ONNX Runtime library! It is just the model format specification package. We need libonnxruntime.

Is it possible to make the logic, which finds the shipped libonnxruntime version, a little bit smarter, by looking through the darktable install folder?

It is already smart - leave it blank.

@andriiryzhkov
Contributor Author

andriiryzhkov commented Mar 31, 2026

@da-phil :

is it possible to set the amount of denoising with this model? I feel that the default setting is far too invasive for my taste

There are no default or custom settings for denoise itself. The model just generates a "new" image, which is a denoised version of the original one. There is a detail recovery slider, though; it controls how much we can bring back from the original image. Try it.


Labels

  • depends: external lib
  • difficulty: hard (big changes across different parts of the code base)
  • documentation-pending (documentation work is required)
  • feature: new (new features to add)
  • priority: low (core features work as expected, only secondary/optional features don't)
  • release notes: pending
  • scope: image processing (correcting pixels)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

AI Denoising