[AI Subsystem] AI object mask and AI denoising#20322
andriiryzhkov wants to merge 52 commits into darktable-org:master
Conversation
Perfect!
Can this be simplified (no SHA256) for now to allow testers to download the models using current master?
For testing purposes, you can skip the download mechanism entirely and just manually place the model files in the models directory. If placed manually, there are no SHA256 checks.
@andriiryzhkov Thank you for such a great contribution! For macOS CI to complete successfully, libarchive should be added to .ci/Brewfile.
@victoryforce thank you for the advice! Done.
@MikoMikarro I am glad you are in the game! We can add OpenCV DNN as another backend provider. It has somewhat limited support for neural network operators, but it is still usable for some models.
@TurboGit You can download the HQ-SAM B model from here: https://github.com/andriiryzhkov/darktable-ai/releases/download/5.5.0.4/mask-hq-sam-b.zip. Unpack it to
@andriiryzhkov : Ok, I've tried this new model and I'm still not impressed :) You can call me a standard user on this, as I have never used AI for this in any software. My expectations were maybe a bit high... But having seen some demos of the SAM model, I was expecting much better object segmentation. Here are some examples; each time I did a single click on something in the image:
I can continue... In fact I haven't seen a case where it was good. This new model is a bit better than the light version but far from the quality expected for integration. Let me ask a question: do you have some test cases where it works perfectly?
Is this provided by darktable or is this an external entity? The reason I ask is, taking my example above: suppose I merge the ext_editors script and then provide a drop-down listing all the Adobe products that it will run. I can't provide that list, otherwise darktable is "endorsing" Adobe's software. Another reason is that darktable assumes liability for recommending the model. Better to provide the user a list of places that have model collections and let them make the choice.
ref: #20322 (comment) I don't want to be made fun of, so here is the full sentence of what I said to paperdigits on matrix:
Or maybe I'm a fool...
AI is a POLARIZING subject right now. Some people love it and some people hate or fear it, and there are STRONG opinions on both sides. My thoughts:
@TurboGit you are doing fine. You make the best choice you can based on the information available to you. If you don't have enough information, then open an RFC issue, though maybe with some guidelines like:
Yes, the latest model is a bit slow; the light version was almost instantaneous on my side. But my main concern now is the quality of the segmentation. At this stage it is not helping users at all; maybe the training needs to be tweaked... I don't know, and I know next to nothing about AI, so I'll let the experts discuss this point.
That would work well for AI denoise, but for masking we need fast UI interaction to display the mask and to add to or remove from it. Would that work with Lua?
I can understand that; that's why the models are not and will never be distributed with darktable. Also, the AI feature is not activated by default.
Thanks, I've been the maintainer for 7 years now, maybe that's too much for a single man :) I fear that the RFC or poll will turn into a place of fighting :) On such a hot topic I think we should discuss with the core developers and find a way forward (or not).
Only if the AI script would support it and could create the display and interaction.
That was my thought too, which was why I added all the conditions. I could definitely see that not ending well.
I once interviewed for a job and they asked me what I wanted to do. My answer was "Let me tell you what I don't want to do. I don't want to be the boss, I don't want to be in charge. I just want to work". I feel your pain. I think you've done an incredible job of "herding cats". Dealing with lots of personalities, language barriers, users, developers, issues, and still keeping everything on track is quite an accomplishment. It's also a LOT of work for one person. Should we look at some way to share the load or delegate some tasks?
This will be virtually impossible to document (and I'm not sure I'm willing to even merge such documentation) without mentioning or seeming to recommend specific models. As I said in the IRC chat, if we could provide something extremely generic to allow certain operations to be handed off to another "external program" (like Lua, but it probably needs to be more integrated into the pixelpipe), that'd be fine with me (i.e. not explicit AI integration). If we wanted to source our own open dataset and volunteers to train a model of our own, that would also be fine with me (though I'm still slightly uncomfortable about the environmental impacts of AI, at least the licensing and "data sweatshops" concerns would be alleviated). But it's really, really hard to source good, reliable and verifiable information about how most of these models have been trained (both from a data and a human point of view), and AI is such a divisive issue that there's a good chance of a proper split in the community here, and of difficult decisions being made by package maintainers. I for one will have to decide whether I'm comfortable enough with this to continue contributing to the project.
@andriiryzhkov When I saw this PR, I stopped my work to test it. I can't comment on the implications of merging it into darktable or on its ethical aspects. However, I've been using SAM2 and SAM3 over the last few months with Lua plugins and the external raster mask module. I am sharing my feedback based on my experience with the Lua approach, hoping it will be helpful.

Like @TurboGit said, the quality of the segmentation is not good when using HQ-SAM B. This surprised me because I got OK results with SAM2.1 (even with the tiny version). I downloaded a larger HQ model from Hugging Face (sam_hq_vit_l.pth) and converted it to ONNX with your script, but got the same result. Next, without converting it, I tried to run sam_hq_vit_l.pth with Python. Same results. Based on that, I believe the HQ models do not perform as well as the original SAM2.1. I've added an example below so you can compare. Is there a way to use SAM2.1 instead of the HQ models? If so, this might fix the mask quality problem. I tried converting sam2.1_hiera_base_plus.pt to ONNX, but it didn't work. I've also tried this, but it failed as well.

I haven't tried denoise yet, but I'll test it soon. I don't like NAFNet, but I'll try to load NIND or RawForge. Lastly, although I think this PR is great, I think it is still worth improving the Lua integration of the raster mask module. With Lua, it doesn't matter whether you use ONNX or PyTorch, and you can customize the script to your preferences.
@TurboGit, @AyedaOk thank you for the feedback! Just a quick update on this — I'm actively working on extending support to additional SAM model variants (including SAM 2.1) and refining some of the post-processing algorithms. Expect to push updates next week. Happy to discuss any specific requirements or edge cases you'd like me to prioritize in the meantime.
This PR introduces an AI subsystem into darktable with two features built on top of it:

- AI Object Mask — a new mask tool that lets users select objects in the image by clicking on them. It uses the Light HQ-SAM model to segment objects, then automatically vectorizes the result into path masks (using `ras2vect`) that integrate with darktable's existing mask system.
- AI Denoise — a denoising module powered by the NAFNet model. This was initially developed as a simpler test case for the AI subsystem and is included here as a bonus feature.
Both models are converted to ONNX format for inference. Conversion scripts live in a separate repository: https://github.com/andriiryzhkov/darktable-ai. Models are not bundled with darktable — they are downloaded from GitHub Releases after the app is installed, with SHA256 verification. A new dependency on `libarchive` is added to handle extracting the downloaded model archives.
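As a minimal sketch of what the checksum step can look like (not the PR's actual code), using GLib's GChecksum API, which darktable already uses; the function name is hypothetical and the file is read into memory for brevity:

```c
#include <glib.h>

/* Hypothetical helper: verify a downloaded model archive against an expected
   SHA256 hex digest before handing it to libarchive for extraction.
   Reads the whole file into memory for brevity; a streaming GChecksum
   (g_checksum_new/g_checksum_update) would be preferable for large archives. */
static gboolean model_archive_checksum_ok(const char *path, const char *expected_hex)
{
  gchar *contents = NULL;
  gsize length = 0;
  if(!g_file_get_contents(path, &contents, &length, NULL))
    return FALSE;

  gchar *actual_hex = g_compute_checksum_for_data(G_CHECKSUM_SHA256,
                                                  (const guchar *)contents, length);
  const gboolean ok = actual_hex && g_ascii_strcasecmp(actual_hex, expected_hex) == 0;

  g_free(actual_hex);
  g_free(contents);
  return ok;
}
```

Only archives that pass this comparison would be passed on to extraction; a mismatch is treated as a failed download.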
AI subsystem design

The AI subsystem is currently built on top of ONNX Runtime, though the backend is abstracted to allow adding other inference engines in the future. ONNX Runtime is used from pre-built packages distributed on GitHub. On Windows, ONNX Runtime is built with MSVC, so using pre-built binaries is the natural approach for us — I initially expected this to be a problem, but discovered this is common practice among other open-source projects and works well.
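To make "the backend is abstracted" concrete, here is a rough sketch of what such a public header can look like; all type and function names are hypothetical and the PR's actual `src/ai/` API will differ. The point is that callers only ever see opaque handles, so the ONNX Runtime C API (or any future engine) stays private to the backend implementation:

```c
/* ai_backend.h — hypothetical sketch of an opaque-handle inference API.
   Callers only ever see these handles; the inference engine's own types
   stay private to the backend's .c files. */
#pragma once

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct ai_session_t ai_session_t; /* opaque: defined only in the backend */
typedef struct ai_tensor_t ai_tensor_t;   /* opaque */

/* Create a session for a model file, optionally requesting a hardware
   acceleration provider (e.g. "cuda", "coreml"); NULL falls back to CPU. */
ai_session_t *ai_session_create(const char *model_path, const char *provider);

/* Allocate a float tensor with the given row-major shape. */
ai_tensor_t *ai_tensor_new_float(const int64_t *shape, size_t ndim);
float *ai_tensor_data(ai_tensor_t *t);

/* Run inference with named inputs/outputs; returns false on error. */
bool ai_session_run(ai_session_t *s,
                    const char **input_names, ai_tensor_t **inputs, size_t n_inputs,
                    const char **output_names, ai_tensor_t **outputs, size_t n_outputs);

void ai_tensor_free(ai_tensor_t *t);
void ai_session_destroy(ai_session_t *s);
```

With an interface shaped like this, swapping in another engine (e.g. OpenCV DNN, as discussed above) would mean adding another backend implementation behind the same handles rather than touching the callers.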
The system is organized in three layers:
1. Backend (`src/ai/`): Wraps the ONNX Runtime C API behind opaque handles. Handles session creation, tensor I/O, float16 conversion, and hardware acceleration provider selection (CoreML, CUDA, ROCm, DirectML). Providers are enabled via runtime dynamic symbol lookup rather than compile-time linking, so there are no build dependencies on vendor-specific libraries (see the sketch after this list). A separate `segmentation.c` implements the SAM two-stage encoder/decoder pipeline with embedding caching and iterative mask refinement.
2. Model management (`src/common/ai_models.c`): A registry that tracks available models, their download status, and user preferences. Downloads model packages from GitHub Releases with SHA256 verification, path traversal protection, and version-aware tag matching. Uses libarchive for safe extraction with symlink and dot-dot protections. Thread-safe — all public getters return struct copies, not pointers into the registry.
3. UI and modules: The object mask tool (`src/develop/masks/object.c`) runs SAM encoding in a background thread to keep the UI responsive. The user sees a "working..." overlay during encoding, then clicks to place foreground/background prompts. Right-click finalizes by vectorizing the raster mask into Bézier path forms. The AI denoise module (`src/libs/denoise_ai.c`) and preferences tab (`src/gui/preferences_ai.c`) provide the remaining user-facing features.

Fixes: #12295, #19078, #19310
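Referenced from the backend layer above: a minimal sketch of "runtime dynamic symbol lookup rather than compile-time linking", using GModule (GLib's portable dlopen wrapper). The library and symbol names below are illustrative examples, not necessarily what the PR actually probes for:

```c
#include <gmodule.h>

/* Hypothetical probe: try to open a vendor runtime at run time and resolve one
   of its entry points. If either step fails, the provider is simply skipped,
   so the build never links against the vendor library. */
static gboolean _provider_symbol_available(const char *libname, const char *symname)
{
  GModule *mod = g_module_open(libname, G_MODULE_BIND_LAZY | G_MODULE_BIND_LOCAL);
  if(!mod) return FALSE;

  gpointer sym = NULL;
  const gboolean found = g_module_symbol(mod, symname, &sym) && sym != NULL;

  g_module_close(mod);
  return found;
}

/* Example use (library/symbol names are illustrative): only offer the CUDA
   execution provider if the CUDA runtime can actually be resolved on this machine. */
static gboolean _cuda_runtime_present(void)
{
  return _provider_symbol_available("libcudart.so", "cudaRuntimeGetVersion");
}
```

Because the check happens at run time, a build without CUDA, ROCm, CoreML, or DirectML development packages still works everywhere; machines that do have the vendor runtime simply get the extra provider offered.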