[AI] Add AI features and GPU acceleration requirements to README #20702

Open
andriiryzhkov wants to merge 3 commits into darktable-org:master from andriiryzhkov:readme_ai

Conversation

@andriiryzhkov
Contributor

Add AI features and GPU acceleration requirements to README (CPU, CUDA, ROCm, OpenVINO, DirectML, CoreML)

@andriiryzhkov andriiryzhkov marked this pull request as ready for review March 29, 2026 20:08
README.md Outdated

### AI features (optional)

Darktable includes optional AI-powered features such as neural denoise, upscale and object
Contributor


Just a personal preference: I feel that "neural" is a little bit superfluous in the context of "AI-powered features".

Suggested change
Darktable includes optional AI-powered features such as neural denoise, upscale and object
Darktable includes optional AI-powered features such as denoise, upscale and object

Collaborator


@da-phil Well, I have the exact opposite personal preference. Calling everything related to machine learning and neural networks "AI" is a bit too much for me... I understand that we have no choice but to call it all AI so that people don't ask why there are no AI features in darktable. 😄 So let's not remove the more accurate word "neural", at least.

Contributor

@da-phil Mar 31, 2026


Don't get me wrong, I'm not a big fan of overusing the term AI for something that is just machine learning either. But for marketing reasons it totally makes sense.
With my comment I just wanted to express that, from a pure English-language point of view, "neural" feels redundant when the sentence already talks about "AI features".

Contributor Author


@victoryforce I agree with you regarding calling everything AI. But this paragraph, specifically, talks more about functions or tasks, so we can drop "neural" here and keep it in the module name.

Collaborator


But for marketing reasons it totally makes sense.

@da-phil (sigh...) Yes, I agree.

so we can drop "neural" here and keep it in the module name.

@andriiryzhkov I just expressed my feelings and made a suggestion, reject it if you think it won't improve the text.

README.md Outdated
* [cuDNN 9.x](https://developer.nvidia.com/cudnn-downloads) (for ONNX Runtime 1.20+)
* Recommended: 8 GB+ VRAM
* **AMD (ROCm):** Linux only.
* [ROCm 6.x](https://rocm.docs.amd.com/en/latest/deploy/linux/index.html) (for ONNX Runtime 1.20+, includes MIGraphX)
Contributor


Do ONNX Runtime 1.20+ versions really contain MIGraphX? I don't think so, otherwise stuff would work on my AMD GPU out of the box.
Also there is a deprecation notice on the ROCm execution provider page:

> **NOTE**: The ROCm Execution Provider has been removed since the 1.23 release. Please migrate your applications to use the MIGraphX Execution Provider.
> ROCm 7.0 is the last officially AMD-supported distribution of this provider, and all builds going forward (ROCm 7.1+) will have the ROCm EP removed.

As far as I understood, if you want to use the "new" way of AMD GPU support for a C/C++ library which uses ONNX, you need to build it yourself:
https://onnxruntime.ai/docs/build/eps.html#amd-migraphx

If you use one of the pre-built Python packages for deep-learning frameworks, you might get it straight away; unfortunately I haven't tried yet, but will get back to you.

Contributor Author


The MIGraphX EP is the recommended path in ONNX Runtime 1.20+, and AMD is migrating ONNX Runtime step by step from the ROCm EP to the MIGraphX EP. Our code supports both EPs.
As for the ONNX Runtime installation, you need to pay attention to the versions of ROCm and ONNX Runtime; there is a strict mapping between them.

You can try this script. It installs ONNX Runtime with the MIGraphX EP in user space. Just use the "detect" button in the preferences AI tab afterwards.
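For context, here is a minimal sketch of how such a "detect" step might rank execution providers. It assumes the standard ONNX Runtime provider name strings (as returned by `onnxruntime.get_available_providers()`); the preference order is an assumption based on this thread, not darktable's actual detection code:

```python
# Hypothetical sketch (not darktable's real code): pick the best ONNX Runtime
# execution provider on an AMD system, preferring the MIGraphX EP (the
# recommended path) over the deprecated ROCm EP, with CPU as the last resort.
PREFERRED_AMD_ORDER = [
    "MIGraphXExecutionProvider",  # recommended for ONNX Runtime 1.20+
    "ROCMExecutionProvider",      # deprecated; removed in ONNX Runtime 1.23
    "CPUExecutionProvider",       # always available
]

def pick_provider(available):
    """Return the first provider from the preference order that is available."""
    for name in PREFERRED_AMD_ORDER:
        if name in available:
            return name
    return "CPUExecutionProvider"  # safe default

if __name__ == "__main__":
    # With onnxruntime installed, the real list would come from:
    #   import onnxruntime; available = onnxruntime.get_available_providers()
    print(pick_provider(["CPUExecutionProvider", "MIGraphXExecutionProvider"]))
```

When creating an `InferenceSession`, the same ordered list can be passed via the `providers=` argument, and ONNX Runtime will use the first one it can initialize.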
