[AI] Add AI features and GPU acceleration requirements to README#20702
andriiryzhkov wants to merge 3 commits into darktable-org:master
Conversation
README.md (Outdated)

> ### AI features (optional)
>
> Darktable includes optional AI-powered features such as neural denoise, upscale and object
Just a personal preference: I feel that "neural" is a little bit superfluous in the context of "AI-powered features".
```diff
- Darktable includes optional AI-powered features such as neural denoise, upscale and object
+ Darktable includes optional AI-powered features such as denoise, upscale and object
```
@da-phil Well, I have the exact opposite personal preference. Calling everything related to machine learning and neural networks "AI" is a bit too much for me... I understand that we have no choice but to call it all AI so that people don't ask why there are no AI features in darktable. 😄 So let's not remove the more accurate word "neural", at least.
Don't get me wrong, I'm not a big fan of overusing the term AI for something that is just machine learning either. But for marketing reasons it totally makes sense.
With my comment I just wanted to express that I feel "neural" is not needed, as the sentence already talks about "AI features", from a pure English language point of view.
@victoryforce I agree with you regarding calling everything AI. But in this paragraph, specifically, we are talking more about functions or tasks, so we can drop "neural" here and keep it in the module name.
> But for marketing reasons it totally makes sense.
@da-phil (sigh...) Yes, I agree.
> so we can drop "neural" here and keep it in the module name.
@andriiryzhkov I just expressed my feelings and made a suggestion, reject it if you think it won't improve the text.
README.md (Outdated)

> * [cuDNN 9.x](https://developer.nvidia.com/cudnn-downloads) (for ONNX Runtime 1.20+)
> * Recommended: 8 GB+ VRAM
> * **AMD (ROCm):** Linux only.
>   * [ROCm 6.x](https://rocm.docs.amd.com/en/latest/deploy/linux/index.html) (for ONNX Runtime 1.20+, includes MIGraphX)
Do ONNX Runtime 1.20+ versions really contain MIGraphX? I don't think so, otherwise stuff would work on my AMD GPU out of the box.
Also there is a deprecation notice on the ROCm execution provider page:
> **NOTE:** ROCm Execution Provider has been removed since the 1.23 release. Please migrate your applications to use the MIGraphX Execution Provider. ROCm 7.0 is the last officially AMD-supported distribution of this provider and all builds going forward (ROCm 7.1+) will have ROCm EP removed.
As far as I understood, if you want to use the "new" way of AMD GPU support for a C/C++ library which uses ONNX, you need to build it yourself:
https://onnxruntime.ai/docs/build/eps.html#amd-migraphx
If you use one of the pre-built Python packages for deep-learning frameworks, you might get it straight away. Unfortunately I haven't tried yet, but will get back to you.
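For reference, the self-build route from the linked docs looks roughly like the sketch below. This is an assumption-laden sketch, not a verified recipe: the flag names are taken from the ONNX Runtime build documentation, and the `/opt/rocm` path is a placeholder for wherever ROCm (with MIGraphX) is installed on your system.

```shell
# Rough sketch of building ONNX Runtime with the MIGraphX EP from source.
# Assumes ROCm (including MIGraphX) is already installed under /opt/rocm;
# adjust paths and double-check the flags against the docs for your version.
git clone --recursive https://github.com/microsoft/onnxruntime.git
cd onnxruntime
./build.sh --config Release --parallel \
           --use_migraphx --migraphx_home /opt/rocm \
           --build_shared_lib
```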
MIGraphX EP is the recommended path in ONNX Runtime 1.20+, and AMD is migrating ONNX Runtime step by step from the ROCm EP to the MIGraphX EP. Our code supports both EPs.
As for the ONNX Runtime installation, you need to pay attention to the versions of ROCm and ONNX Runtime. There's a strict mapping between them.
You can try this script. It installs ONNX Runtime with the MIGraphX EP in user space. Just use the "detect" button in the preferences AI tab afterwards.
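To illustrate what "supports both EPs" means in practice, here is a minimal sketch of provider selection with fallback. This is illustrative only, not darktable's actual code; the provider names are the ones ONNX Runtime reports, and in a real install the `available` list would come from `onnxruntime.get_available_providers()`.

```python
# Illustrative only -- not darktable's actual code. Sketches the idea of
# preferring the MIGraphX EP, falling back to the ROCm EP, then to CPU.
PREFERRED_EPS = (
    "MIGraphXExecutionProvider",  # recommended path for ONNX Runtime 1.20+
    "ROCMExecutionProvider",      # legacy path, removed in ONNX Runtime 1.23
    "CPUExecutionProvider",       # always available
)

def pick_provider(available):
    """Return the first preferred execution provider that is available."""
    for ep in PREFERRED_EPS:
        if ep in available:
            return ep
    return "CPUExecutionProvider"

# With a real install, pass onnxruntime.get_available_providers() instead.
print(pick_provider(["ROCMExecutionProvider", "CPUExecutionProvider"]))
# → ROCMExecutionProvider
```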
Add AI features and GPU acceleration requirements to README (CPU, CUDA, ROCm, OpenVINO, DirectML, CoreML)