UMA buffers prefer host-visible memory#22930

Open
winstonma wants to merge 2 commits into ggml-org:master from winstonma:vulkan-uma-allocation

Conversation

@winstonma

Overview

On UMA (Unified Memory Architecture) devices the CPU and GPU share the same physical memory. Despite this, the Vulkan backend was still allocating device-local buffers without the eHostVisible flag, which prevented the CPU from directly accessing GPU tensor data. This meant that even on hardware where a zero-copy path was physically possible, the backend was forced through an unnecessary staging copy whenever tensor data needed to be read back to the host (e.g. during loss evaluation or prediction readback in training). Note that the UMA zero-copy path itself is not implemented in this PR; this change only makes the allocations host-visible where possible.
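
For illustration, here is a minimal sketch of the allocation preference this change describes, written against the vulkan.hpp API. The helper names are hypothetical, not the actual ggml-vulkan functions: on a UMA device, first look for a memory type that is both device-local and host-visible, and only fall back to plain device-local if none exists.

```cpp
#include <vulkan/vulkan.hpp>
#include <cstdint>

// Find a memory type index that satisfies both the buffer's type bits
// and the requested property flags; returns -1 if none matches.
static int32_t find_memory_type(const vk::PhysicalDeviceMemoryProperties & mem_props,
                                uint32_t type_bits, vk::MemoryPropertyFlags flags) {
    for (uint32_t i = 0; i < mem_props.memoryTypeCount; i++) {
        if ((type_bits & (1u << i)) &&
            (mem_props.memoryTypes[i].propertyFlags & flags) == flags) {
            return (int32_t) i;
        }
    }
    return -1;
}

// Hypothetical selection policy: on UMA, prefer memory the host can map
// directly (enabling readback without a staging buffer), otherwise keep
// the previous plain device-local behavior.
static int32_t pick_buffer_memory_type(const vk::PhysicalDeviceMemoryProperties & mem_props,
                                       uint32_t type_bits, bool is_uma) {
    if (is_uma) {
        int32_t idx = find_memory_type(mem_props, type_bits,
            vk::MemoryPropertyFlagBits::eDeviceLocal |
            vk::MemoryPropertyFlagBits::eHostVisible);
        if (idx >= 0) {
            return idx;
        }
    }
    // Fallback: plain device-local, as before this change.
    return find_memory_type(mem_props, type_bits,
        vk::MemoryPropertyFlagBits::eDeviceLocal);
}
```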

Additional information

Requirements

  • I have read and agree with the contributing guidelines
  • AI usage disclosure: YES, AI was used to investigate the zero-copy path for UMA

@winstonma winstonma requested a review from a team as a code owner May 11, 2026 01:54
@github-actions github-actions bot added the Vulkan (Issues specific to the Vulkan backend) and ggml (changes relating to the ggml tensor library for machine learning) labels May 11, 2026
@ggml-gh-bot

ggml-gh-bot Bot commented May 11, 2026

Hi @winstonma, thanks for your contribution!

Per our contribution guidelines, the automated PR checker found the following issue(s) that need your attention:

  • Multiple open PRs from a new contributor: We limit new contributors (those without a previously merged PR) to 1 open PR at a time. You currently have 3 open PRs.

Please note that maintainers reserve the right to make final decisions on PRs. If you believe there is a mistake, please comment below.

@jeffbolznv
Contributor

Which buffers need this? What performance benefits do you see? I think this function is generally used for internal allocations that are not transferred to/from.

@winstonma winstonma force-pushed the vulkan-uma-allocation branch from 95fdd41 to 08319b9 Compare May 12, 2026 03:25
@winstonma
Author

winstonma commented May 12, 2026

After further testing, I determined that the necessary logic could be handled more effectively on the other side. Consequently, I have replaced the previous commit with a refined implementation.

Context & Investigation

While implementing zero-copy support for the UMA path (aiming for parity with the Metal backend), I developed a test script to evaluate performance gains. Initial benchmark results were underwhelming; investigation revealed that the root cause was a memory allocation issue preventing zero-copy from functioning correctly, hence this commit.

Benchmarks

The test measures the latency of moving data from the backend (GPU/Vulkan) back to the host (CPU), repeated 1 million times, on AMD Radeon 880M Graphics:

Before: 36.888 s
After: 0.327 s

A further, uncommitted code change would improve this to ~0.2 s.

Lastly, since this is not on the hot memory read/write path, pp and tg performance will not improve.
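
The original script was not shared in this thread; the following is a rough sketch of this kind of readback benchmark using the public ggml backend API (the tensor size and iteration count are illustrative, not the PR author's actual values):

```cpp
#include <chrono>
#include <cstdio>
#include <vector>
#include "ggml.h"
#include "ggml-alloc.h"
#include "ggml-backend.h"
#include "ggml-vulkan.h"

int main() {
    ggml_backend_t backend = ggml_backend_vk_init(0); // Vulkan device 0

    struct ggml_init_params params = {
        /*.mem_size   =*/ ggml_tensor_overhead(),
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ true, // tensor data lives in the backend buffer
    };
    struct ggml_context * ctx = ggml_init(params);
    struct ggml_tensor * t = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 16);
    ggml_backend_buffer_t buf = ggml_backend_alloc_ctx_tensors(ctx, backend);

    std::vector<float> host(16, 0.0f);
    ggml_backend_tensor_set(t, host.data(), 0, host.size() * sizeof(float));

    const int n_iter = 1000000;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n_iter; i++) {
        // device -> host readback; the operation this PR speeds up on UMA
        ggml_backend_tensor_get(t, host.data(), 0, host.size() * sizeof(float));
    }
    auto t1 = std::chrono::steady_clock::now();
    printf("readback x%d: %.3f s\n", n_iter,
           std::chrono::duration<double>(t1 - t0).count());

    ggml_backend_buffer_free(buf);
    ggml_free(ctx);
    ggml_backend_free(backend);
    return 0;
}
```

On a host-visible buffer the backend can copy straight from the mapped pointer, whereas a device-local-only buffer forces each readback through a staging buffer and queue submission, which would be consistent with the gap reported above.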

@winstonma winstonma changed the title UMA buffers get host-visible memory at allocation time UMA buffers prefer host-visible memory May 12, 2026
@0cc4m
Contributor

0cc4m commented May 12, 2026

Even for UMA systems, preferring device-local is still correct. It may still be "easier" for the GPU to access, going through fewer intermediate steps, depending on the architecture. Posting random unverifiable numbers is not going to help your case; we'd need something reproducible that shows which cases this improves. There are a lot of devices this would affect.

@winstonma
Author

The benchmark results were posted as requested. The performance gain comes from ggml-vulkan handling eHostVisible memory more efficiently.

There are a lot of devices this would affect.

Would you mind explaining the disadvantages/limitations of falling back to device-local memory?

@0cc4m
Contributor

0cc4m commented May 13, 2026

Your attitude is wrong, it is not my job to disprove your claims, it is your job to prove them. A benchmark result is not worth anything to me if I cannot reproduce it, and you didn't even share your script.

But you're in luck, I just found an easily measurable performance issue on Nvidia DGX Spark caused by preferring just device-local memory, and this kind of change is needed to fix it.

Review comment thread on ggml/src/ggml-vulkan/ggml-vulkan.cpp (outdated); suggestion applied, Co-authored-by: Ruben Ortlam <picard12@live.de>
@winstonma
Author

Your attitude is wrong, it is not my job to disprove your claims, it is your job to prove them. A benchmark result is not worth anything to me if I cannot reproduce it, and you didn't even share your script.

I am sorry I made you feel that way. I agree that I don't have access to every UMA architecture, so I am not certain, as you mentioned, how the change would affect others. I would sincerely like to see how the fallback would affect other UMA architectures (e.g. a driver problem that lets the system assign eHostVisible memory but breaks something downstream?). Since you are the maintainer, I would like to understand your "a lot of devices this would affect" concern.

Since you would like to reproduce the benchmark: should I commit the .cpp file to this branch? My concern is that the file would then end up being merged.

Sorry again.

@0cc4m
Contributor

0cc4m commented May 13, 2026

I agree that I don't have access to every UMA architecture, so I am not certain, as you mentioned, how the change would affect others. I would sincerely like to see how the fallback would affect other UMA architectures (e.g. a driver problem that lets the system assign eHostVisible memory but breaks something downstream?). Since you are the maintainer, I would like to understand your "a lot of devices this would affect" concern.

Vulkan runs on many devices, so changes like this affect a lot of architectures. Besides your AMD APU, there are older AMD devices with different architectures, Intel Xe iGPUs, Intel UHD iGPUs, Nvidia iGPUs and even phone SoCs that this affects. That's why some care is necessary.

Same as with the other PR, you can upload the benchmark code to a GitHub gist.

@0cc4m
Contributor

0cc4m commented May 13, 2026

@rillomas That's a lot of failures in the Linux Intel Vulkan CI runner. I don't really see how this change could cause that; can you take a look?
