Unified memory isn't supported in PyTorch and was considered a potential blocker for the custom ops refactor.
At the time we found a workaround and demonstrated its viability with a simple proof of concept.
However, it's not yet clear how this fits together with the currently open PR #1544 and RFC #1545; this needs to be fleshed out.
Questions: