On Thu, 19 Jan 2023 at 02:54, Alex Deucher <alexdeucher@xxxxxxxxx> wrote:
> On Wed, Jan 18, 2023 at 11:50 AM Danilo Krummrich <dakr@xxxxxxxxxx> wrote:
>> On 1/18/23 17:30, Alex Deucher wrote:
>>> On Wed, Jan 18, 2023 at 11:19 AM Danilo Krummrich <dakr@xxxxxxxxxx> wrote:
>>>> On 1/18/23 16:37, Christian König wrote:
>>>>> On 18.01.23 at 16:34, Danilo Krummrich wrote:
>>>>>> Hi Christian,
>>>>>>
>>>>>> On 1/18/23 09:53, Christian König wrote:
>>>>>>> On 18.01.23 at 07:12, Danilo Krummrich wrote:
>>>>>>>> This patch series provides a new UAPI for the Nouveau driver in
>>>>>>>> order to support Vulkan features, such as sparse bindings and
>>>>>>>> sparse residency. Furthermore, with the DRM GPUVA manager it
>>>>>>>> provides a new DRM core feature to keep track of GPU virtual
>>>>>>>> address (VA) mappings in a more generic way.
>>>>>>>>
>>>>>>>> The DRM GPUVA manager is intended to help drivers implement
>>>>>>>> userspace-manageable GPU VA spaces in reference to the Vulkan
>>>>>>>> API. In order to achieve this goal it serves the following
>>>>>>>> purposes in this context.
>>>>>>>>
>>>>>>>> 1) Provide a dedicated range allocator to track GPU VA
>>>>>>>>    allocations and mappings, making use of the drm_mm range
>>>>>>>>    allocator.
>>>>>>>
>>>>>>> This means that the ranges are allocated by the kernel? If yes
>>>>>>> that's a really really bad idea.
>>>>>>
>>>>>> No, it's just for keeping track of the ranges userspace has
>>>>>> allocated.
>>>>>
>>>>> Ok, that makes more sense.
>>>>>
>>>>> So basically you have an IOCTL which asks the kernel for a free
>>>>> range? Or what exactly is the drm_mm used for here?
>>>>
>>>> Not even that; userspace provides both the base address and the
>>>> range, the kernel really just keeps track of things. Though,
>>>> writing a UAPI on top of the GPUVA manager asking for a free range
>>>> instead would be possible by just adding the corresponding wrapper
>>>> functions to get a free hole.
>>>>
>>>> Currently, and that's what I think I read out of your question,
>>>> the main benefit of using drm_mm over simply stuffing the entries
>>>> into a list or something boils down to easier collision detection
>>>> and iterating sub-ranges of the whole VA space.
>>>
>>> Why not just do this in userspace? We have a range manager in
>>> libdrm_amdgpu that you could lift out into libdrm or some other
>>> helper.
>>
>> The kernel still needs to keep track of the mappings within the
>> various VA spaces, e.g. it silently needs to unmap mappings that are
>> backed by BOs that get evicted and remap them once they're validated
>> (or swapped back in).
>
> Ok, you are just using this for maintaining the GPU VM space in the
> kernel.

Yes, the idea behind having common code wrapping drm_mm for this is to
allow us to make the rules consistent across drivers.
Userspace (generally Vulkan, some compute) has interfaces that pretty
much dictate a lot of how VMA tracking works, especially around
lifetimes, sparse mappings and splitting/merging of the underlying page
tables. I'd really like this to be more consistent across drivers: I
think we've already seen some divergence between freedreno and amdgpu,
and we also have i915/xe to deal with. I'd like to have at least one
place where we can say how this should work, since this is something
that *should* be mostly consistent across drivers, as it is more about
how the uapi is exposed.
Dave.