IO space memcpy support for userspace.

From: Dave Airlie
Date: Thu Dec 04 2008 - 22:40:55 EST


Hi all (glibc + kernel folks).

So this isn't totally a kernel issue; however, I'm sure everyone who can
help is around here somewhere.

So in the kernel we have special memcpy_fromio/memcpy_toio functions
that are used to copy data into and out of PCI space;
however, when we expose PCI device memory to userspace, it has no way
to know whether the mapping it has been given
is suitable for optimised userspace copy operations or not.

So e.g. on certain IA64 platforms, doing a memcpy on an mmap'ed PCI
memory area can cause a hard lock.

Now I started to try and fix this in X but I'm wondering if this is
something glibc/kernel can solve between them.

I was first thinking about adding memcpy_io/memset_io/str*_io
variants to glibc and just having userspace use them.
However, this means code that operates on non-IO-space objects gets
penalised in those cases: e.g. a sw renderer rendering
to a SW surface vs the same sw renderer rendering to a surface in
video memory. The renderer really doesn't know where the
surface actually lives under the hood.

Thinking further about this, it would be nice if the standard
memcpy/memset/str* realised: hey, I'm working on a memory address or
VMA
that is an IO mapping, I really shouldn't do shiny prefetch stuff on
this. I'm not 100% sure how this could be implemented; some sort of
private
mmap return value or flag like MAP_NO_OPTIMISE (I realise this is
going the wrong way around, as the kernel should be telling glibc;
do we even have a channel for this info?).
Then when a memory op is done, it checks the source/destination
address to see whether that segment is allowed to be optimised or not.

I'm sure this has come up before and I'm sure I'll either wish I never
posted this or someone will show me the crisp corpse of the last guy
who suggested it.

Dave.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/