When a page in the TTM pool is being moved back and forth and also changes
the caching model, what happens on the free part? Is the original caching
state put back on it? Say I allocated a DMA32 page (GFP_DMA32), and moved it
to another pool for another radeon device. I also do some cache changes:
make it write-back, then un-cached, then write-back, and when I am done, I
return it back to the pool (DMA32). Once that is done I want to unload
the DRM/TTM driver. Does that page get its caching state reverted
back to what it originally had (probably un-cached)? And where is this done?

When ultimately being freed, all the pages are set to write-back again, as
that is the default for all allocated pages (see ttm_pages_put). ttm_put_pages
will add the page to the correct pool (uc or wc).

OK.
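For reference, that free path looks roughly like the sketch below (paraphrased
from drivers/gpu/drm/ttm/ttm_page_alloc.c, so details may differ by kernel
version); set_pages_array_wb() from <asm/cacheflush.h> is what flips the pages
back to write-back before they are handed back to the page allocator:

/* Sketch of the pool free path: restore the default (write-back)
 * caching attribute, then release the pages to the kernel. */
static void ttm_pages_put(struct page *pages[], unsigned npages)
{
	unsigned i;

	if (set_pages_array_wb(pages, npages))
		pr_err("Failed to set %u pages to wb!\n", npages);

	for (i = 0; i < npages; ++i)
		__free_page(pages[i]);
}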
.. snip ..
Thomas, Jerome, Dave,

How about a backend TTM alloc API? So the calls to 'ttm_get_page'
and 'ttm_put_page' would go through a TTM-alloc API to do the allocation.
The default one is the native one, and it would have those 'dma_alloc_coherent'
calls removed. When booting under a virtualized
environment, a virtualisation-"friendly" backend TTM alloc would
register itself and all calls to 'put/get/probe' would be diverted to it.
'probe' would obviously check whether it should use this backend or not.
It would mean two new files: drivers/gpu/drm/ttm/ttm-memory-xen.c and
a ttm-memory-generic.c and some header work.
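A rough sketch of what such a backend interface could look like (the struct
name, its members, and the registration call below are made up purely for
illustration; nothing of the sort exists yet):

/* Hypothetical backend interface for the TTM page allocator.
 * ttm-memory-generic.c and ttm-memory-xen.c would each provide one;
 * 'probe' decides whether the backend applies on the running system. */
struct ttm_page_alloc_backend {
	const char *name;
	bool (*probe)(void);
	int  (*get_pages)(struct page **pages, dma_addr_t *dma_address,
			  unsigned npages, gfp_t gfp_flags);
	void (*put_pages)(struct page **pages, dma_addr_t *dma_address,
			  unsigned npages);
};

/* Called at init time; the first backend whose probe() succeeds wins. */
int ttm_page_alloc_register_backend(const struct ttm_page_alloc_backend *be);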
It would still need to keep the 'dma_address[i]' around so that
those can be passed to the radeon/nouveau GTT, but for native it
could just contain BAD_DMA_ADDRESS - and the code in the radeon/nouveau
GTT binding is smart enough to figure out that it should do a
'pci_map_single' if the dma_addr_t is BAD_DMA_ADDRESS.
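In the GTT bind path that would look roughly like this (a sketch only:
BAD_DMA_ADDRESS is the sentinel proposed above, and gtt_set_page() stands in
for the driver-specific GTT write in radeon/nouveau):

/* If the allocator did not hand us a bus address (native case),
 * map the page ourselves; otherwise use the address as-is. */
if (dma_address[i] == BAD_DMA_ADDRESS) {
	dma_address[i] = pci_map_single(pdev, page_address(pages[i]),
					PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
	if (pci_dma_mapping_error(pdev, dma_address[i]))
		return -ENOMEM;
}
gtt_set_page(gtt_offset + i, dma_address[i]);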
The issue here is the caching question I asked about above. We
would need to reset the caching state back to the original one
before freeing the page. So does the TTM pool de-alloc code deal with this?
I can start this next week if you guys are comfortable with this idea.
Sounds doable. Though I don't understand why you want a virtualized
guest to be able to use hw directly. From my point of view all devices
in a virtualized guest should be virtualized devices that talk to the
host system driver.

Cheers,
Jerome

That "virtualized guest" in this case is the first Linux kernel that
is booted under a hypervisor. It serves as the "driver domain"
so that it can drive the network, storage, and graphics. To get the
graphics working right, the patchset that introduced using the PCI DMA API
in the TTM layer allows us to program the GTT with the real bus address
instead of the bus address of a bounce buffer. The first
set of patches has a great, lengthy explanation of this :-)
https://lkml.org/lkml/2010/12/6/516
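To make the difference concrete, a very rough sketch (variable names are
illustrative, not the actual radeon/nouveau code):

/* Without the patchset: the page is mapped at GTT-bind time, and under
 * Xen that mapping can land in a swiotlb bounce buffer, so the GTT
 * entry ends up pointing at the bounce buffer rather than the page. */
bus_addr = pci_map_page(pdev, page, 0, PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);

/* With the patchset: TTM allocates the page through the PCI DMA API and
 * passes the real bus address down, so the GTT entry points straight at
 * the page the GPU should DMA to. */
bus_addr = dma_address[i];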