On Wed 04-12-19 15:16:09, Thomas Hellström (VMware) wrote:
> On 12/4/19 2:52 PM, Michal Hocko wrote:
> > On Tue 03-12-19 11:48:53, Thomas Hellström (VMware) wrote:
> > > From: Thomas Hellstrom <thellstrom@xxxxxxxxxx>
> > >
> > > TTM graphics buffer objects may, transparently to user-space, move
> > > between IO and system memory. When that happens, all PTEs pointing to
> > > the old location are zapped before the move and then faulted in again
> > > if needed. When that happens, the page protection's caching-mode and
> > > encryption bits may change and be different from those of
> > > struct vm_area_struct::vm_page_prot.
> > >
> > > We were using an ugly hack to set the page protection correctly.
> > > Fix that and instead use vmf_insert_mixed_prot() and/or
> > > vmf_insert_pfn_prot().
> > >
> > > Also get the default page protection from
> > > struct vm_area_struct::vm_page_prot rather than using
> > > vm_get_page_prot(). This way we catch modifications done by the vm
> > > system for drivers that want write-notification.
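
As a minimal, hypothetical sketch of the fault-handler pattern the changelog
describes: start from vma->vm_page_prot (so any write-notify bits set by the
vm system are kept) and override only the caching mode to match the buffer
object's current placement. my_bo, my_bo_is_iomem() and my_bo_pfn() are
illustrative stand-ins, not TTM API; vmf_insert_pfn_prot() is the existing
helper the patch switches to.

#include <linux/mm.h>

struct my_bo;	/* hypothetical buffer object */

/* Hypothetical helpers; a real driver derives these from its BO state. */
static bool my_bo_is_iomem(struct my_bo *bo);
static unsigned long my_bo_pfn(struct my_bo *bo, unsigned long address);

static vm_fault_t my_bo_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct my_bo *bo = vma->vm_private_data;
	pgprot_t prot = vma->vm_page_prot;	/* keeps write-notify bits */

	/*
	 * The BO may have moved since its PTEs were zapped, so the
	 * required caching mode can differ from what vm_page_prot encodes.
	 */
	if (my_bo_is_iomem(bo))
		prot = pgprot_writecombine(prot);

	return vmf_insert_pfn_prot(vma, vmf->address,
				   my_bo_pfn(bo, vmf->address), prot);
}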
> > So essentially this shouldn't have any new side effect on functionality;
> > it is just making hacky/ugly code less so?
> Functionality is unchanged. The use of an on-stack vma copy was severely
> frowned upon in an earlier thread, which also points to another similar
> example using vmf_insert_pfn_prot():
>
> https://lore.kernel.org/lkml/20190905103541.4161-2-thomas_os@xxxxxxxxxxxx/
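
For context, the on-stack vma copy pattern that thread objected to looked
roughly like this (a sketch modeled on the old TTM fault path; the
surrounding fault context and error handling are elided):

	/* Old pattern: copy the vma on the stack just to carry a
	 * different vm_page_prot into the insertion helper. */
	struct vm_area_struct cvma = *vmf->vma;
	vm_fault_t ret;

	cvma.vm_page_prot = ttm_io_prot(bo->mem.placement,
					cvma.vm_page_prot);
	ret = vmf_insert_mixed(&cvma, address,
			       __pfn_to_pfn_t(pfn, PFN_DEV));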
> > In other words, what are the
> > consequences of having page protection inconsistent with the vma's?
> Over the years, it looks like the caching and encryption flags of
> vma::vm_page_prot have largely been removed from usage. From what I can
> tell, there are no places left that can affect TTM. We discussed
> __split_huge_pmd_locked() towards the end of that thread, but that doesn't
> affect TTM even with huge page-table entries.
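
As a hedged sketch of the encryption-bit case in particular (assuming the
same hypothetical fault context as above; pgprot_decrypted() is the real
x86 helper): with memory encryption such as SEV active, PTEs for device
(IO) memory must have the encryption bit cleared even when
vma->vm_page_prot has it set.

	pgprot_t prot = vma->vm_page_prot;

	if (bo_in_io_memory)			/* hypothetical predicate */
		prot = pgprot_decrypted(prot);	/* clear the encryption bit */

	ret = vmf_insert_pfn_prot(vma, vmf->address, pfn, prot);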

Please state all those details/assumptions you are operating on in the
changelog.