On Fri, Feb 5, 2021 at 12:47 AM Christian König
<christian.koenig@xxxxxxx> wrote:
> Am 05.02.21 um 09:06 schrieb John Stultz:
>> diff --git a/drivers/gpu/drm/page_pool.c b/drivers/gpu/drm/page_pool.c
>> new file mode 100644
>> index 000000000000..2139f86e6ca7
>> --- /dev/null
>> +++ b/drivers/gpu/drm/page_pool.c
>> @@ -0,0 +1,220 @@
>> +// SPDX-License-Identifier: GPL-2.0
>
> Please use a BSD/MIT compatible license if you want to copy this from
> the TTM code.

Hrm. This may be problematic, as it's not just TTM code, but some of
the TTM logic integrated into a page-pool implementation I wrote based
on logic from the ION code (which was GPL-2.0 before it was dropped).
So I don't think I can just make it MIT. Any extra context on the
need for MIT, or suggestions on how to best resolve this?

>> +int drm_page_pool_get_size(struct drm_page_pool *pool)
>> +{
>> +	int ret;
>> +
>> +	spin_lock(&pool->lock);
>> +	ret = pool->count;
>> +	spin_unlock(&pool->lock);
>
> Maybe use an atomic for the count instead?

I can do that, but am curious as to the benefit? We are mostly using
count where we already have to take the pool->lock anyway, and this
drm_page_pool_get_size() is only used for debugfs output so far, so I
don't expect it to be a hot path.
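
If it does turn out to be worthwhile, the conversion looks pretty
small. A rough sketch (untested, assuming we simply switch the count
field over to an atomic_t):

int drm_page_pool_get_size(struct drm_page_pool *pool)
{
	return atomic_read(&pool->count);
}

with drm_page_pool_add()/drm_page_pool_remove() doing
atomic_inc()/atomic_dec() on pool->count instead of count++/count--.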

>> +void drm_page_pool_add(struct drm_page_pool *pool, struct page *page)
>> +{
>> +	spin_lock(&pool->lock);
>> +	list_add_tail(&page->lru, &pool->items);
>> +	pool->count++;
>> +	atomic_long_add(1 << pool->order, &total_pages);
>> +	spin_unlock(&pool->lock);
>> +
>> +	mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
>> +			    1 << pool->order);
>
> Hui what? What should that be good for?

This is a carryover from the ION page pool implementation:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/staging/android/ion/ion_page_pool.c?h=v5.10#n28
My sense is it helps with the vmstat/meminfo accounting so folks can
see the cached pages are shrinkable/freeable. This maybe falls under
other dmabuf accounting/stats discussions, so I'm happy to remove it
for now, or let the drivers using the shared page pool logic handle
the accounting themselves?
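
If we push it down into the drivers, each user of the shared pool
would just do the adjustment around its own add/remove calls, roughly
like this (hypothetical driver-side helper, names made up):

static void my_driver_account_pooled(struct page *page,
				     unsigned int order, bool added)
{
	mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
			    added ? (1 << order) : -(1 << order));
}

so the core pool code wouldn't need to know about
NR_KERNEL_MISC_RECLAIMABLE at all.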

>> +static struct page *drm_page_pool_remove(struct drm_page_pool *pool)
>> +{
>> +	struct page *page;
>> +
>> +	if (!pool->count)
>> +		return NULL;
>
> Better use list_first_entry_or_null instead of checking the count.
>
> This way you can also pull the lock into the function.

Yea, that cleans a number of things up nicely. Thank you!
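
Just to make sure I'm reading you right, I'm thinking of something
along these lines (untested sketch):

static struct page *drm_page_pool_remove(struct drm_page_pool *pool)
{
	struct page *page;

	spin_lock(&pool->lock);
	page = list_first_entry_or_null(&pool->items, struct page, lru);
	if (page) {
		list_del(&page->lru);
		pool->count--;
		atomic_long_sub(1 << pool->order, &total_pages);
	}
	spin_unlock(&pool->lock);

	return page;
}

so the callers no longer need to take pool->lock themselves.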

>> +struct drm_page_pool *drm_page_pool_create(unsigned int order,
>> +					    int (*free_page)(struct page *p, unsigned int order))
>> +{
>> +	struct drm_page_pool *pool = kmalloc(sizeof(*pool), GFP_KERNEL);
>
> Why not making this an embedded object? We should not see much dynamic
> pool creation.

Yea, it felt cleaner at the time this way, but I think I will need to
switch to an embedded object in order to resolve the memory usage
issue you pointed out with growing the ttm_pool_dma, so thank you for
the suggestion!
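
The create/destroy pair would then become an init-style interface on
a caller-provided structure, along the lines of (rough sketch, the
callback field name is just a guess):

void drm_page_pool_init(struct drm_page_pool *pool, unsigned int order,
			int (*free_page)(struct page *p, unsigned int order))
{
	INIT_LIST_HEAD(&pool->items);
	pool->count = 0;
	pool->order = order;
	pool->free = free_page;	/* guessed field name for the callback */
	spin_lock_init(&pool->lock);

	mutex_lock(&pool_list_lock);
	list_add(&pool->list, &pool_list);
	mutex_unlock(&pool_list_lock);
}

so ttm_pool_dma could embed the pool rather than allocating it
separately.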

>> +void drm_page_pool_destroy(struct drm_page_pool *pool)
>> +{
>> +	struct page *page;
>> +
>> +	/* Remove us from the pool list */
>> +	mutex_lock(&pool_list_lock);
>> +	list_del(&pool->list);
>> +	mutex_unlock(&pool_list_lock);
>> +
>> +	/* Free any remaining pages in the pool */
>> +	spin_lock(&pool->lock);
>
> Locking should be unnecessary when the pool is destroyed anyway.

I guess if we've already pruned ourselves from the pool list, then you're
right, we can't race with the shrinker and it's maybe not necessary.
But it also seems easier to consistently follow the locking rules in a
very unlikely path rather than leaning on subtlety. Either way, I
think this becomes moot if I make the improvements you suggest to
drm_page_pool_remove().
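
With the reworked drm_page_pool_remove() taking the lock itself, the
destroy path could probably just drain through it, something like
(sketch):

	/* Free any remaining pages in the pool */
	while ((page = drm_page_pool_remove(pool)))
		drm_page_pool_free_pages(pool, page);

and the explicit spin_lock/spin_unlock in drm_page_pool_destroy()
goes away.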

>> +static int drm_page_pool_shrink_one(void)
>> +{
>> +	struct drm_page_pool *pool;
>> +	struct page *page;
>> +	int nr_freed = 0;
>> +
>> +	mutex_lock(&pool_list_lock);
>> +	pool = list_first_entry(&pool_list, typeof(*pool), list);
>> +
>> +	spin_lock(&pool->lock);
>> +	page = drm_page_pool_remove(pool);
>> +	spin_unlock(&pool->lock);
>> +
>> +	if (page)
>> +		nr_freed = drm_page_pool_free_pages(pool, page);
>> +
>> +	list_move_tail(&pool->list, &pool_list);
>
> Better to move this up, directly after the list_first_entry().

Sounds good!
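
So roughly this (untested sketch, assuming drm_page_pool_remove() now
takes the pool lock itself per the earlier change):

static int drm_page_pool_shrink_one(void)
{
	struct drm_page_pool *pool;
	struct page *page;
	int nr_freed = 0;

	mutex_lock(&pool_list_lock);
	pool = list_first_entry(&pool_list, typeof(*pool), list);
	/* rotate the pool to the tail right away, as suggested */
	list_move_tail(&pool->list, &pool_list);

	page = drm_page_pool_remove(pool);
	if (page)
		nr_freed = drm_page_pool_free_pages(pool, page);
	mutex_unlock(&pool_list_lock);

	return nr_freed;
}
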
Thanks so much for your review and feedback! I'll try to get some of
the easy suggestions integrated, and will have to figure out what to
do about the re-licensing request.
thanks
-john