On Sun, Jul 30, 2023 at 09:57:06PM +0800, Zhu Yanjun wrote:
> On 2023/7/30 19:18, Matthew Wilcox wrote:
> > On Sun, Jul 30, 2023 at 07:01:26PM +0800, Zhu Yanjun wrote:
> > > Does the following function have a folio version?
> > > "
> > > int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
> > > 	struct page **pages, unsigned int n_pages, unsigned int offset,
> > > 	unsigned long size, unsigned int max_segment,
> > > 	unsigned int left_pages, gfp_t gfp_mask)
> > > "
> > No -- I haven't needed to convert anything that uses
> > sg_alloc_append_table_from_pages() yet.  It doesn't look like it should
> > be _too_ hard to add a folio version.
> This function is used in many places, so it needs a folio version.

It's not used in very many places.  But in the first place I see it used
(drivers/infiniband/core/umem.c), you can't do a straightforward folio
conversion:
	pinned = pin_user_pages_fast(cur_base,
				     min_t(unsigned long, npages,
					   PAGE_SIZE / sizeof(struct page *)),
				     gup_flags, page_list);
...
	ret = sg_alloc_append_table_from_pages(
		&umem->sgt_append, page_list, pinned, 0,
		pinned << PAGE_SHIFT, ib_dma_max_seg_size(device),
		npages, GFP_KERNEL);
That can't be converted to folios. The GUP might start in the middle of
the folio, and we have no way to communicate that.
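
To make the gap concrete, here's a rough sketch of what a naive folio
variant might look like -- the name and signature below are hypothetical,
purely for illustration:

	/* Hypothetical sketch only -- no such API exists today. */
	int sg_alloc_append_table_from_folios(struct sg_append_table *sgt_append,
			struct folio **folios, unsigned int n_folios,
			unsigned int offset, unsigned long size,
			unsigned int max_segment, unsigned int left_pages,
			gfp_t gfp_mask);

A single 'offset' can describe where the range starts within the first
folio, but pin_user_pages_fast() hands back struct page pointers that may
point anywhere inside a large folio, so there's no way to say "this
folio's pinned range starts at its third page" for each entry in the
array.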
This particular usage really needs the phyr work that Jason is doing so
we can efficiently communicate physically contiguous ranges from GUP
to sg.
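
For illustration only, the general idea is a type that carries a
physically contiguous range; the actual shape of the phyr work may well
differ, this is just an assumption about the concept:

	/* Illustrative only -- not the real phyr definition. */
	struct phyr {
		phys_addr_t	addr;	/* start of a physically contiguous range */
		size_t		len;	/* length of the range in bytes */
	};

GUP could then return one such range per contiguous run (start offset
and length included), and the sg layer could consume the ranges directly
instead of guessing where inside a folio the pinned region begins.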
> Another problem: after folios are used, I want to know the performance
> of the folio implementation.  How do I run tests to measure the
> performance?

You know what you're working on ... I wouldn't know how best to test
your code.