On Tue, 2019-03-12 at 10:59 +0800, Jason Wang wrote:
> On 2019/3/12 2:14, David Miller wrote:
> > From: "Michael S. Tsirkin" <mst@xxxxxxxxxx>
> > Date: Mon, 11 Mar 2019 09:59:28 -0400
> > > On Mon, Mar 11, 2019 at 03:13:17PM +0800, Jason Wang wrote:
> > > > On 2019/3/8 10:12, Christoph Hellwig wrote:
> > > > > On Wed, Mar 06, 2019 at 02:18:07AM -0500, Jason Wang wrote:
> > > > > > This series tries to access virtqueue metadata through kernel
> > > > > > virtual addresses instead of the copy_user() friends, since those
> > > > > > have too much overhead: checks, spec barriers or even hardware
> > > > > > feature toggling. This is done by setting up a kernel address
> > > > > > through vmap() and registering an MMU notifier for invalidation.
> > > > > >
> > > > > > Tests show about a 24% improvement on TX PPS. TCP_STREAM doesn't
> > > > > > see an obvious improvement.
> > > > > How is this going to work for CPUs with virtually tagged caches?
> > > > Anything different that you are worried about?
> > > If caches have virtual tags, then the kernel and userspace views of
> > > memory might not be automatically in sync if they access memory
> > > through different virtual addresses. You need to do things like
> > > flush_cache_page, probably multiple times.
> > "flush_dcache_page()"
> I get this. Then I think the current set_bit_to_user() is suspicious; we
> probably miss a flush_dcache_page() there:
>
> static int set_bit_to_user(int nr, void __user *addr)
> {
>         unsigned long log = (unsigned long)addr;
>         struct page *page;
>         void *base;
>         int bit = nr + (log % PAGE_SIZE) * 8;
>         int r;
>
>         r = get_user_pages_fast(log, 1, 1, &page);
>         if (r < 0)
>                 return r;
>         BUG_ON(r != 1);
>         base = kmap_atomic(page);
>         set_bit(bit, base);
>         kunmap_atomic(base);

This sequence should be OK. get_user_pages() contains a flush which
clears the cache above the user virtual address, so on kmap, the page
is coherent at the new alias. On parisc at least, kunmap embodies a
flush_dcache_page() which pushes any changes in the cache above the
kernel virtual address back to main memory and makes it coherent again
for the user alias to pick it up.
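
Just to make that concrete, here is a minimal sketch of the
belt-and-braces variant, i.e. what you would write if you did not want
to rely on kunmap_atomic() doing the flush for you. It is illustration
only, not a patch; the name set_bit_to_user_flushed is made up for this
example, and the explicit flush_dcache_page() is the only change from
the helper quoted above.

/* Illustration only: like vhost's set_bit_to_user(), but with an
 * explicit flush of the kernel alias instead of relying on
 * kunmap_atomic() to do it.
 */
static int set_bit_to_user_flushed(int nr, void __user *addr)
{
        unsigned long log = (unsigned long)addr;
        struct page *page;
        void *base;
        int bit = nr + (log % PAGE_SIZE) * 8;
        int r;

        /* gup flushes the cache above the user virtual address */
        r = get_user_pages_fast(log, 1, 1, &page);
        if (r < 0)
                return r;
        BUG_ON(r != 1);

        base = kmap_atomic(page);
        set_bit(bit, base);
        kunmap_atomic(base);

        /*
         * Push the line dirtied through the kernel alias back to
         * memory so the user alias sees it.  Redundant on
         * architectures (parisc included) where the kunmap path
         * already flushes, but harmless.
         */
        flush_dcache_page(page);

        set_page_dirty_lock(page);
        put_page(page);
        return 0;
}

On architectures where the kunmap path already does that flush, the
extra flush_dcache_page() costs little; on ones where it does not, it
is what keeps the two aliases coherent.
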
James