On Mon, Aug 05, 2019 at 12:36:40PM +0800, Jason Wang wrote:
> On 2019/8/2 10:27 PM, Michael S. Tsirkin wrote:
> > On Fri, Aug 02, 2019 at 09:46:13AM -0300, Jason Gunthorpe wrote:
> > > On Fri, Aug 02, 2019 at 05:40:07PM +0800, Jason Wang wrote:
> > > > > This must be a proper barrier, like a spinlock, mutex, or
> > > > > synchronize_rcu.
> > > >
> > > > I started with synchronize_rcu(), but both you and Michael raised
> > > > some concerns.
> > >
> > > I've also idly wondered if calling synchronize_rcu() under the various
> > > mm locks is a deadlock situation.
> > >
> > > > Then I tried a spinlock and a mutex:
> > > >
> > > > 1) spinlock: adds lots of overhead on the datapath, which leads to 0
> > > > performance improvement.
> > >
> > > I think the topic here is correctness, not performance improvement.
> >
> > The topic is whether we should revert
> > commit 7f466032dc9 ("vhost: access vq metadata through kernel virtual address")
> > or keep it in. The only reason to keep it is performance.
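
As an aside, to make that trade-off concrete: a rough sketch of the two
schemes being compared (invented struct and function names, not the actual
vhost code) would look something like the following. Scheme A pays with a
lock on every datapath access; scheme B pays with synchronize_rcu() on the
invalidate side, which is exactly the call that worries us under the mm
locks.

#include <linux/rcupdate.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Sketch only; not the vhost data structures. */
struct meta_map {
	u16 *avail_idx;		/* kernel mapping of one metadata field, NULL once invalidated */
	spinlock_t lock;	/* used by scheme A only */
};

/*
 * Scheme A: a spinlock around every datapath access. The invalidate
 * callback takes the same lock and clears the pointer, so it never
 * waits long, but the datapath pays for the lock on every access;
 * that is where the "0 performance improvement" above comes from.
 */
static bool read_avail_locked(struct meta_map *m, u16 *val)
{
	bool ok = false;

	spin_lock(&m->lock);
	if (m->avail_idx) {
		*val = READ_ONCE(*m->avail_idx);
		ok = true;
	}
	spin_unlock(&m->lock);
	return ok;
}

/*
 * Scheme B: RCU. The datapath only pays rcu_read_lock()/unlock(),
 * which is nearly free, but the invalidate side has to call
 * synchronize_rcu() before the pages can go away, and doing that
 * while holding mm locks is the potential deadlock mentioned above.
 */
static bool read_avail_rcu(struct meta_map *m, u16 *val)
{
	u16 *p;
	bool ok = false;

	rcu_read_lock();
	p = READ_ONCE(m->avail_idx);	/* real code: __rcu pointer + rcu_dereference() */
	if (p) {
		*val = READ_ONCE(*p);
		ok = true;
	}
	rcu_read_unlock();
	return ok;
}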
> Maybe it's time to introduce the config option?

Depending on CONFIG_BROKEN? I'm not sure it's a good idea.
> > Now as long as all this code is disabled anyway, we can experiment a
> > bit.
> >
> > I personally feel we would be best served by having two code paths:
> >
> > - Access to VM memory directly mapped into kernel
> > - Access to userspace
> >
> > Having it all cleanly split will allow a bunch of optimizations; for
> > example, for years now we have planned to be able to process an incoming
> > short packet directly on the softirq path, or an outgoing one directly
> > within eventfd.
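
To illustrate the split (a sketch only, with invented names; not the actual
vhost accessors, and endianness handling is omitted), each metadata read
would pick one of the two paths:

#include <linux/types.h>
#include <linux/uaccess.h>

struct vq_meta {
	u16 *avail_idx_kva;		/* kernel mapping, NULL when not set up */
	u16 __user *avail_idx_uptr;	/* userspace address of the same field */
};

static inline int vq_get_avail_idx(struct vq_meta *m, u16 *idx)
{
	/* Path 1: VM memory directly mapped into the kernel. */
	if (likely(m->avail_idx_kva)) {
		*idx = READ_ONCE(*m->avail_idx_kva);
		return 0;
	}

	/* Path 2: ordinary access to userspace. */
	return get_user(*idx, m->avail_idx_uptr);
}

With the two paths cleanly separated, the direct one could also be called
from contexts where uaccess is awkward, which is what the softirq / eventfd
short-packet idea above would rely on.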
> It's not hard, considering we already have our own accessors. But the
> question is (as asked in another thread): do you want permanent GUP, or
> to still use MMU notifiers?
>
> Thanks

We want THP and NUMA to work. Both are important for performance.
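
To spell out why that matters for the GUP question: pages held by a
long-term GUP cannot be collapsed into huge pages or migrated between NUMA
nodes for as long as they are pinned, while the MMU notifier approach pins
nothing and simply tears the kernel mapping down when the mm changes. A
minimal sketch of the notifier side (invented names, hypothetical
meta_unmap_range() helper, error handling omitted):

#include <linux/kernel.h>
#include <linux/mmu_notifier.h>

struct meta_ctx {
	struct mmu_notifier mn;
	/* ... kernel mapping of the vq metadata lives here ... */
};

/* Hypothetical helper, not a kernel API: drop our kernel mapping of any
 * metadata that overlaps [start, end). */
static void meta_unmap_range(struct meta_ctx *ctx, unsigned long start,
			     unsigned long end)
{
	/* vunmap()/unpin of the affected pages would go here */
}

static int meta_invalidate_range_start(struct mmu_notifier *mn,
				       const struct mmu_notifier_range *range)
{
	struct meta_ctx *ctx = container_of(mn, struct meta_ctx, mn);

	/*
	 * The datapath falls back to uaccess until the mapping is
	 * re-established, and the pages stay free to be collapsed or
	 * migrated.
	 */
	meta_unmap_range(ctx, range->start, range->end);
	return 0;
}

static const struct mmu_notifier_ops meta_mn_ops = {
	.invalidate_range_start	= meta_invalidate_range_start,
};

Registration would presumably stay the way the metadata-acceleration code
already does it, via mmu_notifier_register() on the owning mm.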