Re: [PATCH 3/5] vhost: support up to 509 memory regions

From: Paolo Bonzini
Date: Wed Jun 17 2015 - 12:47:35 EST

On 17/06/2015 18:41, Michael S. Tsirkin wrote:
> On Wed, Jun 17, 2015 at 06:38:25PM +0200, Paolo Bonzini wrote:
>> On 17/06/2015 18:34, Michael S. Tsirkin wrote:
>>> On Wed, Jun 17, 2015 at 06:31:32PM +0200, Paolo Bonzini wrote:
>>>> On 17/06/2015 18:30, Michael S. Tsirkin wrote:
>>>>> Meanwhile old tools are vulnerable to OOM attacks.
>>>> For each vhost device there will be likely one tap interface, and I
>>>> suspect that it takes way, way more than 16KB of memory.
>>> That's not true. We have a vhost device per queue, all queues
>>> are part of a single tap device.
>> s/tap/VCPU/ then. A KVM VCPU also takes more than 16KB of memory.
> That's up to you as a kvm maintainer :)

Not easy, when the CPU alone requires three (albeit non-consecutive)
pages for the VMCS, the APIC access page, and the EPT root.

> People are already concerned about vhost device
> memory usage, I'm not happy to define our user/kernel interface
> in a way that forces even more memory to be used up.

So, the questions to ask are:

1) What is the memory usage like immediately after vhost is brought up,
apart from these 16K?

2) Is there anything in vhost that allocates a user-controllable amount
of memory?

3) What is the size of the data structures that support one virtqueue
(there are two of them)? Does it depend on the size of the virtqueues?

4) Would it make sense to share memory regions between multiple vhost
devices? Would it be hard to implement? It would also make memory
operations O(1) rather than O(#cpus).
