I'm pretty sure most if not all of the original Xen backends do the
same. Given that I have tried to implement tagged ordering in qemu
I know that comes down to doing exactly the same draining we already
do in the kernel, just duplicated in the virtual disk backend. That
is for a userspace implementation - for a kernel implementation only
using block devices we could in theory implement it using barriers,
but that would be even more inefficient. And last time I looked
at the in-kernel xen disk backend it didn't do that either.
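
To make "the same draining, just duplicated in the backend" concrete, here is
a minimal sketch of what a userspace disk backend ends up doing for a guest
barrier. The structure and helper names are hypothetical, not taken from qemu
or any real Xen backend: wait for every outstanding write to complete, flush
the host cache, and only then complete the barrier.

/*
 * Illustrative only: a userspace backend has no way to express ordering
 * to the host kernel, so a guest barrier degenerates to "drain all
 * in-flight writes, then fdatasync()".
 */
#include <pthread.h>
#include <unistd.h>

struct disk_backend {
	int             fd;        /* host file or block device backing the guest disk */
	unsigned        inflight;  /* writes submitted but not yet completed */
	pthread_mutex_t lock;
	pthread_cond_t  idle;      /* signalled when inflight drops to zero */
};

/* Called when the guest submits a barrier request. */
static int handle_guest_barrier(struct disk_backend *be)
{
	/* 1. Drain: wait until every previously submitted write has completed. */
	pthread_mutex_lock(&be->lock);
	while (be->inflight > 0)
		pthread_cond_wait(&be->idle, &be->lock);
	pthread_mutex_unlock(&be->lock);

	/* 2. Flush the host's volatile cache so the drained writes are durable. */
	if (fdatasync(be->fd) < 0)
		return -1;

	/* 3. Only now may the barrier complete and later writes be issued. */
	return 0;
}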

blkback - the in-kernel backend - does generate barriers when it
receives one from the guest. Could you expand on why passing a
guest barrier through to the host IO stack would be bad for
performance? Isn't this exactly the same as a local writer
generating a barrier?

If you pass it on it has the same semantics, but given that you'll
usually end up having multiple guest disks on a single volume using
lvm or similar you'll end up draining even more I/O as there is one
queue for all of them. That way you can easily have one guest starve
others.
Note that we're going to get rid of the draining for common cases
anyway, but that's a separate discussion thread, the "relaxed barriers"
one.

It's true that a number of the Xen backends end up implementing
barriers via drain for simplicity's sake, but there's no inherent
reason why they couldn't implement a more complete tagged model.

If they are in Linux/Posix userspace they can't, because there are
no system calls to achieve that. And then again, there really is
no need to implement all this in the host anyway - the draining
is something we enforced on ourselves in Linux without good reason,
which we're trying to get rid of, and which no other OS ever did.

Now where both the old and the new one are buggy is that they don't
include the QUEUE_ORDERED_DO_PREFLUSH and
QUEUE_ORDERED_DO_POSTFLUSH/QUEUE_ORDERED_DO_FUA flags, which means any
explicit cache flush (aka empty barrier) is silently dropped, making
fsync and co not preserve data integrity.
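
For context, a rough sketch of how those constants are composed in the
pre-2.6.37 block layer follows. The bit values below are made up for
illustration, but the structure mirrors include/linux/blkdev.h of that era:
the plain DRAIN/TAG modes only order requests, and only the *_FLUSH/*_FUA
variants add the cache-flush steps that an empty barrier (fsync) relies on.

/*
 * Sketch, assuming the pre-2.6.37 barrier implementation; bit values are
 * illustrative, not the real blkdev.h ones.  A driver that registers
 * QUEUE_ORDERED_DRAIN or QUEUE_ORDERED_TAG gives the block layer nothing
 * to do for an empty barrier, so an explicit cache flush is silently
 * dropped when the backing store has a volatile write cache.
 */
enum {
	QUEUE_ORDERED_BY_DRAIN     = 0x01,  /* order by draining the queue */
	QUEUE_ORDERED_BY_TAG       = 0x02,  /* order by ordered tags */
	QUEUE_ORDERED_DO_BAR       = 0x10,  /* issue the barrier write itself */
	QUEUE_ORDERED_DO_PREFLUSH  = 0x20,  /* flush the cache before the barrier */
	QUEUE_ORDERED_DO_POSTFLUSH = 0x40,  /* flush the cache after the barrier */
	QUEUE_ORDERED_DO_FUA       = 0x80,  /* or make the barrier write itself FUA */

	/* plain modes: ordering only, no cache flushing */
	QUEUE_ORDERED_DRAIN = QUEUE_ORDERED_BY_DRAIN | QUEUE_ORDERED_DO_BAR,
	QUEUE_ORDERED_TAG   = QUEUE_ORDERED_BY_TAG   | QUEUE_ORDERED_DO_BAR,

	/* flush-aware modes: what an empty barrier needs on a write-back cache */
	QUEUE_ORDERED_DRAIN_FLUSH = QUEUE_ORDERED_DRAIN |
				    QUEUE_ORDERED_DO_PREFLUSH |
				    QUEUE_ORDERED_DO_POSTFLUSH,
	QUEUE_ORDERED_TAG_FLUSH   = QUEUE_ORDERED_TAG |
				    QUEUE_ORDERED_DO_PREFLUSH |
				    QUEUE_ORDERED_DO_POSTFLUSH,
	QUEUE_ORDERED_TAG_FUA     = QUEUE_ORDERED_TAG |
				    QUEUE_ORDERED_DO_PREFLUSH |
				    QUEUE_ORDERED_DO_FUA,
};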

Ah, OK, something specific. What level ends up dropping the empty
barrier? Certainly an empty WRITE_BARRIER operation to the backend
will cause all prior writes to be durable, which should be enough.
Are you saying that there's an extra flag we should be passing to
blk_queue_ordered(), or is there some other interface we should be
implementing for explicit flushes?
Is there a good reference implementation we can use as a model?

Just read Documentation/block/barriers.txt, it's very well described
there. Even the naming of the various ORDERED constants should
give enough hints.
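
As a concrete (and heavily hedged) reading of that pointer, this is roughly
what selecting a flush-capable ordered mode looks like from a blkfront-style
driver. blk_queue_ordered() changed arity across kernels of this era (some
versions also took a prepare_flush_fn argument; the two-argument form is
assumed here), and whether a tagged mode is even appropriate for the Xen
protocol is exactly what is being debated in this thread, so treat this as a
sketch of the interface rather than the fix.

#include <linux/blkdev.h>

/*
 * Sketch only.  QUEUE_ORDERED_TAG alone would merely order writes; the
 * _FLUSH composite adds the pre/post cache-flush steps, which is what
 * keeps empty barriers (explicit cache flushes) from being dropped.
 */
static int register_barrier_support(struct request_queue *q, bool have_barrier)
{
	if (!have_barrier)
		return blk_queue_ordered(q, QUEUE_ORDERED_NONE);

	return blk_queue_ordered(q, QUEUE_ORDERED_TAG_FLUSH);
}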

It's one of the many backends written to the protocol specification,
I don't think it's fair to call it irrelevant. And as mentioned before
I'd be very surprised if the other backends all get it right. If you
send me pointers to one or two backends you considered "relevant" I'm
happy to look at them.