On Mon, Dec 24, 2018 at 11:43:31AM +0800, Jason Wang wrote:
> On 2018/12/14 9:20 PM, Michael S. Tsirkin wrote:
> > On Fri, Dec 14, 2018 at 10:43:03AM +0800, Jason Wang wrote:
> > > On 2018/12/13 10:31 PM, Michael S. Tsirkin wrote:
> > > > > I wonder how much we can gain through this. Currently, the qemu IOMMU gives a
> > > > > GIOVA->GPA mapping, and the qemu vhost code translates GPA to HVA and then passes
> > > > > GIOVA->HVA to vhost. It looks no different to me.
> > > > >
> > > > > Thanks
> > > > The difference is in security, not in performance. Getting a bad HVA
> > > > corrupts QEMU memory, and it might be guest controlled. Very risky.
> > > How can this be controlled by the guest? The HVA was generated from qemu ram blocks,
> > > which are totally under the control of the qemu memory core, not the guest.
> > >
> > > Thanks
> > It is ultimately under guest influence, as the guest supplies the IOVA->GPA
> > translations. qemu translates GPA->HVA and gives the translated result
> > to the kernel. If qemu isn't buggy and the kernel isn't buggy, it's all
> > fine.
> >
> > But that's the approach that was proven not to work in the 20th century.
> > In the 21st century we are trying a defence-in-depth approach.
> >
> > My point is that a single code path that is responsible for
> > the HVA translations is better than two.
> Just to make sure I understand this. It looks to me we should:
>
> - allow passing GIOVA->GPA through the UAPI
>
> - cache GIOVA->GPA somewhere, but still use GIOVA->HVA in the device IOTLB for
>   performance
>
> Is this what you suggest?
>
> Thanks

Not really. We already have GPA->HVA, so I suggested a flag to pass
GIOVA->GPA in the IOTLB.

This has advantages for security, since a single table then needs
to be validated to ensure the guest does not corrupt QEMU memory.
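
To make the concern concrete, below is a minimal C sketch of the two
translation steps being discussed. It is not QEMU or vhost code;
viommu_translate(), gpa_to_hva() and all addresses are invented purely to
illustrate why a single, bounds-checked GPA->HVA table is the part that
has to be validated.

/* Sketch only: none of these names exist in QEMU or vhost. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t giova_t; /* guest I/O virtual address (what the device uses) */
typedef uint64_t gpa_t;   /* guest physical address                           */
typedef uint64_t hva_t;   /* host (QEMU process) virtual address              */

/* GIOVA->GPA: programmed by the guest through its vIOMMU, i.e. guest
 * controlled.  A single hard-coded mapping stands in for it here. */
static bool viommu_translate(giova_t iova, gpa_t *gpa)
{
        if (iova >= 0x1000 && iova < 0x2000) {
                *gpa = 0x40000000ULL + (iova - 0x1000);
                return true;
        }
        return false;
}

/* GPA->HVA: owned by qemu's memory core.  This is the single table that,
 * per the argument above, has to be validated: as long as every lookup is
 * bounds-checked here, a hostile GIOVA->GPA mapping can only yield
 * "no translation", never an HVA outside guest RAM. */
static bool gpa_to_hva(gpa_t gpa, hva_t *hva)
{
        static const struct { gpa_t gpa; uint64_t len; hva_t hva; } ram[] = {
                { 0x40000000ULL, 0x10000000ULL, 0x7f0000000000ULL }, /* fake RAM block */
        };
        for (unsigned i = 0; i < sizeof(ram) / sizeof(ram[0]); i++) {
                if (gpa >= ram[i].gpa && gpa - ram[i].gpa < ram[i].len) {
                        *hva = ram[i].hva + (gpa - ram[i].gpa);
                        return true;
                }
        }
        return false;
}

int main(void)
{
        /* Current scheme: qemu chains both steps and pushes GIOVA->HVA into
         * the device IOTLB, so the kernel trusts the HVA it was handed.
         * Suggested scheme: the IOTLB entry carries GIOVA->GPA instead, and
         * gpa_to_hva() is the one code path that turns it into an HVA. */
        gpa_t gpa;
        hva_t hva;
        giova_t iova = 0x1234;

        if (viommu_translate(iova, &gpa) && gpa_to_hva(gpa, &hva))
                printf("giova 0x%llx -> gpa 0x%llx -> hva 0x%llx\n",
                       (unsigned long long)iova, (unsigned long long)gpa,
                       (unsigned long long)hva);
        else
                printf("giova 0x%llx: no valid translation\n",
                       (unsigned long long)iova);
        return 0;
}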
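
Purely as an illustration of what "a flag to pass GIOVA->GPA in the IOTLB"
might look like on the UAPI side, here is a hypothetical message layout.
It is loosely modelled on an IOTLB update message, but the struct, field
and flag names below are invented and are not part of any existing UAPI.

/* Hypothetical only: not the existing vhost UAPI. */
#include <stdint.h>

#define EX_IOTLB_UPDATE        1
#define EX_IOTLB_INVALIDATE    2

#define EX_IOTLB_F_ADDR_IS_GPA (1u << 0) /* addr carries a GPA, not an HVA */

struct ex_iotlb_msg {
        uint64_t iova;  /* guest I/O virtual address                    */
        uint64_t size;  /* length of the mapping                        */
        uint64_t addr;  /* HVA today; a GPA when the flag above is set  */
        uint32_t flags; /* EX_IOTLB_F_* bits                            */
        uint8_t  perm;  /* read/write permissions                       */
        uint8_t  type;  /* EX_IOTLB_UPDATE / EX_IOTLB_INVALIDATE        */
};

With such a flag set, the kernel side would resolve addr through the
GPA->HVA memory table it already receives from userspace for the
non-IOMMU case, which is the "single code path" point made above.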