Re: [PATCH v1 7/8] vfio/type1: Add VFIO_IOMMU_CACHE_INVALIDATE

From: Auger Eric
Date: Thu Apr 16 2020 - 08:43:14 EST


Hi Kevin,
On 4/16/20 2:09 PM, Tian, Kevin wrote:
>> From: Liu, Yi L <yi.l.liu@xxxxxxxxx>
>> Sent: Thursday, April 16, 2020 6:40 PM
>>
>> Hi Alex,
>> I still have a question on the overall direction. It would be better to
>> get your agreement before heading forward.
>>
>>> From: Alex Williamson <alex.williamson@xxxxxxxxxx>
>>> Sent: Friday, April 3, 2020 11:35 PM
>> [...]
>>>>>> + *
>>>>>> + * returns: 0 on success, -errno on failure.
>>>>>> + */
>>>>>> +struct vfio_iommu_type1_cache_invalidate {
>>>>>> +	__u32	argsz;
>>>>>> +	__u32	flags;
>>>>>> +	struct iommu_cache_invalidate_info cache_info;
>>>>>> +};
>>>>>> +#define VFIO_IOMMU_CACHE_INVALIDATE	_IO(VFIO_TYPE, VFIO_BASE + 24)
>>>>>
>>>>> The future extension capabilities of this ioctl worry me, I wonder if
>>>>> we should do another data[] with flag defining that data as CACHE_INFO.
>>>>
>>>> Can you elaborate? Does that mean we would no longer rely on the iommu
>>>> driver to provide the version_to_size conversion, and would instead just
>>>> pass data[] to the iommu driver for further auditing?
>>>
>>> No, my concern is that this ioctl has a single function, strictly tied
>>> to the iommu uapi. If we replace cache_info with data[] then we can
>>> define a flag to specify that data[] is struct
>>> iommu_cache_invalidate_info, and if we need to, a different flag to
>>> identify data[] as something else. For example if we get stuck
>>> expanding cache_info to meet new demands and develop a new uapi to
>>> solve that, how would we expand this ioctl to support it rather than
>>> also create a new ioctl? There's also a trade-off in making the ioctl
>>> usage more difficult for the user. I'd still expect the vfio layer to
>>> check the flag and interpret data[] as indicated by the flag rather
>>> than just passing a blob of opaque data to the iommu layer though.
>>> Thanks,
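
(For illustration only, a rough sketch of the flag-plus-data[] layout
described above; the flag name and exact layout are hypothetical, not an
agreed uapi:)

struct vfio_iommu_type1_cache_invalidate {
	__u32	argsz;
	/* hypothetical flag telling vfio how to interpret data[] */
#define VFIO_CACHE_INV_FLAG_CACHE_INFO	(1 << 0)
	__u32	flags;
	__u8	data[];	/* struct iommu_cache_invalidate_info when the
			   CACHE_INFO flag is set; possibly something
			   else under a future flag */
};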
>>
>> Based on your comments about defining a single ioctl and a unified
>> vfio structure (with a @data[] field) for pasid_alloc/free, bind/
>> unbind_gpasid, and cache_inv: after some offline experimenting, I think
>> that works well for bind/unbind_gpasid and cache_inv, since both of
>> them use the iommu uapi definitions, while the pasid alloc/free
>> operation does not. It would be odd to lump them all together, so pasid
>> alloc/free may need a separate ioctl. It would look as below. Does this
>> direction look good to you?
>>
>> ioctl #22: VFIO_IOMMU_PASID_REQUEST
>> /**
>>  * @pasid: used to return the pasid alloc result when flags == ALLOC_PASID
>>  *         specify a pasid to be freed when flags == FREE_PASID
>>  * @range: specify the allocation range when flags == ALLOC_PASID
>>  */
>> struct vfio_iommu_pasid_request {
>> 	__u32	argsz;
>> #define VFIO_IOMMU_ALLOC_PASID	(1 << 0)
>> #define VFIO_IOMMU_FREE_PASID	(1 << 1)
>> 	__u32	flags;
>> 	__u32	pasid;
>> 	struct {
>> 		__u32	min;
>> 		__u32	max;
>> 	} range;
>> };
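
(Purely as a sketch of how userspace might drive this proposed ioctl;
VFIO_IOMMU_PASID_REQUEST does not exist yet, and the container fd and
the PASID range below are made up for illustration:)

	struct vfio_iommu_pasid_request req = {
		.argsz = sizeof(req),
		.flags = VFIO_IOMMU_ALLOC_PASID,
		.range = { .min = 1, .max = 0xfffff },	/* e.g. 20-bit PASID space */
	};

	if (ioctl(container_fd, VFIO_IOMMU_PASID_REQUEST, &req))
		return -errno;
	/* req.pasid now holds the allocated PASID */

	/* ...and later, to free the same PASID... */
	req.flags = VFIO_IOMMU_FREE_PASID;
	ioctl(container_fd, VFIO_IOMMU_PASID_REQUEST, &req);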
>>
>> ioctl #23: VFIO_IOMMU_NESTING_OP
>> struct vfio_iommu_type1_nesting_op {
>> 	__u32	argsz;
>> 	__u32	flags;
>> 	__u32	op;
>> 	__u8	data[];
>> };
>>
>> /* Nesting Ops */
>> #define VFIO_IOMMU_NESTING_OP_BIND_PGTBL 0
>> #define VFIO_IOMMU_NESTING_OP_UNBIND_PGTBL 1
>> #define VFIO_IOMMU_NESTING_OP_CACHE_INVLD 2
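
(Again only a sketch, showing how the variable-sized data[] would carry
the iommu uapi struct for the CACHE_INVLD op; field and constant names
follow the iommu_cache_invalidate_info definition under discussion, and
error handling is omitted:)

	struct vfio_iommu_type1_nesting_op *nop;
	struct iommu_cache_invalidate_info inv = {
		.version     = IOMMU_CACHE_INVALIDATE_INFO_VERSION_1,
		.cache       = IOMMU_CACHE_INV_TYPE_IOTLB,
		.granularity = IOMMU_INV_GRANU_PASID,
	};
	size_t argsz = sizeof(*nop) + sizeof(inv);

	nop = calloc(1, argsz);
	nop->argsz = argsz;
	nop->op = VFIO_IOMMU_NESTING_OP_CACHE_INVLD;
	memcpy(nop->data, &inv, sizeof(inv));	/* data[] is the iommu uapi blob */

	ioctl(container_fd, VFIO_IOMMU_NESTING_OP, nop);
	free(nop);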
>>
>
> Then why can't we just put the PASID in the header, since the
> majority of nested usage is associated with a pasid?
>
> ioctl #23: VFIO_IOMMU_NESTING_OP
> struct vfio_iommu_type1_nesting_op {
> 	__u32	argsz;
> 	__u32	flags;
> 	__u32	op;
> 	__u32	pasid;
> 	__u8	data[];
> };
>
> In the case of SMMUv2, which supports nesting without a PASID, this
> field can simply be ignored.
On my side I would prefer keeping the pasid in the data[], since it is
not always used.

For instance, in iommu_cache_invalidate_info/iommu_inv_pasid_info we
devised flags to tell whether the PASID is used.
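
As a minimal sketch of that point, assuming the iommu_inv_pasid_info
layout from the current iommu uapi proposal, the PASID only becomes
meaningful when the corresponding flag is set, so the ioctl header
itself does not need to carry one:

	struct iommu_cache_invalidate_info inv = {
		.version     = IOMMU_CACHE_INVALIDATE_INFO_VERSION_1,
		.cache       = IOMMU_CACHE_INV_TYPE_IOTLB,
		.granularity = IOMMU_INV_GRANU_PASID,
	};

	/* the pasid field is only valid when this flag is set */
	inv.granu.pasid_info.flags = IOMMU_INV_PASID_FLAGS_PASID;
	inv.granu.pasid_info.pasid = pasid;

	/* a PASID-less invalidation (e.g. SMMUv2 nesting) would simply
	 * leave the flag clear and use IOMMU_INV_GRANU_DOMAIN instead */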

Thanks

Eric
>
> Thanks
> Kevin
>