Re: [PATCH v5 7/9] iommufd: Associate fault object with iommufd_hw_pgtable

From: Baolu Lu
Date: Sun May 26 2024 - 23:19:45 EST


On 5/27/24 9:33 AM, Tian, Kevin wrote:
>> From: Jason Gunthorpe <jgg@xxxxxxxx>
>> Sent: Friday, May 24, 2024 10:25 PM
>>
>> On Mon, May 20, 2024 at 03:39:54AM +0000, Tian, Kevin wrote:
>>>> From: Baolu Lu <baolu.lu@xxxxxxxxxxxxxxx>
>>>> Sent: Monday, May 20, 2024 10:19 AM
>>>>
>>>> On 5/15/24 4:50 PM, Tian, Kevin wrote:
>>>>>> From: Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx>
>>>>>> Sent: Tuesday, April 30, 2024 10:57 PM
>>>>>>
>>>>>> @@ -308,6 +314,19 @@ int iommufd_hwpt_alloc(struct iommufd_ucmd *ucmd)
>>>>>>  		goto out_put_pt;
>>>>>>  	}
>>>>>>
>>>>>> +	if (cmd->flags & IOMMU_HWPT_FAULT_ID_VALID) {
>>>>>> +		struct iommufd_fault *fault;
>>>>>> +
>>>>>> +		fault = iommufd_get_fault(ucmd, cmd->fault_id);
>>>>>> +		if (IS_ERR(fault)) {
>>>>>> +			rc = PTR_ERR(fault);
>>>>>> +			goto out_hwpt;
>>>>>> +		}
>>>>>> +		hwpt->fault = fault;
>>>>>> +		hwpt->domain->iopf_handler = iommufd_fault_iopf_handler;
>>>>>> +		hwpt->domain->fault_data = hwpt;
>>>>>> +	}
>>>>>>
>>>>> this is nesting specific. why not move it to the nested_alloc()?

>>>> Nesting is currently a use case for userspace I/O page faults, but this
>>>> design should be general enough to support other scenarios as well.

>>> Do we allow user page table w/o nesting?

>>> What would be a scenario in which the user doesn't manage the
>>> page table but still wants to handle the I/O page fault? The fault
>>> should always be delivered to the owner managing the page table...

>> Userspace always manages the page table: either it updates the IOPTEs
>> directly in a nest, or it calls the iommufd map operations.
>>
>> Ideally the driver will allow PRI in the normal cases, although it will
>> probably never be used.


> But now it's done halfway.
>
> valid_flags in the normal case doesn't accept a fault ID, but we then
> handle the fault ID flag generically above.
>
> I'd like to see a consistent message throughout the path.

Okay, I see. I think the valid_flags logic is doing the right thing: it
indicates that userspace page fault handling on a paging hwpt is not
supported yet, but leaves room to add it in the future.
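
For reference, the gating I mean in iommufd_hwpt_paging_alloc() has
roughly the following shape (a sketch, not the literal patch code; the
flag names are the existing uAPI ones):

	/*
	 * A paging hwpt only accepts the flags below for now, so
	 * IOMMU_HWPT_FAULT_ID_VALID fails with -EOPNOTSUPP before the
	 * fault attach path above is ever reached.
	 */
	const u32 valid_flags = IOMMU_HWPT_ALLOC_NEST_PARENT |
				IOMMU_HWPT_ALLOC_DIRTY_TRACKING;

	if (flags & ~valid_flags)
		return ERR_PTR(-EOPNOTSUPP);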

I will post v6 of this series soon to address the obvious issues
identified during this v5 review cycle. Thanks for all the review
comments.
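
For completeness, the userspace flow this patch enables looks roughly
like the sketch below. The IOMMU_FAULT_QUEUE_ALLOC ioctl, the fault_id
field, and IOMMU_HWPT_FAULT_ID_VALID follow the uAPI proposed in this
series, so treat it as illustrative rather than final:

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/iommufd.h>

	/*
	 * Allocate a fault object, then hand its ID to the nested hwpt
	 * allocation. I/O page faults are later read from out_fault_fd.
	 */
	static int alloc_faultable_hwpt(int iommufd, __u32 dev_id,
					__u32 parent_id, void *driver_data,
					__u32 data_type, __u32 data_len)
	{
		struct iommu_fault_alloc fault_cmd = {
			.size = sizeof(fault_cmd),
		};
		struct iommu_hwpt_alloc hwpt_cmd = {
			.size = sizeof(hwpt_cmd),
			.flags = IOMMU_HWPT_FAULT_ID_VALID,
			.dev_id = dev_id,
			.pt_id = parent_id,
			.data_type = data_type,
			.data_len = data_len,
			.data_uptr = (__u64)(uintptr_t)driver_data,
		};

		/* Create the fault object first to get its ID */
		if (ioctl(iommufd, IOMMU_FAULT_QUEUE_ALLOC, &fault_cmd))
			return -1;

		hwpt_cmd.fault_id = fault_cmd.out_fault_id;
		if (ioctl(iommufd, IOMMU_HWPT_ALLOC, &hwpt_cmd))
			return -1;

		return (int)hwpt_cmd.out_hwpt_id;
	}

The returned hwpt can then be attached to the device as usual, with
fault messages read from fault_cmd.out_fault_fd and responses written
back on the same fd.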

Best regards,
baolu