Re: [RFC PATCH v1 3/8] iommu/arm-smmu-v3-sva: Allocate new ASID from installed_smmus
From: Michael Shavit
Date: Mon Aug 21 2023 - 10:18:18 EST
On Mon, Aug 21, 2023 at 9:50 PM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
>
> On Mon, Aug 21, 2023 at 09:38:40PM +0800, Michael Shavit wrote:
> > On Mon, Aug 21, 2023 at 7:54 PM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
> > >
> > > On Mon, Aug 21, 2023 at 05:31:23PM +0800, Michael Shavit wrote:
> > > > On Fri, Aug 18, 2023 at 2:38 AM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
> > > > >
> > > > > On Fri, Aug 18, 2023 at 02:16:25AM +0800, Michael Shavit wrote:
> > > > > > Pick an ASID that is within the supported range of all SMMUs that the
> > > > > > domain is installed to.
> > > > > >
> > > > > > Signed-off-by: Michael Shavit <mshavit@xxxxxxxxxx>
> > > > > > ---
> > > > >
> > > > > This seems like a pretty niche scenario, maybe we should just keep a
> > > > > global for the max ASID?
> > > > >
> > > > > Otherwise we need code to change the ASID, even for non-SVA domains,
> > > > > when the domain is installed in different devices, if the current ASID
> > > > > is over the instance max.
> > > >
> > > > This RFC took the other easy way out for this problem by rejecting
> > > > attaching a domain if its currently assigned ASID/VMID
> > > > is out of range when attaching to a new SMMU. But I'm not sure
> > > > which of the two options is the right trade-off.
> > > > Especially if we move VMID to a global allocator (which I plan to add
> > > > for v2), setting a global maximum for VMID of 256 sounds small.
> > >
> > > IMHO the simplest and best thing is to make both vmid and asid
> > > local allocators. Then a lot of these problems disappear.
> >
> > Well, that does sound the most flexible, but IMO it's quite a lot more
> > complicated.
> >
> > I'll post a v2 RFC that removes the `iommu/arm-smmu-v3: Add list of
> > installed_smmus` patch and uses a flat master list in smmu_domain as
> > suggested by Robin, for comparison with the v1. But at a glance using a
> > local allocator would require:
>
> > 1. Keeping that patch so we can track the asid/vmid for a domain on a
> > per-SMMU-instance basis
>
> You'd have to store the cache tag in the per-master struct on that
> list and take it out of the domain struct.
>
> Ie the list of attached masters contains the per-master cache tag
> instead of a global cache tag.
>
> The only place you need the cache tag is when iterating over the list
> of masters, so it is OK.
>
> If the list of masters is sorted by smmu then the first master of each
> smmu can be used to perform the cache tag invalidation, then the rest
> of the list is the ATC invalidation.
>
> The looping code will be a bit ugly.
I suppose that could work, but I'm worried it will get messy,
especially once we consider how the PASID feature would interact.
With PASID, multiple domains can be attached to a single master, so
we won't be able to store a single cache tag/ASID for the currently
attached domain on the arm_smmu_master. It's still doable, however:
the cache tag could move into the struct mapping a domain to each
PASID/master pair, with your loop still using the first entry in the
list (until it reaches an entry belonging to a different SMMU) for
the invalidation.
> > 2. Keeping a map in the smmu struct so that arm_smmu_share_asid can
> > find any arm_smmu_installed_smmu that need to have their asid
> > updated
>
> Yes, the global xarray moves into the smmu
>
> > (on a loop over every smmu the domain in arm_smmu_mmu_notifier_get is
> > attached to, which just at a glance looks headache-inducing because of
> > SVA's piggybacking on the RID domain.)
>
> Not every smmu, just the one you are *currently* attaching to. We
> don't care if the *other* smmu's have different ASIDs, maybe they are
> not using BTM, or won't use SVA.
I mean because the domain in arm_smmu_mmu_notifier_get is the RID
domain (not the SVA domain, the same issue we discussed in the previous
thread), which can be attached to multiple SMMUs.