Re: [PATCH 2/9] iommu/rockchip: Attach multiple power domains

From: Tomeu Vizoso
Date: Wed Sep 11 2024 - 07:03:48 EST


On Thu, Jun 13, 2024 at 11:38 PM Sebastian Reichel
<sebastian.reichel@xxxxxxxxxxxxx> wrote:
>
> Hi,
>
> On Thu, Jun 13, 2024 at 11:34:02AM GMT, Tomeu Vizoso wrote:
> > On Thu, Jun 13, 2024 at 11:24 AM Tomeu Vizoso <tomeu@xxxxxxxxxxxxxxx> wrote:
> > > On Thu, Jun 13, 2024 at 2:05 AM Sebastian Reichel
> > > <sebastian.reichel@xxxxxxxxxxxxx> wrote:
> > > > On Wed, Jun 12, 2024 at 03:52:55PM GMT, Tomeu Vizoso wrote:
> > > > > IOMMUs with multiple base addresses can also have multiple power
> > > > > domains.
> > > > >
> > > > > The base framework only takes care of a single power domain, as some
> > > > > devices need their power domains to be powered on in a specific
> > > > > order.
> > > > >
> > > > > Use a helper function to establish links in the order in which the
> > > > > power domains appear in the DT.
> > > > >
> > > > > This is needed by the IOMMU used by the NPU in the RK3588.
> > > > >
> > > > > Signed-off-by: Tomeu Vizoso <tomeu@xxxxxxxxxxxxxxx>
> > > > > ---
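
For context, a minimal sketch of the multi-domain attach pattern the
commit message describes could look like the following; the function
name and error handling are illustrative, not the actual patch
contents:

#include <linux/device.h>
#include <linux/of.h>
#include <linux/pm_domain.h>

static int rk_iommu_attach_pm_domains(struct device *dev)
{
	struct device *pd_dev;
	struct device_link *link;
	int i, num_domains;

	num_domains = of_count_phandle_with_args(dev->of_node,
						 "power-domains",
						 "#power-domain-cells");
	/* With zero or one domain the driver core already handles it */
	if (num_domains < 2)
		return 0;

	for (i = 0; i < num_domains; i++) {
		/* Attach in DT order so the power-up order is deterministic */
		pd_dev = dev_pm_domain_attach_by_id(dev, i);
		if (IS_ERR_OR_NULL(pd_dev))
			return pd_dev ? PTR_ERR(pd_dev) : -ENODEV;

		/* Let runtime PM of the IOMMU power the domain on and off */
		link = device_link_add(dev, pd_dev,
				       DL_FLAG_STATELESS |
				       DL_FLAG_PM_RUNTIME |
				       DL_FLAG_RPM_ACTIVE);
		if (!link)
			return -EINVAL;
	}

	return 0;
}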
> > > >
> > > > To me it looks like this is multiple IOMMUs, which should each get
> > > > their own node. I don't see a good reason for merging these
> > > > together.
> > >
> > > I have made quite a few attempts at splitting the IOMMUs and also the
> > > cores, but I wasn't able to get things working stably. The TRM is
> > > really scant on how the four IOMMU instances relate to each other, and
> > > on what the fourth one is for.
> > >
> > > Given that the vendor driver treats them as a single IOMMU with four
> > > instances and we don't have any information on them, I resigned myself
> > > to just having them as a single device.
> > >
> > > I would love to be proven wrong, though, and find a way of getting
> > > things working stably as different devices, so they can be powered on
> > > and off as needed. We could save quite some code as well.
> >
> > FWIW, here are a few ways in which I tried to structure the DT nodes;
> > none of these worked reliably:
> >
> > https://gitlab.freedesktop.org/tomeu/linux/-/blob/6.10-rocket-multiple-devices-power/arch/arm64/boot/dts/rockchip/rk3588s.dtsi?ref_type=heads#L1163
> > https://gitlab.freedesktop.org/tomeu/linux/-/blob/6.10-rocket-schema-subnodes//arch/arm64/boot/dts/rockchip/rk3588s.dtsi?ref_type=heads#L1162
> > https://gitlab.freedesktop.org/tomeu/linux/-/blob/6.10-rocket-multiple-devices//arch/arm64/boot/dts/rockchip/rk3588s.dtsi?ref_type=heads#L1163
> > https://gitlab.freedesktop.org/tomeu/linux/-/blob/6.10-rocket-multiple-iommus//arch/arm64/boot/dts/rockchip/rk3588s.dtsi?ref_type=heads#L2669
> >
> > I can very well imagine I missed some way of getting this to work, but
> > in every attempt, the domains, iommus and cores were resumed in
> > different orders, which presumably caused problems during concurrent
> > execution of workloads.
> >
> > So I fell back to what the vendor driver does, which works reliably
> > (but all cores have to be powered on at the same time).
>
> Mh. The "6.10-rocket-multiple-iommus" branch seems wrong. There is
> only one iommu node in that. I would have expected a test with
>
> rknn {
>         // combined device
>
>         iommus = <&iommu1>, <&iommu2>, ...;
> };

You are right, I'm afraid I lost those changes...

> Otherwise I think I would go with the schema-subnodes variant. The
> driver can initially walk through the sub-nodes and collect the
> resources into the main device, so on the driver side nothing would
> really change. But that has a few advantages:
>
> 1. DT and DT binding are easier to read
> 2. It's similar to e.g. CPU cores each having their own node
> 3. Easy to extend to more cores in the future
> 4. The kernel can easily switch to a proper per-core device model once
> the problem has been identified
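
For reference, the sub-node walk described above could look roughly
like this; the node layout and per-core bookkeeping are made up for
illustration:

#include <linux/clk.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_address.h>

static int rknn_collect_core_resources(struct device *dev)
{
	struct device_node *child;

	for_each_available_child_of_node(dev->of_node, child) {
		void __iomem *base;
		struct clk *clk;

		/* Resources live in the sub-node, not in dev->of_node */
		base = of_iomap(child, 0);
		if (!base) {
			of_node_put(child);
			return -ENOMEM;
		}

		clk = of_clk_get(child, 0);
		if (IS_ERR(clk)) {
			iounmap(base);
			of_node_put(child);
			return PTR_ERR(clk);
		}

		/* ... stash base and clk in a per-core slot here ... */
	}

	return 0;
}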

You mean having sub-nodes containing the different resources that a
core uses, such as clocks, memory regions, power domains, etc.? The
problem with that is that the existing kernel code assumes that those
resources are directly within the device's own node. Or do you suggest
something else?
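
To illustrate the mismatch: the raw OF helpers do accept an arbitrary
node, but the devm_* variants resolve against dev->of_node. A
hypothetical per-core clock lookup, for example:

#include <linux/clk.h>
#include <linux/of.h>

static struct clk *rknn_core_get_clk(struct device *dev,
				     struct device_node *core_np)
{
	/*
	 * devm_clk_get(dev, "core") would only search dev->of_node;
	 * the raw OF helper works on the sub-node, but the caller
	 * then has to clk_put() manually on teardown.
	 */
	return of_clk_get(core_np, 0);
}

And as far as I can see there is no node-based equivalent of
dev_pm_domain_attach_by_id() at all, since power domains attach to a
struct device, which is the sticking point for per-core power domains
here.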

Thanks,

Tomeu