Re: [RFC PATCH 1/2] soc: rockchip: power-domain: Manage resource conflicts with firmware
From: Doug Anderson
Date: Wed Apr 06 2022 - 05:38:57 EST
On Tue, Apr 5, 2022 at 6:49 PM Brian Norris <briannorris@xxxxxxxxxxxx> wrote:
> On RK3399 platforms, power domains are managed mostly by the kernel
> (drivers/soc/rockchip/pm_domains.c), but there are a few exceptions
> where ARM Trusted Firmware has to be involved:
> (1) system suspend/resume
> (2) DRAM DVFS (a.k.a., "ddrfreq")
> Exception (1) does not cause much conflict, since the kernel has
> quiesced itself by the time we make the relevant PSCI call.
> Exception (2) can cause conflict, because of two actions:
> (a) ARM Trusted Firmware needs to read/modify/write the PMU_BUS_IDLE_REQ
> register to idle the memory controller domain; the kernel driver
> also has to touch this register for other domains.
> (b) ARM Trusted Firmware needs to manage the clocks associated with
> these domains.
> To elaborate on (b): idling a power domain has always required ungating
> an array of clocks; see this old explanation from Rockchip:
> Historically, ARM Trusted Firmware has avoided this issue by using a
> special PMU_CRU_GATEDIS_CON0 register -- this register ungates all the
> necessary clocks -- when idling the memory controller. Unfortunately,
> we've found that this register is not 100% sufficient; it does not turn
> the relevant PLLs on.
> So it's possible to trigger issues with something like the following:
> 1. enable a power domain (e.g., RK3399_PD_VDU) -- kernel will
> temporarily enable relevant clocks/PLLs, then turn them back off
> 2. a PLL (e.g., PLL_NPLL) is part of the clock tree for
> RK3399_PD_VDU's clocks but otherwise unused; NPLL is disabled
> 3. perform a ddrfreq transition (rk3399_dmcfreq_target() -> ...
> drivers/clk/rockchip/clk-ddr.c / ROCKCHIP_SIP_DRAM_FREQ)
> 4. ARM Trusted Firmware ungates VDU clocks (via PMU_CRU_GATEDIS_CON0)
> 5. ARM Trusted Firmware idles the memory controller domain
> 6. Step 5 waits on the VDU domain/clocks, but NPLL is still off
> i.e., we hang the system.
> So for (b), we need to at a minimum manage the relevant PLLs on behalf
> of firmware. It's easier to simply manage the whole clock tree, much
> as rockchip_pd_power() does.
> For (a), we need to provide mutual exclusion between rockchip_pd_power()
> and firmware. To resolve that, we simply grab the PMU mutex and release
> it when ddrfreq is done.
> The Chromium OS kernel has been carrying versions of part of this hack
> for a while, based on some new custom notifiers. I've rewritten it as a
> simple function call between the drivers, which is OK because either:
> * the PMU driver isn't enabled, and we don't have this problem at all
> (the firmware should have left us in an OK state, and there are no
> runtime conflicts); or
> * the PMU driver is present, and is a single instance.
> And the power-domain driver cannot be removed, so there's no lifetime
> management to worry about.
> For completeness, there's a 'dmc_pmu_mutex' to guard (likely
> theoretical?) probe()-time races. It's OK for the memory controller
> driver to start running before the PMU, because the PMU will avoid any
> critical actions during the block() sequence.
>  The RK3399 TRM's description of PMU_CRU_GATEDIS_CON0 only mentions ungating
> clocks. Based on experimentation, we've found that it does not power
> up the necessary PLLs.
>  CHROMIUM: soc: rockchip: power-domain: Add notifier to dmc driver
> Notably, the Chromium solution only handled conflict (a), not (b).
> In practice, item (b) wasn't a problem in many cases because we
> never managed to fully power off PLLs. Now that the (upstream) video
> decoder driver performs runtime clock management, we often power off
> these PLLs.
> Signed-off-by: Brian Norris <briannorris@xxxxxxxxxxxx>
> drivers/soc/rockchip/pm_domains.c | 118 ++++++++++++++++++++++++++++++
> include/soc/rockchip/pm_domains.h | 25 +++++++
> 2 files changed, 143 insertions(+)
I've already done several pre-review of a few versions of this, so at
this point I'm pretty happy with where things are. Feel free to add:
Reviewed-by: Douglas Anderson <dianders@xxxxxxxxxxxx>