Re: [PATCH v7 1/3] soc: qcom: ice: Add OPP-based clock scaling support for ICE
From: Harshal Dev
Date: Fri Apr 03 2026 - 13:24:32 EST
On 4/3/2026 7:47 PM, Abhinaba Rakshit wrote:
> On Mon, Mar 30, 2026 at 08:09:35PM +0530, Harshal Dev wrote:
>>> +/**
>>> + * qcom_ice_scale_clk() - Scale ICE clock for DVFS-aware operations
>>> + * @ice: ICE driver data
>>> + * @target_freq: requested frequency in Hz
>>> + * @round_ceil: when true, selects nearest freq >= @target_freq;
>>> + * otherwise, selects nearest freq <= @target_freq
>>> + *
>>> + * Selects an OPP frequency based on @target_freq and the rounding direction
>>> + * specified by @round_ceil, then programs it using dev_pm_opp_set_rate(),
>>> + * including any voltage or power-domain transitions handled by the OPP
>>> + * framework. Updates ice->core_clk_freq on success.
>>> + *
>>> + * Return: 0 on success; -EOPNOTSUPP if no OPP table; -EINVAL in case of
>>> + * incorrect flags; or an error from dev_pm_opp_set_rate()/OPP lookup.
>>> + */
>>> +int qcom_ice_scale_clk(struct qcom_ice *ice, unsigned long target_freq,
>>> + bool round_ceil)
>>
>> Any particular reason for choosing round_ceil? Using round_floor would have
>> saved the need for the caller to pass the negation of scale_up.
>
> There isn’t a strong technical reason for choosing round_ceil specifically.
> The choice was mainly influenced by the earlier discussion here:
> https://lore.kernel.org/all/15495f8a-37b0-4768-9ee1-05fd6c70034e@xxxxxxxxxxxxxxxx/
>
> Also, this helper isn’t necessarily limited to the current caller.
> We might see additional users in the future where the semantics align more
> naturally with flags like scale_down, which map cleanly to a round_ceil-style selection.
> That said, I agree that using round_floor could simplify the current callsite by
> avoiding the negation of scale_up.
>
> I don’t have a strong objection to switching it if you feel that would be
> cleaner for now.
>
No issues, you can choose to do it if you spin a v8 of this patch series.
>>> +{
>>> + unsigned long ice_freq = target_freq;
>>> + struct dev_pm_opp *opp;
>>> + int ret;
>>> +
>>> + if (!ice->has_opp)
>>> + return -EOPNOTSUPP;
>>> +
[...]
>>> +
>>> static struct qcom_ice *qcom_ice_create(struct device *dev,
>>> - void __iomem *base)
>>> + void __iomem *base,
>>> + bool is_legacy_binding)
>>
>> You don't need to introduce is_legacy_binding.
>>
>> Since you only need to add the OPP table when this function gets called from ICE probe,
>> you should not touch this function. Instead, you should call devm_pm_opp_of_add_table()
>> in ICE probe before calling qcom_ice_create(), then once qcom_ice_create() succeeds, you
>> can store the clk rate in the returned qcom_ice *engine ptr by calling clk_get_rate().
>
> This was added as part of the review comment from Krzysztof:
> https://lore.kernel.org/all/20260128-daft-seriema-of-promotion-c50eb5@quoll/
>
> While I agree moving this to qcom_ice_probe would be cleaner without needing
> to change the API, most of the driver's DT-parsing initialization happens
> through qcom_ice_create, which keeps qcom_ice_probe much simpler.
> Please let me know, if you think otherwise.
>
Seems like a suggestion from Krzysztof and not something based on strong opinion. Again,
you can choose to do this if you spin a v8, I feel it's cleaner.
> Also, I don't see any reason for moving the clk_get_rate() logic to qcom_ice_probe
> though, as it will not be set on legacy targets in that case.
I thought only new DT nodes would specify the OPP table, requiring us to store the
clk rate and restore it later. If legacy DT nodes also possess the OPP table, then ignore
this comment.
>
>>> {
>>> struct qcom_ice *engine;
>>> + int err;
>>>
>>> if (!qcom_scm_is_available())
>>> return ERR_PTR(-EPROBE_DEFER);
>>> @@ -584,6 +640,26 @@ static struct qcom_ice *qcom_ice_create(struct device *dev,
>>> if (IS_ERR(engine->core_clk))
>>> return ERR_CAST(engine->core_clk);
>>>
>>> + /*
>>> + * Register the OPP table only when ICE is described as a standalone
>>> + * device node. Older platforms place ICE inside the storage controller
>>> + * node, so they don't need an OPP table here, as they are handled in
>>> + * storage controller.
>>> + */
>>> + if (!is_legacy_binding) {
>>> + /* OPP table is optional */
>>> + err = devm_pm_opp_of_add_table(dev);
>>> + if (err && err != -ENODEV) {
>>> + dev_err(dev, "Invalid OPP table in Device tree\n");
>>> + return ERR_PTR(err);
>>> + }
>>> + engine->has_opp = (err == 0);
>>
>> Let's keep it readable and simple: engine->has_opp = true; here and false in the error handling above.
>
> Well, there are 3 cases to it:
>
> 1. err == 0, which implies devm_pm_opp_of_add_table was successful and we can set engine->has_opp = true.
> 2. err == -ENODEV, which implies there is no OPP table in the DT node.
> In that case, we don't fail the driver; we simply go ahead and log it in the check below.
> This is done since the OPP table is optional.
> 3. err == any other error code. Something very wrong happened with devm_pm_opp_of_add_table
> and the driver should fail.
>
> Hence, we have the condition (err == 0) for setting the has_opp flag.
My suggestion is that you either explain this in a concise comment or simplify the
assignment of has_opp to make it obvious.
Regards,
Harshal
>
> Abhinaba Rakshit