Re: [PATCH v2 09/17] crypto: qce: core: Add support to initialize interconnect path

From: Bhupesh Sharma
Date: Tue May 18 2021 - 11:39:29 EST


Hi Bjorn,

Thanks for the review.

On Tue, 18 May 2021 at 20:37, Bjorn Andersson
<bjorn.andersson@xxxxxxxxxx> wrote:
>
> On Wed 05 May 16:37 CDT 2021, Bhupesh Sharma wrote:
>
> > From: Thara Gopinath <thara.gopinath@xxxxxxxxxx>
> >
> > Crypto engine on certain Snapdragon processors like sm8150, sm8250, sm8350
> > etc. requires interconnect path between the engine and memory to be
> > explicitly enabled and bandwidth set prior to any operations. Add support
> > in the qce core to enable the interconnect path appropriately.
> >
> > Cc: Bjorn Andersson <bjorn.andersson@xxxxxxxxxx>
> > Cc: Rob Herring <robh+dt@xxxxxxxxxx>
> > Cc: Andy Gross <agross@xxxxxxxxxx>
> > Cc: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
> > Cc: David S. Miller <davem@xxxxxxxxxxxxx>
> > Cc: Stephen Boyd <sboyd@xxxxxxxxxx>
> > Cc: Michael Turquette <mturquette@xxxxxxxxxxxx>
> > Cc: Vinod Koul <vkoul@xxxxxxxxxx>
> > Cc: dmaengine@xxxxxxxxxxxxxxx
> > Cc: linux-clk@xxxxxxxxxxxxxxx
> > Cc: linux-crypto@xxxxxxxxxxxxxxx
> > Cc: devicetree@xxxxxxxxxxxxxxx
> > Cc: linux-kernel@xxxxxxxxxxxxxxx
> > Cc: bhupesh.linux@xxxxxxxxx
> > Signed-off-by: Bhupesh Sharma <bhupesh.sharma@xxxxxxxxxx>
> > [Make header file inclusion alphabetical]
> > Signed-off-by: Thara Gopinath <thara.gopinath@xxxxxxxxxx>
>
> This says that you prepared the patch, then Thara picked up the patch
> and sorted the includes. But somehow you then sent the patch.
>
> I.e. your name should be the last - unless you jointly wrote the patch, in
> which case you should also add a "Co-developed-by: Thara".

No, it's the other way around. Thara prepared the patch (as the
'From:' field indicates) and I only changed the inclusion order of the
header files, sorting them alphabetically.

I will move my S-o-b to the end of the tag list in v3.

> > ---
> > drivers/crypto/qce/core.c | 35 ++++++++++++++++++++++++++++-------
> > drivers/crypto/qce/core.h | 1 +
> > 2 files changed, 29 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/crypto/qce/core.c b/drivers/crypto/qce/core.c
> > index 80b75085c265..92a0ff1d357e 100644
> > --- a/drivers/crypto/qce/core.c
> > +++ b/drivers/crypto/qce/core.c
> > @@ -5,6 +5,7 @@
> >
> > #include <linux/clk.h>
> > #include <linux/dma-mapping.h>
> > +#include <linux/interconnect.h>
> > #include <linux/interrupt.h>
> > #include <linux/module.h>
> > #include <linux/mod_devicetable.h>
> > @@ -21,6 +22,8 @@
> > #define QCE_MAJOR_VERSION5 0x05
> > #define QCE_QUEUE_LENGTH 1
> >
> > +#define QCE_DEFAULT_MEM_BANDWIDTH 393600
>
> Do we know what this rate is?

I think this corresponds to the arbitrated (average) and instantaneous
(peak) bandwidth, in kBps, that the qce crypto block votes for on the
memory path ['average/peak bandwidth' is probably the better
terminology :) ].

Maybe Thara can add more comments here.
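
For what it's worth, I could also add a comment next to the call in v3,
something like the below (the MB/s figure is only my rough conversion,
so please correct me if the value was derived differently):

	/*
	 * icc_set_bw() takes average and peak bandwidth in kBps, so
	 * 393600 kBps is a bit under 400 MB/s on the path between the
	 * crypto engine and memory.
	 */
	ret = icc_set_bw(qce->mem_path, QCE_DEFAULT_MEM_BANDWIDTH,
			 QCE_DEFAULT_MEM_BANDWIDTH);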

> > +
> > static const struct qce_algo_ops *qce_ops[] = {
> > #ifdef CONFIG_CRYPTO_DEV_QCE_SKCIPHER
> > &skcipher_ops,
> > @@ -202,21 +205,35 @@ static int qce_crypto_probe(struct platform_device *pdev)
> > if (ret < 0)
> > return ret;
> >
> > + qce->mem_path = of_icc_get(qce->dev, "memory");
>
> Using devm_of_icc_get() would save you some changes to the error path.

Ok, I can address this in v3.
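
Something along these lines is what I have in mind for v3 (untested
sketch, keeping the same "memory" path name):

	qce->mem_path = devm_of_icc_get(qce->dev, "memory");
	if (IS_ERR(qce->mem_path))
		return PTR_ERR(qce->mem_path);

With the devm-managed variant there is no icc_put() left to call
manually, which should simplify the error handling below.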

> > + if (IS_ERR(qce->mem_path))
> > + return PTR_ERR(qce->mem_path);
> > +
> > qce->core = devm_clk_get(qce->dev, "core");
> > - if (IS_ERR(qce->core))
> > - return PTR_ERR(qce->core);
> > + if (IS_ERR(qce->core)) {
> > + ret = PTR_ERR(qce->core);
> > + goto err_mem_path_put;
> > + }
> >
> > qce->iface = devm_clk_get(qce->dev, "iface");
> > - if (IS_ERR(qce->iface))
> > - return PTR_ERR(qce->iface);
> > + if (IS_ERR(qce->iface)) {
> > + ret = PTR_ERR(qce->iface);
> > + goto err_mem_path_put;
> > + }
> >
> > qce->bus = devm_clk_get(qce->dev, "bus");
> > - if (IS_ERR(qce->bus))
> > - return PTR_ERR(qce->bus);
> > + if (IS_ERR(qce->bus)) {
> > + ret = PTR_ERR(qce->bus);
> > + goto err_mem_path_put;
> > + }
> > +
> > + ret = icc_set_bw(qce->mem_path, QCE_DEFAULT_MEM_BANDWIDTH, QCE_DEFAULT_MEM_BANDWIDTH);
> > + if (ret)
> > + goto err_mem_path_put;
> >
> > ret = clk_prepare_enable(qce->core);
> > if (ret)
> > - return ret;
> > + goto err_mem_path_disable;
> >
> > ret = clk_prepare_enable(qce->iface);
> > if (ret)
> > @@ -256,6 +273,10 @@ static int qce_crypto_probe(struct platform_device *pdev)
> > clk_disable_unprepare(qce->iface);
> > err_clks_core:
> > clk_disable_unprepare(qce->core);
> > +err_mem_path_disable:
> > + icc_set_bw(qce->mem_path, 0, 0);
>
> When you icc_put() (or devm_of_icc_get() does it for you) the path's
> votes are implicitly set to 0, so you don't need to do this.
>
> And as such, you don't need to change the error path at all.

Ok, got it. Will change v3 accordingly.
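
So with devm_of_icc_get() the tail of probe can go back to the original
labels, roughly (sketch, assuming the devm change above):

err_clks_core:
	clk_disable_unprepare(qce->core);
	return ret;

i.e. the err_mem_path_disable/err_mem_path_put labels and the explicit
icc_set_bw(qce->mem_path, 0, 0) / icc_put() calls go away, since the
managed put will drop the vote for us.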

Thanks,
Bhupesh

> > +err_mem_path_put:
> > + icc_put(qce->mem_path);
> > return ret;
> > }
> >
> > diff --git a/drivers/crypto/qce/core.h b/drivers/crypto/qce/core.h
> > index 085774cdf641..228fcd69ec51 100644
> > --- a/drivers/crypto/qce/core.h
> > +++ b/drivers/crypto/qce/core.h
> > @@ -35,6 +35,7 @@ struct qce_device {
> > void __iomem *base;
> > struct device *dev;
> > struct clk *core, *iface, *bus;
> > + struct icc_path *mem_path;
> > struct qce_dma_data dma;
> > int burst_size;
> > unsigned int pipe_pair_id;
> > --
> > 2.30.2
> >