Re: [PATCH v4 28/37] memory: tegra: Register as interconnect provider
From: Dmitry Osipenko
Date: Wed Jul 01 2020 - 19:36:46 EST
01.07.2020 20:12, Georgi Djakov wrote:
> Hi Dmitry,
>
> Thank you for updating the patches!
Hello, Georgi!
Thank you for the review!
> On 6/9/20 16:13, Dmitry Osipenko wrote:
>> Now the memory controller is a memory interconnect provider. This allows
>> us to use the interconnect API in order to change the memory configuration.
>>
>> Signed-off-by: Dmitry Osipenko <digetx@xxxxxxxxx>
>> ---
>> drivers/memory/tegra/Kconfig | 1 +
>> drivers/memory/tegra/mc.c | 114 +++++++++++++++++++++++++++++++++++
>> drivers/memory/tegra/mc.h | 8 +++
>> include/soc/tegra/mc.h | 3 +
>> 4 files changed, 126 insertions(+)
>>
>> diff --git a/drivers/memory/tegra/Kconfig b/drivers/memory/tegra/Kconfig
>> index 5bf75b316a2f..7055fdef2c32 100644
>> --- a/drivers/memory/tegra/Kconfig
>> +++ b/drivers/memory/tegra/Kconfig
>> @@ -3,6 +3,7 @@ config TEGRA_MC
>> bool "NVIDIA Tegra Memory Controller support"
>> default y
>> depends on ARCH_TEGRA
>> + select INTERCONNECT
>> help
>> This driver supports the Memory Controller (MC) hardware found on
>> NVIDIA Tegra SoCs.
>> diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
>> index 772aa021b5f6..7ef7ac9e103e 100644
>> --- a/drivers/memory/tegra/mc.c
>> +++ b/drivers/memory/tegra/mc.c
>> @@ -594,6 +594,118 @@ static __maybe_unused irqreturn_t tegra20_mc_irq(int irq, void *data)
>> return IRQ_HANDLED;
>> }
>>
>> +static int tegra_mc_icc_set(struct icc_node *src, struct icc_node *dst)
>> +{
>> + return 0;
>> +}
>> +
>> +static int tegra_mc_icc_aggregate(struct icc_node *node,
>> + u32 tag, u32 avg_bw, u32 peak_bw,
>> + u32 *agg_avg, u32 *agg_peak)
>> +{
>> + *agg_avg = min((u64)avg_bw + (*agg_avg), (u64)U32_MAX);
>> + *agg_peak = max(*agg_peak, peak_bw);
>> +
>> + return 0;
>> +}
>> +
>> +/*
>> + * The Memory Controller (MC) has a number of Memory Clients that issue
>> + * memory bandwidth allocation requests to the MC interconnect provider.
>> + * The MC provider aggregates the requests and then sends the aggregated
>> + * request up to the External Memory Controller (EMC) interconnect
>> + * provider, which re-configures the hardware interface to the External
>> + * Memory (EMEM) in accordance with the required bandwidth. Each MC
>> + * interconnect node represents an individual Memory Client.
>> + *
>> + * Memory interconnect topology:
>> + *
>> + * +----+
>> + * +--------+ | |
>> + * | TEXSRD +--->+ |
>> + * +--------+ | |
>> + * | | +-----+ +------+
>> + * ... | MC +--->+ EMC +--->+ EMEM |
>> + * | | +-----+ +------+
>> + * +--------+ | |
>> + * | DISP.. +--->+ |
>> + * +--------+ | |
>> + * +----+
>> + */
>> +static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>> +{
>> + struct icc_onecell_data *data;
>> + struct icc_node *node;
>> + unsigned int num_nodes;
>> + unsigned int i;
>> + int err;
>> +
>> + /* older device-trees don't have interconnect properties */
>> + if (!of_find_property(mc->dev->of_node, "#interconnect-cells", NULL))
>> + return 0;
>> +
>> + num_nodes = mc->soc->num_clients;
>> +
>> + data = devm_kzalloc(mc->dev, struct_size(data, nodes, num_nodes),
>> + GFP_KERNEL);
>> + if (!data)
>> + return -ENOMEM;
>> +
>> + mc->provider.dev = mc->dev;
>> + mc->provider.set = tegra_mc_icc_set;
>
> Hmm, maybe the core should not require a set() implementation and we can
> just make it optional instead. Then the dummy function would not be needed.
Eventually this dummy function might be populated with memory latency
allowance (LA) programming. I could add a comment to that function in
the next version, saying that it is to-be-done for now.
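Something along these lines, for example (just a sketch; the LA
programming itself is TBD):

static int tegra_mc_icc_set(struct icc_node *src, struct icc_node *dst)
{
	/*
	 * TODO: Program the memory latency allowance (LA) of the Memory
	 * Client represented by the @src node once LA scaling support
	 * is implemented. Nothing to do for now.
	 */
	return 0;
}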
>> + mc->provider.data = data;
>> + mc->provider.xlate = of_icc_xlate_onecell;
>> + mc->provider.aggregate = tegra_mc_icc_aggregate;
>> +
>> + err = icc_provider_add(&mc->provider);
>> + if (err)
>> + goto err_msg;
>
> Nit: I am planning to re-organize some of the existing drivers to call
> icc_provider_add() after the topology is populated. Could you please move
> this after the nodes are created and linked?
Are you planning to remove the provider's list-head initialization from
icc_provider_add() [1] and move it into the individual provider
drivers?
[1]
https://elixir.bootlin.com/linux/v5.8-rc3/source/drivers/interconnect/core.c#L977
If yes, then it should be easy to move icc_provider_add() in the case
of this driver. Otherwise, please give some more clarification.
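For example, assuming the core stops initializing the list-head, I'd
expect the driver side to look roughly like this (a sketch only, error
unwinding elided):

	/* initialize what icc_provider_add() used to initialize */
	INIT_LIST_HEAD(&mc->provider.nodes);

	/* ... create the nodes and link them to the provider ... */

	/* register the provider once the topology is populated */
	err = icc_provider_add(&mc->provider);
	if (err)
		goto err_msg;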
Please note that the same "node" variable is used here both for
creating the nodes and for linking them to the provider, which keeps
the code nice and clean. Hence the provider's list-head needs to be
initialized before the linking happens.
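To illustrate what I mean (a simplified sketch, field names are
approximate):

	for (i = 0; i < num_nodes; i++) {
		/* "node" first holds the newly created Memory Client node... */
		node = icc_node_create(mc->soc->clients[i].id);
		if (IS_ERR(node))
			return PTR_ERR(node); /* unwinding elided in this sketch */

		node->name = mc->soc->clients[i].name;

		/*
		 * ...and is then reused to add that same node to the
		 * mc->provider.nodes list, which is why the list-head
		 * must be initialized before this point.
		 */
		icc_node_add(node, &mc->provider);

		data->nodes[i] = node;
	}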
...
> The rest looks good to me!
Thanks!