I have a few additional questions regarding the bindings.
michael@xxxxxxxx wrote on Fri, 2 Sep 2022 00:18:37 +0200:
This is now the third attempt to fetch the MAC addresses from the VPD
for the Kontron sl28 boards. Previous discussions can be found here:
https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@xxxxxxxx/

NVMEM cells are typically added by board code or by the devicetree. But
as the cells get more complex, there is (valid) push back from the
devicetree maintainers against putting that handling in the devicetree.
Therefore, introduce NVMEM layouts. They operate on the NVMEM device and
can add cells at runtime. That way it is possible to add more complex
cells than is currently possible with the offset/length/bits description
in the device tree. For example, you can have post processing for
individual cells (think of endianness swapping, or Ethernet address
offset handling).

The imx-ocotp driver is the only user of the global post processing
hook, so convert it to NVMEM layouts and drop the global post processing
hook. Please note that this change is only compile-time tested.

You can also have cells which have no static offset, like the ones in a
u-boot environment. The last patches convert the current u-boot
environment driver to an NVMEM layout and lift the restriction that it
only works with MTD devices. But as this will change the required
compatible strings, it is marked as RFC for now. It also needs its
device tree schema updated, which is left out here. These two patches
are not expected to be applied, but rather to show another example of
how to use the layouts.

For now, the layouts are selected by a specific compatible string in the
device tree. E.g. the VPD on the Kontron sl28 does (within a SPI flash node):

  compatible = "kontron,sl28-vpd", "user-otp";

or, if you'd use the u-boot environment (within an MTD partition):

  compatible = "u-boot,env", "nvmem";
The "user-otp" (or "nvmem") will lead to a NVMEM device, the
"kontron,sl28-vpd" (or "u-boot,env") will then apply the specific layout
on top of the NVMEM device.

So if I understand correctly, there should be:
- one DT node defining the storage medium (EEPROM/MTD/whatever),
- another DT node defining the nvmem device with two compatibles: the
  "nvmem" (or "user-otp") one and the layout.

Is this correct? Actually I was a bit surprised because, generally
speaking, DT maintainers (rightfully) do not want to describe how we use
devices, and the nvmem abstraction looks like a Linux thing when it sits
on top of MTD devices for instance, so I just wanted to confirm this
point.

Then, as we have an nvmem device described in the DT, why not just
point at the nvmem device from the cell consumer, rather than still
having to define in the DT all the cells that the nvmem device will
produce?

Maybe an example to show what I mean. Here is the current way:

  nvmem_provider: nvmem-provider {
          properties;

          mycell: my_cell {
                  [properties;]
          };
  };

And we point to a cell with:

  nvmem-cells = <&mycell>;
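
For instance, filled in with a made-up Ethernet consumer node; the
driver would then grab the cell by its name:

  ethernet {
          nvmem-cells = <&mycell>;
          nvmem-cell-names = "mac-address";
  };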

But, as with the TLV tables, there are many cells that will be produced,
and the driver may anyway just ask for the cell by name (e.g. performing
a lookup of the "mac-address" cell name), so why bother describing all
the cells in the DT? We could instead have something like:

  nvmem-cells-providers = <&nvmem_provider>;
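
In other words (just a sketch of the idea, "nvmem-cells-providers" being
a hypothetical property and not an existing binding): the provider would
not carry any cell node at all, and the consumer would only reference
the provider, the "mac-address" lookup happening by name at runtime:

  nvmem_provider: nvmem-provider {
          properties;
  };

  ethernet {
          nvmem-cells-providers = <&nvmem_provider>;
  };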

What do you think?

Maybe for the mac addresses this is a bit limiting as, in practice, we
often have base mac addresses available and using:

  nvmem-cells = <&mycell INDEX>;

allows us to dynamically create many different mac addresses (see the
sketch below), but I wonder if the approach would be interesting for
other cell types. Just an open question.
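
To illustrate what I mean by the index: two ports deriving their
addresses from a single base MAC address cell (the labels are made up,
and it assumes the cell node sets #nvmem-cell-cells = <1> so that it can
take an argument):

  &port0 {
          nvmem-cells = <&base_mac_address 0>;
          nvmem-cell-names = "mac-address";
  };

  &port1 {
          nvmem-cells = <&base_mac_address 1>;
          nvmem-cell-names = "mac-address";
  };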