Do you mean the spi-rx-delay-us and spi-tx-delay-us properties? Those
are documented but don't seem to be used. There's also the delay_usecs
field of the spi_transfer structure, which may be what you're talking
about. delay_usecs is for inter-transfer delays within a message rather
than after the initial chip select assert (it can also be used to keep
chip select asserted for longer after the final transfer). Obviously
this is also something that shouldn't be configured in a driver-specific
fashion.
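
For instance, something like this (just a sketch, with invented
function and buffer names) gives a 10us gap between two transfers in a
message and holds chip select for an extra 10us after the last one:

#include <linux/spi/spi.h>

/* Two transfers in one message: delay_usecs on the first transfer is
 * an inter-transfer gap with chip select still asserted, and on the
 * final transfer it keeps chip select asserted a little longer before
 * the message completes. */
static int example_cmd_resp(struct spi_device *spi,
			    const u8 *cmd, size_t cmd_len,
			    u8 *resp, size_t resp_len)
{
	struct spi_transfer xfers[] = {
		{
			.tx_buf = cmd,
			.len = cmd_len,
			.delay_usecs = 10,
		}, {
			.rx_buf = resp,
			.len = resp_len,
			.delay_usecs = 10,
		},
	};

	return spi_sync_transfer(spi, xfers, ARRAY_SIZE(xfers));
}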
Hmm, OK, so you mean we shouldn't send these as controller_data, but
rather add new members to the spi_device struct?
+	if (of_property_read_u32(pdev->dev.of_node, "spi-max-frequency",
+				 &spi->max_speed_hz)) {
Why does this need to come from DT?
+	mas->cur_speed_hz = spi_slv->max_speed_hz;
Why can't you use clk_get_rate() instead? Or call clk_set_rate() with
the rate you want the master clk to run at and then divide that down
from there?
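
To sketch that second option (all names here -- example_spi, se_clk,
clk_div, EXAMPLE_MASTER_HZ -- are made up, not from this driver):

#include <linux/clk.h>
#include <linux/kernel.h>

#define EXAMPLE_MASTER_HZ	19200000	/* arbitrary fixed master rate */

struct example_spi {
	struct clk *se_clk;
	u32 clk_div;
	u32 cur_speed_hz;
};

/* Run the master clock at a fixed rate and derive a per-slave divider
 * from what the clock framework actually gave us. */
static int example_set_speed(struct example_spi *mas, u32 slv_hz)
{
	unsigned long master_hz;
	int ret;

	ret = clk_set_rate(mas->se_clk, EXAMPLE_MASTER_HZ);
	if (ret)
		return ret;

	master_hz = clk_get_rate(mas->se_clk);
	mas->clk_div = DIV_ROUND_UP(master_hz, slv_hz);

	return 0;
}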
> Not sure I follow, the intention is to run the controller clock based on
> the slave's max frequency.
That's good. The problem I see is that we have to specify the max
frequency in the controller/bus node, and also in the child/slave node.
It should only need to be specified in the slave node, so making the
cur_speed_hz equal the max_speed_hz is problematic. The current speed of
the master should be determined by calling clk_get_rate().
We don't require that the slaves all individually set a speed since
that gets a bit redundant; it should be enough to just use the default
the controller provides. A bigger problem with this is that the driver
will never see a transfer which doesn't explicitly have a speed set, as
the core will ensure something is set; open coding this logic in every
driver would obviously be tiresome.
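
So the driver can just trust the transfer, something like the sketch
below (example_spi and example_set_clk are invented names; the
guarantee itself is real, the core falls back to the device's
max_speed_hz when a transfer doesn't set a speed):

/* By the time transfer_one() runs the core has validated the message,
 * so xfer->speed_hz is always non-zero; no per-driver fallback is
 * needed. struct example_spi is as in the earlier sketch. */
static int example_transfer_one(struct spi_master *master,
				struct spi_device *spi,
				struct spi_transfer *xfer)
{
	struct example_spi *mas = spi_master_get_devdata(master);

	if (xfer->speed_hz != mas->cur_speed_hz) {
		int ret = example_set_speed(mas, xfer->speed_hz);

		if (ret)
			return ret;
		mas->cur_speed_hz = xfer->speed_hz;
	}

	/* ... program the FIFO and kick off the transfer here ... */

	return 1;	/* in flight; completed via spi_finalize_current_transfer() */
}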