On Mon, Jan 09, 2023 at 12:10:30PM -0800, William Zhang wrote:

> Thanks for the explanation. I saw spi-uniphier.c and spi-bcm2835.c
> doing the trick you mentioned (thanks, Kursad, for pointing it out).
> In our case, even at the maximum FIFO usage (512 bytes), polling
> still performs better than interrupts. The MTD test result included
> in this patch is based on maximum FIFO usage, so there is no benefit
> in switching to interrupts based on transfer size.
> On 01/09/2023 11:06 AM, Mark Brown wrote:

> > You can put whatever logic is needed in the code - for something
> > like this an architecture-based define isn't ideal but is probably
> > good enough if need be (though I'd not be surprised if it turned
> > out that there was some performance benefit for the MIPS systems
> > too, at least for smaller transfers).
> I just don't know what other logic I can put in the driver to select
> interrupt or polling mode. Only the end user knows whether
> performance or CPU usage is more important to their application.
Usually you can take a reasonable guess at a good point to start
switching: for short enough transfers the overhead of setting up DMA,
waiting for interrupts and tearing things down is very much larger
than the cost of just doing PIO.  A bunch of other drivers have
pick-a-number logic of some kind, and things like FIFO sizes are often
a good key for where to look.  A lot of the time this is good enough,
and it means that users have much better facilities for making
tradeoffs if they have a range of transfer sizes - it's not an
either/or choice but a decision based on features of the individual
message or transfer.
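
For illustration only - nothing here is from your patch, and the
function names are invented - the per-transfer decision usually ends
up being something this small, keyed off the 512 byte FIFO size you
mention:

#include <linux/spi/spi.h>

/* Illustrative threshold: your 512 byte FIFO size, not a real define. */
#define HSSPI_FIFO_SIZE         512

/* Placeholders for the driver's existing polled and IRQ transfer paths. */
static int hsspi_poll_transfer(struct spi_controller *ctlr,
                               struct spi_transfer *xfer);
static int hsspi_irq_transfer(struct spi_controller *ctlr,
                              struct spi_transfer *xfer);

static int hsspi_transfer_one(struct spi_controller *ctlr,
                              struct spi_device *spi,
                              struct spi_transfer *xfer)
{
        /*
         * Transfers that fit in the FIFO finish quickly enough that
         * busy-waiting is cheaper than taking an interrupt; longer
         * ones go down the interrupt path instead.
         */
        if (xfer->len <= HSSPI_FIFO_SIZE)
                return hsspi_poll_transfer(ctlr, xfer);

        return hsspi_irq_transfer(ctlr, xfer);
}

The point is that the threshold applies per transfer rather than to
the whole driver, so large transfers can still take the interrupt path
even on systems where small ones poll.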
It is true that for people with heavy SPI traffic or otherwise very
demanding requirements for a specific system and software stack,
additional tuning might produce better results.  Exposing some sysfs
knobs to allow tuning of parameters at runtime would be helpful for
them, and I'd certainly be happy to see that added.
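
As a sketch of the lowest-effort version of such a knob - the
parameter name and default below are made up, not something in the
patch - a writable module parameter already gives a runtime-tunable
value under /sys/module/<driver>/parameters/:

#include <linux/module.h>
#include <linux/moduleparam.h>

/*
 * Invented knob for illustration: largest transfer, in bytes, that the
 * driver will handle by polling rather than with interrupts.  Being
 * writable (0644) it can be changed at runtime via sysfs.
 */
static unsigned int polling_limit = 512;
module_param(polling_limit, uint, 0644);
MODULE_PARM_DESC(polling_limit,
                 "Largest transfer in bytes handled by polling rather than IRQ");

The per-transfer check would then compare against polling_limit
instead of a compile-time constant.  I believe spi-bcm2835.c does
something along these lines with its time-based polling_limit_us
parameter; a per-controller sysfs attribute would be nicer on systems
with several controllers, but a module parameter is enough to let
people experiment.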