Re: [PATCH] PCI: endpoint: pci-epf-test: Roll back BAR mapping when subrange setup fails
From: Christian Bruel
Date: Wed Mar 18 2026 - 09:00:51 EST
On 3/18/26 09:58, Niklas Cassel wrote:
On Wed, Mar 18, 2026 at 09:35:10AM +0100, Christian Bruel wrote:
I think "modify TEST_F to SKIP if ENOSPC" is the way to go, since it will solve the problem for all platforms that have few inbound iATUs.
That sounds like the right direction, though I think we would first need a way to propagate -ENOSPC back to the host side, instead of just collapsing all EP-side setup failures into STATUS_BAR_SUBRANGE_SETUP_FAIL, which pci_endpoint_test currently maps to -EIO.
I think the epf test function can return several bits in the status mask, in addition to STATUS_BAR_SUBRANGE_SETUP_FAIL, e.g. STATUS_BAR_SUBRANGE_SETUP_SKIP or STATUS_BAR_SUBRANGE_SETUP_NOSPC. I prefer the latter, since we want to report the cause, not the action, and leave the skip decision to the host.
Rethinking this, having a pci_epc_feature to limit the maximum number of simultaneously allocatable BARs might be a useful addition to the EPC driver. The EPC driver would need to keep track of the allocated BARs and check the count before calling set_bar() to decide whether to skip.
The limitation is not allocatable BARs, it is the number of inbound/outbound
iATUs/windows.
(E.g. with inbound subrange mapping one BAR could be split in 3, and require
three different iATUs, but another BAR is not split, so just requires one
iATU.)
Right now, a big limitation in the PCI endpoint framework is that there is currently no API to see how many inbound/outbound iATUs are in use.
So the only thing you can do is to call mem_map() and see if you get an error. This is a bit wasteful, as in some cases you could probably skip, or postpone, a lot of processing if you knew that no free iATU windows were available.
Ah yes, I didn't consider mem_map(). For the record, the idea was to expose num_ib_windows in the epc_features to cap the number of BARs; subranges would consume multiple entries.
But we would need to track counts for that, and I'm afraid this is a fragile abstraction of the iATU.
Today, with the stm32mp2 iATU limitation, the epf test does not fail gracefully (unless we disable two BARs):
[ 13.453253] stm32-ep-pcie 48400000.pcie-ep: iATU: unroll T, 4 ob, 4 ib, align 64K, limit 4G
[ 13.456074] pci_epf_test pci_epf_test.0: Failed to get private DMA rx channel. Falling back to generic one
[ 13.466149] stm32-ep-pcie 48400000.pcie-ep: No free inbound window
[ 13.472061] pci_epf_test pci_epf_test.0: Failed to set BAR4
[ 13.477626] stm32-ep-pcie 48400000.pcie-ep: No free inbound window
[ 13.483981] pci_epf_test pci_epf_test.0: Failed to set BAR5
This could be solved by monitoring the free iATU windows based on the feature, but that's another subject. For now, for the subrange skip, reporting -ENOSPC is indeed good enough.
Thank you for your advice,
Christian
However, I think such an API would be most useful for outbound mapping, i.e. endpoint-to-host transfers. Think of e.g. nvmet-pci-epf: you can easily have a queue depth of 128, and thus 128 outbound mappings at the same time.
For inbound mappings, you can only ever map BARs, so compared to outbound mappings, inbound mappings are very limited: 6 BARs, and sure, with inbound subrange mapping you can have a few windows per BAR, but this is usually something the EPF driver does at .bind() time, even before any transfers have taken place.
If set_bar() fails with -ENOSPC or something else to indicate no free window, I would imagine that that is good enough for most EPF drivers.
Do you think this added (minor) complexity is worth it compared to simply returning -ENOSPC in the status?
TL;DR: I think a number-of-free-windows API would be a good addition, but for outbound windows.
For inbound windows, it seems a bit unnecessary IMO.
Kind regards,
Niklas