When validating decoder interleave ways/granularity (IW/IG) during region
setup, the granularity is irrelevant when iw=1: all accesses will always
route to the only target anyway, so every ig value is "correct". Loosen
the requirement that `ig = (parent_iw * parent_ig)` when iw=1.
On some Zen5 platforms, the platform BIOS specifies a 256-byte
interleave granularity window for host bridges even when there is only
one target downstream. This causes Linux to reject the configuration
of a region with an x2 root above two x1 host bridges.
Decoder Programming:
root - iw:2 ig:256
hb1 - iw:1 ig:256 (Linux expects 512)
hb2 - iw:1 ig:256 (Linux expects 512)
ep1 - iw:2 ig:256
ep2 - iw:2 ig:256
This change allows all decoders downstream of a passthrough decoder to
also be configured as passthrough (iw:1 ig:X), but still disallows
downstream decoders from applying subsequent interleaves.
e.g. in the above example, if there were another decoder south of hb1
attempting to interleave 2 endpoints, Linux would still enforce
hb1.ig=512, because the southern decoder would have iw:2 and require
ig = pig * piw.
Signed-off-by: Gregory Price <gourry@xxxxxxxxxx>
Reviewed-by: Dave Jiang <dave.jiang@xxxxxxxxx>