On Tue, Aug 29, 2023 at 04:37:00PM +0100, Robin Murphy wrote:
On 2023-08-22 17:42, Nicolin Chen wrote:
On Tue, Aug 22, 2023 at 10:19:21AM +0100, Robin Murphy wrote:
out_free_data:
@@ -1071,6 +1073,7 @@ arm_mali_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
 					  ARM_MALI_LPAE_TTBR_ADRMODE_TABLE;
 	if (cfg->coherent_walk)
 		cfg->arm_mali_lpae_cfg.transtab |= ARM_MALI_LPAE_TTBR_SHARE_OUTER;
+	cfg->nents_per_pgtable = 1 << data->bits_per_level;
The result of this highly complex and expensive calculation is clearly
redundant with the existing bits_per_level field, so why do we need to
waste space storing it when the driver could simply use bits_per_level?
bits_per_level is in the private struct arm_lpae_io_pgtable, while
drivers can only access struct io_pgtable_cfg. Are you suggesting
moving bits_per_level out of the private struct arm_lpae_io_pgtable
into the public struct io_pgtable_cfg?
Or am I missing another bits_per_level?
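Just to make sure we are looking at the same layering, this is roughly
how I read it (fields abridged and types approximate; the real
definitions live in include/linux/io-pgtable.h and
drivers/iommu/io-pgtable-arm.c):

	/* Public config handed back to IOMMU drivers such as arm-smmu-v3 */
	struct io_pgtable_cfg {
		unsigned long	pgsize_bitmap;
		unsigned int	ias;
		unsigned int	oas;
		bool		coherent_walk;
		unsigned long	nents_per_pgtable;	/* the field this patch adds */
		/* ... quirks, flush ops, format-specific union ... */
	};

	/* Private to io-pgtable-arm.c, not visible to IOMMU drivers */
	struct arm_lpae_io_pgtable {
		struct io_pgtable	iop;	/* embeds the cfg above */
		int			pgd_bits;
		int			start_level;
		int			bits_per_level;
		void			*pgd;
	};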
Bleh, apologies, I always confuse myself trying to remember the fiddly
design of io-pgtable data. However, I think this then ends up proving
the opposite point - the number of pages per table only happens to be a
fixed constant for certain formats like LPAE, but does not necessarily
generalise. For instance, for a single v7s config it would be 1024 or
256 or 16, depending on what has actually been unmapped.
The mechanism as proposed implicitly assumes LPAE format, so I still
think we're better off making that assumption explicit. And at that
point arm-smmu-v3 can then freely admit it already knows the number is
simply 1/8th of the domain page size.
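To spell the arithmetic out (with placeholder names, not actual driver
variables): the LPAE formats use 8-byte descriptors, so a table that
fills one granule holds pgsize / 8 entries, i.e. something like

	/* 4K granule -> 512 entries, 16K -> 2048, 64K -> 8192 */
	nents_per_pgtable = granule_size / sizeof(u64);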
Hmm, I am not quite getting that "1/8th" part - would you mind elaborating?
Also, what we need is actually an arbitrary cap for max_tlbi_ops.
And I think it could be independent of the page size, i.e. a 4K
pgsize and a 64K pgsize could use the same max_tlbi_ops number,
because what ultimately impacts the latency is the number of loops
spent building/issuing commands.
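To illustrate what I mean by "loops", the path in question is roughly
this shape (heavily trimmed from __arm_smmu_tlb_inv_range(), ignoring
the RANGE_INV/TG encoding and the ASID/VMID setup):

	/*
	 * One TLBI command per granule: a 1GB unmap at 4K granule ends
	 * up building/issuing roughly 256K commands.
	 */
	while (iova < end) {
		cmd->tlbi.addr = iova;
		arm_smmu_cmdq_batch_add(smmu, &cmds, cmd);
		iova += granule;
	}
	arm_smmu_cmdq_batch_submit(smmu, &cmds);

The point of max_tlbi_ops would then simply be to bound how many times
we go around that loop before falling back to a full ASID/VMID
invalidation, whichever page size produced the count.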
So, combining that with your point above that nents_per_pgtable isn't
as general as what we have in the MMU's tlbflush code,