On Tue, Feb 09, 2016 at 05:08:02PM +0100, Christophe Leroy wrote:

No, the physical mapping is done using one 512k page. And this is done
in 4k pages mode only, for the reason explained below.
> Once the linear memory space has been mapped with 8Mb pages, as
> seen in the related commit, we get 11 million DTLB misses during
> the reference 600s period. 77% of the misses are on user addresses
> and 23% are on kernel addresses (one fourth for the linear address
> space and three fourths for the virtual address space).
> Traditionally, each driver manages one computer board which has its
> own components with its own memory maps.
> But on embedded chips like the MPC8xx, the SOC has all registers
> located in the same IO area.
>
> When looking at the ioremaps done during startup, we see that
> many drivers are re-mapping small parts of the IMMR for their own use
> and each of those small pieces gets its own 4k page, amplifying the
> number of TLB misses: in our system we get 0xff000000 mapped 31 times
> and 0xff003000 mapped 9 times.
>
> Even if each part of the IMMR was mapped only once with 4k pages, it
> would still result in several small mappings, in contrast with the
> linear area.
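For context, the kind of mapping that piles up during boot looks roughly
like the sketch below. The base address matches the 0xff000000 seen
above, but the block name, offset and size are made up for illustration;
the point is that every such ioremap() of a tiny register block ends up
consuming its own 4k page.

/* Hypothetical on-chip driver: names, offset and size are illustrative only. */
#include <linux/io.h>
#include <linux/errno.h>

#define IMMR_PHYS	0xff000000UL	/* physical IMMR base seen in the log above */
#define FOO_OFFSET	0x3000		/* made-up register block inside the IMMR */
#define FOO_SIZE	0x100

static void __iomem *foo_regs;

static int foo_probe(void)
{
	/* Each driver doing its own small ioremap() gets a separate 4k mapping */
	foo_regs = ioremap(IMMR_PHYS + FOO_OFFSET, FOO_SIZE);
	if (!foo_regs)
		return -ENOMEM;
	return 0;
}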
> With the patch, on the same principle as what was done for the RAM,
> the IMMR gets mapped by a 512k page.

"the patch" -- this one, that below says it maps IMMR with other sizes?

The principle here, as for the 8M pages used for the mapping of RAM, is
to have the PTE in the PGD (level 1 table) and no level 2 table
associated with that PGD entry.
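Conceptually, that would amount to something like the sketch below. This
is not the patch's actual code: VIRT_IMMR_BASE, PHYS_IMMR_BASE and the
_PMD_PAGE_512K flag name are placeholders for whatever the series really
uses; the point is only that the level-1 slot itself carries the 512k
entry instead of pointing to a level-2 table.

/* Conceptual sketch only, not the patch: map the IMMR with a single
 * level-1 entry carrying a 512k, uncached, guarded page. */
#include <linux/init.h>
#include <linux/mm.h>
#include <asm/pgtable.h>

#define VIRT_IMMR_BASE	0xff000000UL	/* assumed fixed virtual address */
#define PHYS_IMMR_BASE	0xff000000UL	/* assumed physical IMMR base */

static void __init map_immr_512k(pgd_t *pgd_base)
{
	pgd_t *pgdp = pgd_base + pgd_index(VIRT_IMMR_BASE);

	/* The slot spans 4M with 4k pages; the entry below is consumed
	 * directly by the TLB miss handler, so no level-2 table exists
	 * for it.  The flag combination is an assumption for the sketch. */
	*pgdp = __pgd(PHYS_IMMR_BASE | _PMD_PAGE_512K | _PAGE_NO_CACHE |
		      _PAGE_GUARDED | _PAGE_PRESENT);
}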
> In 4k pages mode, we reserve a 4Mb area for mapping IMMR. The TLB
> miss handler checks that we are within the first 512k and bails out
> with the page not marked valid if we are outside.

If IMMR is 512k, why do you need to reserve 4M/64M for it?
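Presumably the 4M comes from the span of one level-1 slot in 4k mode:
with the PTE held directly in the PGD, the slot's whole 4M of virtual
space is consumed even though only the first 512k is valid. Expressed in
C for illustration (the actual handler is assembly), the check described
above would be roughly:

/* Rough C equivalent of the 4k-mode check, for illustration only. */
#include <stdbool.h>

#define IMMR_SLOT_MASK		0x003fffffUL	/* offset within the 4M level-1 slot */
#define IMMR_MAPPED_SIZE	0x00080000UL	/* only the first 512k is really mapped */

static bool immr_access_ok(unsigned long addr)
{
	/* Accesses beyond the first 512k of the slot are reported as
	 * "page not valid" so the generic fault path handles them. */
	return (addr & IMMR_SLOT_MASK) < IMMR_MAPPED_SIZE;
}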
> In 16k pages mode, it is not realistic to reserve a 64Mb area, so
> we do a standard mapping of the 512k area using 32 pages of 16k.
> The CPM will be mapped via the first two pages, and the SEC engine
> will be mapped via the 16th and 17th pages. As the pages are marked
> guarded, there will be no speculative accesses.
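As a sanity check on the arithmetic above: with 16k pages the 512k area
indeed needs 512k / 16k = 32 pages. The small sketch below just shows the
offset-to-page-index math; the 0x40000 offset used for the SEC block is
an assumption for the example, not taken from the reference manual.

/* Page-count and page-index arithmetic for the 16k mode, illustration only. */
#include <stdio.h>

int main(void)
{
	unsigned long page_size = 0x4000;   /* 16k */
	unsigned long immr_size = 0x80000;  /* 512k */
	unsigned long sec_off   = 0x40000;  /* assumed SEC offset inside the IMMR */

	printf("16k pages needed for 512k: %lu\n", immr_size / page_size); /* 32 */
	printf("page index of the SEC block: %lu\n", sec_off / page_size); /* 16 */
	return 0;
}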