Hi Will,
On 2016/4/11 18:40, Will Deacon wrote:
> On Mon, Apr 11, 2016 at 12:31:53PM +0200, Ard Biesheuvel wrote:
> > On 11 April 2016 at 11:59, Chen Feng <puck.chen@xxxxxxxxxxxxx> wrote:
> > > Please see the pg-tables below, with sparsemem and vmemmap enabled.
> > >
> > > ---[ vmemmap start ]---
> > > 0xffffffbdc0200000-0xffffffbdc4800000 70M RW NX SHD AF UXN MEM/NORMAL
> > > ---[ vmemmap end ]---
> >
> > OK, I see what you mean now. Sorry for taking so long to catch up.
> >
> > > The board has 4GB of memory, and the mem_map is 70MB:
> > > 1G of memory --- a 14MB mem_map array.
> >
> > No, this is incorrect. 1 GB corresponds with 16 MB worth of struct
> > pages, assuming sizeof(struct page) == 64.
> >
> > So you are losing 6 MB to rounding here, which I agree is significant.
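
Just to make sure I follow the arithmetic, here is a standalone
back-of-the-envelope check (a sketch, not kernel code; it assumes 4K pages
and sizeof(struct page) == 64, and takes the 70MB figure from the dump above):

#include <stdio.h>

#define PAGE_SIZE_4K	(4ULL * 1024)
#define STRUCT_PAGE_SZ	64ULL		/* sizeof(struct page) */
#define MB		(1024ULL * 1024)
#define GB		(1024ULL * MB)

int main(void)
{
	unsigned long long ram = 4 * GB;		/* the 4GB board */
	unsigned long long pages = ram / PAGE_SIZE_4K;	/* 1048576 pages */
	unsigned long long memmap = pages * STRUCT_PAGE_SZ;

	printf("mem_map strictly needed: %llu MB\n", memmap / MB);	/* 64 MB */
	printf("vmemmap window observed: 70 MB\n");
	printf("lost to rounding:        ~%llu MB\n", 70 - memmap / MB);	/* ~6 MB */
	return 0;
}

So 64MB is the minimum for 4GB, and the extra ~6MB is what the rounding
costs us on this board.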
> >
> > I wonder if it makes sense to use a lower value for SECTION_SIZE_BITS
> > on 4k pages kernels, but perhaps we're better off asking the opinion
> > of the other cc'ees.
>
> You need to be really careful making SECTION_SIZE_BITS smaller because
> it has a direct correlation with the use of page->flags, and you can end
> up running out of bits fairly easily.
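
If I understand the concern, on classic sparsemem (i.e. without
CONFIG_SPARSEMEM_VMEMMAP) the section number is encoded in the upper bits of
page->flags, so every bit shaved off SECTION_SIZE_BITS is one more bit needed
there. Here is how I picture the budget -- a rough sketch loosely following
include/linux/page-flags-layout.h, where the NODES_SHIFT / ZONES_SHIFT /
NR_PAGEFLAGS values are only illustrative, not our exact config:

#include <stdio.h>

#define BITS_PER_LONG		64
#define MAX_PHYSMEM_BITS	48	/* arm64, 48-bit physical space */
#define NODES_SHIFT		2	/* illustrative */
#define ZONES_SHIFT		2	/* illustrative */
#define NR_PAGEFLAGS		22	/* illustrative */

static void budget(int section_size_bits)
{
	/* SECTIONS_SHIFT = MAX_PHYSMEM_BITS - SECTION_SIZE_BITS */
	int sections = MAX_PHYSMEM_BITS - section_size_bits;
	int used = sections + NODES_SHIFT + ZONES_SHIFT + NR_PAGEFLAGS;

	printf("SECTION_SIZE_BITS=%2d: %2d section bits, %2d/%d page->flags bits used\n",
	       section_size_bits, sections, used, BITS_PER_LONG);
}

int main(void)
{
	budget(30);	/* current arm64 value: 18 section bits */
	budget(27);	/* 21 section bits */
	budget(24);	/* 24 section bits */
	return 0;
}

Please correct me if I've misread that; it's exactly the part I'd like to
understand better, hence the questions below.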
Yes, making SECTION_SIZE_BITS smaller can solve the current situation.
But on our platform the physical address range spans 64GB while only 4GB of
DDR is actually valid, and the holes between banks are not always 512MB, so
the choice of section size really matters here (toy example below).

But can you tell us why a *smaller* SECTION_SIZE_BITS makes running out of
bits so easy -- does the sketch above capture it?
And how about the flat-mem model?
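
To make the holes point concrete, here is a toy calculation. The bank
addresses below are invented (this is NOT our real memory map); the only
point is how much mem_map gets pinned once every bank that touches a
section makes the whole section present:

#include <stdio.h>

#define MB		(1024ULL * 1024)
#define GB		(1024ULL * MB)
#define PAGE_SIZE_4K	(4ULL * 1024)
#define STRUCT_PAGE_SZ	64ULL

struct bank { unsigned long long start, size; };

/* Hypothetical 4GB of DDR scattered across a 64GB physical window. */
static const struct bank banks[] = {
	{  0 * GB, 2 * GB + 512 * MB },
	{  8 * GB, 1 * GB + 256 * MB },
	{ 40 * GB,          256 * MB },
};

static unsigned long long memmap_mb(unsigned int section_size_bits)
{
	unsigned long long section = 1ULL << section_size_bits;
	unsigned long long total = 0;
	unsigned int i;

	/* Banks here do not share sections, so per-bank rounding is enough. */
	for (i = 0; i < sizeof(banks) / sizeof(banks[0]); i++) {
		unsigned long long start = banks[i].start & ~(section - 1);
		unsigned long long end = banks[i].start + banks[i].size;

		end = (end + section - 1) & ~(section - 1);
		total += (end - start) / PAGE_SIZE_4K * STRUCT_PAGE_SZ;
	}
	return total / MB;
}

int main(void)
{
	printf("exact need for 4GB:   64 MB of mem_map\n");
	printf("SECTION_SIZE_BITS=30: %llu MB\n", memmap_mb(30));	/* 96 MB */
	printf("SECTION_SIZE_BITS=27: %llu MB\n", memmap_mb(27));	/* 64 MB */
	return 0;
}

With a made-up layout like this, 1GB sections pin ~96MB of mem_map for 4GB
of DDR, while 128MB sections get back down to the 64MB minimum. Our real
banks and holes differ, but it is the same kind of rounding loss.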
> Will