Re: [Techteam] [RFC PATCH] x86-32: Start out eflags and cr4 clean

From: H. Peter Anvin
Date: Fri Jan 18 2013 - 21:35:13 EST


On 01/18/2013 05:05 PM, Mitch Bradley wrote:
>
>
> On 1/18/2013 2:42 PM, H. Peter Anvin wrote:
>> On 01/18/2013 04:40 PM, Andres Salomon wrote:
>>> Bad news on this patch; I've been told that it breaks booting on an
>>> XO-1.5. Does anyone from OLPC know why yet?
>>
>> What are the settings of CR0 and CR4 on kernel entry on XO-1.5?
>
>
> CR0 is 0x80000011
> CR4 is 0x10
>

OK, that makes sense... the kernel doesn't enable the PSE bit yet and I
bet that's what you're using for the non-stolen page tables.

Can we simply disable paging before mucking with CR4? The other option
that I can see is to always enable PSE and PGE, since they are simply
feature opt-ins that do no harm if unused. At the same time,
though, entering the kernel through the default_entry path with paging
enabled is definitely not anything the kernel expects.

Does this patch work for you? Since we have ditched 386 support, we can
mimic x86-64 (yay, one more difference gone!) and just use a predefined
value for %cr0 (the FPU flags need to change if we are on an FPU-less
chip, but that happens during FPU probing.)

-hpa



diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index 8e7f655..2713ea1 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -300,6 +300,11 @@ ENTRY(startup_32_smp)
 	leal -__PAGE_OFFSET(%ecx),%esp
 
 default_entry:
+#define CR0_STATE	(X86_CR0_PE | X86_CR0_MP | X86_CR0_ET | \
+			 X86_CR0_NE | X86_CR0_WP | X86_CR0_AM | \
+			 X86_CR0_PG)
+	movl $(CR0_STATE & ~X86_CR0_PG),%eax
+	movl %eax,%cr0
 /*
  * New page tables may be in 4Mbyte page mode and may
  * be using the global pages.
@@ -364,8 +369,7 @@ default_entry:
  */
 	movl $pa(initial_page_table), %eax
 	movl %eax,%cr3		/* set the page table pointer.. */
-	movl %cr0,%eax
-	orl $X86_CR0_PG,%eax
+	movl $CR0_STATE,%eax
 	movl %eax,%cr0		/* ..and set paging (PG) bit */
 	ljmp $__BOOT_CS,$1f	/* Clear prefetch and normalize %eip */
 1: