Hi Nanyong,
On 8/09/2021 at 08:42, Nanyong Sun wrote:
On 2021/8/14 6:08, Palmer Dabbelt wrote:
On Mon, 02 Aug 2021 05:43:02 PDT (-0700), alex@xxxxxxxx wrote:
Hi Alex,
Hi Nanyong,
On 28/07/2021 at 13:55, Alex Ghiti wrote:
On 28/07/2021 at 04:49, Nanyong Sun wrote:
Remove the redundant trampoline PGD for 64-bit and add more comments
on why 32-bit systems need a trampoline PGD.
+load_kernel_pgd:
+	/*
+	 * Switch to kernel page tables. A full fence is necessary in order to
+	 * avoid using the trampoline translations, which are only correct for
+	 * the first superpage. Fetching the fence is guaranteed to work
+	 * because that first superpage is translated the same way.
+	 */
+	csrw CSR_SATP, a2
+	sfence.vma
+
+load_done:
 	/* Set trap vector to spin forever to help debug */
 	la a0, .Lsecondary_park
 	csrw CSR_TVEC, a0
I suppose stvec was set this way to catch any problem with early_pg_dir; you moved that, and then this defeats the original purpose.
I don't think so. Before writing early_pg_dir into satp, we are still in the physical address world, so we must set stvec to the first place in the virtual address world that we want to jump to. And I don't think .Lsecondary_park can catch a problem with a bad early_pg_dir: if the basic page table is wrong, the CPU also cannot reach the virtual address stored in stvec correctly.
But I think it then loops forever at the stvec address, which lets you know where the boot failed.
The design logic is: the first time the CPU crosses from the physical address world into the virtual address world, stvec actually is the real "trampoline"; it cannot be set
Moreover, in the original code, before trampoline_pg_dir was set, what if trampoline_pg_dir itself had a problem?
You're right, but this debug 'feature' was not installed for that: I guess somebody had a hard time at some point with the early page table and not the trampoline :)
Anyway, I was just pointing out that you 'broke' the current way things work, and unless there is an explicit good reason for it, that should not happen.
I mean trampoline_pg_dir costs 4096 bytes here, and __maybe_unused only tells the compiler not to raise a warning; the array still costs memory.
Essentially, I'm afraid __maybe_unused cannot save that one page of memory here.
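
To make the cost concrete, here is a minimal sketch (not part of the patch) of the two alternatives being discussed; the declarations reuse the identifiers already present in arch/riscv/mm/init.c:

/* Alternative 1: keep the definition and only silence "unused" diagnostics.
 * The page-aligned array still ends up in .bss, so the kernel still sets
 * aside a full page for it at runtime even if nothing references it. */
pgd_t trampoline_pg_dir[PTRS_PER_PGD] __page_aligned_bss __maybe_unused;

/* Alternative 2: compile the definition out entirely on 64-bit, as the
 * patch below does; this is what actually gives the page back. */
#ifndef CONFIG_64BIT
pgd_t trampoline_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
#endif

Whether saving that single page is worth the #ifdef is exactly the trade-off being discussed here.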
The specific issue is that the JTAG debug spec is defined (or at least was when I was using it; it's been years since I've needed to do that) in terms of committed instructions. Thus if you end up in a position where the processor is unable to commit an instruction, you also lose the ability to do anything meaningful with the debugger, essentially locking up the system.
The most common way to end up in a situation where the processor is unable to commit an instruction is to have a fault with an invalid trap vector: maybe dangling from M-mode, the last boot, reset, whatever. Then as soon as you take a trap the system locks up. Any trap before we have a working trap handler is a bug, but it's way harder to debug things when the debugger doesn't function.
There is of course no way to fundamentally prevent these sorts of non-committable-instruction situations, but I got into the habit of setting up a trivial trap entry point ASAP -- it probably took a dozen rounds of trying to debug the debugger only to realize it was hanging per spec, but that idiom eventually crept into pretty much everything.
Not sure if the debug spec is still written this way (or if debuggers respect it), as I haven't had to use one in a while.
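
As an illustration of that habit only (this is not code from the patch, and the function names are made up), a minimal C sketch using the kernel's existing csr_write() helper might look like the following; in the actual RISC-V port the equivalent work is done in assembly in head.S via .Lsecondary_park, as in the hunk quoted above:

#include <linux/compiler.h>
#include <linux/init.h>
#include <asm/csr.h>
#include <asm/processor.h>

/* Hypothetical park handler: any unexpected early trap spins here at a
 * known PC instead of leaving the hart unable to commit instructions,
 * which (per the behaviour described above) can also wedge a JTAG
 * debugger that only operates on committed instructions. */
static void __noreturn early_trap_park(void)
{
	while (1)
		wait_for_interrupt();	/* wfi, then keep looping */
}

/* Hypothetical setup: point the trap vector at the park loop as early
 * as possible, long before the real trap handler is installed. */
static void __init install_early_trap_vector(void)
{
	csr_write(CSR_TVEC, &early_trap_park);
}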
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index ac48742fa6fc..306fcb2334fa 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -219,13 +219,17 @@ unsigned long pfn_base __ro_after_init;
EXPORT_SYMBOL(pfn_base);
pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+#ifndef CONFIG_64BIT
pgd_t trampoline_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+#endif /* CONFIG_64BIT */
As stated in Documentation/process/coding-style.rst, it is better to use
__maybe_unused rather than #ifdefs.
What do you mean?