Re: [PATCH 1/7] mips: dmi: Fix early remap on MIPS32
From: Serge Semin
Date: Fri Nov 24 2023 - 13:52:50 EST
On Thu, Nov 23, 2023 at 05:33:31PM +0000, Jiaxun Yang wrote:
>
>
> On Thu, Nov 23, 2023, at 4:07 PM, Thomas Bogendoerfer wrote:
> > On Thu, Nov 23, 2023 at 03:07:09PM +0000, Jiaxun Yang wrote:
> >>
> [...]
> >
> > the problem with all 32bit unmapped segments is their limitations in
> > size. But there is always room to try to use unmapped and fall back
> > to mapped, if it doesn't work. But I doubt anybody is going to
> > implement that.
>
> Yep, I guess fallback should be implemented for ioremap_cache as well.
>
> >
> >> >> AFAIK for Loongson DMI is located at cached memory so using ioremap_uc
> >> >> blindly will cause inconsistency.
> >> >
> >> > why ?
> >>
> >> Firmware sometimes does not flush those tables from cache back to memory.
> >> For Loongson systems (as well as most MTI systems) cache is enabled by
> >> firmware.
> >
> > kernel flushes all caches on startup, so there shouldn't be a problem.
>
> Actually dmi_setup() is called before cpu_cache_init().
To sum up the discussion so far: there can indeed be issues on
platforms which have DMI initialized in a cached region. Here are
several possible solutions, and the additional difficulties I think
implementing them may cause:
1. Use the unmapped cached region (KSEG0) in the MIPS32 ioremap_prot()
method.
This solution is a bit clumsier than it looks at first glance.
ioremap_prot() can be used for various types of cacheability
mapping. Currently that means the default-cacheable CA preserved in
the _page_cachable_default variable and the write-combined CA saved in
boot_cpu_data.writecombine. Based on that, we would need to use the
unmapped cached region only for the IO-remaps called with the
"_page_cachable_default" mapping flags. The rest of the IO range
mappings, including the write-combined ones, would still be handled
by VM means. This would make ioremap_prot() a bit less maintainable,
but still shouldn't be that hard to implement (unless I'm missing
something):
--- a/arch/mips/mm/ioremap.c
+++ b/arch/mips/mm/ioremap.c
/*
- * Map uncached objects in the low 512mb of address space using KSEG1,
- * otherwise map using page tables.
+ * Map uncached/default-cached objects in the low 512mb of address
+ * space using KSEG1/KSEG0, otherwise map using page tables.
*/
- if (IS_LOW512(phys_addr) && IS_LOW512(last_addr) &&
- flags == _CACHE_UNCACHED)
- return (void __iomem *) CKSEG1ADDR(phys_addr);
+ if (IS_LOW512(phys_addr) && IS_LOW512(last_addr)) {
+ if (flags == _CACHE_UNCACHED)
+ return (void __iomem *) CKSEG1ADDR(phys_addr);
+ else if (flags == _page_cachable_default)
+ return (void __iomem *) CKSEG0ADDR(phys_addr);
+ }
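For illustration, with the hunk above a default-cacheable remap of a
low-memory table would short-circuit to KSEG0. A hypothetical caller
(assuming the DMI table lives within the low 512 MB; dmi_phys and
dmi_len are made-up names):

```c
/* Hypothetical example: cacheable remap of a DMI table in low memory.
 * ioremap_cache() passes _page_cachable_default as the mapping flags,
 * so the new branch would return CKSEG0ADDR(dmi_phys) directly,
 * without touching the page tables.
 */
void __iomem *p = ioremap_cache(dmi_phys, dmi_len);
```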
Currently I can't see any obvious problems this may cause, but it
seems suspicious that the cacheable IO-mapping wasn't implemented via
the unmapped cacheable region in the first place. In any case this
solution looks more dangerous than solution 2, because it affects all
the MIPS32 platforms at once.
2. Convert dmi_remap_early() to ioremap_uc() (actually just ioremap(),
as noted by Arnd).
As Jiaxun correctly noted, this may cause problems on platforms
which don't flush the caches before jumping into the kernel. Thomas
said that the kernel flushes the caches early on boot, but Jiaxun
noted that this still happens after the early DMI setup. So the issue
with solution 2 is that setup_arch() calls dmi_setup() before it
flushes the caches by means of cpu_cache_init(). I guess it could be
fixed just by moving the dmi_setup() invocation to after
cpu_cache_init() is called. This solution looks much less invasive
than solution 1.
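A rough sketch of that reordering (hypothetical; the exact call sites
and surrounding code in arch/mips/kernel/setup.c may differ, so treat
this as an illustration of the idea rather than a tested patch):

```diff
--- a/arch/mips/kernel/setup.c
+++ b/arch/mips/kernel/setup.c
@@ void __init setup_arch(char **cmdline_p)
-	dmi_setup();
 	...
 	cpu_cache_init();
+	/* DMI tables may still be dirty in the firmware-enabled caches,
+	 * so scan them only after the caches have been flushed. */
+	dmi_setup();
```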
So what do you think? Which solution do you prefer? Or perhaps an
alternative?
-Serge(y)
>
> Thanks
> >
> > Thomas.
> >
> > --
> > Crap can work. Given enough thrust pigs will fly, but it's not necessarily a
> > good idea. [ RFC1925, 2.3 ]
>
> --
> - Jiaxun