Re: Overlapping ioremap() calls, set_memory_*() semantics

From: Luis R. Rodriguez
Date: Tue Mar 15 2016 - 21:46:16 EST


On Fri, Mar 11, 2016 at 03:13:52PM -0700, Toshi Kani wrote:
> Hi Ingo,
>
> My apology for the delay...
>
> On Wed, 2016-03-09 at 10:15 +0100, Ingo Molnar wrote:
> > * Toshi Kani <toshi.kani@xxxxxxx> wrote:
> >
> > > On Tue, 2016-03-08 at 13:16 +0100, Ingo Molnar wrote:
> > > > * Toshi Kani <toshi.kani@xxxxxxx> wrote:
> > > >
> > > > > > So where is the problem? The memtype implementation and hence
> > > > > > most ioremap() users are supposed to be safe. set_memory_*() APIs
> > > > > > are supposed to be safe as well, as they too go via the memtype
> > > > > > API.
> > > > >
> > > > > Let me try to summarize...
> > > > >
> > > > > The original issue Luis brought up was that drivers written to work
> > > > > with MTRR may create a single ioremap range covering multiple cache
> > > > > attributes since MTRR can overwrite cache attribute of a certain
> > > > > range.  Converting such drivers with PAT-based ioremap interfaces,
> > > > > i.e. ioremap_wc() and ioremap_nocache(), requires a separate
> > > > > ioremap map for each cache attribute, which can be challenging as
> > > > > it may result in overlapping ioremap ranges (in his term) with
> > > > > different cache attributes.
> > > > >
> > > > > So, Luis asked about 'semantics of overlapping ioremap()' calls.
> > > > >  Hence, I responded that aliasing mapping itself is supported, but
> > > > > alias with different cache attribute is not.  We have checks in
> > > > > place to detect such condition.  Overlapping ioremap calls with a
> > > > > different cache attribute either fails or gets redirected to the
> > > > > existing cache attribute on x86.
> > > >
> > > > Ok, fair enough!
> > > >
> > > > So to go back to the original suggestion from Luis, I've quoted it,
> > > > but with a s/overlapping/aliased substitution:
> > > >
> > > > > I had suggested long ago then that one possible resolution was for
> > > > > us to add an API that *enables* aliased ioremap() calls, and only
> > > > > use it on select locations in the kernel. This means we only have
> > > > > to convert a few users to that call to white list such semantics,
> > > > > and by default we'd disable aliased calls. To kick things off -- is
> > > > > this strategy agreeable for all other architectures?
> > > >
> > > > I'd say that since the overwhelming majority of ioremap() calls are
> > > > not aliased, ever, thus making it 'harder' to accidentally alias is
> > > > probably a good idea.
> > >
> > > Did you mean 'aliased' or 'aliased with different cache attribute'?
> > >  The former check might be too strict.
> >
> > I'd say even 'same attribute' aliasing is probably relatively rare.
> >
> > And 'different but compatible cache attribute' is in fact more of a sign
> > that the driver author does the aliasing for a valid _reason_: to have
> > two different types of access methods to the same piece of physical
> > address space...
>
> Right.  So, if we change to fail ioremap() on aliased cases, it'd be easier
> to start with the different attribute case first.  This case should be rare
> enough that we can manage to identify such callers and make them use a new
> API as necessary.  If we go ahead to fail any aliased cases, it'd be
> challenging to manage without a regression or two.

From my experience on the ioremap_wc() crusade, I found that aliasing with
different cache types was needed in only three drivers. For one of these,
atyfb, I did the proper split into separate MMIO and framebuffer mappings;
that was significant work, done to demo and document the process, and it
wasn't easy. The other two, ivtv and ipath, we left as requiring "nopat"
to be used. The ipath driver is on its way out of the kernel now through
staging, and as for ivtv, I am not aware of a single human being claiming
to use it. The architecture of ivtv actually prohibits us from ever using
PAT for write-combining on the framebuffer, as the firmware is the only
one that knows the write-combining area and it hides it from us.
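To illustrate the sort of split I mean for atyfb (this is a simplified
sketch, not the actual atyfb code; the structure and function names are
made up): instead of one big ioremap() covering registers and framebuffer
together, each region gets its own mapping with the right cache type up
front.

```c
/*
 * Hedged sketch of the atyfb-style split: separate mappings per cache
 * type instead of one mapping relying on MTRR overlays. Illustrative
 * names only.
 */
struct fake_par {
	void __iomem *regs;	/* register block: must stay uncached */
	void __iomem *fb;	/* framebuffer: write-combined */
};

static int fake_probe_map(struct fake_par *par,
			  resource_size_t regs_base, unsigned long regs_len,
			  resource_size_t fb_base, unsigned long fb_len)
{
	par->regs = ioremap_nocache(regs_base, regs_len);	/* UC */
	if (!par->regs)
		return -ENOMEM;

	par->fb = ioremap_wc(fb_base, fb_len);			/* WC */
	if (!par->fb) {
		iounmap(par->regs);
		return -ENOMEM;
	}
	return 0;
}
```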

We might be able to use a tool like Coccinelle to hunt for drivers that
alias mappings with different cache attribute types and do a full
assessment, but I suspect that will be really hard to accomplish.

If we can learn anything from the ioremap_wc() crusade, I'd say it's that
drivers needing aliasing with different cache types should simply be
disabled when PAT is in use: what they really need is a proper split into
separate mappings, and history shows that split can be really hard. It
sounded like you were confirming that we currently do not allow aliasing
with different attributes on x86; is that the case for all architectures?

If aliasing with different cache attributes is not allowed on x86, and if
it's also rare on other architectures, that just leaves the hunt for
valid aliasing uses. That may still be hard to hunt for, but I suspect it
is also rare.
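To make the rule we're discussing concrete, here is a hedged sketch (not
taken from any driver; base/len are placeholders) of what the memtype
check is expected to do with a conflicting alias on x86:

```c
/*
 * Sketch: a second, conflicting mapping of the same physical range is
 * expected to fail, or be redirected to the existing cache type.
 */
static int demo_alias_conflict(resource_size_t base, unsigned long len)
{
	void __iomem *uc, *wc;

	uc = ioremap_nocache(base, len);	/* range now tracked as UC- */
	if (!uc)
		return -ENOMEM;

	wc = ioremap_wc(base, len);		/* conflicting WC alias */
	if (!wc) {
		/* the memtype check rejected the conflicting alias */
		iounmap(uc);
		return -EBUSY;
	}
	/*
	 * ...or the WC request may instead have been degraded to the
	 * existing UC- type, per the discussion above.
	 */
	iounmap(wc);
	iounmap(uc);
	return 0;
}
```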

> > > > The memtype infrastructure of physical memory ranges in that case
> > > > acts as a security measure, to avoid unintended (not just physically
> > > > incompatible) aliasing. I suspect it would be helpful during driver
> > > > development as well.
> > >
> > > The memtype infrastructure does not track caller interfaces.  So, it
> > > will check against to any map, i.e. kernel & user map.  I assume a
> > > kernel map is created before user map, though.
> >
> > I don't understand this, could you give me a specific example?
>
> Multiple pfn map interfaces use the memtype.  So, when ioremap() detects an
> aliased map in the memtype, it may be created by ioremap(),
> remap_pfn_range(), or any pfn map that is tracked.  So, strictly speaking,
> we cannot check against other ioremap().
>
> > > > the other problem listed:
> > > >
> > > > > As another problem case, set_memory_*() will not fail on MMIO even
> > > > > though set_memory_*() is designed only for RAM.
> > > >
> > > > So what does this mean exactly? Having WB caching on MMIO area is not
> > > > good, but UC, WC and WB sure is still sensible in some cases, right?
> > >
> > > I responded to Luis in other email that:
> > > > Drivers use ioremap family with a right cache type when mapping MMIO
> > > > ranges, ex. ioremap_wc().  They do not need to change the type to
> > > > MMIO. RAM is different since it's already mapped with WB at boot-
> > > > time. set_memory_*() allows us to change the type from WB, and put it
> > > > back to WB.
> >
> > So there's two different uses here:
> >
> >  - set_memory_x() to set the caching attribute
> >  - set_memory_x() to set any of the RWX access attributes
> >
> > I'd in fact suggest we split these two uses to avoid confusion: keep 
> > set_memory_x() APIs for the RWX access attributes uses, and introduce
> > a new API that sets caching attributes:
> >
> > set_cache_attr_wc()
> > set_cache_attr_uc()
> > set_cache_attr_wb()
> > set_cache_attr_wt()
> >
> > it has the same arguments, so it's basically just a rename initially.
>
> I think the "set_memory_" prefix implies that their target is regular
> memory only.

I did not find any driver using set_memory_wc() on MMIO, which is a good
thing, since it seems not to work even though it returns no error. I'm
not sure whether the other set_memory_*() calls are used on MMIO, but I
suspect they are not; a manual hunt may suffice to rule these out.
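A hedged illustration of that failure mode (mmio_base/len are made up):
on MMIO, set_memory_wc() appears to return success without actually
changing the mapping, so the cache type has to be chosen at ioremap time
instead.

```c
/*
 * Sketch only: set_memory_wc() on an ioremap'd MMIO address returns 0
 * but the mapping does not become WC; the right approach is to pick
 * the type when creating the mapping.
 */
static void __iomem *demo_mmio_wc(resource_size_t mmio_base,
				  unsigned long len)
{
	void __iomem *mmio = ioremap_nocache(mmio_base, len);

	if (!mmio)
		return NULL;

	/* Returns 0 here, yet the mapping stays UC: */
	set_memory_wc((unsigned long)mmio, len >> PAGE_SHIFT);

	/* What a driver should do instead: */
	iounmap(mmio);
	return ioremap_wc(mmio_base, len);
}
```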

I guess what I'm trying to say is that I am not sure we need the
set_cache_attr_*() APIs, unless of course we find a valid use.

> > And at that point we could definitely argue that set_cache_attr_*() APIs
> > should probably generate a warning for _RAM_, because they mostly make
> > sense for MMIO type of physical addresses, right? Regular RAM should
> > always be WB.
> >
> > Are there cases where we change the caching attribute of RAM for valid
> > reasons, outside of legacy quirks?
>
> ati_create_page_map() is one example that it gets a RAM page
> by __get_free_page(), and changes it to UC by calling set_memory_uc().

Should we instead have an API that lets it ask for UC RAM directly? That
would seem a bit cleaner. BTW, do you happen to know *why* it needs UC
RAM?
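For reference, the pattern ati_create_page_map() uses looks roughly like
this (simplified sketch, error handling trimmed; a hypothetical
alloc-as-UC API would collapse the first two steps into one call):

```c
/*
 * Sketch of the allocate-then-convert pattern: grab a WB RAM page and
 * flip it to UC with set_memory_uc(), restoring WB before freeing.
 */
static void *alloc_uc_page(void)
{
	unsigned long addr = __get_free_page(GFP_KERNEL);

	if (!addr)
		return NULL;
	if (set_memory_uc(addr, 1)) {		/* WB -> UC */
		free_page(addr);
		return NULL;
	}
	return (void *)addr;
}

static void free_uc_page(void *p)
{
	set_memory_wb((unsigned long)p, 1);	/* back to WB before free */
	free_page((unsigned long)p);
}
```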

> Since RAM is already mapped with WB, the driver has to change its attribute
> when it needs a different cache type.  For MMIO, drivers should be able to
> create a map with the right attribute since they own the ranges.

From my own work so far I tend to agree here, but that's just because
I've seen how badly drivers can suck: they quickly reach for hacks where
what's actually needed is a proper architectural solution in the driver
or firmware. In the worst case we have insane firmware, but I think
that's really uncommon, and its design was likely flawed because PAT was
still rare at the time.

> > > > Basically if ioremap_uc() and ioremap_wc() is allowed on MMIO areas,
> > > > then I see no reason in principle why it should be invalid to change
> > > > the area from UC to WC after it has been ioremap()ed.
> > >
> > > The current implementation does not support MMIO.
> > >  - It does not track cache attribute correctly for MMIO since it uses
> > > __pa().
> >
> > Hm, so __pa() works on mmio addresses as well - at least on x86. The
> > whole memtype tree is physical address based - which too is MMIO
> > compatible in principle.
> >
> > The only problem would be if it was fundamentally struct page * based -
> > but it's not AFAICS. We have a few APIs that work on struct page *
> > arrays, but those are just vectored helper APIs AFAICS.
> >
> > What am I missing?
>
> I do not think __pa() returns the right physical address for an ioremap'd
> virtual address since its virtual address is allocated from the vmalloc
> range.  But we can simply use slow_virt_to_phys(), instead.
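A sketch of that distinction (mmio_base is a placeholder): __pa() only
inverts the kernel's direct linear mapping, while an ioremap() return
value lives in the vmalloc area, so its physical address has to be
recovered by walking the page tables, which is what slow_virt_to_phys()
does on x86.

```c
/*
 * Sketch: resolving the physical address behind an ioremap'd pointer.
 */
static phys_addr_t demo_phys_lookup(resource_size_t mmio_base)
{
	void __iomem *p = ioremap_nocache(mmio_base, PAGE_SIZE);

	/* __pa(p) would be garbage: p is not in the direct mapping. */

	/* slow_virt_to_phys() walks the page tables instead: */
	return slow_virt_to_phys((void __force *)p);
}
```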
>
> > >  - It only supports the attribute transition {WB -> NewType -> WB}
> > > for RAM.  RAM is tracked differently, in that WB is treated as "no
> > > map".  So, this transition does not cause a conflict on RAM, but it
> > > will cause a conflict on MMIO once MMIO is tracked correctly.
> >
> > That looks like a bug?
>
> This is by design since set_memory_xx was introduced for RAM only.  If we
> extend it to MMIO, then we need to change how memtype manages MMIO.

I'd be afraid to *want* to support this on MMIO as I would only expect
hacks from drivers.

Luis