Re: Interacting with coherent memory on external devices

From: Christoph Lameter
Date: Mon Apr 27 2015 - 11:08:36 EST


On Sat, 25 Apr 2015, Paul E. McKenney wrote:

> Would you have a URL or other pointer to this code?

linux/mm/migrate.c

> > > Without modifying a single line of mm code, the only way to do this is to
> > > either unmap from the cpu page table the range being migrated or to mprotect
> > > it in some way. In both cases the cpu access will trigger some kind of fault.
> >
> > Yes that is how Linux migration works. If you can fix that then how about
> > improving page migration in Linux between NUMA nodes first?
>
> In principle, that also would be a good thing. But why do that first?

Because a faster, more reliable way of moving pages around would benefit a
lot of functionality that relies on page migration today.

> > > This is not the behavior we want. What we want is same address space while
> > > being able to migrate system memory to device memory (who make that decision
> > > should not be part of that discussion) while still gracefully handling any
> > > CPU access.
> >
> > Well then there could be a situation where you have concurrent write
> > access. How do you reconcile that then? Somehow you need to stall one or
> > the other until the transaction is complete.
>
> Or have store buffers on one or both sides.

Well, if those store buffers end up with divergent contents then you have
the problem of not being able to decide which version should survive. But
from Jerome's response I deduce that this is avoided by only allowing
read-only access during migration. That is actually similar to what page
migration does.

> > > This means if CPU access it we want to migrate memory back to system memory.
> > > To achieve this there is no way around adding couple of if inside the mm
> > > page fault code path. Now do you want each driver to add its own if branch
> > > or do you want a common infrastructure to do just that ?
> >
> > If you can improve the page migration in general then we certainly would
> > love that. Having faultless migration is certainly a good thing for a lot
> > of functionality that depends on page migration.
>
> We do have to start somewhere, though. If we insist on perfection for
> all situations before we agree to make a change, we won't be making very
> many changes, now will we?

Improvements to the general code would be preferred over specialized
solutions for one particular piece of hardware. If the general code can
then handle the special coprocessor situation as well, we avoid a lot of
duplicated development.

> As I understand it, the trick (if you can call it that) is having the
> device have the same memory-mapping capabilities as the CPUs.

Well, yes, that works with read-only mappings. Maybe we can special-case
that in the page migration code? We actually do not need migration entries
if access is read-only.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/