Re: [openib-general] Re: [PATCH][RFC][0/4] InfiniBand userspace verbs implementation

From: Caitlin Bestler
Date: Tue Apr 26 2005 - 22:18:06 EST


On 4/26/05, Andrew Morton <akpm@xxxxxxxx> wrote:
> Roland Dreier <roland@xxxxxxxxxxx> wrote:
> >
> > Libor> Do you mean that the set/clear parameters to do_mlock()
> > Libor> are the actual flags which are set/cleared by the caller?
> > Libor> Also, the issue remains that the flags are not reference
> > Libor> counted, which is a problem if you are dealing with
> > Libor> overlapping memory regions, or even if one region ends and
> > Libor> another begins on the same page. Since the desire is to be
> > Libor> able to pin any memory that a user can malloc, this is a
> > Libor> likely scenario.
> >
> > Good point... we need to figure out how to handle:
> >
> > a) app registers 0x0000 through 0x17ff
> > b) app registers 0x1800 through 0x2fff
> > c) app unregisters 0x0000 through 0x17ff
> > d) the page at 0x1000 must stay pinned
>
> The userspace library should be able to track the tree and the overlaps,
> etc. Things might become interesting when the memory is MAP_SHARED
> pagecache and multiple independent processes are involved, although I guess
> that'd work OK.
>
> But afaict the problem wherein part of a page needs VM_DONTCOPY and the
> other part does not, cannot be solved.
>
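
To make the a)-d) scenario above concrete, here is a minimal sketch of
the per-page refcounting it calls for (assuming 4 KiB pages; every name
below is made up for illustration, not part of any OpenIB interface):

/*
 * Per-page pin refcounts, 4 KiB pages assumed.  Hypothetical names only.
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define NPAGES		16

static unsigned int pin_count[NPAGES];	/* refcount per page frame */

/* Count every page touched by the byte range [start, end]. */
static void register_range(unsigned long start, unsigned long end)
{
	for (unsigned long pg = start >> PAGE_SHIFT; pg <= end >> PAGE_SHIFT; pg++)
		pin_count[pg]++;	/* a real layer pins on the 0 -> 1 transition */
}

static void unregister_range(unsigned long start, unsigned long end)
{
	for (unsigned long pg = start >> PAGE_SHIFT; pg <= end >> PAGE_SHIFT; pg++)
		pin_count[pg]--;	/* ... and unpins on the 1 -> 0 transition */
}

int main(void)
{
	register_range(0x0000, 0x17ff);		/* a) covers pages 0 and 1 */
	register_range(0x1800, 0x2fff);		/* b) covers pages 1 and 2 */
	unregister_range(0x0000, 0x17ff);	/* c) page 1 drops from 2 to 1 */

	/* d) the page at 0x1000 is still referenced by b), so it stays pinned */
	printf("page 0x1000 pin count: %u\n", pin_count[0x1000 >> PAGE_SHIFT]);
	return 0;
}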

Which portion of the userspace library would do that tracking: the
HCA-dependent code, or the common code?

The HCA-dependent code could not keep an accurate count when the same
memory was registered with different HCAs (for example, with both the
internal network device and the external network device).

The vendor-independent code *could* do it, but only by maintaining a
complete list of all registrations that had been issued but not cancelled.
That data would duplicate state already kept at the verb layer and in
the kernel.

It *would* work, but maintaining highly redundant data at multiple layers
is something I generally try to avoid.
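
For illustration, the kind of redundant bookkeeping in question would
look something like the sketch below (hypothetical names and structures;
this mirrors no actual libibverbs code):

/*
 * Every live registration, kept just so the library can tell whether a
 * page is still covered by some other region before releasing its pin --
 * state the verbs layer and the kernel already hold.
 */
#include <stdbool.h>
#include <stdlib.h>

struct reg_entry {
	unsigned long		start, end;	/* byte range, inclusive */
	struct reg_entry	*next;
};

static struct reg_entry *reg_list;		/* all live registrations */

static void track_register(unsigned long start, unsigned long end)
{
	struct reg_entry *e = malloc(sizeof(*e));

	e->start = start;
	e->end = end;
	e->next = reg_list;
	reg_list = e;
}

/* True if a registration other than @self still covers the page at @page. */
static bool page_still_covered(unsigned long page,
			       const struct reg_entry *self)
{
	for (const struct reg_entry *e = reg_list; e; e = e->next)
		if (e != self && e->start <= page + 0xfffUL && e->end >= page)
			return true;
	return false;
}

int main(void)
{
	track_register(0x0000, 0x17ff);		/* region a */
	track_register(0x1800, 0x2fff);		/* region b */

	/* Unregistering a: scan the whole list to learn that page 0x1000 is
	 * still covered by b, so its pin must be kept. */
	struct reg_entry *a = reg_list->next;	/* a was tracked first */
	return page_still_covered(0x1000, a) ? 0 : 1;
}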