Re: [PATCH 0/7] devcg: device cgroup extension for rdma resource
From: Parav Pandit
Date: Mon Sep 14 2015 - 14:54:51 EST
On Mon, Sep 14, 2015 at 10:58 PM, Jason Gunthorpe
<jgunthorpe@xxxxxxxxxxxxxxxxxxxx> wrote:
> On Mon, Sep 14, 2015 at 04:39:33PM +0530, Parav Pandit wrote:
>
>> 1. How is the % of a resource different from an absolute number? With
>> the rest of the cgroup subsystems we define absolute numbers in most
>> places, to my knowledge.
>
> There isn't really much choice if the abstraction is a bundle of all
> resources. You can't use an absolute number unless every possible
> hardware limited resource is defined, which doesn't seem smart to me
> either.
An absolute number or a percentage is a representation of a given
property, and that property needs a definition, doesn't it?
How do we tell users they are given a certain amount of "some
undefined" resource, which they don't know how to administer or
configure?
It has to be a quantifiable entity.
> It is not abstract enough, and doesn't match our universe of
> hardware very well.
>
Why does the user need to know the actual hardware resource limits or
define hardware-specific resources?
The RDMA verbs layer is the abstraction point.
We could well define
(a) how many RDMA connections are allowed, instead of QPs, CQs, or AHs, or
(b) how many data transfer buffers to use.
The fact is that we have many mid-layers which use these resources
differently, so the abstraction above does not fit the bill.
But we do know how the mid-layers operate and how they do their RDMA
resource keeping.
So if we deploy an MPI application on a given cluster of containers, we
can configure the RDMA resources accurately, can't we?
Another example: if we want to give only 50% of the resources to all
containers and leave the other 50% to kernel consumers such as NFS, all
containers can reside in a single rdma cgroup limited to those
amounts.
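For concreteness, something like the sketch below is what I have in
mind. The mount point, file names, and the "qp=" syntax are entirely
hypothetical, made up for illustration; the actual interface is
whatever this patch set defines.

```shell
# Hypothetical sketch only: knob names below are illustrative, not the
# interface of this patch set. Assume the device exposes 65536 QPs.

# One rdma cgroup shared by all containers, capped at half the QP pool;
# the other half stays available to kernel consumers such as NFS.
mkdir -p /sys/fs/cgroup/rdma/containers
echo "mlx4_0 qp=32768" > /sys/fs/cgroup/rdma/containers/rdma.max

# Every container's init task joins this one cgroup.
echo "$CONTAINER_INIT_PID" > /sys/fs/cgroup/rdma/containers/cgroup.procs
```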
>> 2. bytes of kernel memory for RDMA structures
>> One QP of one vendor might consume X bytes and another Y bytes. How
>> does the application know how much memory to give?
>
> I don't see this distinction being useful at such a fine granularity
> where the control side needs to distinguish between 1 and 2 QPs.
>
> The majority use of control groups has been along with containers, to
> prevent one container from exhausting resources in a way that impacts
> another.
>
Right. That's the intention.
> In that use model limiting each container to N MB of kernel memory
> makes it straightforward to reason about resource exhaustion in a
> multi-tenant environment. We have other controllers that do this,
> just more indirectly (ie limiting the number of inotifies, or the
> number of fds indirectly cap kernel memory consumption)
>
> ie Presumably some fairly small limitation like 10MB is enough for
> most non-MPI jobs.
A container application can always write a simple for loop to grab the
majority of the QPs while staying within a 10MB limit.
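A back-of-envelope illustration of that concern; the per-QP cost and
pool size below are assumptions picked for the arithmetic, not numbers
from any driver:

```shell
# Assume ~256 bytes of long-lived kernel memory per QP, and a device
# with a fixed hardware pool of 32768 QPs (both numbers assumed).
per_qp_kmem_bytes=256
hw_qp_pool=$((32 * 1024))
kmem_limit_bytes=$((10 * 1024 * 1024))   # the 10MB cap suggested above

# QPs a single container can create before the memory cap stops it:
qps_within_kmem_limit=$((kmem_limit_bytes / per_qp_kmem_bytes))

echo "QPs allowed by 10MB kmem cap: $qps_within_kmem_limit"   # 40960
echo "QPs in the hardware pool:     $hw_qp_pool"              # 32768
```

With those assumed numbers, the memory cap permits 40960 QPs, which is
more than the entire hardware pool, so a simple allocation loop starves
every other consumer while staying comfortably under 10MB.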
>
>> An application doing 100 QP allocations, while still within the
>> cgroup's memory limit, leaves other applications without any QPs.
>
> No, if the HW has a fixed QP pool then it would hit #1 above. Both are
> active at once. For example you'd say a container cannot use more than
> 10% of the device's hardware resources, or more than 10MB of kernel
> memory.
>
Right, so we need to define this resource pool, right?
Why can it not be the verbs abstraction?
How many resources are really used to implement the verbs layer is left
to the hardware vendor.
An abstract pool just adds confusion instead of clarity.
Imagine if, instead of tcp_bytes or kmem bytes, it were "some memory
resource"; how would someone debug or tune a system with such abstract
knobs?
> If on an mlx card, you probably hit the 10% of QP resources first. If
> on a qib card there is no HW QP pool (well, almost, QPNs are always
> limited), so you'd hit the memory limit instead.
>
> In either case, we don't want to see a container able to exhaust
> either all of kernel memory or all of the HW resources to deny other
> containers.
>
> If you have a non-container use case in mind I'd be curious to hear
> it..
Containers are the prime case, but the non-container use case is
equally important.
Today an application, being a first-class citizen, can take up all the
resources, and an NFS mount will then fail.
So even without containers we should be able to restrict the resources
given to a user-mode application.
>
>> I don't see the point of a memory-footprint-based scheme, as memory
>> limits are well addressed by the smarter memory controller anyway.
>
> I don't think #1 is controlled by another controller. This is
> long-lived kernel-side memory allocation to support RDMA resource
> allocation - we certainly have nothing in the rdma layer that is
> tracking this stuff.
>
Some drivers perform mmap() of kernel memory into user space; other
drivers allocate user-space pages and map them to the device.
Tracking all of those would require intrusive changes spread across the
vendor drivers and the IB layer, which may not be the right way to
track this.
Memory allocation tracking, I believe, should be left to memcg.
>> If the hardware vendor defines the resource pool without saying
>> whether the resource is a QP or an MR, how can the actual
>> management/control point decide what should be controlled to what
>> limit?
>
> In the kernel each HW driver has to be involved to declare what its
> hardware resource limits are.
>
> In user space, it is just a simple limiter knob to prevent resource
> exhaustion.
>
> UAPI wise, nobody has to care if the limit is actually # of QPs or
> something else.
>
If we don't care about the resource, we cannot tune or limit it. The
number of MRs used by MPI vs. rsocket vs. accelio is very different.
> Jason