Re: [PATCH 0/7] devcg: device cgroup extension for rdma resource
From: Parav Pandit
Date: Mon Sep 14 2015 - 06:18:59 EST
On Sat, Sep 12, 2015 at 12:55 AM, Tejun Heo <tj@xxxxxxxxxx> wrote:
> Hello, Parav.
>
> On Fri, Sep 11, 2015 at 10:09:48PM +0530, Parav Pandit wrote:
>> > If you're planning on following what the existing memcg did in this
>> > area, it's unlikely to go well. Would you mind sharing what you have
>> > on mind in the long term? Where do you see this going?
>>
>> At least the current thought is: a central entity/authority monitors the
>> fail count and a new threshold count.
>> Fail count - as with similar counters elsewhere, indicates how many times
>> resource allocation failed.
>> Threshold count - indicates the high-water mark this resource's usage has
>> reached. (An application might not be able to poll on thousands of such
>> resource entries).
>> So based on fail count and threshold count, it can tune things further.
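To make that concrete, below is a minimal sketch of how a monitoring agent
could read such per-cgroup counters. The rdma.failcnt and rdma.max_usage
file names are hypothetical here, modeled on memcg's memory.failcnt and
memory.max_usage_in_bytes:

/* Sketch of an agent reading hypothetical per-cgroup RDMA counters.
 * The rdma.* file names are assumptions, not an existing interface. */
#include <stdio.h>

static long read_counter(const char *cg_path, const char *file)
{
        char path[256];
        long val = -1;
        FILE *f;

        snprintf(path, sizeof(path), "%s/%s", cg_path, file);
        f = fopen(path, "r");
        if (!f)
                return -1;
        if (fscanf(f, "%ld", &val) != 1)
                val = -1;
        fclose(f);
        return val;
}

int main(void)
{
        const char *cg = "/sys/fs/cgroup/rdma/app1";    /* hypothetical cgroup */
        long failcnt = read_counter(cg, "rdma.failcnt");     /* allocation failures */
        long highwater = read_counter(cg, "rdma.max_usage"); /* usage high-water mark */

        printf("failcnt=%ld high-water=%ld\n", failcnt, highwater);
        return 0;
}

The point of exposing only these two counters is that the agent can poll a
handful of files per cgroup instead of tracking every individual resource.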
>
> So, regardless of the specific resource in question, implementing
> adaptive resource distribution requires more than simple thresholds
> and failcnts.
Maybe yes. But it's difficult to work through the whole design and shape
it up right now.
This is infrastructure being built up with a few initial capabilities.
I see this as a starting point rather than an end point.
> The very minimum would be a way to exert reclaim
> pressure and then a way to measure how much lack of a given resource
> is affecting the workload. Maybe it can adaptively lower the limits
> and then watch how often allocation fails but that's highly unlikely
> to be an effective measure as it can't do anything to hoarders and the
> frequency of allocation failure doesn't necessarily correlate with the
> amount of impact the workload is getting (it's not a measure of
> usage).
It can always kill the hoarding process(es) that hold resources without
using them.
Such processes will eventually get restarted, but they will not be able
to hoard as much, because they have been on the radar for hoarding and
their limits have been reduced.
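As a rough sketch of that enforcement step in the agent (rdma.limit is a
hypothetical per-cgroup limit file; cgroup.procs is the standard cgroup
membership file listing member PIDs):

/* Sketch: agent-side enforcement against a hoarding cgroup. */
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>

static void kill_cgroup_tasks(const char *cg_path)
{
        char path[256];
        FILE *f;
        int pid;

        snprintf(path, sizeof(path), "%s/cgroup.procs", cg_path);
        f = fopen(path, "r");
        if (!f)
                return;
        while (fscanf(f, "%d", &pid) == 1)
                kill((pid_t)pid, SIGKILL);  /* reap the hoarder */
        fclose(f);
}

static void lower_limit(const char *cg_path, long new_limit)
{
        char path[256];
        FILE *f;

        snprintf(path, sizeof(path), "%s/rdma.limit", cg_path);
        f = fopen(path, "w");
        if (!f)
                return;
        fprintf(f, "%ld\n", new_limit); /* tighter cap for the restarted workload */
        fclose(f);
}

int main(void)
{
        const char *cg = "/sys/fs/cgroup/rdma/app1";    /* hypothetical hoarder */

        kill_cgroup_tasks(cg);
        lower_limit(cg, 128);   /* e.g. cap the cgroup at 128 resources */
        return 0;
}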
>
> This is what I'm wary about. The kernel-userland interface here is
> cut pretty low in the stack leaving most of arbitration and management
> logic in the userland, which seems to be what people wanted and that's
> fine, but then you're trying to implement an intelligent resource
> control layer which straddles across kernel and userland with those
> low level primitives which inevitably would increase the required
> interface surface as nobody has enough information.
>
We might be able to gather that information as we go along.
Such an arbitration and management layer outside the kernel (instead of
inside it) has more visibility across the multiple systems that form a
single cluster, with processes spread across cgroups on each such system,
whereas logic inside the kernel can manage only the processes of a single
node across its cgroups.
> Just to illustrate the point, please think of the alsa interface. We
> expose hardware capabilities pretty much as-is leaving management and
> multiplexing to userland and there's nothing wrong with it. It fits
> better that way; however, we don't then go try to implement cgroup
> controller for PCM channels. To do any high-level resource
> management, you gotta do it where the said resource is actually
> managed and arbitrated.
>
> What's the allocation frequency you're expecting? It might be better
> to just let allocations themselves go through the agent that you're
> planning.
In that case we might need to build a FUSE-style infrastructure.
The frequency of RDMA resource allocation is certainly lower than that of
read/write calls.
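If allocations themselves went through the agent as you suggest, the client
side of the round trip could look roughly like the sketch below; the socket
path and the one-line alloc/ok protocol are invented purely for
illustration:

/* Sketch: client side of routing a resource allocation request
 * through a userspace agent over a Unix socket. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int request_allocation(const char *resource)
{
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        char buf[64];
        ssize_t n;
        int fd, ok = 0;

        strncpy(addr.sun_path, "/run/rdma-agent.sock",
                sizeof(addr.sun_path) - 1);     /* hypothetical agent socket */
        fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
                return 0;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                dprintf(fd, "alloc %s\n", resource);    /* ask for a grant */
                n = read(fd, buf, sizeof(buf) - 1);
                if (n > 0) {
                        buf[n] = '\0';
                        ok = !strncmp(buf, "ok", 2);    /* agent granted it */
                }
        }
        close(fd);
        return ok;
}

int main(void)
{
        if (request_allocation("qp"))   /* e.g. ask for one queue pair */
                printf("allocation granted by agent\n");
        return 0;
}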
> You sure can use cgroup membership to identify who's asking
> tho. Given how the whole thing is architectured, I'd suggest thinking
> more about how the whole thing should turn out eventually.
>
Yes, I agree.
At this point, it is a software solution that provides resource isolation
in a simple manner, with scope to become adaptive in the future.
> Thanks.
>
> --
> tejun