Re: [RFC][PATCH v7 08/14] writeback: add memcg fields to writeback_control
From: Greg Thelen
Date: Sun May 15 2011 - 15:53:43 EST
On Fri, May 13, 2011 at 2:41 AM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> On Fri, 13 May 2011 01:47:47 -0700
> Greg Thelen <gthelen@xxxxxxxxxx> wrote:
>
>> Add writeback_control fields to differentiate between bdi-wide and
>> per-cgroup writeback. Cgroup writeback is also able to differentiate
>> between writing inodes isolated to a particular cgroup and inodes shared
>> by multiple cgroups.
>>
>> Signed-off-by: Greg Thelen <gthelen@xxxxxxxxxx>
>
> Personally, I want to see new flags with their usage in a patch...
Ok. The next version will merge each flag definition with the first usage of that flag.
>> ---
>> include/linux/writeback.h | 2 ++
>> 1 files changed, 2 insertions(+), 0 deletions(-)
>>
>> diff --git a/include/linux/writeback.h b/include/linux/writeback.h
>> index d10d133..4f5c0d2 100644
>> --- a/include/linux/writeback.h
>> +++ b/include/linux/writeback.h
>> @@ -47,6 +47,8 @@ struct writeback_control {
>> unsigned for_reclaim:1; /* Invoked from the page allocator */
>> unsigned range_cyclic:1; /* range_start is cyclic */
>> unsigned more_io:1; /* more io to be dispatched */
>> + unsigned for_cgroup:1; /* enable cgroup writeback */
>> + unsigned shared_inodes:1; /* write inodes spanning cgroups */
>> };
>
>
> If shared_inode is really rare case...we don't need to have this shared_inodes
> flag and do writeback shared_inode always.....No ?
>
> Thanks,
> -Kame
The shared_inodes field is present to avoid penalizing cgroups that are
not sharing inodes when they run on a system that also includes sharing.
This issue is being debated in another thread: "[RFC][PATCH v7 00/14]
memcg: per cgroup dirty page accounting". Depending on the outcome of
that discussion, we may be able to delete the shared_inodes field if we
choose to always write shared inodes in both cgroup foreground and
cgroup background writeback.
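To illustrate the intent, here is a user-space sketch of how the two
flags might gate inode selection. The struct mirrors the bitfields in
the diff, but should_write_inode() and inode_spans_cgroups are
hypothetical names for illustration only; the real selection logic
lives in later patches of the series.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of the bitfields added by the patch; this is a
 * user-space sketch, not the kernel's struct writeback_control. */
struct writeback_control {
	unsigned for_cgroup:1;     /* enable cgroup writeback */
	unsigned shared_inodes:1;  /* write inodes spanning cgroups */
};

/* Hypothetical predicate: during per-cgroup writeback, an inode whose
 * dirty pages span multiple cgroups is skipped unless shared_inodes
 * is set; bdi-wide writeback writes everything as before. */
static bool should_write_inode(const struct writeback_control *wbc,
			       bool inode_spans_cgroups)
{
	if (!wbc->for_cgroup)
		return true;	/* bdi-wide writeback: write all inodes */
	if (inode_spans_cgroups)
		return wbc->shared_inodes != 0;	/* shared: only if requested */
	return true;	/* inode isolated to the cgroup under writeback */
}
```

This is why the flag avoids punishing non-sharing cgroups: with
for_cgroup set and shared_inodes clear, only inodes isolated to the
target cgroup are written.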
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/