Re: [RFC][PATCH] update /proc/sys/vm/drop_caches documentation
From: Tim Pepper
Date: Wed Sep 15 2010 - 15:25:08 EST
On Wed 15 Sep at 13:33:03 +0900 kamezawa.hiroyu@xxxxxxxxxxxxxx said:
> >
> > diff -puN fs/drop_caches.c~update-drop_caches-documentation fs/drop_caches.c
> > --- linux-2.6.git/fs/drop_caches.c~update-drop_caches-documentation 2010-09-14 15:44:29.000000000 -0700
> > +++ linux-2.6.git-dave/fs/drop_caches.c 2010-09-14 15:58:31.000000000 -0700
> > @@ -47,6 +47,8 @@ int drop_caches_sysctl_handler(ctl_table
> > {
> > proc_dointvec_minmax(table, write, buffer, length, ppos);
> > if (write) {
> > + WARN_ONCE(1, "kernel caches forcefully dropped, "
> > + "see Documentation/sysctl/vm.txt\n");
>
> Documentation update seems good, but showing a warning seems to be meddling to me.
We already have examples of things where we warn in order to turn up
"interesting" userspace code, in the hope of starting dialog and getting
things fixed for the future. I don't see this so much as meddling as
one of the fundamental aspects of open source.
drop_caches probably should have gone in under a CONFIG_DEBUG option
originally (even if all the distros would have turned it on), had a
WARN_ON (personally I'd argue for that over WARN_ONCE()), and even been
exposed in debugfs rather than procfs...but it's part of "the interface"
at this point.
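(Just to illustrate what I mean, a rough sketch of how such a knob could
have been exposed through debugfs instead; the names and placement here
are made up for illustration, not a proposal:)

	/* Hypothetical debugfs variant of the same knob, sketch only: */
	static int drop_caches_set(void *data, u64 val)
	{
		if (val & 1)
			iterate_supers(drop_pagecache_sb, NULL);
		if (val & 2)
			drop_slab();
		return 0;
	}
	DEFINE_SIMPLE_ATTRIBUTE(drop_caches_fops, NULL, drop_caches_set,
				"%llu\n");

	/* somewhere in an __init path, guarded by the debug config: */
	debugfs_create_file("drop_caches", 0200, NULL, NULL,
			    &drop_caches_fops);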
Somebody doing debug and testing that leverages drop_caches should not
be bothered by a WARN_ON().  Somebody using it to "fix" the kernel with
repeated/regular calls to drop_caches should get called out so they fix
their own usage, and having the WARN_*() record the caller's comm could
help with that...unless somebody has a use case where repeated/regular
calls to drop_caches are valid and not connected to buggy usage or
explicit performance debug/testing?
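For illustration, something along these lines in the handler could do
it (the exact message text and helpers are just my sketch, not what
Dave's patch does):

	if (write) {
		/* Sketch: warn on every write and name the caller so
		 * repeated/regular users show up clearly in the log. */
		WARN(1, "%s (pid %d) dropped kernel caches, "
			"see Documentation/sysctl/vm.txt\n",
			current->comm, task_pid_nr(current));
		...
	}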
--
Tim Pepper <lnxninja@xxxxxxxxxxxxxxxxxx>
IBM Linux Technology Center