Re: [PATCH 1/4] vmscan: simplify shrink_inactive_list()

From: Dave Chinner
Date: Fri Apr 16 2010 - 22:38:13 EST


On Fri, Apr 16, 2010 at 03:57:07PM +0100, Mel Gorman wrote:
> On Fri, Apr 16, 2010 at 09:40:13AM +1000, Dave Chinner wrote:
> > On Thu, Apr 15, 2010 at 06:54:16PM +0200, Andi Kleen wrote:
> > > > It's a buying-time venture, I'll agree, but as both approaches are only
> > > > about reducing stack usage they wouldn't be long-term solutions by your
> > > > criteria. What do you suggest?
> > >
> > > (from easy to more complicated):
> > >
> > > - Disable direct reclaim with 4K stacks
> >
> > Just to re-iterate: we're blowing the stack with direct reclaim on
> > x86_64 w/ 8k stacks.
>
> Yep, that is not being disputed. By the way, what did you use to
> generate your report? Was it CONFIG_DEBUG_STACK_USAGE or something else?
> I used a modified bloat-o-meter to gather my data but it'd be nice to
> be sure I'm seeing the same things as you (minus XFS unless I
> specifically set it up).

I'm using the tracing subsystem to get them. Doesn't everyone use
that now? ;)

$ grep STACK .config
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
# CONFIG_CC_STACKPROTECTOR is not set
CONFIG_STACKTRACE=y
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_STACK_TRACER=y
# CONFIG_DEBUG_STACKOVERFLOW is not set
# CONFIG_DEBUG_STACK_USAGE is not set

Then:

# echo 1 > /proc/sys/kernel/stack_tracer_enabled

<run workloads>

Monitor the worst recorded stack usage as it changes via:

# cat /sys/kernel/debug/tracing/stack_trace
Depth Size Location (44 entries)
----- ---- --------
0) 5584 288 get_page_from_freelist+0x5c0/0x830
1) 5296 272 __alloc_pages_nodemask+0x102/0x730
2) 5024 48 kmem_getpages+0x62/0x160
3) 4976 96 cache_grow+0x308/0x330
4) 4880 96 cache_alloc_refill+0x27f/0x2c0
5) 4784 96 __kmalloc+0x241/0x250
6) 4688 112 vring_add_buf+0x233/0x420
......
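FWIW, the high-water mark the stack tracer records can also be read and
reset between workloads, so each run starts from a clean slate. A sketch,
assuming the usual debugfs mount at /sys/kernel/debug on this kernel:

```shell
# Largest stack depth recorded so far, in bytes:
cat /sys/kernel/debug/tracing/stack_max_size

# Reset the high-water mark before starting the next workload:
echo 0 > /sys/kernel/debug/tracing/stack_max_size

# The full backtrace at that worst-case point, as above:
cat /sys/kernel/debug/tracing/stack_trace
```

(Needs CONFIG_STACK_TRACER=y and stack_tracer_enabled set, as per the
.config and sysctl shown earlier.)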


Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx