Re: virtio ring cleanups, which save stack on older gcc
From: Rusty Russell
Date: Fri May 30 2014 - 03:01:48 EST
Minchan Kim <minchan@xxxxxxxxxx> writes:
> On Thu, May 29, 2014 at 08:38:33PM +0930, Rusty Russell wrote:
>> Minchan Kim <minchan@xxxxxxxxxx> writes:
>> > Hello Rusty,
>> >
>> > On Thu, May 29, 2014 at 04:56:41PM +0930, Rusty Russell wrote:
>> >> They don't make much difference: the easier fix is to use gcc 4.8,
>> >> which drops the stack required across virtio block's virtio_queue_rq
>> >> down to that kmalloc in virtio_ring from 528 to 392 bytes.
>> >>
>> >> Still, these (*lightly tested*) patches reduce to 432 bytes,
>> >> even for gcc 4.6.4. Posted here FYI.
>> >
>> > I am testing with the hack below, based on Dave's idea, so I don't
>> > have a machine to test your patches until tomorrow.
>> > I will queue your patches on the test machine tomorrow morning.
>>
>> More interesting would be updating your compiler to 4.8, I think.
>> Saving <100 bytes on virtio is not going to save you, right?
>
> But in my report, virtio_ring consumes more stack than in yours.
Yeah, weird. I wonder if it's because I'm measuring before the call to
kmalloc; gcc probably spills extra crap on the stack before that.
You got 904 bytes:
                                          5928     376   vring_add_indirect+0x36/0x200
[  111.404781] <...>-15987  5d..2 111689538us : stack_trace_call:   9)     5552     144   virtqueue_add_sgs+0x2e2/0x320
[  111.404781] <...>-15987  5d..2 111689538us : stack_trace_call:  10)     5408     288   __virtblk_add_req+0xda/0x1b0
[  111.404781] <...>-15987  5d..2 111689538us : stack_trace_call:  11)     5120      96   virtio_queue_rq+0xd3/0x1d0
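(For reference, per-frame numbers like these come from ftrace's stack tracer. A rough sketch of how such a trace is typically captured, assuming CONFIG_STACK_TRACER=y and debugfs mounted at /sys/kernel/debug:

```sh
# Enable the kernel stack tracer
echo 1 > /proc/sys/kernel/stack_tracer_enabled
# ... run the workload under test ...
# Deepest stack seen so far, and its per-frame breakdown
cat /sys/kernel/debug/tracing/stack_max_size
cat /sys/kernel/debug/tracing/stack_trace
```

The second and third columns are the remaining stack and the size of each frame; the frame sizes above sum to the 904 bytes quoted.)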
When I move my "stack_top" save code into __kmalloc, with gcc 4.6 and your
.config I get:
[ 2.506869] virtio_blk: stack used = 640
So I don't know quite what's going on :(
Cheers,
Rusty.
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index cb9b1f8326c3..894e290b4bd2 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -151,15 +151,19 @@ static void virtblk_done(struct virtqueue *vq)
spin_unlock_irqrestore(&vblk->vq_lock, flags);
}
+extern struct task_struct *record_stack;
+extern unsigned long stack_top;
+
static int virtio_queue_rq(struct blk_mq_hw_ctx *hctx, struct request *req)
{
struct virtio_blk *vblk = hctx->queue->queuedata;
struct virtblk_req *vbr = req->special;
unsigned long flags;
unsigned int num;
+ unsigned long stack_bottom;
const bool last = (req->cmd_flags & REQ_END) != 0;
int err;
-
+
BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
vbr->req = req;
@@ -199,7 +203,12 @@ static int virtio_queue_rq(struct blk_mq_hw_ctx *hctx, struct request *req)
}
spin_lock_irqsave(&vblk->vq_lock, flags);
+ record_stack = current;
+ __asm__ __volatile__("movq %%rsp,%0" : "=g" (stack_bottom));
err = __virtblk_add_req(vblk->vq, vbr, vbr->sg, num);
+ record_stack = NULL;
+
+ printk("virtio_blk: stack used = %lu\n", stack_bottom - stack_top);
if (err) {
virtqueue_kick(vblk->vq);
blk_mq_stop_hw_queue(hctx);
diff --git a/mm/slub.c b/mm/slub.c
index 2b1ce697fc4b..0f9a1a6b381e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3278,11 +3278,22 @@ static int __init setup_slub_nomerge(char *str)
__setup("slub_nomerge", setup_slub_nomerge);
+extern struct task_struct *record_stack;
+struct task_struct *record_stack;
+EXPORT_SYMBOL(record_stack);
+
+extern unsigned long stack_top;
+unsigned long stack_top;
+EXPORT_SYMBOL(stack_top);
+
void *__kmalloc(size_t size, gfp_t flags)
{
struct kmem_cache *s;
void *ret;
+ if (record_stack == current)
+ __asm__ __volatile__("movq %%rsp,%0" : "=g" (stack_top));
+
if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
return kmalloc_large(size, flags);
--