Re: [PATCH] radeon: use vmalloc instead of kmalloc

From: Thomas Hellström
Date: Tue Jun 23 2009 - 11:14:41 EST

Dave Airlie wrote:
> On Tue, Jun 23, 2009 at 9:35 PM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>> On Mon, 2009-06-22 at 19:26 +0200, Jerome Glisse wrote:
>>> We don't need to allocate contiguous pages in the cs codepath,
>>> so use vmalloc instead.
>>
>> Best would be to not require >PAGE_SIZE allocations at all, of course.
>
> It gets messy when you have copy_from_user and spinlocks; it would be nice
> to just parse the userspace buffer a PAGE_SIZE at a time, but that would
> mean dropping a lock we probably need to hold.
>
>> But barring that, it would be great to have something like:
>>
>>     ptr = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);
>>     if (!ptr)
>>             ptr = vmalloc(size);
>
> We already have a drm_calloc_large workaround for the Intel driver, which
> also needs this.
One problem with doing multiple vmallocs per command submission is performance: judging from previous work, drivers that do this tend to get very CPU-hungry. Since Radeon allows only a single process into the command submission path at a time, what about pre-allocating a single larger vmalloced buffer at first command submission, taking care to flush user-space before the submitted command stream gets too big?

>> Also, how long do these allocations live? vmalloc space can be quite
>> limited (i386) and it can fragment too.
>
> Only an ioctl lifetime, so they aren't that bad. We do, however, have some
> vmaps that might be quite large, mainly the fbcon framebuffer (up to 8MB in
> some cases).

That one would be ioremapped, not vmapped, right? Not that it matters, since it's using vmalloc space anyway, but it wouldn't be worse than a traditionally ioremapped framebuffer.



Dri-devel mailing list
