tar tz < xxx.tgz ;-> ...
I re-tarred the file on my web page, so it doesn't make a mess anymore...
>
> On the actual code
>
> It tries to grab as big a chunk as possible. That's horrible if not needed.
> The Linux VM has problems with big chunks without someone writing a chunk
> eater. A lot of drivers don't care about chunk sizes (e.g. the BT848 video
> chip - perhaps a chunk hint is needed).
So you could theoretically feed the BT848 with, say, 128 x 32k chunks? (4MB)
For my graphics accelerator purposes I don't need DMA buffers bigger than
64k or 128k. And it can't scatter-gather, so it has to be one big chunk.
There are a few more options planned - MAP_CONTIGUOUS, min_frag_size and
max_fragments - and the allocation should then occur according to these
options.
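A minimal sketch of what I have in mind, assuming a C interface; the
struct layout and the function name bigphys_alloc() are made up here for
illustration, only the option names come from the paragraph above:

        /* options controlling how a DMA buffer may be put together */
        struct bigphys_opts {
                unsigned long flags;            /* e.g. MAP_CONTIGUOUS */
                unsigned long min_frag_size;    /* smallest usable fragment */
                unsigned long max_fragments;    /* longest scatter list */
        };

        /* allocate 'size' bytes honouring 'opts'; returns the kernel
         * virtual address of the first fragment, or 0 on failure */
        unsigned long bigphys_alloc(unsigned long size,
                                    struct bigphys_opts *opts);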
Let's introduce a language 'rule' here - when I talk about chunks I mean
4, 16, 32 ... 128k or more (yes, I saw that in the 2.1 kernels!!!); a
fragment can be a chunk or a combination of contiguous chunks (4k, 24k,
12k). The chunk size really doesn't matter - the funny thing is that
during testing I realized that very often the chunks found (those of 128k)
are contiguous and come back in reverse order. If I could somehow scan RAM
before grabbing the chunks, that would be great. I've heard there is some
patch which gives you fragmentation info about RAM - do you know where to
find it?
The following scenario:
- to keep the kernel sane, define limits, the worst and the best cases for
the amount of physical RAM, processes, users...
- scan RAM and find actual information about the contiguous areas, like:
|****|........|....|........|****|................|....|........|****|
  ^                            ^                                   ^
  |                            |                                   |
  |    8k+4k+8k = 20k free     |       16k+4k+8k = 28k free        |
4k reserved               4k reserved                         4k reserved
- if that's not enough, find out how much RAM you would get if you did all
possible swapping (but not beyond the defined limits!). This is actually
the same as what happens when you call __get_free_pages with the
GFP_KERNEL option, except that the only thing you get here is information
- now you have all the information, and DMA can decide what to do and map
only the _specific_ pages which would be best for the purpose.
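A rough sketch of the scanning step, assuming a 2.1-style mem_map[] where
a free page has a zero use count and no reserved bit; the exact field and
macro names differ between kernel versions, so take this as pseudocode
rather than a drop-in function:

        /* return the size (in bytes) of the longest run of free,
         * physically contiguous pages */
        unsigned long longest_free_run(void)
        {
                unsigned long i, run = 0, best = 0;

                for (i = 0; i < max_mapnr; i++) {
                        if (!PageReserved(mem_map + i) &&
                            atomic_read(&mem_map[i].count) == 0) {
                                run++;          /* extend current run */
                        } else {
                                if (run > best)
                                        best = run;  /* remember longest */
                                run = 0;
                        }
                }
                if (run > best)
                        best = run;
                return best * PAGE_SIZE;
        }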
However, I think a very good approach would be some defragment function in
the kernel. You don't even have to defragment the whole RAM, just as much
as you need. And the 4GB <-> 1GB tuning could be kept if you introduce
some structure which independently keeps all the necessary information
only at the moment the fragmentation happens, or if you reduce in some way
the cases where the physical -> virtual memory info must be kept.
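The defragment idea in miniature; every name here is hypothetical, nothing
like this exists in 2.0/2.1, and the last line is exactly where the
physical -> virtual info is needed, because today the kernel has no way to
find all the page tables that point at a given physical page:

        /* move one page out of the way to grow a contiguous free run */
        static int migrate_page(unsigned long src, unsigned long dst)
        {
                memcpy((void *) dst, (void *) src, PAGE_SIZE);
                /* hypothetical: fix up every PTE that mapped 'src' */
                return repoint_ptes(src, dst);
        }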
> Also perhaps it should look at how many big chunks are free and not use
> the last 'n', sort of like 2.1.x tries to keep them free.
Sure, but what is the limit in that case?
> Can I also have a GFP_DMA hint option? There are ISA cards that could
> benefit here.
Yes, I planned to put it in.
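In 2.1 terms the hint is just another flag to __get_free_pages(); for
example, an order-4 request (16 pages = 64k on i386) from the memory below
the 16MB ISA DMA limit would look roughly like:

        unsigned long buf = __get_free_pages(GFP_KERNEL | GFP_DMA, 4);
        if (!buf)
                /* fall back to smaller fragments, or fail */ ;

(In 2.0 the DMA request to __get_free_pages is a separate argument rather
than a flag, so the call looks slightly different there.)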
>
> It's using mmap. That unfortunately means I can't drop it into shared
> memory or the video RAM. That's a nasty problem for a few specific cases
> I care about - firstly mmap /dev/mem, start video grabbing on it, secondly
> grabbing into the image part of a sys5 shared memory MITSHM block.
It uses code from mmap. But - couldn't you drop the memory into a shared
area with remap_page_range()? Besides, I don't know much about MITSHM...
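What I mean, roughly; the driver glue around it is omitted, the names
my_mmap and buf are hypothetical, and the mmap entry point signature (and
the virt_to_phys() spelling) varies a bit between 2.0 and 2.1:

        /* map an already-allocated, physically contiguous buffer
         * 'buf' into the calling process */
        static int my_mmap(struct inode *inode, struct file *file,
                           struct vm_area_struct *vma)
        {
                unsigned long size = vma->vm_end - vma->vm_start;

                if (remap_page_range(vma->vm_start, virt_to_phys(buf),
                                     size, vma->vm_page_prot))
                        return -EAGAIN;
                return 0;
        }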
> Otherwise it looks nice
thanks
>
> Alan
>
simon
_______________________________________________________________________________
simon pogarcic sim@suse.de www.suse.de/~sim
_______________________________________________________________________________