Hmm.. Since the cpu_online_map thing can be used to fix this, this doesn't
seem to be a big issue, BUT
On Wed, 12 Jun 2002, Anton Altaparmakov wrote:
>
> 1) Use a single buffer and lock it so once one file is under decompression
> no other files can be and if multiple compressed files are being accessed
> simultaneously on different CPUs only one CPU would be decompressing. The
> others would be waiting for the lock. (Obviously scheduling and doing other
> stuff.)
>
> 2) Use multiple buffers and allocate a buffer every time the decompression
> engine is used. Note this means a vmalloc()+vfree() in EVERY ->readpage()
> for a compressed file!
>
> 3) Use one buffer for each CPU and use a critical section during
> decompression (disable preemption, don't sleep). Allocated at mount time of
> first partition supporting compression. Freed at umount time of last
> partition supporting compression.
>
> I think it is obvious why I went for 3)...
I don't see that as being all that obvious. The _obvious_ choice is just
(1), protected by a simple spinlock. 128kB/CPU seems rather wasteful,
especially as the only thing it buys you is scalability on multiple CPU's
for the case where you have multiple readers all at the same time touching
a new compressed block.
That scalability argument seems dubious, especially since this only happens
when you just had to do IO anyway: to actually take advantage of the
scalability, that IO would have had to happen on multiple separate
controllers.
Ehh?
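For comparison, the per-CPU variant (3) boils down to roughly the following;
same caveat, the names are made up and this is only a sketch of the
preempt-disable critical section described in the quote:

        /* one buffer per CPU, allocated at mount time of the first
         * compression-capable volume: NR_CPUS * 128kB in total */
        static u8 *ntfs_cb_buf[NR_CPUS];

        /* in ->readpage() of a compressed file: */
        preempt_disable();              /* stay on this CPU, don't sleep */
        ntfs_decompress(page, ntfs_cb_buf[smp_processor_id()], cb_size);
        preempt_enable();

which is exactly the 128kB-per-CPU cost being questioned above.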
Linus