Re: e2fs performance as function of block size

From: Jeff V. Merkey (jmerkey@timpanogas.org)
Date: Tue Nov 21 2000 - 19:37:57 EST


Alan Cox wrote:
>
> > It's as though the disk drivers are optimized for this case (1024). I
>
> The disk drivers are not, and they normally see merged runs of blocks so they
> will see big chunks rather than 1K then 1K then 1K etc.
>
> > behavior, but there is clearly some optimization relative to this size
> > inherent in the design of Linux -- and it may be a pure accident. This
> > person may be mixing and matching block sizes in the buffer cache, which
> > would satisfy your explanation.
>
> I see higher performance with 4K block sizes. I should see higher latency too
                                                             ^^^^^^^^^^^^^^^^^^
Since buffer heads are chained, this would make sense.

> but have never been able to measure it. Maybe it depends on the file system.
> It certainly depends on the nature of requests
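
As a concrete illustration of the merging Alan describes above, here
is a minimal userspace toy (not the kernel's actual elevator code, and
the block numbers are invented): adjacent 1K requests collapse into a
few large runs before a driver would ever see them.

#include <stdio.h>

int main(void)
{
	/* Hypothetical stream of 1K block numbers, as a filesystem
	 * might submit them; mostly sequential. */
	int blocks[] = { 100, 101, 102, 103, 200, 201, 300 };
	int n = sizeof(blocks) / sizeof(blocks[0]);
	int i = 0;

	while (i < n) {
		int start = blocks[i];
		int len = 1;

		/* Merge while the next block is physically adjacent. */
		while (i + len < n && blocks[i + len] == start + len)
			len++;

		printf("driver sees one request: block %d, %d KB\n",
		       start, len);
		i += len;
	}
	return 0;
}

Run against that sample input, it reports one 4 KB request, one 2 KB
request, and one 1 KB request rather than seven 1 KB requests.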

Could be. NWFS likes 4K block sizes -- this is its default. On Linux,
I've been emulating other block sizes beneath it. I see the best
performance at 1024-byte blocks and the worst at 512. The overhead of
buffer chaining is clearly the culprit.
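
For anyone who wants to repeat the block-size comparison, a rough
sketch of the kind of timing loop involved is below. It is a sketch
only: the test file path is a placeholder, and nothing is done to
defeat the buffer cache, so only the first cold run is meaningful.

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

/* Sequentially read 'path' in 'bs'-byte chunks and report throughput. */
static void time_reads(const char *path, size_t bs)
{
	char *buf = malloc(bs);
	int fd = open(path, O_RDONLY);
	struct timeval t0, t1;
	long long total = 0;
	ssize_t n;
	double secs;

	if (fd < 0 || buf == NULL) {
		perror(path);
		exit(1);
	}
	gettimeofday(&t0, NULL);
	while ((n = read(fd, buf, bs)) > 0)
		total += n;
	gettimeofday(&t1, NULL);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
	printf("%5lu-byte reads: %lld bytes in %.2fs (%.1f MB/s)\n",
	       (unsigned long)bs, total, secs, total / secs / 1048576.0);
	close(fd);
	free(buf);
}

int main(void)
{
	size_t sizes[] = { 512, 1024, 4096 };
	size_t i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		time_reads("/tmp/testfile", sizes[i]); /* placeholder path */
	return 0;
}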

Regarding the TCPIP oops on 2.2.18-22, I have not been able to
reproduce it reliably. It appears to be in the ppp code, however, and
not the TCPIP code. The problem only shows up after several pppd
connections have accessed the box and then terminated (which is why I
think it's ppp-related). I would rate this as a level IV bug, given
the difficulty of triggering it and the fact that you have to
deliberately misconfigure a TCPIP network to make it show up.

Jeff