I've got a very large (2TB) proprietary database that is kept on an XFS
partition under Debian with a 2.6.8 kernel. It seemed to work well, until I
recently ran some tests of my own and found that the performance should
be better than it is...
Basically, treat the database as just a bunch of random seeks. The XFS
partition sits on top of a SCSI array (a Dell PowerVault) with
13+1 disks in a RAID5, stripe size = 64k. I have done a number of tests
that mimic my app's access pattern and realized that I want the inode size
to be as large as possible (which on an Intel box is only 2k), and I played
with su and sw and settled on 64k and 13... and performance got better.
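(For reference, those settings correspond to remaking the filesystem with
something along the lines of

  mkfs.xfs -i size=2048 -d su=64k,sw=13 /dev/sdX

where /dev/sdX is just a placeholder for the actual array device.)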
BUT... here is what I need to understand: the file size has a drastic
effect on performance. If I do random reads from a 20GB file
(the system only has 2GB of RAM, so caching is not a factor), I get
performance about where I want it to be: roughly 5.7 - 6ms per block. But
if that file is 2TB, the time almost doubles, to about 11ms. Why is this?
No other factor changed, only the file size.
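In case it clarifies what I'm measuring, below is a stripped-down sketch of
the kind of test I run (the real tool is proprietary, so the file name, block
size, iteration count, and the use of O_DIRECT here are just stand-ins): it
times random, aligned 2k pread()s spread across the whole file and reports the
average latency per read.

/*
 * seektest.c -- rough sketch of the random-read test described above.
 * This is NOT the actual proprietary tool; the file name, block size,
 * iteration count and use of O_DIRECT are stand-ins.
 *
 * Build (32-bit box, large files):
 *   gcc -O2 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -o seektest seektest.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/stat.h>

#define BLKSZ 2048      /* size of each random read (matches the 2k records) */
#define ITERS 10000     /* number of random reads to time */

int main(int argc, char **argv)
{
    const char *path;
    int fd;
    struct stat st;
    off_t nblocks, blk;
    void *buf;
    struct timeval t0, t1;
    double elapsed;
    long i;

    path = (argc > 1) ? argv[1] : "testfile";

    /* O_DIRECT keeps the page cache out of the way so we see raw disk latency */
    fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
    nblocks = st.st_size / BLKSZ;

    /* O_DIRECT needs an aligned buffer */
    if (posix_memalign(&buf, 4096, BLKSZ) != 0) {
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }

    srand((unsigned) time(NULL));
    gettimeofday(&t0, NULL);
    for (i = 0; i < ITERS; i++) {
        /* pick a random block-aligned offset anywhere in the file */
        blk = (off_t) (((double) rand() / RAND_MAX) * (double) (nblocks - 1));
        if (pread(fd, buf, BLKSZ, blk * BLKSZ) != BLKSZ) {
            perror("pread");
            return 1;
        }
    }
    gettimeofday(&t1, NULL);

    elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%d reads, %.2f ms average per read\n",
           ITERS, elapsed * 1000.0 / ITERS);

    free(buf);
    close(fd);
    return 0;
}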
Another note: there is no file on this partition other than this one.