Re: BUG: jbd2 slowing down file copies even though no journaling file system is used
From: Roland Eggner
Date: Wed May 16 2012 - 17:03:21 EST
Hi Björn!
"Cc: LKML" added - sorry for the duplicate to your personal address.
On 2012-05-13 Sun 01:21, Björn Christoph wrote:
> I have very slow and inconsistent transfer rates when copying files to Linux.
>
> My system:
> Ubuntu 12.04 Server x64 (same issue with Debian Squeeze), 2 GB RAM, AMD
> 240e processor
>
> Hard disks:
> 1 * 500 GB Seagate Momentus XT Hybrid HDD
> 6 * 1.5 - 2 TB Samsung HDD
>
> The Momentus contains the OS in an encrypted LVM (dm-crypt) - one ext2
> for booting, one ext2 and one LVM partition (all primary).
>
> The other bigger hard disks each contain one encrypted Truecrypt
> partition. Most are with ext4, one is with ext3. I join them to one
> folder using "union-fs" (read only).
>
> I use samba for transfer of data from my Windows 7 PCs.
>
> The files stored are usually around 20 GB in size (movies), and I
> don't really care much about journaling and integrity (most partitions
> are mounted read-only anyways as they are full ;) )
>
> ------------
> The good scenario
>
> Copying files FROM the server to a Windows client is really OK
> performance-wise. I see 50% network utilization (1 Gbit network) and
> it's quite constant at around 46 MB/sec.
>
> […]
>
> ------------
> The bad scenario:
>
> I do also have to transfer data to this server. And here comes the problem.
>
> I copy the file to a specific hard disk / samba share (not to the union-fs).
>
> And here, the data transfer is just impossibly slow.
>
> Now before we end up in "it's encryption", let me say this is not the
> case and why:
>
> I have one standard ext2 partition on the Momentus (300 GB); ext2 has
> no journaling (which I really don't require).
>
> I copy the file to the ext2 partition. Average transfer rate around 29
> MB/sec. However, there are a lot of spikes in the network transfer.
> It peaks at around 35% and drops to 20% in a wave form, with
> sometimes even 0% transfer.
> […]
I guess your write performance problem is related to a large number of triple
indirect blocks and high file fragmentation:
(1) Are you aware of the fundamental difference between block-list filesystems
(ext2, ext3) and extent-based filesystems (ext4 with option extents enabled,
XFS, JFS, …)?
If not, imagine the huge metadata overhead of reading a 20G file whose millions
of data blocks are mapped through single, double and triple indirect blocks
(pointers to pointers to pointers to blocks), compared to reading the same 20G
file stored in just 10 extents.
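As a quick illustration (the path below is just a placeholder), filefrag from
e2fsprogs shows how a particular file is mapped; a file in a handful of extents
reads sequentially, while one scattered over thousands of fragments forces
seeks for metadata and data alike:
sudo filefrag -v /path/to/your/20G-file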
(2) Did you check "tind blocks" and "non-contiguous files" in the output of the
fsck command? If you want further help, please post the output of df and fsck.
The first column of df output shows the device names to use with the fsck command:
df -BG -T
sudo fsck -C 0 -f -n /dev/yourdevice
(3) Did you create your ext4 filesystems with mkfs.ext4, or by converting
ext3 filesystems? In the latter case, did you run the following commands
(a partition backup is recommended prior to the first trial)?
sudo tune2fs -O extents /dev/yourdevice
sudo tune2fs -I 256 /dev/yourdevice
Note that enabling extents affects only files written in the future, not files
already written. Thus a "backup - mkfs.ext4 - restore" cycle is preferable; a
rough sketch follows after the documentation links below. Related documentation e.g.:
http://kernel.org/doc/Documentation/filesystems/ext4.txt
man tune2fs
man mkfs.ext4
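A rough sketch of that cycle (device and mount point names are placeholders;
adapt them to your setup and verify the backup before reformatting):
sudo rsync -aHAX /mnt/yourfs/ /mnt/backup/    # backup, preserving hard links, ACLs, xattrs
sudo umount /mnt/yourfs
sudo mkfs.ext4 /dev/yourdevice                # fresh extent-based ext4
sudo mount /dev/yourdevice /mnt/yourfs
sudo rsync -aHAX /mnt/backup/ /mnt/yourfs/    # restore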
Background:
6 years ago I compared Linux filesystems and encountered this case:
(a) An ext3 filesystem of some 20G, holding a few files with sizes from 0.5G to 4G.
e2fsck required almost 1 hour for checking.
(b) XFS filesystem on the very same partition, holding exactly the same
fileset. xfs_check completed within a few seconds.
Since then, and after some other considerations, I use XFS for all my plain and
encrypted Linux filesystems. My shell prompt automatically calls a script I
cooked up myself, which executes sync as soon as the CPU load drops to idle or
nearly idle. This protects my filesystems against the famous "zero-sized files
problem" of XFS after power interruptions, without any noticeable performance
downsides.
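Just to make the idea concrete, a minimal sketch of such a helper could look
like this (the threshold and details are my assumptions here, not the actual script):
#!/bin/sh
# Run sync only when the 1-minute load average suggests the box is (nearly) idle.
# The 0.20 threshold is an arbitrary example value.
load=$(cut -d ' ' -f 1 /proc/loadavg)
if awk -v l="$load" 'BEGIN { exit (l < 0.20) ? 0 : 1 }'; then
    sync
fi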
In terms of file fragmentation XFS most likely surpasses every other filesystem,
particularly under nearly-full conditions. For your use case, with "most
partitions are mounted read-only anyways as they are full", this sounds
attractive, or what would you say? (Basic commands follow after the links below.)
http://en.wikipedia.org/wiki/XFS#Extent_based_allocation
http://kernel.org/doc/Documentation/filesystems/xfs.txt
http://xfs.org/index.php/XFS
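If you want to try XFS on one of the data partitions, the basic steps are simply
(device and mount point are placeholders; this destroys the existing filesystem,
so back up first):
sudo mkfs.xfs /dev/yourdevice
sudo mount /dev/yourdevice /mnt/yourfs
Should fragmentation ever become an issue, xfs_fsr from xfsprogs can defragment
files in place later on.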
--
Regards
Roland