Memory leak in 2.0.x and 2.1.63!?

Chris Adams (cadams@ro.com)
Fri, 14 Nov 1997 11:56:04 -0600 (CST)


I still see a memory leak whenever I hit a large RAID device on a DPT
SmartRAID IV controller. My "benchmark" has been running mke2fs on the
30G filesystem (/dev/sdb1); other heavy disk access leaks memory too,
mke2fs is just an easy way to make it happen. I have tried this on a
bunch of 2.0 kernels (2.0.x, x=0,27,29,31,32-pre1,32-pre2) and on
2.1.63, and I get the same basic results (the 2.0 through 2.0.29
kernels don't leak as much memory as the newer ones). I also tried the
EATA (ISA/EISA/PCI) driver and got the same problems. The system has
320M of RAM, and I found that if I limit how much memory is available
(via the mem= kernel option), it doesn't leak as much. Here is the
output of some runs of the following script under 2.0.32-pre2:

#!/bin/sh -x
date
free
mke2fs /dev/sdb1
free
date

I reboot before each run of the script.
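
(For reference, the mem= limit is just passed on the kernel command
line at boot. With LILO -- assuming that is the boot loader here --
an append line on the image entry does it; the kernel path and label
below are only placeholders:

image=/vmlinuz
        label=linux-32m
        append="mem=32M"
        read-only

The same option can also be given once at the "LILO boot:" prompt,
e.g. "linux mem=32M", for a one-off test.)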

With "mem=320M":
***
Fri Nov 14 11:27:31 CST 1997
             total       used       free     shared    buffers     cached
Mem:        322072       8064     314008       6068       1604       3096
-/+ buffers:             3364     318708
Swap:       130748          0     130748
Linux ext2 filesystem format
Filesystem label=
7778304 inodes, 31106174 blocks
1555308 blocks (5.00%) reserved for the super user
First data block=1
Block size=1024 (log=0)
Fragment size=1024 (log=0)
3798 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
Writing inode tables:
Writing superblocks and filesystem accounting information: done
             total       used       free     shared    buffers     cached
Mem:        322072     311860      10212       5928     270604       3192
-/+ buffers:            38064     284008
Swap:       130748          0     130748
Fri Nov 14 11:38:26 CST 1997
***
34700 KB leaked - 8:55 elapsed
***

With "mem=32M":
***
Fri Nov 14 11:04:23 CST 1997
             total       used       free     shared    buffers     cached
Mem:         31212       8060      23152       6068       1604       3092
-/+ buffers:             3364      27848
Swap:       130748          0     130748
Linux ext2 filesystem format
Filesystem label=
7778304 inodes, 31106174 blocks
1555308 blocks (5.00%) reserved for the super user
First data block=1
Block size=1024 (log=0)
Fragment size=1024 (log=0)
3798 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
Writing inode tables:
Writing superblocks and filesystem accounting information: done
             total       used       free     shared    buffers     cached
Mem:         31212      26184       5028        508      20940        600
-/+ buffers:             4644      26568
Swap:       130748       1568     129180
Fri Nov 14 11:08:37 CST 1997
***
2848 KB leaked - 4:14 elapsed
***

With "mem=8M":
***
Fri Nov 14 11:19:12 CST 1997
             total       used       free     shared    buffers     cached
Mem:          6972       6772        200       6056       1464       1960
-/+ buffers:             3348       3624
Swap:       130748          8     130740
Linux ext2 filesystem format
Filesystem label=
7778304 inodes, 31106174 blocks
1555308 blocks (5.00%) reserved for the super user
First data block=1
Block size=1024 (log=0)
Fragment size=1024 (log=0)
3798 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
Writing inode tables:
Writing superblocks and filesystem accounting information: done
             total       used       free     shared    buffers     cached
Mem:          6972       4500       2472        508       1704        596
-/+ buffers:             2200       4772
Swap:       130748       1568     129180
Fri Nov 14 11:24:26 CST 1997
***
420 KB leaked - 5:14 elapsed
***

The amount of memory leaked goes down as the total amount of system
memory goes down. Also, the run was faster with 32M available than
with either 320M or 8M.
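
For what it's worth, the "leaked" numbers above work out to the change
in the "-/+ buffers" used column from free, plus any swap taken into
use during the run -- e.g. 38064 - 3364 = 34700 for the mem=320M case.
Roughly, that bookkeeping could be scripted like this (a sketch, not
the exact commands I used):

#!/bin/sh
# Leak = change in the "-/+ buffers" used column of free(1), plus any
# swap taken into use during the run.
before=`free | awk '/buffers:/ { print $3 }'`
swap_before=`free | awk '/^Swap:/ { print $3 }'`
date
mke2fs /dev/sdb1
date
after=`free | awk '/buffers:/ { print $3 }'`
swap_after=`free | awk '/^Swap:/ { print $3 }'`
leak=`expr $after - $before + $swap_after - $swap_before`
echo "$leak KB leaked"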

Has anyone else seen anything like this? Could this be some kind of
hardware problem? If I can't figure out what is happening here soon, I
will have to drop Linux on this system and use Solaris (ack!).

-- 
Chris Adams - cadams@ro.com
System Administrator - Renaissance Internet Services
I don't speak for anybody but myself - that's enough trouble.