Re: [OOPS] 2.6.9-rc4, dual Opteron, NUMA, 8GB
From: Brad Fitzpatrick
Date: Wed Oct 13 2004 - 15:12:40 EST
On Wed, 13 Oct 2004, Jeff Garzik wrote:
> Brad Fitzpatrick wrote:
> > I'm reporting an oops. Details follow.
> >
> > I have two of these machines. I will happily be anybody's guinea pig
> > to debug this. (more details, access to machine, try patches, kernels...)
> > Machines aren't in production.
> >
> > - Brad
> >
> >
> > Kernel: 2.6.9-rc4 vanilla (.config below)
> >
> > Hardware: IBM eServer 325, Dual Opteron 8GB ram (more info below)
> >
> > Pre-crash and crash:
> >
> > a1:~# mke2fs /dev/mapper/raid10-data
> > mke2fs 1.35 (28-Feb-2004)
> > Filesystem label=
> > OS type: Linux
> > Block size=4096 (log=2)
> > Fragment size=4096 (log=2)
> > 25608192 inodes, 51200000 blocks
> > 2560000 blocks (5.00%) reserved for the super user
> > First data block=0
> > 1563 block groups
> > 32768 blocks per group, 32768 fragments per group
> > 16384 inodes per group
> > Superblock backups stored on blocks:
> > 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
> > 4096000, 7962624, 11239424, 20480000, 23887872
> >
> > Writing inode tables: 1091/1563
> > Message from syslogd@localhost at Wed Oct 13 11:46:01 2004 ...
> > localhost kernel: Oops: 0000 [1] SMP
> >
> > Message from syslogd@localhost at Wed Oct 13 11:46:01 2004 ...
> > localhost kernel: CR2: 0000000000001770
>
>
> What's your block device configuration? What block devices are sitting
> on top of what other block devices?
/dev/mapper/raid10-data is an LV taking 200GB of a 280GB VG ("raid10") with
a single PV in it: /dev/sdb1 -- ips driver, IBM ServeRAID 6M card,
representing a RAID 10 atop 8 SCSI disks.
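For reference, a stack like that is typically built with the standard LVM2
tools; the commands below are a hedged reconstruction from the description
above (sizes and names inferred, not copied from my shell history):

```shell
# Assumed reconstruction of the reported stack: one PV on the
# ServeRAID array, one VG, one 200GB LV.
pvcreate /dev/sdb1                # /dev/sdb1 = ServeRAID 6M RAID 10 over 8 disks
vgcreate raid10 /dev/sdb1         # ~280GB VG named "raid10"
lvcreate -L 200G -n data raid10   # shows up as /dev/mapper/raid10-data
```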
I just built a new kernel without NUMA and made a filesystem on /dev/sdb1
directly instead of going through LVM, and it worked fine, if a little
slowly. Now that I know it /can/ work, I'll try to narrow down whose fault
it is: NUMA or LVM.
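Concretely, that means the four combinations below (a hypothetical sketch
of the test matrix; the kernel image names are placeholders, the devices
are the ones from this report):

```shell
# Bisection matrix: NUMA on/off crossed with LVM vs raw device.
# Each combination gets a fresh mke2fs run while watching for the oops.
for kernel in 2.6.9-rc4-numa 2.6.9-rc4-nonuma; do
  for dev in /dev/mapper/raid10-data /dev/sdb1; do
    echo "boot $kernel; mke2fs $dev; watch syslog for oops"
  done
done
```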
- Brad
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/