Re: File Size question

Stephen C. Tweedie (sct@redhat.com)
Wed, 25 Nov 1998 15:19:40 GMT


Hi,

On Mon, 23 Nov 1998 15:29:35 -0500 (EST), Phillip Mikulak
<pmikulak@iusb.edu> said:

> 1.5 gigabytes in size (the file has daily stock return data dating back to
> the early 60's. This file is used by several of our faculty for research
> purposes). However, the largest file I can create is 1073741824
> bytes. I have explored the kernel source files and have found two source
> files where a "file too big" message can crop up: file.c and inode.c under
> /usr/src/linux/fs/ext2. After some experimenting I found that code within
> inode.c was limiting the size a file could be. When I bypassed this
> code, I was able to make a 1.5GB file.

> Needless to say, I have a feeling this is a dangerous thing to do. My
> question is, is there any way around the 1.07 gigabyte limitation? Any
> advice someone might have would be greatly appreciated.

There is no such limit. I regularly build files up to the true kernel
limit of 2GB (and even that limit is just a property of our 32-bit API:
on 64-bit machines you can go way beyond that).
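For illustration only (this bit is not from the original mail): the 2GB
figure falls out of file offsets being carried in a signed 32-bit type,
so the largest representable offset is 2^31 - 1 bytes. A minimal sketch,
assuming a 32-bit build where off_t is a signed 32-bit type:

    /* Minimal sketch: show why a signed 32-bit file offset caps file
     * size just under 2GB.  Assumes a 32-bit build where off_t is a
     * signed 32-bit type. */
    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
            long max_offset = 2147483647L;  /* 2^31 - 1 */

            printf("sizeof(off_t) = %lu bytes\n",
                   (unsigned long) sizeof(off_t));
            printf("largest signed 32-bit offset = %ld bytes "
                   "(just under 2GB)\n", max_offset);
            return 0;
    }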

The code

> if (!create || new_block >=
>     (current->rlim[RLIMIT_FSIZE].rlim_cur >>
>      EXT2_BLOCK_SIZE_BITS(inode->i_sb))) {
>         brelse (bh);
>         *err = -EFBIG;
>         return NULL;
> }

is where your per-process soft limit is tested. Check your file-size
limits to see whether you have a soft limit in place ("ulimit -f" under
bash; note that bash reports this value in blocks, not bytes).
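If you would rather check from a program than from the shell, here is a
minimal sketch (not part of the original exchange) using the standard
getrlimit() call to read the same RLIMIT_FSIZE limit that the ext2 code
quoted above compares against:

    /* Minimal sketch: print the soft and hard RLIMIT_FSIZE limits for
     * the current process.  A soft limit of 1073741824 would explain
     * the 1GB cap described above. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
            struct rlimit rl;

            if (getrlimit(RLIMIT_FSIZE, &rl) != 0) {
                    perror("getrlimit");
                    return 1;
            }

            if (rl.rlim_cur == RLIM_INFINITY)
                    printf("soft file-size limit: unlimited\n");
            else
                    printf("soft file-size limit: %lu bytes\n",
                           (unsigned long) rl.rlim_cur);

            if (rl.rlim_max == RLIM_INFINITY)
                    printf("hard file-size limit: unlimited\n");
            else
                    printf("hard file-size limit: %lu bytes\n",
                           (unsigned long) rl.rlim_max);

            return 0;
    }

If a soft limit turns out to be set, raising it with "ulimit -f
unlimited" (up to the hard limit, for a non-root user) before starting
the process should let you write the full 1.5GB file without patching
the kernel.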

--Stephen
