Re: Filesize limitation

Richard B. Johnson (root@chaos.analogic.com)
Tue, 4 Nov 1997 08:34:17 -0500 (EST)


On Mon, 3 Nov 1997, linux kernel account wrote:

>
> While this is an excellent statement, there are uses for files that large:
> Consider making a file of a standard type, like a multimedia file (say
> .mpg, or .au, or .pcm, or whatever) that contains a very long sequence.
>
If I were to write an application that uses a very long sequence, such
as a video disc, I would not use "files". I would read/write directly
to either a block or character device, depending upon system requirements.
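
A minimal sketch of what I mean, assuming a hypothetical raw device at
/dev/sdb and an arbitrary 8 GB offset (the _FILE_OFFSET_BITS define is
an assumption so that lseek() can address offsets past 2 GB):

    #define _FILE_OFFSET_BITS 64   /* assumed: 64-bit off_t for large offsets */
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>

    int main(void)
    {
        char buf[1024];                 /* one 1024-byte logical block */
        off_t offset = (off_t)8 << 30;  /* 8 GB in, past any 32-bit file limit */
        int fd = open("/dev/sdb", O_RDONLY);   /* hypothetical device */

        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (lseek(fd, offset, SEEK_SET) == (off_t)-1 ||
            read(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
            perror("seek/read");
            return 1;
        }
        printf("read one block at byte offset %lld\n", (long long)offset);
        close(fd);
        return 0;
    }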

In image processing, I do have a need to store a very long sequence, and
I use SCSI disks, including a special type that has data encoded
in a spiral so one doesn't have to seek to another track (although
seeking is possible if necessary).

Given this, I only need to know the number of logical units that have
been written. If that count won't fit within the available data type(s),
the logical units themselves are too small.
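
To make the arithmetic concrete (the unit sizes below are illustrative,
not anything I actually use), a 32-bit count reaches farther the larger
the logical unit is, so choosing the unit size keeps the count in range:

    #include <stdio.h>

    /* Illustrative only: capacity reachable by a 32-bit count of
       logical units, for a few hypothetical unit sizes. */
    int main(void)
    {
        unsigned long long max_count = 0xFFFFFFFFULL;  /* 2^32 - 1 units */
        unsigned long long sizes[] = { 512, 1024, 65536, 1048576 };
        int i;

        for (i = 0; i < 4; i++)
            printf("%7llu-byte units -> %llu bytes addressable\n",
                   sizes[i], max_count * sizes[i]);
        return 0;
    }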
[SNIPPED]
>
> This is mostly an issue with the interface between apps and the kernel,
> hence, it will be difficult to change.

Yes. The kernel doesn't prevent you from filling up a disk with data of
any arbitrary type. Therefore it's not a kernel issue at all. Now, if
the kernel prevented one from accessing every logical block on an
N-gigabyte drive (it eventually will as N gets larger), then there would
be a problem that should be addressed. Presently, with 1024-byte blocks
and a 32-bit block value, the addressable limit is 2^32 x 1024 bytes,
about 4 terabytes, so today's multi-gigabyte drives are only about
1/1024th of the way there. It will be several years before this is a
limitation.
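
For concreteness, the limit works out as shown below; the 4 GB drive is
my own hypothetical example of a large disk of today:

    #include <stdio.h>

    /* 32-bit block numbers x 1024-byte blocks, compared against a
       hypothetical 4 GB drive. */
    int main(void)
    {
        unsigned long long limit = (1ULL << 32) * 1024;  /* 2^42 bytes */
        unsigned long long drive = 4ULL << 30;           /* 4 GB drive */

        printf("addressable limit: %llu bytes (~4 TB)\n", limit);
        printf("a 4 GB drive is 1/%llu of the way there\n", limit / drive);
        return 0;
    }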

Cheers,
Dick Johnson

Richard B. Johnson
Project Engineer
Analogic Corporation
Penguin : Linux version 2.1.60 on an i586 machine (66.15 BogoMips).
Warning : It's hard to remain at the trailing edge of technology.