Re: Filesize limitation

Adam D. Bradley (artdodge@cs.bu.edu)
Wed, 5 Nov 1997 01:01:55 -0500 (EST)


> > > That is no excuse for badly written software, of course.
> > > But the 2 GB limit looks a lot like the 640KB limit. When the 640KB
> > > limit was created by IBM, it was hard to think of applications
> > > that could use all that memory. But they came.
> >
> > More like the old DOS 32M limit.
> >
> > Why don't you just hack Ext2fs to use 128 bits (really_long ints?) and be
> > done with it? I doubt you could find enough DASD on the planet to make
> > that much storage...
>
> Agreed. 4TB isn't that much anymore, why not just move straight to 128bit?

Performance with 64-bit values is bad enough on 32-bit machines...
when we all have Alphas and Merceds, then we can talk... ;-)

A 64-bit linear address space spans 16384 PB, or 16777216 TB. If we could
create a file with a 128-bit address span, how many of those 64-bit
address spaces could be allocated to each and every person on earth?

Don't get me wrong, I would love to have bragging rights to "the first
desktop OS to support a 128-bit filespace", I just don't think it's a
good idea in terms of keeping mainline code fast and Linux filesystem
performance the best on earth.

Does GCC even _have_ a type for 128 bits on ia32?
except "char longlonglong[16];", of course... ;-)

Adam

--
Things look so bad everywhere      Adam D. Bradley      artdodge@cs.bu.edu
In this whole world what is fair        Boston University Computer Science
We walk blind and we try to see             Ph.D. student and Linux hacker
Falling behind in what could be  ---->  Bring me a Higher Love  ---->  <><