Re: VFS/ext2fs - large files on the Alpha (fails for 17GB+)

Daniel Pittman (danielp@osa.de)
Wed, 19 Aug 1998 14:18:07 +0200


Rogier Wolff wrote:

[segmented memory]
> Then you're back to the horrendously inefficient DOS stuff: You have
> to carry around two registers to do one pointer.

For a huge memory model, yeah. As it stands you DO actually need two
registers; it's just that only one segment gets used for userspace under
Linux.

> To get the common
> stuff fast, you probably want to increment just the low end of the
> pointer. This means that you get restrictions on sizes of arrays.....
> Anyway, DOS flashbacks. The whole muck!

Yeah, it's not the best, but I can't think of a single better way to
address a large memory space using 32-bit pointers.

> Don't even think about it.

Well, it does get around the problem of addressing more than 2 (or 3) GB
of memory under Linux; this will cost something somewhere. mmap()ing only
part of the file is the other option, and is IMHO uglier, especially if I
need access to more than 3 GB simultaneously; the segment model does make
that possible, if a little difficult.

> I knew pretty quickly after I got my 640K
> XT, that I really didn't want to bother with those issues. It is also
> the main reason that I'm a Linux user since 0.12 or something like
> that.

Segmentation for everything is bad. Segmentation where relevant is
useful. It's that nice balance that is so hard to achieve...

Daniel

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.altern.org/andrebalsa/doc/lkml-faq.html