Re: using more than 2 GB as a ram disk

Osvaldo Pinali Doederlein (osvaldo@visionnaire.com.br)
Thu, 4 Feb 1999 22:18:38 +0100


From: Alain Williams <addw@phcomp.co.uk>
To: <linux-kernel@vger.rutgers.edu>
Sent: Thursday, February 04, 1999 8:16 PM
Subject: Re: using more than 2 GB as a ram disk

>On Wed, Feb 03, 1999 at 10:32:10PM -0500, David Wragg wrote:
>> Actually, I think that big as the segment overhead is, it wouldn't be
>> the biggest overhead. Undisguised segmented pointers are sufficient to
>> implement plain ISO C, but if you wanted to port real applications
>> (and libraries, OS kernels, etc) to such a system, it would need to
>> present a flat memory space. There are at least three significant
>> costs to this:
>>
>> 1) Pointer arithmetic. Imagine offsetting a segmented pointer, so that
>> it crosses a segment boundary. Expensive normalisation would be
>> required. By overlapping neighbouring segments by a hefty margin (e.g.
>> 1MB), and lots of cleverness in the compiler, it may be possible to
>> avoid most of the normalizations, but some will remain.
>>
>> 2) Long would have to be 64-bits, to allow pointer->long and
>> long->pointer casts to work as most C programs expect.
>>
>> 3) Most C programs also expect things like:
>>
>> (T*)((long)pointerToT + sizeof(T)) == pointerToT + 1
>>
>> So in order to make the pointers cast to long look like a flat 64-bit
>> address space, pointer->long and long->pointer casts have to involve
>> shifting bits around.
>It reminds me of the old days using Xenix on '286, and the 5 different
>memory models that you got with C on MSDOS:
> int 16 bits
> long 32 bits
>
>There were compiler switches to control pointer size:
> code * 16 or 32 bits )
> data * 16 or 32 bits ) gives 4 different combinations
> The 5th was HUGE, but can't recall what
> was special about it.
>You also needed 4 versions of libraries.

Huge would force pointers to be normalized after each operation, to support
arrays spanning more than one 64 KB block... :) Otherwise, you would (for
example) initialize a FAR pointer to the beginning of a block (seg:0000) and
iterate it up to seg:FFFF, but the next "p++" would wrap back to the first
element. Huge pointers would instead adjust the segment when needed (moving
the offset back towards zero at each normalization). I think that's exactly
what David wants to avoid.
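
To make it concrete, here is a tiny sketch in portable C that emulates the
old 8086 seg:off scheme (the struct and function names below are made up for
illustration, they are not from any real DOS compiler):

    #include <stdint.h>
    #include <stdio.h>

    /* A 16-bit seg:off pair, 8086-style: linear address = seg*16 + off. */
    struct segptr { uint16_t seg; uint16_t off; };

    /* FAR-style increment: only the offset moves, so it wraps at 64 KB
       and the pointer rolls back to the start of the segment. */
    static struct segptr far_add(struct segptr p, uint16_t n)
    {
        p.off = (uint16_t)(p.off + n);          /* seg untouched */
        return p;
    }

    /* HUGE-style increment: go through the linear address and then
       renormalize, so the segment absorbs the carry.  This per-operation
       conversion is the cost being discussed. */
    static struct segptr huge_add(struct segptr p, uint32_t n)
    {
        uint32_t linear = ((uint32_t)p.seg << 4) + p.off + n;
        p.seg = (uint16_t)(linear >> 4);
        p.off = (uint16_t)(linear & 0xF);
        return p;
    }

    int main(void)
    {
        struct segptr p = { 0x1234, 0xFFFF };
        struct segptr f = far_add(p, 1);        /* wraps to 1234:0000 */
        struct segptr h = huge_add(p, 1);       /* becomes 2234:0000  */
        printf("far  -> %04X:%04X\n", (unsigned)f.seg, (unsigned)f.off);
        printf("huge -> %04X:%04X\n", (unsigned)h.seg, (unsigned)h.off);
        return 0;
    }

The (seg << 4) + off conversion inside huge_add() is also the kind of bit
shifting David mentions for making pointer<->long casts look like a flat
address space.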

I don't know how much this contributes to the Linux kernel work, but there it
is :) Well, if it makes these pointer aberrations look even worse and stops
you from 'supporting' similar things on Linux, that's great...

I also remember that in bad ol' DOS you could use any memory model and still
get access to huge-style data through a language extension: the keyword
_huge, a modifier for pointer types. Only the pointers declared that way
would become bigger (carrying the segment) and get the normalization. Maybe
that is not so terrible, because the number of apps needing gigs of memory is
extremely small, and all the concerns would stay well isolated. But I think
pragmas would be cleaner than language extensions, e.g. #pragma huge
myPointer.
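
From memory, the declarations looked roughly like this (Microsoft C spelling;
Borland spelled it plain "huge", so take the exact keywords as approximate --
this is old compiler-specific syntax, not standard C):

    char _huge *big;   /* 32-bit seg:off pointer, renormalized on arithmetic */
    char _far  *p;     /* 32-bit seg:off pointer, offset-only arithmetic     */
    char       *q;     /* near pointer: 16-bit offset in the default segment */

    long i;
    for (i = 0; i < 200000L; i++)
        big[i] = 0;    /* fine with _huge; a far pointer would wrap at 64 KB */

Only 'big' pays the normalization cost, which is the same kind of isolation a
pragma-based approach would aim for.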

>It all worked, but was vile, I hated it. It was done because that was
>the only way of doing it on an 8086/80286 -- unless you were sensible
>enough to move to a motorola 68k.
>
>OK, so do this vile hack, I will hate it. If I really need to go > 2Gb I
>will be sensible & move to an alpha (or something).

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/