(I also seem to recall Novell NetWare using a 1-3 split, though I'm
not sure; without paging, why would there need to be more VM than
memory?)
Any data points on how other x86 OSes handle this?
On Tue, 13 Jan 1998, Albert D. Cahalan wrote:
> H.H.vanRiel writes:
> > On 11 Jan 1998, Albert D. Cahalan wrote:
> >> Xintian WU <xintian@cse.ogi.edu> writes:
> >>
> >>> I've installed linux2.0.32 first and linux2.1.77 later on a
> >>> Compaq ProLiant 5000 box with 2 GB of memory and 4 CPUs. Linux
> >>> 2.0.32 seems to have some difficulty with APIC interrupt
> >>> handling, and linux2.1.77 handles it better.
> >>> But the problem is, both of them can only set memory up to
> >>> 1000MB. If I set mem=2000MB, the system gets stuck during
> >>> booting. Is there any kernel setting that limits Linux's memory?
> >>
> >> It is a limit that could be adjusted.
> >>
> >> Efficient operation of the Intel CPUs allows 4 GB of virtual
> >> address space and 4 GB of physical memory. (Messing around with
> >> segments and undocumented CPU features will get you more of both,
> >
> > Segments... I've heard some rumours about them. Could they
> > be used for efficient support for more memory?
>
> No, they can be used for inefficient support for more memory.
> The 2.0.xx kernels had separate user and kernel segments,
> but it was too slow and complicated.
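
(For the curious: this is why 2.0's user-access primitives were the
memcpy_tofs()/put_fs_byte() family; %fs held the selector of the user
data segment, so every user access carried a segment override. A
sketch of the idea, from 2.0's include/asm-i386/segment.h as I
remember it:

    /* store one byte into user space through %fs, which 2.0
       kept loaded with the user data segment selector */
    #define put_fs_byte(x, addr) \
        __asm__ ("movb %0,%%fs:%1" \
                 : /* no outputs */ \
                 : "iq" ((char)(x)), "m" (*(char *)(addr)))

The override prefix on every user access is part of the slowness.)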
>
> >> but at a performance and complexity cost.) To get good performance
> >> and security, virtual address space is used like this:
> >>
> >> 00000000 to bfffffff user address space
> >> c0000000 to ffffffff kernel address space
> >>
> >> The 1 GB of kernel address space includes a linear mapping of
> >> physical memory, PCI hardware devices, and vmalloc() mappings.
> >> Perhaps you could use 900 MB of physical memory.
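
(In the 2.1 headers this split shows up as PAGE_OFFSET; the relevant
bits of include/asm-i386/page.h look roughly like this, from memory:

    #define PAGE_OFFSET 0xC0000000   /* kernel space starts here */

    /* physical RAM is mapped linearly at PAGE_OFFSET, so kernel
       virtual <-> physical conversion is a constant offset */
    #define __pa(x) ((unsigned long)(x) - PAGE_OFFSET)
    #define __va(x) ((void *)((unsigned long)(x) + PAGE_OFFSET))

Everything the kernel maps has to fit in that 1 GB above c0000000.)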
> >
> > Do we need the mapping of vmalloc()ed areas?
>
> Yes, unless you want to play games with keeping it unmapped
> but with reserved space in user memory. They must be protected.
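
(The vmalloc() arena sits just above the direct mapping of RAM;
roughly this, from include/asm-i386/pgtable.h, if I recall correctly:

    /* vmalloc space starts 8 MB above the end of mapped RAM;
       the gap helps catch stray out-of-bounds accesses */
    #define VMALLOC_OFFSET (8*1024*1024)
    #define VMALLOC_START  (((unsigned long) high_memory + VMALLOC_OFFSET) \
                            & ~(VMALLOC_OFFSET-1))

So the more physical memory you map, the less room remains for
vmalloc() and the PCI mappings.)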
>
> >> As you can see, 64-bit Alpha hardware might be a better choice.
> >
> > Not if he needs to ditch his (awesome) machine first...
> > He's got a machine, and needs to get it to work.
> > AFAIK, it's not yet an FAQ that Linux doesn't support
> > more than 1 GB of memory on ia32 (or any 32-bit arch).
>
> I've only heard of 1 GB machines before. AFAIK, such machines can't
> quite use all of the memory. PCI cards and vmalloc() mappings need
> to go somewhere. Linus Torvalds once mentioned 768 MB as a practical
> limit.
>
> >> You could sacrifice user address space to gain more kernel
> >> address space. That would allow more physical memory, since
> >> that gets mapped into the kernel address space. You want this:
> >>
> >> 00000000 to 7fffffff user address space
> >> 80000000 to ffffffff kernel address space
> >
> > Good temporary hack...
>
> It is excellent if your goal is "many large processes".
> It fails if your goal is "one huge process".
>
> To compensate for user address space loss, maybe allow/force
> really huge processes to manage their own secondary data segment.
> Then normal processes don't suffer the performance loss.
> (perhaps dosemu and wine already do that for other reasons?)
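
(A process can already manage its own segments from user space with
modify_ldt(2); dosemu uses it to build DOS segments. A rough sketch,
struct layout from asm/ldt.h, and install_data_segment() is just a
name I made up:

    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <asm/ldt.h>

    /* install a 32-bit data segment as LDT entry 0, covering
       `pages` 4K pages starting at linear address `base` */
    int install_data_segment(unsigned long base, unsigned int pages)
    {
        struct modify_ldt_ldt_s e;

        memset(&e, 0, sizeof(e));
        e.entry_number   = 0;
        e.base_addr      = base;
        e.limit          = pages;
        e.seg_32bit      = 1;
        e.contents       = 0;   /* plain data segment */
        e.limit_in_pages = 1;   /* limit counted in 4K pages */

        return syscall(SYS_modify_ldt, 1, &e, sizeof(e));
    }

Today that only gives aliases within the usual 4 GB, of course; the
segment bases would have to grow for anything beyond that.)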
>
> >> To make that work:
> >> change the kernel page tables
> >> have copy_to_user() and other functions use the new division
> >> change the address the kernel is compiled for
> >> fix anything else that breaks
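
(Concretely, assuming 2.1.x sources, the constants involved would be
roughly these; a sketch, not a tested patch:

    /* include/asm-i386/page.h: move the start of kernel space down */
    #define PAGE_OFFSET 0x80000000    /* was 0xC0000000 */

    /* include/asm-i386/processor.h: upper bound on user addresses,
       roughly what verify_area()/copy_to_user() check against */
    #define TASK_SIZE (0x80000000UL)  /* was 0xC0000000UL */

plus relinking the kernel at the new virtual address and rebuilding
the boot-time page tables in arch/i386/kernel/head.S.)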
> >
> > VM code
> > arch/i386/kernel/*.S
> > arch/i386/mm/*
> > include/asm-i386/*.h
> > ...
> >
> > If Xintian needs to get a box up really soon, I think he's
> > better off trading his machine for an Alpha (AXP) box.
> >
> > In the long run, however, I do think Linux needs to support
> > that kind of architecture. Especially since there are quite
> > a lot of 32-bit processors around that'll remain popular for
> > several years to come...
>
> Perhaps a new architecture, p6big. Mostly it would have symbolic
> links into the i386 code. The page tables would use the weird
> 36-bit Pentium Pro features (64 GB physical memory). Segments
> would be used everywhere. All the old segment registers get used
> as originally intended, but to go past 4 GB in 1998.
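
(For reference, the 36-bit mode is what Intel calls PAE: page table
entries widen to 64 bits and a third level, the page directory
pointer table, is added. It is switched on via bit 5 of CR4 before
paging is enabled; a sketch:

    /* enable PAE: set bit 5 (CR4.PAE) before turning paging on */
    unsigned long cr4;
    __asm__ __volatile__ ("movl %%cr4,%0" : "=r" (cr4));
    cr4 |= 0x20;
    __asm__ __volatile__ ("movl %0,%%cr4" : : "r" (cr4));

Linear addresses stay 32-bit, though, which is why segments would
still be needed for one process to see past 4 GB.)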
>
> For really big processes, gcc needs to learn about 48-bit far pointers.
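
(In memory such a far pointer would be a 16-bit selector glued to a
32-bit offset, e.g. this struct, name invented:

    /* a 48-bit far pointer: 32-bit offset then 16-bit selector,
       the layout the lcall/lgs instructions expect in memory */
    struct far_ptr {
        unsigned long  offset;
        unsigned short selector;
    } __attribute__((packed));

and each dereference would mean loading the selector into a segment
register (lgs/lfs) before the actual memory access.)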