Re: Patch: BadRAM put to use

From: Rick van Rein (vanrein@cc4409-a.ensch1.ov.nl.home.com)
Date: Fri Mar 24 2000 - 23:47:54 EST


Hi,

REGARDING TREES:

> > I can use 61 MB out of 64 MB, which is a great result, and
> > good for trees.

By that, I meant environmental matters in general; I was being a little
poetic. Chip production is really wasteful of resources, mainly because
everything has to be extremely pure. And every chip has to be 100% correct to
be accepted, which is not strictly necessary for every application, at least
not for RAM chips.

REGARDING PERFORMANCE:

> Have you done any performance measurements? I'd think that this would
> encourage fragmentation of RAM and cause a performance degradation all
> out of proportion to the amount of RAM "lost".

Using the updated system, I experience no difference, but I have not (yet)
done formal measurements on it.

Such measurements are bound to be diluted to start with:
 1) My 100% correct DIMM has a CAS latency of 2, the 99% correct ones have 3.
         (Could probably be overcome with a BIOS setting; I'm not sure.)
 2) More RAM influences the speed of Linux anyway.
         (Could be overcome by booting both setups with mem=64M or so;
         see the example below.)
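
For point 2, pinning both setups to the same size could look like this in
/etc/lilo.conf (assuming LILO is the boot loader; others have an equivalent):

        image=/boot/vmlinuz
                label=linux
                append="mem=64M"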
Please read on...

> Remapping bad sectors on a hard drive is one thing; the extra complication
> it adds is only seen occasionally, when you hit the disk. But you literally
> use RAM continuously, every nanosecond. Complicating things at that level
> seems like it would have a disproportionate effect on performance.

Things are _not_ complicated at all!

At boot time, there is a negligible extra overhead, but who cares?

At runtime, the free_area structures simply do not contain the memory pages
with faults. The erroneous pages are therefore invisible: they are not
available for allocation, and they add no extra tests to the allocation path.
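
In other words, the filtering happens once, at boot, before the allocator ever
runs. A minimal userspace sketch of the effect (this is an illustration, not
the patch code; page_is_bad is a made-up stand-in for however the faulty pages
get detected):

    #include <stdio.h>

    #define NR_PAGES 16

    static int free_stack[NR_PAGES];
    static int nr_free;

    /* Made-up stand-in for however the faulty pages get detected. */
    static int page_is_bad(int pfn)
    {
        return pfn == 5 || pfn == 11;   /* pretend these frames failed */
    }

    /* "Boot time": bad frames simply never enter the free list. */
    static void init_free_list(void)
    {
        int pfn;

        for (pfn = 0; pfn < NR_PAGES; pfn++)
            if (!page_is_bad(pfn))
                free_stack[nr_free++] = pfn;
    }

    /* The hot path needs no knowledge of bad pages at all. */
    static int alloc_page_frame(void)
    {
        return nr_free > 0 ? free_stack[--nr_free] : -1;
    }

    int main(void)
    {
        int i;

        init_free_list();
        for (i = 0; i < 4; i++)
            printf("allocated page frame %d\n", alloc_page_frame());
        return 0;
    }

Allocation speed is unchanged, because the bad pages were already filtered out
before the free list even existed.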

Now, I can hear y'all think: He's forgetting memory fragmentation. Once again,
Linux already handles this well.

Of course, a linear region of memory gets broken up if it contains a faulty
page. That's a pity, and you still need some longer unbroken stretches of RAM
for larger contiguous allocations to succeed. In the 2 DIMMs I tested so far,
the faulty pages all fell in the 8MB-16MB region out of 32MB, so that's covered.

A problem could have been a relatively high number of single faulty pages, or
stretches of just a few pages. But this is not a problem at all, since the
vmalloc routines build virtually contiguous memory from the free_area heap ONE
PAGE AT A TIME. Follow this call chain in mm/vmalloc.c:

        vmalloc
        vmalloc_area_pages
        alloc_area_pmd
        alloc_area_pte
        __get_free_page (TOP OF STACK)

The MMU does all the mapping of contiguous virtual memory regions onto single
physical pages. Since the MMU did this anyway, there is no added overhead.
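
To make that concrete, here is a minimal userspace sketch of the pattern
(malloc stands in for __get_free_page and an array of pointers stands in for
the page table; an illustration, not kernel code):

    #include <stdio.h>
    #include <stdlib.h>

    #define PAGE_SIZE 4096
    #define NR_PTES   8

    /* Fill n "page table entries", one independently allocated page each.
       No entry needs a physically contiguous neighbour, so isolated free
       pages (the survivors around a faulty page) are fully usable. */
    static int alloc_area(void *ptes[], int n)
    {
        int i;

        for (i = 0; i < n; i++) {
            ptes[i] = malloc(PAGE_SIZE);  /* stand-in for __get_free_page */
            if (ptes[i] == NULL)
                return -1;
        }
        return 0;
    }

    int main(void)
    {
        void *ptes[NR_PTES] = { NULL };
        int i;

        if (alloc_area(ptes, NR_PTES) == 0)
            printf("built one virtual area from %d scattered pages\n",
                   NR_PTES);
        for (i = 0; i < NR_PTES; i++)
            free(ptes[i]);   /* free(NULL) is harmless */
        return 0;
    }

Whether the pages behind ptes[] happen to be physical neighbours is irrelevant,
which is exactly why single surviving pages next to a bad one remain useful.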

After this explanation, is there still a wish for performance measurements?
If so, I'll perform them, but personally I don't see a need for them.

Thanx for the feedback.

Cheers,
 -Rick.
