Re: [PATCH 0/3] staging: zcache: xcfmalloc support

From: Dave Hansen
Date: Thu Sep 01 2011 - 19:44:30 EST


On Thu, 2011-09-01 at 17:01 -0500, Seth Jennings wrote:
> I was seeing n as the number of allocations. Since
> XCF_MAX_BLOCKS_PER_ALLOC and XCF_NUM_FREELISTS are constant (i.e.
> not increasing with the number of allocations) wouldn't it be
> O(1)?

It's the difference between your implementation and the _algorithm_
you've chosen. If someone doubled XCF_MAX_BLOCKS_PER_ALLOC and
XCF_NUM_FREELISTS, you'd see the time quadruple, not stay constant.
That's a property of the _algorithm_.
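To make the scaling concrete, here's a toy model (not the real xcfmalloc code; the constants 8 and 64 are made-up stand-ins for XCF_MAX_BLOCKS_PER_ALLOC and XCF_NUM_FREELISTS) counting worst-case scan steps for one allocation:

```python
def alloc_steps(max_blocks_per_alloc, num_freelists):
    # Hypothetical model: for each block making up an allocation,
    # the allocator may touch every freelist once.  Total work is
    # the product of the two constants.
    steps = 0
    for _ in range(max_blocks_per_alloc):
        for _ in range(num_freelists):
            steps += 1
    return steps

base = alloc_steps(8, 64)        # e.g. 8 blocks, 64 freelists
doubled = alloc_steps(16, 128)   # double both constants
print(doubled // base)           # prints 4: the work quadruples
```

Per-allocation cost is constant only while those two constants stay fixed; grow them and the cost grows with their product, which is the algorithmic property being pointed out.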

> > xcfmalloc's big compromise is that it doesn't do any searching or
> > fitting. It might needlessly split larger blocks when two small ones
> > would have worked, for instance.
>
> Splitting a larger block is the last option. I might not
> be understanding you correctly, but find_remove_block() does try to
> find the optimal block to use, which is "searching and fitting" in my
> mind.

I don't want to split hairs on the wording. It's obvious, though, that
xcfmalloc does not find _optimal_ fits. It also doesn't use the
smallest-possible blocks to fit the allocation. Consider if you wanted a
1000-byte allocation (with 10 100-byte buckets and no metadata for
simplicity), and had 4 blocks:

900
500,500,500

I think it would split a 500 into 100,400, and leave the 400:

500,500
400

It took the largest (most valuable) block, and split a 500 block when it
didn't have to. The reason it does this is that it doesn't
_search_. It just indexes and guesses. That's *fast*, but it errs on
the side of speed rather than being optimal. That's OK, we do it all
the time, but it *is* a compromise. We should at least be thinking of
the cases when this doesn't perform well.
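The example above can be simulated with a small sketch of the largest-first strategy (my own simplified model of the behavior described, not the actual xcfmalloc code; block sizes in bytes, no metadata):

```python
def greedy_alloc(blocks, need):
    # Largest-first, no searching for an exact multi-block fit:
    # take the biggest block that fits what's still needed, and if
    # nothing fits, split a block to cover the remainder.
    blocks = sorted(blocks, reverse=True)
    used, splits = [], 0
    while need > 0:
        fit = [b for b in blocks if b <= need]
        if fit:
            b = max(fit)             # grab the largest usable block
            blocks.remove(b)
            used.append(b)
            need -= b
        else:
            b = min(blocks)          # every block is too big: split one
            blocks.remove(b)
            blocks.append(b - need)  # leftover piece goes back on a list
            used.append(need)
            splits += 1
            need = 0
    return used, sorted(blocks, reverse=True), splits

used, left, splits = greedy_alloc([900, 500, 500, 500], 1000)
print(used, left, splits)   # [900, 100] [500, 500, 400] 1
```

It consumes the 900 block and splits a 500 into 100+400, leaving 500,500,400, exactly as above, where an optimal fit would have paired two 500s with zero splits and kept the 900 intact.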

-- Dave
