Re: [PATCH v2] make dm and dm-crypt forward cgroup context (was: dm-crypt parallelization patches)
From: Mikulas Patocka
Date: Fri Apr 12 2013 - 14:01:33 EST
On Thu, 11 Apr 2013, Tejun Heo wrote:
> On Thu, Apr 11, 2013 at 12:52:03PM -0700, Tejun Heo wrote:
> > If this becomes an actual bottleneck, the right thing to do is making
> > css ref per-cpu. Please stop messing around with refcounting.
>
> If you think this kind of hackery is acceptable, you really need to
> re-evaluate your priorities in making engineering decisions. In
> tightly coupled code, maybe, but you're trying to introduce utterly
> broken error-prone thing as a generic block layer API. I mean, are
> you for real?
>
> --
> tejun
Please describe what is wrong with the code. Why do you call it hackery?
When the device mapper creates a cloned bio for the lower layer, it is
already assumed that the cloned bio has a shorter lifetime than the
original bio it was created from.
The device mapper copies part of the bio vector from the original bio to
the cloned bio; it copies pointers to pages without increasing the
reference counts on those pages. As long as the original bio has not been
completed with bio_endio, the pages must exist, so there is no need to
increase their reference counts.
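
To make the lifetime rule concrete, here is a simplified user-space sketch
of the idea (this is not the real struct bio code; page_buf, parent_io,
clone_io and make_clone are made-up names used only for illustration):

#include <assert.h>
#include <stdio.h>

struct page_buf { int refcount; char data[64]; };

struct parent_io {
	struct page_buf *pages[2];	/* the parent holds the page references */
	int ended;			/* set by the bio_endio equivalent */
};

struct clone_io {
	struct page_buf *pages[2];	/* borrowed pointers, no refcount taken */
	struct parent_io *parent;
};

static void make_clone(struct clone_io *c, struct parent_io *p)
{
	for (int i = 0; i < 2; i++)
		c->pages[i] = p->pages[i];	/* copy pointers, refcount untouched */
	c->parent = p;
}

int main(void)
{
	struct page_buf a = { .refcount = 1 }, b = { .refcount = 1 };
	struct parent_io parent = { .pages = { &a, &b }, .ended = 0 };
	struct clone_io clone;

	make_clone(&clone, &parent);
	/* The clone is submitted and completes first ... */
	printf("clone borrowed the parent's pages: %d\n", clone.pages[0] == &a);
	/* ... only then is the parent ended, so the borrowed pointers
	 * were valid for the clone's whole lifetime. */
	parent.ended = 1;
	assert(clone.parent->ended);
	return 0;
}

The clone simply borrows the parent's page pointers; correctness comes from
the completion ordering, not from reference counting.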
Now, if copying pointers without increasing reference counts is OK for
pages, why do you think it is not OK for cgroup context?
Why do you call this bug-prone? How do you think a bug could happen? If
someone in the device mapper erroneously ends the master bio while the
cloned bio is still in progress, there is already a memory corruption bug
(the cloned bio vector points to potentially freed pages), and safeguarding
the cgroup pointers won't fix it.
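
The cgroup forwarding follows the same pattern. As a minimal sketch, again
with made-up stand-in types (cgroup_ctx stands in for the kernel's
struct cgroup_subsys_state, which is normally pinned with
css_get()/css_put(); the field and function names here are illustrative,
not the actual struct bio layout):

#include <stdio.h>

/* Stand-in for struct cgroup_subsys_state.  Illustrative only. */
struct cgroup_ctx { long refcount; };

struct parent_ctx {
	struct cgroup_ctx *css;		/* the parent holds the css reference */
	int ended;
};

struct clone_ctx {
	struct cgroup_ctx *css;		/* borrowed pointer, no extra reference */
	struct parent_ctx *parent;
};

/* Copy the cgroup pointer the same way the page pointers are copied:
 * no refcount increment, because the parent keeps its reference until
 * it is ended, and the clone must complete before that. */
static void clone_inherit_css(struct clone_ctx *c, struct parent_ctx *p)
{
	c->css = p->css;
	c->parent = p;
}

int main(void)
{
	struct cgroup_ctx css = { .refcount = 1 };
	struct parent_ctx parent = { .css = &css, .ended = 0 };
	struct clone_ctx clone;

	clone_inherit_css(&clone, &parent);
	printf("clone css refcount still %ld\n", clone.css->refcount);
	parent.ended = 1;	/* only after the clone has finished */
	return 0;
}

If the parent were ended early, the clone's borrowed page pointers would
already be dangling, so taking an extra reference on the cgroup context in
the clone would not make that bug survivable.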
So if you think that reference counts should be incremented by every clone
of the original bio, what kind of bug would that protect against? If we
don't increment reference counts for pages, why should we do it for cgroup
pointers?
Mikulas