Re: [PATCH] DMA: Correct invalid assumptions in the Kconfig text

From: Haavard Skinnemoen
Date: Wed Oct 24 2007 - 14:17:22 EST


On Wed, 24 Oct 2007 08:55:58 -0700
"Dan Williams" <dan.j.williams@xxxxxxxxx> wrote:
> > Don't get me wrong; I think Intel deserves lots of respect for
> > creating this framework. But this is also why I got a bit
> > disappointed when I discovered that it seems to be less generic
> > than I initially hoped.
> >
>
> Patches welcome :-)

Sure, that's the plan, and this patch is a start, isn't it? I just
wanted to make sure that the generic-sounding DMA Engine API would
allow more "traditional" DMA controller functionality in addition to
all the high-end RAID and networking stuff.

> Part of the problem of supporting slave/device DMA alongside generic
> memcpy/xor/memset acceleration is that it adds a number of caveats and
> restrictions to the interface. One idea is to create another client
> layer, similar to async_tx, that can handle the architecture specific
> address, bus, and device pairing restrictions. In other words make
> device-dma a superset of the generic offload capabilities and move it
> to its own channel management layer.

Yes, I've been thinking along those lines, and
Documentation/crypto/async-tx-api.txt seems to suggest something like
that for platform-specific extensions. More specifically, I'm thinking
about adding a few new structs that wrap some of the existing ones,
like struct dma_async_tx_descriptor, and add new ops and data members.
It would be a sort of generic, optional extension of the API.
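
To make that a bit more concrete, here's a rough sketch of the kind of
wrapper I have in mind. All the slave-specific names below (struct
dma_slave_descriptor and its members) are made up for the sake of the
example; only struct dma_async_tx_descriptor exists today:

#include <linux/dmaengine.h>
#include <linux/kernel.h>

/* Hypothetical slave-DMA descriptor wrapping the generic one */
struct dma_slave_descriptor {
	struct dma_async_tx_descriptor	txd;	/* existing generic part */
	dma_addr_t	dev_addr;	/* peripheral FIFO/register address */
	u32		reg_width;	/* peripheral register width in bytes */
};

static inline struct dma_slave_descriptor *
to_dma_slave_desc(struct dma_async_tx_descriptor *txd)
{
	return container_of(txd, struct dma_slave_descriptor, txd);
}

Generic clients would keep operating on the embedded txd exactly as
before, so drivers that only do memcpy/xor offload never need to know
about the wrapper.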

If that works out well, we may also consider moving some of the
async_tx-specific fields into an extended struct of its own. But that
will be more of an optimization or cleanup. I don't want to hurt any
existing users.

Then we're going to need some new hooks for creating those extended
descriptors. This can be done either by adding more hooks in struct
dma_device or by creating another "subclass".
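
For instance, the "subclass" variant could look something like the
sketch below. Again, struct dma_slave_device and device_prep_slave are
hypothetical names, not existing API:

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

/* Hypothetical "subclass" of struct dma_device for slave-capable
 * controllers; the core and generic clients keep using the embedded
 * common part. */
struct dma_slave_device {
	struct dma_device common;

	struct dma_async_tx_descriptor *(*device_prep_slave)(
			struct dma_chan *chan,
			dma_addr_t mem_addr, dma_addr_t dev_addr,
			size_t len, enum dma_data_direction direction);
};

The alternative would be to add such a hook directly to struct
dma_device and let memcpy-only drivers leave it NULL.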

Yes, I'm mostly handwaving at this point, but I intend to follow up
with actual code soon. Unless someone else beats me to it, of course.

> > I'm not going to suggest any changes to the actual framework for
> > 2.6.24, but I think the _intention_ of the framework needs to be
> > clarified.
> >
>
> Should this patch wait until the framework has been extended?

IMO, no. The first step I'm planning is to get the driver working
within the existing framework, i.e. as a pure memcpy offload engine,
and verify that it can indeed improve networking performance. That
would demonstrate that the framework itself isn't Intel-specific even
though all the existing drivers are for Intel hardware. And I don't
think the proposed text makes any new promises that weren't there
before.
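
To be clear about what "pure memcpy offload" means here: the driver
only has to back the existing copy interface, so a client can do
roughly the following (error handling and channel allocation omitted,
and I'm quoting the interface from memory, so double-check against
dmaengine.h):

#include <linux/dmaengine.h>
#include <asm/processor.h>

/* Offload one copy to a channel the dmaengine core handed us and
 * spin until it completes (a real client would poll from a more
 * suitable context). */
static void offload_copy(struct dma_chan *chan, void *dst, void *src,
			 size_t len)
{
	dma_cookie_t cookie;

	cookie = dma_async_memcpy_buf_to_buf(chan, dst, src, len);
	dma_async_memcpy_issue_pending(chan);

	while (dma_async_memcpy_complete(chan, cookie, NULL, NULL)
			== DMA_IN_PROGRESS)
		cpu_relax();
}

If that works on my hardware and shows a measurable networking win,
the "Intel-only" concern mostly goes away.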

And I think it's important to clarify that what is meant to live under
drivers/dma isn't just RAID and networking acceleration engines -- it
is really meant to be a generic DMA engine framework. If this isn't
true, I might as well stop writing the driver based on this framework
and instead focus on creating a new one.

But I'm pretty optimistic about being able to extend the API in a way
that will not hurt existing functionality and still provide what we
need to support embedded DMA controllers. In fact, it looks a lot
easier to do with the current incarnation of the API than it did
earlier.

> Otherwise, Acked-by: Dan Williams <dan.j.williams@xxxxxxxxx>

Thanks. Will one of you pick it up as well?

Håvard