Re: [net-next PATCH v9 1/8] octeontx2-pf: map skb data as device writeable

From: Leon Romanovsky
Date: Mon Nov 11 2024 - 02:16:12 EST


On Mon, Nov 11, 2024 at 10:31:02AM +0530, Bharat Bhushan wrote:
> On Sun, Nov 10, 2024 at 7:53 PM Leon Romanovsky <leon@xxxxxxxxxx> wrote:
> >
> > On Fri, Nov 08, 2024 at 10:27:01AM +0530, Bharat Bhushan wrote:
> > > Crypto hardware needs write permission for in-place encrypt
> > > or decrypt operations on skb data to support IPsec crypto
> > > offload. This patch uses skb_unshare() to make the skb data
> > > writeable for IPsec crypto offload and maps skb fragment
> > > memory as device read-write.
> > >
> > > Signed-off-by: Bharat Bhushan <bbhushan2@xxxxxxxxxxx>
> > > ---
> > > v7->v8:
> > > - spell correction (s/sdk/skb) in description
> > >
> > > v6->v7:
> > > - skb data was mapped as device writeable, but it was not ensured
> > > that the skb itself is writeable. This version calls skb_unshare()
> > > to make the skb data writeable.
> > >
> > > .../ethernet/marvell/octeontx2/nic/otx2_txrx.c | 18 ++++++++++++++++--
> > > 1 file changed, 16 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
> > > index 7aaf32e9aa95..49b6b091ba41 100644
> > > --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
> > > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
> > > @@ -11,6 +11,7 @@
> > > #include <linux/bpf.h>
> > > #include <linux/bpf_trace.h>
> > > #include <net/ip6_checksum.h>
> > > +#include <net/xfrm.h>
> > >
> > > #include "otx2_reg.h"
> > > #include "otx2_common.h"
> > > @@ -83,10 +84,17 @@ static unsigned int frag_num(unsigned int i)
> > > static dma_addr_t otx2_dma_map_skb_frag(struct otx2_nic *pfvf,
> > > struct sk_buff *skb, int seg, int *len)
> > > {
> > > + enum dma_data_direction dir = DMA_TO_DEVICE;
> > > const skb_frag_t *frag;
> > > struct page *page;
> > > int offset;
> > >
> > > + /* Crypto hardware need write permission for ipsec crypto offload */
> > > + if (unlikely(xfrm_offload(skb))) {
> > > + dir = DMA_BIDIRECTIONAL;
> > > + skb = skb_unshare(skb, GFP_ATOMIC);
> > > + }
> > > +
> > > /* First segment is always skb->data */
> > > if (!seg) {
> > > page = virt_to_page(skb->data);
> > > @@ -98,16 +106,22 @@ static dma_addr_t otx2_dma_map_skb_frag(struct otx2_nic *pfvf,
> > > offset = skb_frag_off(frag);
> > > *len = skb_frag_size(frag);
> > > }
> > > - return otx2_dma_map_page(pfvf, page, offset, *len, DMA_TO_DEVICE);
> > > + return otx2_dma_map_page(pfvf, page, offset, *len, dir);
> >
> > Did I read correctly that you perform a DMA mapping on every SKB in
> > the data path? How badly does it perform if you enable the IOMMU?
>
> Yes Leon, currently DMA mapping is done for each SKB; that's true
> even in the non-IPsec cases.
> Performance is not good with the IOMMU enabled. Given the context of
> this series, it just extends the same behaviour to the IPsec use cases.

I know, and I am not asking you to change anything; I am just really
curious how costly this implementation is when the IOMMU is enabled.

Thanks

>
> Thanks
> -Bharat
>
> >
> > Thanks
> >
> > > }
> > >
> > > static void otx2_dma_unmap_skb_frags(struct otx2_nic *pfvf, struct sg_list *sg)
> > > {
> > > + enum dma_data_direction dir = DMA_TO_DEVICE;
> > > + struct sk_buff *skb = NULL;
> > > int seg;
> > >
> > > + skb = (struct sk_buff *)sg->skb;
> > > + if (unlikely(xfrm_offload(skb)))
> > > + dir = DMA_BIDIRECTIONAL;
> > > +
> > > for (seg = 0; seg < sg->num_segs; seg++) {
> > > otx2_dma_unmap_page(pfvf, sg->dma_addr[seg],
> > > - sg->size[seg], DMA_TO_DEVICE);
> > > + sg->size[seg], dir);
> > > }
> > > sg->num_segs = 0;
> > > }
> > > --
> > > 2.34.1
> > >
> > >
> >