On Thu, May 23, 2019 at 09:20:35PM +0300, Ivan Khoronzhuk wrote:
Add XDP support based on the rx page_pool allocator, one frame per page.
The page pool allocator is used with the assumption that only one
rx_handler is running simultaneously. DMA map/unmap is reused from the
page pool even though there is no need to map the whole page.

Due to the specifics of cpsw, the same TX/RX handler can be used by two
network devices, so special fields are added to the buffer to identify
the interface a frame is destined for. Thus XDP works for both
interfaces, which makes it easy to test xdp redirect between the two
interfaces.

The XDP prog is common for all channels until appropriate changes are
added to the XDP infrastructure.
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@xxxxxxxxxx>
---
drivers/net/ethernet/ti/Kconfig | 1 +
drivers/net/ethernet/ti/cpsw.c | 555 ++++++++++++++++++++++---
drivers/net/ethernet/ti/cpsw_ethtool.c | 53 +++
drivers/net/ethernet/ti/cpsw_priv.h | 7 +
4 files changed, 554 insertions(+), 62 deletions(-)
diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
index bd05a977ee7e..3cb8c5214835 100644
--- a/drivers/net/ethernet/ti/Kconfig
+++ b/drivers/net/ethernet/ti/Kconfig
@@ -50,6 +50,7 @@ config TI_CPSW
depends on ARCH_DAVINCI || ARCH_OMAP2PLUS || COMPILE_TEST
select TI_DAVINCI_MDIO
select MFD_SYSCON
+ select PAGE_POOL
select REGMAP
---help---
This driver supports TI's CPSW Ethernet Switch.
diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 87a600aeee4a..274e6b64ea9e 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -31,6 +31,10 @@
#include <linux/if_vlan.h>
#include <linux/kmemleak.h>
#include <linux/sys_soc.h>
+#include <net/page_pool.h>
+#include <linux/bpf.h>
+#include <linux/bpf_trace.h>
+#include <linux/filter.h>
#include <linux/pinctrl/consumer.h>
+ start_free = 1;
+ continue;
+ }
+
+ /* if refcnt > 1, page has been holding by netstack, it's pity,
+ * so put it to the ring to be consumed later when fast cash is

s/cash/cache

+ * empty. If ring is full then free page by recycling as above.
+ */
+ ret = ptr_ring_produce(&pool->ring, page);
+ if (ret) {
+ page_pool_recycle_direct(pool, page);
+ continue;
+ }

Although this should be fine since this part won't be called during the driver
init, I think I'd prefer unmapping the buffer and letting the network stack
free it, instead of pushing it for recycling. The occurrence should be pretty
low, so allocating a buffer every once in a while shouldn't have a noticeable
performance impact.
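In code, that suggestion would look roughly like the sketch below. This is an
illustration only: the PP_FLAG_DMA_MAP check and reading the DMA address from
page->dma_addr are assumptions about the page_pool internals of that era, not
code from the patch.

    /* Sketch: instead of parking a still-referenced page on pool->ring,
     * unmap it and drop our reference; the network stack then returns
     * the page to the page allocator when the last reference goes away.
     */
    if (pool->p.flags & PP_FLAG_DMA_MAP)
            dma_unmap_page(pool->p.dev, page->dma_addr,
                           PAGE_SIZE << pool->p.order, pool->p.dma_dir);
    put_page(page);
    continue;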