On Wed, Feb 28, 2018 at 10:20:33PM +0800, Jason Wang wrote:
> On 2018年02月28日 22:01, Michael S. Tsirkin wrote:
> > On Wed, Feb 28, 2018 at 02:28:21PM +0800, Jason Wang wrote:
> > > On 2018年02月28日 12:09, Michael S. Tsirkin wrote:
> > > > This is not true in my experiments. In my experiments, a ring size of
> > > > 4k bytes is enough to see packet drops in a single-digit % of cases.
> > > >
> > > > Do you have workloads where rings are full most of the time?
> > >
> > > E.g. using xdp_redirect to redirect packets from ixgbe to tap. In my
> > > test, ixgbe can produce ~8Mpps, but vhost can only consume ~3.5Mpps.
> > >
> > > Or we can add plist to a union:
> > >
> > > struct sk_buff {
> > >         union {
> > >                 struct {
> > >                         /* These two members must be first. */
> > >                         struct sk_buff *next;
> > >                         struct sk_buff *prev;
> > >                         union {
> > >                                 struct net_device *dev;
> > >                                 /* Some protocols might use this space to store information,
> > >                                  * while device pointer would be NULL.
> > >                                  * UDP receive path is one user.
> > >                                  */
> > >                                 unsigned long dev_scratch;
> > >                         };
> > >                 };
> > >                 struct rb_node rbnode; /* used in netem & tcp stack */
> > > +               struct plist plist; /* For use with ptr_ring */
> > >         };
> >
> > This looks ok.
> >
> > > For XDP, we need to embed plist in struct xdp_buff too.
> >
> > Right - that's pretty straightforward, isn't it?
>
> Yes, it's not clear to me this is really needed for XDP, considering the
> lock contention it brings.
>
> Right, but there's usually a mismatch of speed between producer and
> consumer. In case of a fast producer, we may get this contention very
> frequently.
>
> Thanks

The contention is only when the ring overflows into the list though.
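[For illustration only, and not the actual patch: a minimal sketch of the scheme under discussion, where an embedded list node (the plist above, which XDP would likewise carry in struct xdp_buff) lets a full ring spill into a list without allocations. Only ptr_ring_produce() is the existing kernel API; struct overflow_node, struct fallback_ring and fallback_produce() are hypothetical names. It shows why the extra lock is only contended once the ring actually overflows into the list.]

#include <linux/ptr_ring.h>
#include <linux/list.h>
#include <linux/spinlock.h>

/* Hypothetical node; the idea above is to embed it in the sk_buff
 * union (and in struct xdp_buff for XDP) so overflow needs no allocation.
 */
struct overflow_node {
	struct list_head list;
};

struct fallback_ring {
	struct ptr_ring ring;           /* fast path */
	spinlock_t overflow_lock;       /* taken only on overflow */
	struct list_head overflow;      /* entries that did not fit in the ring */
};

static int fallback_produce(struct fallback_ring *fr, void *ptr,
			    struct overflow_node *node)
{
	/* Fast path: ring not full, the overflow lock is never touched. */
	if (!ptr_ring_produce(&fr->ring, ptr))
		return 0;

	/* Slow path: the ring is full, fall back to the list. This is the
	 * only place where the extra lock (and its contention) shows up.
	 */
	spin_lock(&fr->overflow_lock);
	list_add_tail(&node->list, &fr->overflow);
	spin_unlock(&fr->overflow_lock);
	return 0;
}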
> > Then you are better off just using a small ring and dropping packets
> > early, right?
>
> Yes.
>
> > One other nice side effect of this patch is that instead of dropping
> > packets quickly it slows down the producer to match consumer speeds.
>
> In some cases, the producer may not want to be slowed down, e.g. in devmap,
> which can redirect packets into several different interfaces.
>
> Thanks

IOW, it can go either way in theory; we will need to test and see the effect.
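[Also purely illustrative, assuming the same ptr_ring API: the "small ring, drop early" alternative keeps the producer at full speed by freeing whatever the consumer cannot absorb, whereas the list fallback would queue it and effectively throttle the producer. produce_or_drop() is a made-up name.]

#include <linux/ptr_ring.h>
#include <linux/skbuff.h>

/* Sketch of a producer much faster than its consumer (e.g. the
 * ixgbe -> tap xdp_redirect case above, ~8Mpps produced vs ~3.5Mpps
 * consumed by vhost).
 */
static void produce_or_drop(struct ptr_ring *ring, struct sk_buff *skb)
{
	if (ptr_ring_produce(ring, skb)) {
		/* Ring full: drop right away, the producer is not slowed. */
		kfree_skb(skb);
		return;
	}
	/* With the list fallback, the full-ring case would instead queue
	 * skb and push back on the producer until the consumer catches up.
	 */
}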