On Sun, Oct 8, 2023 at 9:00 AM Yajun Deng <yajun.deng@xxxxxxxxx> wrote:
On 2023/10/8 14:45, Eric Dumazet wrote:
On Sat, Oct 7, 2023 at 8:34 AM Yajun Deng <yajun.deng@xxxxxxxxx> wrote:
On 2023/10/7 13:29, Eric Dumazet wrote:
I honestly do not know what you are talking about.
On Sat, Oct 7, 2023 at 7:06 AM Yajun Deng <yajun.deng@xxxxxxxxx> wrote:
Yes, I replied in v6.
Although there is a kfree_skb_reason() helper function that can be used to
find the reason why this skb is dropped, most callers didn't increase
one of rx_dropped, tx_dropped, rx_nohandler and rx_otherhost_dropped.
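
For context, the pattern the commit message wants drop sites to follow would look roughly like the sketch below; the helper names are the existing in-tree ones, and the drop reason is only an arbitrary example, not something taken from this patch.

static void example_rx_drop(struct net_device *dev, struct sk_buff *skb)
{
	/* Record the drop in dev->core_stats so it shows up in rx_dropped... */
	dev_core_stats_rx_dropped_inc(dev);
	/* ...and free the skb with a reason visible to kfree_skb tracing. */
	kfree_skb_reason(skb, SKB_DROP_REASON_NOMEM);
}
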
This is broken...
+void netdev_core_stats_inc(struct net_device *dev, u32 offset)
+{
+	/* This READ_ONCE() pairs with the write in netdev_core_stats_alloc() */
+	struct net_device_core_stats __percpu *p = READ_ONCE(dev->core_stats);
+	unsigned long *field;
+
+	if (unlikely(!p))
+		p = netdev_core_stats_alloc(dev);
+
+	if (p) {
+		field = (unsigned long *)((void *)this_cpu_ptr(p) + offset);
+		WRITE_ONCE(*field, READ_ONCE(*field) + 1);
As I explained earlier, dev_core_stats_xxxx(dev) can be called from
many different contexts:
1) process contexts, where preemption and migration are allowed.
2) interrupt contexts.
Adding WRITE_ONCE()/READ_ONCE() is not solving potential races.
I _think_ I already gave you how to deal with this ?
https://lore.kernel.org/all/e25b5f3c-bd97-56f0-de86-b93a3172870d@xxxxxxxxx/
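
For background on the race being described (my illustration, not Eric's wording): with a plain read-modify-write, an interrupt on the same CPU can bump the counter between the read and the write and have its increment overwritten. A per-cpu increment primitive has to close that window; the generic fallback is roughly the sketch below, while x86 can instead emit a single incq %gs:(...) instruction.

static void pcpu_counter_inc(unsigned long __percpu *field)
{
	unsigned long flags;

	local_irq_save(flags);	/* no IRQ on this CPU can interleave now */
	raw_cpu_add(*field, 1);	/* read-modify-write of this CPU's copy */
	local_irq_restore(flags);
}
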
Please try instead:
+void netdev_core_stats_inc(struct net_device *dev, u32 offset)
+{
+	/* This READ_ONCE() pairs with the write in netdev_core_stats_alloc() */
+	struct net_device_core_stats __percpu *p = READ_ONCE(dev->core_stats);
+	unsigned long __percpu *field;
+
+	if (unlikely(!p)) {
+		p = netdev_core_stats_alloc(dev);
+		if (!p)
+			return;
+	}
+	field = (__force unsigned long __percpu *)((__force void *)p + offset);
+	this_cpu_inc(*field);
+}
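
For completeness, the u32 offset that both versions take is presumably meant to come from per-field wrappers such as dev_core_stats_rx_dropped_inc(), built with offsetof(); a hedged sketch of what such a caller could look like (the exact wrapper shape is not quoted in this thread):

static inline void dev_core_stats_rx_dropped_inc(struct net_device *dev)
{
	/* Pass the byte offset of the field inside struct net_device_core_stats;
	 * netdev_core_stats_inc() adds it to the per-cpu base pointer.
	 */
	netdev_core_stats_inc(dev,
			      offsetof(struct net_device_core_stats, rx_dropped));
}
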
This wouldn't trace anything even though rx_dropped is increasing. It
needs an extra operation, such as:
Have you even tried to change your patch to use
field = (__force unsigned long __percpu *)((__force void *)p + offset);
this_cpu_inc(*field);
Yes, I tested this code. But the following couldn't show anything even
though rx_dropped is increasing.
'sudo python3 /usr/share/bcc/tools/trace netdev_core_stats_inc'
Well, I am not sure about this; "bpftrace" worked for me.

Make sure your toolchain generates something that looks like what I got:
000000000000ef20 <netdev_core_stats_inc>:
ef20: f3 0f 1e fa endbr64
ef24: e8 00 00 00 00 call ef29 <netdev_core_stats_inc+0x9>
ef25: R_X86_64_PLT32 __fentry__-0x4
ef29: 55 push %rbp
ef2a: 48 89 e5 mov %rsp,%rbp
ef2d: 53 push %rbx
ef2e: 89 f3 mov %esi,%ebx
ef30: 48 8b 87 f0 01 00 00 mov 0x1f0(%rdi),%rax
ef37: 48 85 c0 test %rax,%rax
ef3a: 74 0b je ef47 <netdev_core_stats_inc+0x27>
ef3c: 89 d9 mov %ebx,%ecx
ef3e: 65 48 ff 04 08 incq %gs:(%rax,%rcx,1)
ef43: 5b pop %rbx
ef44: 5d pop %rbp
ef45: c3 ret
ef46: cc int3
ef47: e8 00 00 00 00 call ef4c <netdev_core_stats_inc+0x2c>
ef48: R_X86_64_PLT32 .text.unlikely.+0x13c
ef4c: 48 85 c0 test %rax,%rax
ef4f: 75 eb jne ef3c <netdev_core_stats_inc+0x1c>
ef51: eb f0 jmp ef43 <netdev_core_stats_inc+0x23>
ef53: 66 66 66 66 2e 0f 1f data16 data16 data16 cs nopw 0x0(%rax,%rax,1)
ef5a: 84 00 00 00 00 00
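
Assuming the function really is emitted out of line like the listing above (note the __fentry__ call, so ftrace/kprobes can attach to it), something along these lines should confirm whether the symbol exists and is being hit; these commands are only a sketch, the thread does not show the exact bpftrace invocation that was used:

# Check the compiled object for an out-of-line netdev_core_stats_inc():
objdump -dr net/core/dev.o | grep -A 20 '<netdev_core_stats_inc>:'

# Count calls per task while traffic is being dropped:
bpftrace -e 'kprobe:netdev_core_stats_inc { @hits[comm] = count(); }'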