Re: net: GPF in eth_header

From: Andrey Konovalov
Date: Tue Nov 29 2016 - 05:27:46 EST

On Sat, Nov 26, 2016 at 9:05 PM, Eric Dumazet <erdlkml@xxxxxxxxx> wrote:
>> I actually see multiple places where skb_network_offset() is used as
>> an argument to skb_pull().
>> So I guess every place can potentially be buggy.
> Well, I think the intent is to accept a negative number.

I'm not sure that was the intent, since skb_pull() takes an unsigned
length: a negative skb_network_offset() gets implicitly converted to a
huge positive value, which leads to an out-of-bounds access.

A quick grep shows that the same issue can potentially happen in
multiple places across the kernel:

net/ipv6/ip6_output.c:1655: __skb_pull(skb, skb_network_offset(skb));
net/packet/af_packet.c:2043: skb_pull(skb, skb_network_offset(skb));
net/packet/af_packet.c:2165: skb_pull(skb, skb_network_offset(skb));
net/core/neighbour.c:1301: __skb_pull(skb, skb_network_offset(skb));
net/core/neighbour.c:1331: __skb_pull(skb, skb_network_offset(skb));
net/core/dev.c:3157: __skb_pull(skb, skb_network_offset(skb));
net/sched/sch_teql.c:337: __skb_pull(skb, skb_network_offset(skb));
net/sched/sch_atm.c:479: skb_pull(skb, skb_network_offset(skb));
net/ipv4/ip_output.c:1385: __skb_pull(skb, skb_network_offset(skb));
net/ipv4/ip_fragment.c:391: if (!pskb_pull(skb, skb_network_offset(skb) + ihl))
drivers/net/vxlan.c:1440: __skb_pull(reply, skb_network_offset(reply));
drivers/net/vxlan.c:1902: __skb_pull(skb, skb_network_offset(skb));
drivers/net/vrf.c:220: __skb_pull(skb, skb_network_offset(skb));
drivers/net/vrf.c:314: __skb_pull(skb, skb_network_offset(skb));

A similar thing also happened to somebody else (on a receive path!).

Does it make sense to check skb_network_offset() before passing it to
skb_pull() everywhere?

> This definitely was assumed by commit e1f165032c8bade authors !
> I guess they were using a 32bit kernel for their tests.