Re: [PATCH net v6] net: rose: fix null-ptr-deref caused by rose_kill_by_neigh

From: duoming
Date: Wed Jul 13 2022 - 05:03:57 EST


Hello,

On Wed, 13 Jul 2022 10:33:54 +0200 Paolo Abeni wrote:

> > > On Mon, 2022-07-11 at 09:31 +0800, Duoming Zhou wrote:
> > > > When the link layer connection is broken, rose->neighbour is set
> > > > to NULL. But rose->neighbour could still be used by rose_connect()
> > > > and rose_release() later, because there is no synchronization among
> > > > them. As a result, null-ptr-deref bugs will happen.
> > > >
> > > > One of the null-ptr-deref bugs is shown below:
> > > >
> > > >      (thread 1)                  |     (thread 2)
> > > >                                  | rose_connect
> > > > rose_kill_by_neigh               |   lock_sock(sk)
> > > >   spin_lock_bh(&rose_list_lock)  |   if (!rose->neighbour)
> > > >     rose->neighbour = NULL;//(1) |
> > > >                                  |   rose->neighbour->use++;//(2)
> > > >
> > > > The rose->neighbour pointer is set to NULL at position (1) and
> > > > dereferenced at position (2).
> > > >
> > > > The KASAN report triggered by POC is shown below:
> > > >
> > > > KASAN: null-ptr-deref in range [0x0000000000000028-0x000000000000002f]
> > > > ...
> > > > RIP: 0010:rose_connect+0x6c2/0xf30
> > > > RSP: 0018:ffff88800ab47d60 EFLAGS: 00000206
> > > > RAX: 0000000000000005 RBX: 000000000000002a RCX: 0000000000000000
> > > > RDX: ffff88800ab38000 RSI: ffff88800ab47e48 RDI: ffff88800ab38309
> > > > RBP: dffffc0000000000 R08: 0000000000000000 R09: ffffed1001567062
> > > > R10: dfffe91001567063 R11: 1ffff11001567061 R12: 1ffff11000d17cd0
> > > > R13: ffff8880068be680 R14: 0000000000000002 R15: 1ffff11000d17cd0
> > > > ...
> > > > Call Trace:
> > > > <TASK>
> > > > ? __local_bh_enable_ip+0x54/0x80
> > > > ? selinux_netlbl_socket_connect+0x26/0x30
> > > > ? rose_bind+0x5b0/0x5b0
> > > > __sys_connect+0x216/0x280
> > > > __x64_sys_connect+0x71/0x80
> > > > do_syscall_64+0x43/0x90
> > > > entry_SYSCALL_64_after_hwframe+0x46/0xb0
> > > >
> > > > This patch adds lock_sock() in rose_kill_by_neigh() in order to
> > > > synchronize with rose_connect() and rose_release(). It also changes
> > > > the type of 'neighbour->use' from unsigned short to atomic_t in
> > > > order to mitigate race conditions caused by updating 'neighbour->use'
> > > > while holding different socket locks.
> > > >
> > > > Meanwhile, this patch adds a sock_hold() protected by rose_list_lock,
> > > > which synchronizes with rose_remove_socket(), in order to mitigate
> > > > the UAF bug caused by the lock_sock() we add.
> > > >
> > > > What's more, there is no need to use rose_neigh_list_lock to protect
> > > > rose_kill_by_neigh(), because rose_neigh_list_lock is already used to
> > > > protect the state change of rose_neigh in rose_link_failed(), which
> > > > is well synchronized.
> > > >
> > > > Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
> > > > Signed-off-by: Duoming Zhou <duoming@xxxxxxxxxx>
> > > > ---
> > > > Changes in v6:
> > > > - Change sk_for_each() to sk_for_each_safe().
> > > > - Change type of 'neighbour->use' from unsigned short to atomic_t.
> > > >
> > > >  include/net/rose.h    |  2 +-
> > > >  net/rose/af_rose.c    | 19 +++++++++++++------
> > > >  net/rose/rose_in.c    | 12 ++++++------
> > > >  net/rose/rose_route.c | 24 ++++++++++++------------
> > > >  net/rose/rose_timer.c |  2 +-
> > > >  5 files changed, 33 insertions(+), 26 deletions(-)
> > > >
> > > > diff --git a/include/net/rose.h b/include/net/rose.h
> > > > index 0f0a4ce0fee..d5ddebc556d 100644
> > > > --- a/include/net/rose.h
> > > > +++ b/include/net/rose.h
> > > > @@ -95,7 +95,7 @@ struct rose_neigh {
> > > >  	ax25_cb			*ax25;
> > > >  	struct net_device		*dev;
> > > >  	unsigned short		count;
> > > > -	unsigned short		use;
> > > > +	atomic_t		use;
> > > >  	unsigned int		number;
> > > >  	char			restarted;
> > > >  	char			dce_mode;
> > > > diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
> > > > index bf2d986a6bc..54e7b76c4f3 100644
> > > > --- a/net/rose/af_rose.c
> > > > +++ b/net/rose/af_rose.c
> > > > @@ -163,16 +163,23 @@ static void rose_remove_socket(struct sock *sk)
> > > >  void rose_kill_by_neigh(struct rose_neigh *neigh)
> > > >  {
> > > >  	struct sock *s;
> > > > +	struct hlist_node *tmp;
> > > > 
> > > >  	spin_lock_bh(&rose_list_lock);
> > > > -	sk_for_each(s, &rose_list) {
> > > > +	sk_for_each_safe(s, tmp, &rose_list) {
> > > >  		struct rose_sock *rose = rose_sk(s);
> > > > 
> > > > +		sock_hold(s);
> > > > +		spin_unlock_bh(&rose_list_lock);
> > > > +		lock_sock(s);
> > > >  		if (rose->neighbour == neigh) {
> > > >  			rose_disconnect(s, ENETUNREACH, ROSE_OUT_OF_ORDER, 0);
> > > > -			rose->neighbour->use--;
> > > > +			atomic_dec(&rose->neighbour->use);
> > > >  			rose->neighbour = NULL;
> > > >  		}
> > > > +		release_sock(s);
> > > > +		sock_put(s);
> > >
> > > I'm sorry, this does not work. At this point both the 's' and 'tmp'
> > > sockets can be freed and reused; neither iterator is valid anymore
> > > when you acquire the 'rose_list_lock' later.
> >
> > Thank you for your time and reply! But I think both 's' and 'tmp' cannot
> > be freed and reused in rose_kill_by_neigh(), because rose_remove_socket()
> > calls sk_del_node_init(), protected by rose_list_lock, to delete the
> > socket node from the hlist, and the socket will only be deallocated if
> > sk->sk_refcnt equals 1.
> >
> > static void rose_remove_socket(struct sock *sk)
> > {
> > 	spin_lock_bh(&rose_list_lock);
> > 	sk_del_node_init(sk);
> > 	spin_unlock_bh(&rose_list_lock);
> > }
> >
> > https://elixir.bootlin.com/linux/v5.19-rc6/source/net/rose/af_rose.c#L152
> >
> > Both 's' and 'tmp' in rose_kill_by_neigh() are also protected by rose_list_lock.
>
> The above loop explicitly releases the rose_list_lock at each
> iteration. Additionally, the reference count on 's' is released before
> re-acquiring that lock. By the time rose_list_lock is re-acquired, some
> other process could have removed both 's' and 'tmp' from the list and
> even de-allocated them.
>
> Moving the 'sock_put(s);' after re-acquiring the rose_list_lock could
> protect 's' from being de-allocated, but it can't protect 'tmp' from
> being de-allocated, nor 's' and 'tmp' from being removed from the list.
>
> The above code is not safe.

I understand, I will improve the code, thank you!
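
As a rough idea for the rework (untested, only a sketch of one possible
direction, not the final patch): keep the sock_hold()/lock_sock() part,
but never use the iterators again once the spinlock has been dropped,
and instead restart the list walk from the head. Since the matching
socket gets its rose->neighbour cleared before the restart, the walk
should still terminate:

void rose_kill_by_neigh(struct rose_neigh *neigh)
{
	struct sock *s;

again:
	spin_lock_bh(&rose_list_lock);
	sk_for_each(s, &rose_list) {
		struct rose_sock *rose = rose_sk(s);

		if (rose->neighbour != neigh)
			continue;

		/* Pin the socket, then drop the spinlock so that
		 * lock_sock() may sleep.
		 */
		sock_hold(s);
		spin_unlock_bh(&rose_list_lock);

		lock_sock(s);
		if (rose->neighbour == neigh) {
			rose_disconnect(s, ENETUNREACH, ROSE_OUT_OF_ORDER, 0);
			atomic_dec(&rose->neighbour->use);
			rose->neighbour = NULL;
		}
		release_sock(s);
		sock_put(s);

		/* 's' (and any saved 'tmp') is stale now that the spinlock
		 * was dropped, so restart the walk from the head.
		 */
		goto again;
	}
	spin_unlock_bh(&rose_list_lock);
}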

Best regards,
Duoming Zhou