Re: possible deadlock in lru_add_drain_all

From: Peter Zijlstra
Date: Mon Oct 30 2017 - 11:22:32 EST


On Mon, Oct 30, 2017 at 04:10:09PM +0100, Peter Zijlstra wrote:
> I can indeed confirm it's running old code; cpuhp_state is no more.
>
> However, that splat translates like:
>
> __cpuhp_setup_state()
> #0 cpus_read_lock()
> __cpuhp_setup_state_cpuslocked()
> #1 mutex_lock(&cpuhp_state_mutex)
>
>
>
> __cpuhp_state_add_instance()
> #2 mutex_lock(&cpuhp_state_mutex)
> cpuhp_issue_call()
> cpuhp_invoke_ap_callback()
> #3 wait_for_completion()
>
> msr_device_create()
> ...
> #4 filename_create()
> #3 complete()
>


All of this you can get in a single callchain when you do something
shiny like:

modprobe msr


> do_splice()
> #4 file_start_write()
> do_splice_from()
> iter_file_splice_write()
> #5 pipe_lock()
> vfs_iter_write()
> ...
> #6 inode_lock()
>
>

This is a splice into a devtmpfs file.


> sys_fcntl()
> do_fcntl()
> shmem_fcntl()
> #5 inode_lock()

#6, obviously (the #5 above should read #6)

> shmem_wait_for_pins()
> if (!scan)
> lru_add_drain_all()
> #0 cpus_read_lock()
>

This is the right fcntl().


So three different callchains, and *splat*.