Re: [PATCH v4 05/11] iommu/sva: Assign a PASID to mm on PASID allocation and free it on mm exit
From: Fenghua Yu
Date: Tue Apr 12 2022 - 09:41:26 EST
Hi, Zhangfei,
On Tue, Apr 12, 2022 at 03:04:09PM +0800, zhangfei.gao@xxxxxxxxxxx wrote:
>
>
> On 2022/4/11 下午10:52, Dave Hansen wrote:
> > On 4/11/22 07:44, zhangfei.gao@xxxxxxxxxxx wrote:
> > > On 2022/4/11 下午10:36, Dave Hansen wrote:
> > > > On 4/11/22 07:20, zhangfei.gao@xxxxxxxxxxx wrote:
> > > > > > Is there nothing before this call trace? Usually there will be at least
> > > > > > some warning text.
> > > > > I added dump_stack() in ioasid_free.
> > > > Hold on a sec, though...
> > > >
> > > > What's the *problem* here? Did something break or are you just saying
> > > > that something looks weird to _you_?
> > > After this, nginx does not work at all, and the hardware reports errors.
> > > Presumably the master process used the ioasid for init, but it got freed.
> > >
> > > hardware reports:
> > > [ 152.731869] hisi_sec2 0000:76:00.0: qm_acc_do_task_timeout [error status=0x20] found
> > > [ 152.739657] hisi_sec2 0000:76:00.0: qm_acc_wb_not_ready_timeout [error status=0x40] found
> > > [ 152.747877] hisi_sec2 0000:76:00.0: sec_fsm_hbeat_rint [error status=0x20] found
> > > [ 152.755340] hisi_sec2 0000:76:00.0: Controller resetting...
> > > [ 152.762044] hisi_sec2 0000:76:00.0: QM mailbox operation timeout!
> > > [ 152.768198] hisi_sec2 0000:76:00.0: Failed to dump sqc!
> > > [ 152.773490] hisi_sec2 0000:76:00.0: Failed to drain out data for stopping!
> > > [ 152.781426] hisi_sec2 0000:76:00.0: QM mailbox is busy to start!
> > > [ 152.787468] hisi_sec2 0000:76:00.0: Failed to dump sqc!
> > > [ 152.792753] hisi_sec2 0000:76:00.0: Failed to drain out data for stopping!
> > > [ 152.800685] hisi_sec2 0000:76:00.0: QM mailbox is busy to start!
> > > [ 152.806730] hisi_sec2 0000:76:00.0: Failed to dump sqc!
> > > [ 152.812017] hisi_sec2 0000:76:00.0: Failed to drain out data for stopping!
> > > [ 152.819946] hisi_sec2 0000:76:00.0: QM mailbox is busy to start!
> > > [ 152.825992] hisi_sec2 0000:76:00.0: Failed to dump sqc!
> > That would have been awfully handy information to have in an initial bug report. :)
> > Is there a chance you could dump out that ioasid alloc *and* free information in ioasid_alloc/free()? This could be some kind of problem with the allocator, or with copying the ioasid at fork.
> The issue is that the nginx master process initializes the resource and starts the
> daemon process, then the master process quits and frees the ioasid.
> The daemon nginx process is not the original master process.
>
> master process: init resource
> driver -> iommu_sva_bind_device -> ioasid_alloc
Which code in the master process/daemon calls driver->iommu_sva_unbind_device?
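For reference, the pairing I would expect in the driver is roughly the below
(a sketch of the generic iommu_sva_* flow, not the actual hisi_sec2 code):

	/* init path, run in the process whose mm will be used for SVA */
	handle = iommu_sva_bind_device(dev, current->mm, NULL);
	if (IS_ERR(handle))
		return PTR_ERR(handle);
	pasid = iommu_sva_get_pasid(handle);	/* programmed into the HW queue */

	/* teardown path, for the same process */
	iommu_sva_unbind_device(handle);

The interesting part is which process's mm the bind was done against, and
whether that process calls the unbind before it exits.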
>
> In nginx, ngx_daemon() forks the daemon without taking a reference on the
> parent's mm:
>
> src/os/unix/ngx_daemon.c
> ngx_daemon(ngx_log_t *log)
> {
>     int  fd;
>
>     switch (fork()) {
>     case -1:
>         ngx_log_error(NGX_LOG_EMERG, log, ngx_errno, "fork() failed");
>         return NGX_ERROR;
>
>     case 0:
>         /* the master process quits directly (default branch below)
>          * and will be released */
>         break;
>
>     default:
>         exit(0);
>     }
>
>     /* here the daemon process takes control */
>     ngx_parent = ngx_pid;
>     ngx_pid = ngx_getpid();
>
>
> kernel/fork.c, copy_mm():
>
>     if (clone_flags & CLONE_VM) {
>         mmget(oldmm);
>         mm = oldmm;
>     } else {
>         /* here the daemon process gets its own mm via dup_mm(),
>          * without mmget() on the parent's mm */
>         mm = dup_mm(tsk, current->mm);
>     }
>
> When the master process quits: mmput -> mm_pasid_drop -> ioasid_free.
> But this bypasses the driver's iommu_sva_unbind_device();
> iommu_sva_bind_device() and iommu_sva_unbind_device() are no longer paired,
> so the driver does not know that the ioasid has been freed.
>
> Any suggestion?
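Just to restate what this series does on the free side: the PASID now lives and
dies with the mm. Simplified sketch (not the literal patch text):

	/* called when the mm is released, e.g. on the last mmput() */
	static inline void mm_pasid_drop(struct mm_struct *mm)
	{
		if (mm->pasid != INVALID_IOASID)
			ioasid_free(mm->pasid);	/* no iommu_sva_unbind_device() here */
	}

So the free is tied to the mm's lifetime, not to the driver's unbind.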
An ioasid is per process, or rather per mm. A daemon process shouldn't share the
same ioasid with any other process, not even its parent process. Its parent gets
an ioasid and frees it on exit. That ioasid is then gone and shouldn't be used
by its child process.

Each daemon process should call driver -> iommu_sva_bind_device -> ioasid_alloc
to get its own ioasid/PASID. When the daemon quits, that ioasid is freed.

That means nginx needs to be changed.
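In other words, the daemon needs to establish its own binding after fork(),
e.g. something like the below (the acc_ctx_* helper names are hypothetical,
just to illustrate; the real calls go through your driver/UADK layer, which
does the iommu_sva_bind_device() when the queue is set up):

	switch (fork()) {
	case 0:
		/* child = the long-running daemon: don't reuse the parent's
		 * accelerator context; re-open the device so that the driver
		 * binds *this* mm and allocates a fresh PASID. */
		acc_ctx_put(inherited_ctx);	/* hypothetical helper */
		ctx = acc_ctx_get(dev);		/* hypothetical helper */
		break;
	default:
		/* parent = the original master: exiting releases its mm and,
		 * with this series, frees its PASID. */
		exit(0);
	}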
> Or can we still use the original ioasid refcount mechanism?
>
Thanks.
-Fenghua