Re: [ANNOUNCE] Status of unlocked_qcmds=1 operation for .37

From: Nicholas A. Bellinger
Date: Wed Oct 27 2010 - 14:08:39 EST


On Wed, 2010-10-27 at 09:53 +0200, Andi Kleen wrote:
> > This sounds like a pretty reasonable compromise that I think is slightly
> > less risky for the LLDs with the ghosts and cob-webs hanging off of
> > them.
>
> They won't get tested either next release cycle. Essentially
> near nobody uses them.
>

This is exactly my point. This series does not introduce disruptive
changes into LLDs that will get little or no testing of the changes
enabling lock-less operation with the modern LLDs that we actually care
about. When running with the default of SHT->unlocked_qcmd=0, the
legacy LLDs continue to function *exactly* the same, except for those
that now use the explicit scsi_cmd_get_serial() call because they use
cmd->serial_number for something beyond simple informational purposes.

> >
> > What do you think..?
>
> Standard linux practice is to simply push the locks down. That's a pretty
> mechanical operation and shouldn't be too risky
>

No disagreements here whatsoever; I agree that pushing the locks down
makes a lot of sense as the final goal. The question is whether
starting with this series is less disruptive and less error prone than
adding host_lock lock() and unlock() calls in SHT->queuecommand() of
every single legacy LLD, and in every single failure path of that
legacy code.

The benefits of this series are that it adds fewer lines of code,
avoids touching lots of legacy LLD ->queuecommand() code that will get
little or no testing, and defaults to SHT->unlocked_qcmd=0 (eg: legacy
mode). I believe that merging this approach first, and then
transitioning to pushing the locks down into each LLD's
SHT->queuecommand(), would be the most logical two steps for a
graceful transition to an optional host_lock-less scsi_dispatch_cmd().

Best,

--nab
