Re: [PATCH linux-2.6-block:post-2.6.15 08/10] blk: update IDE to use new blk_ordered
From: Bartlomiej Zolnierkiewicz
Date: Fri Nov 18 2005 - 10:59:29 EST
On 11/18/05, Tejun Heo <htejun@xxxxxxxxx> wrote:
> Hello, Bartlomiej.
> Bartlomiej Zolnierkiewicz wrote:
> > On 11/17/05, Tejun Heo <htejun@xxxxxxxxx> wrote:
> > I fail to see how the partial completions (good + bad sectors)
> > are done in your new scheme, please explain.
> It doesn't. I've noted this way back when I posted this patchset the
> second time.
This should be noted in the patch description, not in the announcement.
> * The actual barrier IO request is issued as a part of ordered sequence.
> When any part of this sequence fails (any of leading flush, barrier IO
> or post flush), the whole sequence should be considered to have failed.
> e.g. if leading flush fails, there's no point in reporting partial or
> full success of barrier IO. Ditto for tailing flush. We can special
> case when only part of barrier IO fails and report partial barrier
> success, but 1. benefits are doubtful 2. even if it's implemented, it
> wouldn't work (see next rationale)
> * Barrier requests are not mergeable. ie. Each barrier bio is turned
> into one barrier request and partially completing the request doesn't
> result in any successfully completed bio.
However, your flush request can fail on a sector _completely_
unrelated to the barrier request, so in this case the old code behaved
differently. Anyway, I'm fine with this change (previous logic was
> * SCSI doesn't handle partial completion of barrier IOs.
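Just so we are on the same page, my reading of the all-or-nothing rule
is roughly the following (an illustrative sketch only - the names are
made up, this is not the actual blk_ordered code):

/* Any failing stage poisons the whole ordered sequence; the original
 * barrier request is then completed all-or-nothing. */
enum seq_stage {
	SEQ_PREFLUSH  = 1 << 0,		/* leading cache flush */
	SEQ_BARRIER   = 1 << 1,		/* the barrier IO itself */
	SEQ_POSTFLUSH = 1 << 2,		/* tailing cache flush */
	SEQ_ALL       = SEQ_PREFLUSH | SEQ_BARRIER | SEQ_POSTFLUSH,
};

struct ordered_seq {
	unsigned int done;		/* stages completed so far */
	int error;			/* first error seen, 0 if none */
};

/* Completes the original barrier request; body not shown. */
static void complete_barrier_request(int error);

/* Called as each stage of the ordered sequence finishes. */
static void seq_stage_done(struct ordered_seq *seq, enum seq_stage stage,
			   int error)
{
	if (error && !seq->error)
		seq->error = error;	/* flush failure fails the barrier too */
	seq->done |= stage;

	if (seq->done == SEQ_ALL)
		complete_barrier_request(seq->error);
}

If that is what the code does, then a failure in either flush is indeed
reported as a failure of the barrier request itself, which is exactly
the case I was asking about.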
> >>-static int idedisk_prepare_flush(request_queue_t *q, struct request *rq)
> >>- ide_drive_t *drive = q->queuedata;
> >>- if (!drive->wcache)
> >>- return 0;
> > What does happen if somebody disables drive->wcache later?
> Thanks for pointing out. I've moved ordered configuration into
> write_cache such that ordered is reconfigured when write_cache changes.
> There can be in-flight barrier requests which are inconsistent with the
> newly updated setting, but 1. it's not too unfair to assume that user is
> responsible for that synchronization 2. the original implementation had
> the same issue 3. the consequence is not catastrophic.
The consequence could be an increased number of bug reports about
failed IDE commands, which was not the case with the !drive->wcache
check in place - please leave it as it was.
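
For reference, the behaviour I want preserved is the removed hunk
quoted above, i.e. roughly (sketched from that hunk, with the taskfile
setup elided):

static int idedisk_prepare_flush(request_queue_t *q, struct request *rq)
{
	ide_drive_t *drive = q->queuedata;

	if (!drive->wcache)
		return 0;	/* write cache off -> don't issue a flush */

	/* ... set up the FLUSH CACHE (EXT) taskfile as before ... */
	return 1;
}

With the check gone, a flush command can reach a drive whose write
cache the user has just disabled, and a failure there shows up as a
failed IDE command.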
> >> static void ide_cacheflush_p(ide_drive_t *drive)
> >>@@ -1034,6 +993,8 @@ static int ide_disk_remove(struct device
> >> struct ide_disk_obj *idkp = drive->driver_data;
> >> struct gendisk *g = idkp->disk;
> >>+ blk_queue_ordered(drive->queue, QUEUE_ORDERED_NONE, NULL, 0);
> > Shouldn't this be done in ide_disk_release()?
> Hmmm... The thing is that, AFAIK, requests are not supposed to be issued
> after ->remove is called (->remove is called only on module unload
> unless hardware is hot-unplugged and HL driver cannot be unloaded while
> it's still opened). I think that's why both sd and ide-disk issue the
> last cache flush in ->remove callbacks but not in ->release.
Are you sure? I think that only calling del_gendisk() assures you
that there won't be any outstanding fs requests.
I have also noticed a bug in ide_disk_remove() - ide_cacheflush_p()
should be called after del_gendisk() - I will fix it later.
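
Concretely, I mean something like this (untested sketch against the
current structure of ide_disk_remove(); the unregistration step in
between is abbreviated):

static int ide_disk_remove(struct device *dev)
{
	ide_drive_t *drive = to_ide_device(dev);
	struct ide_disk_obj *idkp = drive->driver_data;
	struct gendisk *g = idkp->disk;

	/* ... unregister the subdriver as before ... */

	del_gendisk(g);			/* no new fs requests past this point */
	ide_cacheflush_p(drive);	/* moved here, after del_gendisk() */

	ide_disk_put(idkp);
	return 0;
}

That way the final cache flush cannot race with fs requests that are
still in flight when the driver is being detached.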
BTW, nowadays you can dynamically detach/attach a driver from/to a
device using the sysfs interface.