Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

From: Greg KH
Date: Fri Apr 29 2011 - 12:40:48 EST


On Fri, Apr 29, 2011 at 04:32:35PM +0000, KY Srinivasan wrote:
>
>
> > -----Original Message-----
> > From: Christoph Hellwig [mailto:hch@xxxxxxxxxxxxx]
> > Sent: Wednesday, April 27, 2011 8:19 AM
> > To: KY Srinivasan
> > Cc: Christoph Hellwig; Greg KH; gregkh@xxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
> > devel@xxxxxxxxxxxxxxxxxxxxxx; virtualization@xxxxxxxxxxxxxx
> > Subject: Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code
> >
> > On Wed, Apr 27, 2011 at 11:47:03AM +0000, KY Srinivasan wrote:
> > > On the host side, Windows emulates the standard PC hardware
> > > to permit hosting of fully virtualized operating systems.
> > > To enhance disk I/O performance, we support a virtual block driver.
> > > This block driver currently handles disks that have been set up as
> > > IDE disks for the guest - as specified in the guest configuration.
> > >
> > > On the SCSI side, we emulate a SCSI HBA. Devices configured
> > > under the SCSI controller for the guest are handled via this
> > > emulated HBA (SCSI front-end). So, SCSI disks configured for
> > > the guest are handled through native SCSI upper-level drivers.
> > > If this SCSI front-end driver is not loaded, currently, the guest
> > > cannot see devices that have been configured as SCSI devices.
> > > So, while the virtual block driver described earlier could potentially
> > > handle all block devices, the implementation choices made on the host
> > > will not permit it. Also, the only SCSI device that can be currently configured
> > > for the guest is a disk device.
> > >
> > > Both the block device driver (hv_blkvsc) and the SCSI front-end
> > > driver (hv_storvsc) communicate with the host via unique channels
> > > that are implemented as bi-directional ring buffers. Each (storage)
> > > channel carries with it enough state to uniquely identify the device on
> > > the host side. Microsoft has chosen to use SCSI verbs for this storage channel
> > > communication.
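
To make the above a bit more concrete for people following the thread:
each request that crosses one of these ring-buffer channels is, roughly,
a SCSI command plus a description of the data buffer.  The sketch below
is illustrative only - the struct and field names are invented here, and
the real packet layout is the one in the hv staging sources:

/*
 * Illustrative only: a hypothetical view of what "SCSI verbs over a
 * bi-directional ring buffer" amounts to.  The real packet layout and
 * field names are in the hv staging sources and are not copied here.
 */
#include <linux/types.h>

struct hv_stor_request {
	u16 operation;		/* e.g. "execute SRB" */
	u8  cdb[16];		/* the SCSI CDB carried over the channel */
	u32 data_length;	/* bytes to transfer */
	u64 data_gpa;		/* guest-physical address of the data buffer */
};
/* both hv_blkvsc and hv_storvsc post packets of this general shape */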
> >
> > This doesn't really explain much at all. The only important piece
> > of information I can read from this statement is that both blkvsc
> > and storvsc only support disks, but not any other kind of device,
> > and that choosing either one is an arbitrary selection when setting
> > up a VM configuration.
> >
> > But this still isn't an excuse to implement a block layer driver for
> > a SCSI protocol, and it does not explain in what way the two
> > protocols actually differ. You really should implement blkvsc as a
> > SCSI LLDD, too - and from the looks of it, it doesn't even have to be
> > a separate one; just adding the ids to storvsc would do the work.
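
To illustrate what Christoph is suggesting: the LLDD side of this is
quite small.  A rough sketch follows - the names, numbers and the probe
entry point are hypothetical, and the vmbus channel plumbing plus the
matching on the two device GUIDs are left out; only the SCSI midlayer
calls are real API:

#include <linux/module.h>
#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

/* hypothetical: would translate the command into a SCSI-verb packet
 * and post it on the device's vmbus ring buffer */
static int hv_storvsc_queuecommand(struct Scsi_Host *shost,
				   struct scsi_cmnd *scmnd)
{
	/* channel I/O elided in this sketch; just fail the command */
	scmnd->result = DID_NO_CONNECT << 16;
	scmnd->scsi_done(scmnd);
	return 0;
}

static struct scsi_host_template hv_storvsc_template = {
	.module		= THIS_MODULE,
	.name		= "hv_storvsc",
	.queuecommand	= hv_storvsc_queuecommand,
	.this_id	= -1,
	.can_queue	= 32,
	.cmd_per_lun	= 1,
	.sg_tablesize	= SG_ALL,
};

/* called from the (elided) vmbus probe path for either device GUID */
static int hv_storvsc_probe(struct device *dev)
{
	struct Scsi_Host *shost;
	int ret;

	shost = scsi_host_alloc(&hv_storvsc_template, 0);
	if (!shost)
		return -ENOMEM;

	ret = scsi_add_host(shost, dev);
	if (ret) {
		scsi_host_put(shost);
		return ret;
	}

	/* sd/sr bind to whatever the scan turns up */
	scsi_scan_host(shost);
	return 0;
}

The upper-level sd driver then attaches to whatever scsi_scan_host()
finds, for both classes of device.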
>
> On the host side, as part of configuring a guest, you can specify block
> devices as being either under an IDE controller or under a SCSI
> controller; those are the only options you have. Devices configured
> under the IDE controller cannot be seen in the guest through the
> emulated SCSI front-end, which is the SCSI driver (storvsc_drv).

Are you sure the libata core can't see this IDE controller and connect
to it? That way you would go through the SCSI subsystem and would only
need a much smaller IDE driver, perhaps one that could be merged with
your SCSI driver.

We really don't want to write new IDE drivers anymore that don't use
libata.
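
To give a feel for the size involved, a libata driver is mostly
boilerplate around the core.  A rough sketch, with hypothetical names
(hv_ata_*) and the resource discovery left out - only the libata calls
themselves are real API:

#include <linux/module.h>
#include <linux/libata.h>

static struct scsi_host_template hv_ata_sht = {
	ATA_BASE_SHT("hv_ata"),
};

static struct ata_port_operations hv_ata_port_ops = {
	.inherits	= &ata_sff_port_ops,	/* reuse stock taskfile handling */
};

/* hypothetical probe; how cmd/ctl/irq are discovered is left out */
static int hv_ata_probe(struct device *dev, void __iomem *cmd,
			void __iomem *ctl, int irq)
{
	struct ata_host *host;
	struct ata_port *ap;

	host = ata_host_alloc(dev, 1);
	if (!host)
		return -ENOMEM;

	ap = host->ports[0];
	ap->ops = &hv_ata_port_ops;
	ap->pio_mask = ATA_PIO4;
	ap->ioaddr.cmd_addr = cmd;
	ap->ioaddr.ctl_addr = ctl;
	ata_sff_std_ports(&ap->ioaddr);	/* fill in the taskfile registers */

	/* libata exposes the port as a SCSI host for us */
	return ata_host_activate(host, irq, ata_sff_interrupt, 0,
				 &hv_ata_sht);
}

libata registers the port as a SCSI host, so the guest would see the
IDE-configured disk through sd just like the storvsc ones.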

thanks,

greg k-h