Re: Integration of SCST in the mainstream Linux kernel
From: Nicholas A. Bellinger
Date: Tue Feb 05 2008 - 20:51:23 EST
On Tue, 2008-02-05 at 16:11 -0800, Nicholas A. Bellinger wrote:
> On Tue, 2008-02-05 at 22:21 +0300, Vladislav Bolkhovitin wrote:
> > Jeff Garzik wrote:
> > >>> iSCSI is way, way too complicated.
> > >>
> > >> I fully agree. On one hand, all that complexity is unavoidable for
> > >> the case of multiple connections per session, but the regular case of
> > >> one connection per session could be a lot simpler.
> > >
> > > Actually, think about those multiple connections... we already had to
> > > implement fast-failover (and load bal) SCSI multi-pathing at a higher
> > > level. IMO that portion of the protocol is redundant: You need the
> > > same capability elsewhere in the OS _anyway_, if you are to support
> > > multi-pathing.
> >
> > I'm thinking about MC/S as a way to improve performance using
> > several physical links. There's no other way, except MC/S, to keep
> > command processing order in that case. So it's a really valuable
> > property of iSCSI, although with a limited application.
> >
> > Vlad
> >
>
> Greetings,
>
> I have always observed with LIO SE/iSCSI target mode (as well as with
> other software initiators we can leave out of the discussion for now;
> congrats to the Open-iSCSI folks on the recent release :-) that
> execution-core hardware-thread and inter-nexus performance per 1
> Gb/sec Ethernet port scales up very well to 4x and 2x core x86_64
> with MC/S. I have been seeing 450 MB/sec on 2x socket, 4x core x86_64
> for a number of years with MC/S. On 10 Gb/sec (PCI-X v2.0 266 MHz,
> the first transport that LIO Target ran on), MC/S was able to reach
> ~1200 MB/sec duplex with 3 initiators. In the point-to-point 10
> Gb/sec tests on IBM p404 machines, the initiators were able to reach
> ~910 MB/sec with MC/S. Open-iSCSI was able to go a bit faster (~950
> MB/sec) because it uses struct sk_buff directly.
>
Sorry, these were IBM p505 Express (not p404, duh) machines, which had
a 2x socket, 2x core POWER5 setup. These, along with an IBM X-Series
machine, were the only ones available with PCI-X v2.0, and this is
probably still the case. :-)
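As background for readers unfamiliar with MC/S: the ordering property
Vlad mentions can be sketched as below. This is an illustrative model,
not LIO code — each command gets a session-wide CmdSN, commands are
spread round-robin across the session's connections, and the target
releases them to the SCSI layer strictly in CmdSN order, regardless of
which link delivered them first.

```python
import heapq

def distribute(commands, num_conns):
    """Initiator side: tag each command with a session-wide CmdSN and
    spread the commands round-robin across the session's connections."""
    conns = [[] for _ in range(num_conns)]
    for cmdsn, cmd in enumerate(commands):
        conns[cmdsn % num_conns].append((cmdsn, cmd))
    return conns

def target_reorder(conns):
    """Target side: connections deliver commands independently, but the
    SCSI layer only sees them in strict CmdSN order (a reorder heap
    stands in for the target's command-sequencing window)."""
    pending = []          # min-heap keyed on CmdSN
    expected = 0          # next CmdSN the SCSI layer may execute
    ordered = []
    for conn in conns:    # arrival order across links is arbitrary
        for entry in conn:
            heapq.heappush(pending, entry)
            while pending and pending[0][0] == expected:
                ordered.append(heapq.heappop(pending)[1])
                expected += 1
    return ordered

cmds = [f"WRITE_{i}" for i in range(8)]
print(target_reorder(distribute(cmds, 3)) == cmds)  # prints True
```

This is why MC/S can aggregate several physical links without breaking
ordered delivery, where link aggregation above the session cannot.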
Also, these numbers were with a ~9000 byte MTU (I don't recall what the
hardware limit on the 10 Gb/sec switch was), doing direct struct iovec
to preallocated struct page mapping for payload on the target side.
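In userspace terms, the idea — filling preallocated memory directly
through an iovec array rather than copying through an intermediate
buffer — looks roughly like the following sketch using os.readv(). The
in-kernel struct page mapping in the target is analogous but happens
below the VFS; the file and sizes here are purely illustrative.

```python
import os, tempfile

# Preallocate the payload buffers up front, much as the target
# preallocates struct pages; os.readv() then scatters the data
# directly into them via a struct iovec array in one syscall.
chunks = [bytearray(4) for _ in range(3)]

fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"abcdefghijkl")
    os.lseek(fd, 0, os.SEEK_SET)
    nread = os.readv(fd, chunks)   # scatter-gather read, no extra copy
    print(nread, [bytes(c) for c in chunks])
finally:
    os.close(fd)
    os.remove(path)
```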
This is known as the RAMDISK_DR plugin in the LIO-SE. On the initiator,
LTP disktest and O_DIRECT were used for direct SCSI block device access.
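For anyone wanting to reproduce the initiator-side access pattern, the
O_DIRECT part looks roughly like this sketch. O_DIRECT requires a
block-aligned buffer, which an anonymous mmap provides; the temp file
stands in for the SCSI block device, and the sketch falls back to
buffered I/O where the filesystem rejects O_DIRECT (e.g. tmpfs).

```python
import mmap, os, tempfile

BLOCK = 4096
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * BLOCK)
os.close(fd)

# O_DIRECT bypasses the page cache, so the buffer, offset, and length
# must all be block-aligned; an anonymous mmap is page-aligned for free.
buf = mmap.mmap(-1, BLOCK)
try:
    rfd = os.open(path, os.O_RDONLY | os.O_DIRECT)
except OSError:
    rfd = os.open(path, os.O_RDONLY)   # filesystem refused O_DIRECT
try:
    nread = os.readv(rfd, [buf])       # direct read into aligned buffer
    print(nread == BLOCK and buf[:4] == b"xxxx")
finally:
    os.close(rfd)
    os.remove(path)
```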
I can dig up this paper if anyone is interested.
--nab