Re: Staging: add pata_rdc driver
From: Alan Cox
Date: Mon Jun 22 2009 - 08:31:36 EST
> +static struct pci_bits ATA_Decode_Enable_Bits[] = { // see ATA Host Adapters Standards.
> + { 0x41U, 1U, 0x80UL, 0x80UL }, /* port (Channel) 0 */
> + { 0x43U, 1U, 0x80UL, 0x80UL }, /* port (Channel) 1 */
> +};
> +
Decode bits 0x80 in 0x41/0x43 - same as ata_piix
> + /* no hotplugging support (FIXME) */ // why???
copied from the piix driver
> + Mask = ATAConfiguration_IDEIOConfiguration_PrimaryDeviceCable80Report;
Cable bits at 0x54: same format as ATA_PIIX
and this continues throughout the driver
So it seems the following occurred:
- take ata_piix
- remove all of its innards
- replace them with identically functional but convoluted vendor code for
  the same actual hardware interface
- submit as a new driver
Would someone please tell me wtf is going on here, and why, if the hardware
is this close to ata_piix, it doesn't either use the piix driver directly
or, if it's merely very similar, just reuse bits of it as-is (as efar,
mpiix and oldpiix do)?
What, if anything, actually differs between the Intel PIIX and the new RDC
controllers? Why can't we just cp ata_piix.c ata_rdc.c and remove all the
Intel-specific casing?
Alan