> On Tuesday 04 December 2012, Eli Billauer wrote:
> > I'm really sorry about this. I begin to realize the confusion now,
> > and Xillybus is indeed not a bus.
> >
> > I'm currently writing some documentation which will cover the API
> > and also help with reading the code, I hope. It takes some time...
> >
> > Until it's done, let's look at a usage example: Suppose that the
> > FPGA's application is to receive a high-speed bitstream with
> > time-multiplexed data, demultiplex the bitstream into individual
> > channel streams, and send each channel's data to the host. And
> > let's say that there are 64 channels in the original bitstream. So
> > the FPGA now has 64 independent sources of data.
> >
> > For that purpose, the Xillybus IP core (on the FPGA) is configured
> > to create 64 pipes for FPGA-to-host communication. The names of
> > these pipes (say, "chan00", "chan01", ...) are also stored in the
> > FPGA.
> >
> > When the driver starts, it queries the FPGA for its Xillybus
> > configuration, and creates 64 device nodes: /dev/xillybus_chan00,
> > /dev/xillybus_chan01, ... /dev/xillybus_chan63.
> >
> > If the user wants to dump the data of channel 43 into a file, it's
> > just:
> >
> > $ cat /dev/xillybus_chan43 > mydump.dat
> >
> > I hope this clarified things a bit.
>
> I think a lot of us (including Greg and me) were confused about the
> purpose of the driver, since you did not include much documentation.
>
> The request_firmware interface would be useful for loading a model
> into the FPGA, but that doesn't seem to be what your driver is
> concerned with.

Indeed, Xillybus is not about loading the configuration bitstream for the FPGA. I can't see how the firmware interface would help here.
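Just to make the contrast concrete, a driver that did configure the FPGA would do something like this at probe time (a sketch with made-up names, not Xillybus code):

#include <linux/firmware.h>
#include <linux/device.h>

/* Hypothetical example: pull a bitstream blob from user space once,
 * at probe time, and push it into the FPGA's configuration port.
 * That's the job request_firmware() is made for, and it's a job
 * Xillybus doesn't have: it assumes the FPGA is already configured,
 * and moves application data instead. */
static int example_load_bitstream(struct device *dev)
{
	const struct firmware *fw;
	int rc;

	rc = request_firmware(&fw, "example_bitstream.bin", dev);
	if (rc)
		return rc;

	/* ... feed fw->data, fw->size bytes to the configuration port ... */

	release_firmware(fw);
	return 0;
}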
> It's also a bit confusing because it doesn't appear to be a "bus" in
> the Linux sense of being something that provides an abstract
> interface between hardware and kernel device drivers. Instead, you
> just have a user interface for those FPGA models that don't need a
> kernel level driver themselves.

I'm not sure I would agree with that. Xillybus consists of an IP core (a sort of library function for an FPGA) and a driver. At the OS level, it's no different from any PCI card and its driver. I call it "generic" because it's not tailored to transport a certain kind of data (say, audio samples or video frames).
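Maybe a user-space sketch makes the point best: dumping one of the channels from the example above takes nothing more than open() and read(). The device name is assumed from that example, and nothing here is Xillybus-specific:

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
	char buf[4096];
	ssize_t n;
	int fd = open("/dev/xillybus_chan43", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* read() blocks until the FPGA has sent data, like a plain FIFO */
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		if (write(STDOUT_FILENO, buf, n) != n) {
			perror("write");
			break;
		}

	close(fd);
	return 0;
}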
> This is something that sits on a somewhat higher level -- if we want
> a generic FPGA interface, this would not be directly connected to a
> PCI or AMBA bus, but instead connect to an FPGA bus that still needs
> to be invented.

For what it's worth, the driver is now divided into three parts: xillybus_core, a module for PCIe, and a module for the Open Firmware interface. The latter two depend on the first, of course.
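Roughly, the idea looks like this (a sketch with made-up names and signatures, not the actual code): the front end only discovers the device and maps its registers, and everything else lives in the core.

#include <linux/module.h>
#include <linux/pci.h>

/* Hypothetical interface exported by xillybus_core */
extern int xilly_endpoint_register(struct device *dev, void __iomem *regs);

static int xilly_pcie_probe(struct pci_dev *pdev,
			    const struct pci_device_id *id)
{
	void __iomem *regs;
	int rc;

	rc = pci_enable_device(pdev);
	if (rc)
		return rc;

	regs = pci_iomap(pdev, 0, 0);	/* registers in BAR 0 */
	if (!regs)
		return -EIO;

	/* From here on, the core neither knows nor cares it's PCIe */
	return xilly_endpoint_register(&pdev->dev, regs);
}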
> The user interface side that you provide seems to be on the same
> interface level as the USB passthrough interface implemented in
> drivers/usb/core/devio.c, which has a complex set of ioctls but does
> serve a very similar purpose. Greg may want to comment on whether
> that is actually a good interface or not, since I assume he has some
> experience with how well it worked for USB.

I'm not sure what you meant here, but I'll mention this: FPGA designers using the IP core don't need to care what the transport is (PCIe, AMBA or anything else). They just see a FIFO. Neither is the host influenced by this, except for loading a different front-end module.
> My feeling for now is that we actually need both an in-kernel
> interface and a user interface, with the complication that the
> hardware should not care which of the two is used for a particular
> instance.
>
> For the user interface, something that is purely read/write based is
> really nice, though I wonder if using debugfs or sysfs for this
> would be more appropriate than having lots of character devices for
> a single piece of hardware.

And this is where the term "hardware" becomes elusive with an FPGA: one could look at the entire FPGA chip as a single piece of hardware, and expect everything to be packed into a few device nodes.
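That said, one practical upside of per-pipe character devices is that an application can wait on several channels at once with plain poll(). A sketch, assuming the device nodes from the example above and that the driver supports poll() on them:

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <poll.h>

int main(void)
{
	struct pollfd fds[2] = {
		{ .fd = open("/dev/xillybus_chan00", O_RDONLY), .events = POLLIN },
		{ .fd = open("/dev/xillybus_chan01", O_RDONLY), .events = POLLIN },
	};
	char buf[512];
	int i;

	if (fds[0].fd < 0 || fds[1].fd < 0) {
		perror("open");
		return 1;
	}

	/* Block until at least one channel has data ready */
	if (poll(fds, 2, -1) > 0)
		for (i = 0; i < 2; i++)
			if (fds[i].revents & POLLIN) {
				ssize_t n = read(fds[i].fd, buf, sizeof(buf));
				printf("channel %d: %zd bytes\n", i, n);
			}

	return 0;
}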
> 	Arnd