>Some functionality was dropped as it was not good practice
>(such as receiving VME interrupts in user space, it's not really doable if
>the slave card is Release On Register Access rather than Release on
>Acknowledge),

Didn't know about RORA. I wonder how different this is compared to the
PCI bus case.

>so the interface became more of a debug mechanism for me.
>Others have clearly found it provides enough for them to allow drivers to be
>written in user space.
>
>I was thinking that the opposite might be better, no windows were mapped at
>module load, windows could be allocated and mapped using the control device.
>This would ensure that unused resources were still available for kernel
>based drivers and would mean the driver wouldn't be pre-allocating a bunch
>of fairly substantially sized slave window buffers (the buffers could also
>be allocated to match the size of the slave window requested). What do you
>think?

I'm not a VME expert, but it seems that VME windows are quite a limited resource
no matter how you allocate them. Theoretically we could put up to 32 different
boards in a single crate, so there won't be enough windows for each driver to
allocate one. That said, there is no way around this when putting together a
really heterogeneous VME system. To overcome this problem, one could develop a
different kernel API that would not hand windows out to the drivers, but would
instead handle reads and writes by reconfiguring windows on the fly, which in
turn would introduce more latency. Those who need such an API are welcome to
develop it :)
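
To give the idea, a rough sketch (vme_bus_read is a made-up name and the fixed
64KB A32 window is an assumption I picked for illustration; only
vme_master_set()/vme_master_read() are the real vme.h calls):

#include <linux/mutex.h>
#include <linux/vme.h>

/*
 * Hypothetical helper, only to illustrate the idea: read from an
 * arbitrary VME address through a single shared master window by
 * repositioning that window on the fly.  The mutex serialising all
 * callers is exactly where the extra latency would come from.
 */
static DEFINE_MUTEX(shared_window_lock);

static int vme_bus_read(struct vme_resource *shared_win,
			unsigned long long addr, void *buf, size_t count)
{
	ssize_t copied;
	int ret;

	mutex_lock(&shared_window_lock);

	/* Slide a 64KB window over addr (count is assumed to fit in it). */
	ret = vme_master_set(shared_win, 1, addr & ~0xffffULL, 0x10000,
			     VME_A32, VME_SCT, VME_D32);
	if (!ret) {
		copied = vme_master_read(shared_win, buf, count,
					 addr & 0xffffULL);
		ret = copied < 0 ? copied : 0;
	}

	mutex_unlock(&shared_window_lock);
	return ret;
}

Every access pays for a window reconfiguration plus lock contention, which is
why I wouldn't want this as the default path.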

As for dynamic vme_user device allocation, I don't see the point in it.
The only existing in-kernel VME driver allocates its windows in advance; the
user just has to make sure to leave one window free if she also wants to use
vme_user. A module parameter for the window count will be dynamic enough to
handle that.
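
Something as simple as this would do (window_count is a name I just made up,
vme_user has no such parameter today):

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Hypothetical parameter: how many master windows vme_user grabs at
 * load time, leaving the rest free for in-kernel drivers. */
static unsigned int window_count = 4;
module_param(window_count, uint, 0444);
MODULE_PARM_DESC(window_count,
		 "Number of VME master windows reserved by vme_user");

Then "modprobe vme_user window_count=2" would leave the remaining windows
untouched for kernel-based drivers.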