On 07.12.20 04:48, Jason Wang wrote:

Hi,

>>> Not a native speaker, but "event" sounds like something the driver
>>> reads from the device. Looking at the lists below, most of them,
>>> except for VIRTIO_GPIO_EV_HOST_LEVEL, look more like commands.
>> okay, shall I name it "message" ?
> It might be better.

Okay, renamed to "messages" in v3.
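Just for illustration, the rename changes the constant prefix along
these lines (the exact v3 spelling may differ; names and the value here
are purely illustrative):

    /* v2 (old) */
    #define VIRTIO_GPIO_EV_HOST_LEVEL    0x11    /* value illustrative */
    /* v3 (new), hypothetical spelling */
    #define VIRTIO_GPIO_MSG_HOST_LEVEL   0x11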
>>> #define VIRTIO_NET_OK     0
>>> #define VIRTIO_NET_ERR    1
>> hmm, so I'd need to define all the error codes that could possibly
>> happen ?
> Yes, I think you need to.

Okay, going to do that in the next version.
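Roughly what I have in mind, following the virtio-net style: one
generic OK/ERR pair plus a few specific codes. (All names below are my
own invention, not from any spec.)

    /* reply status codes -- names purely illustrative */
    #define VIRTIO_GPIO_STATUS_OK        0    /* request succeeded */
    #define VIRTIO_GPIO_STATUS_ERR       1    /* unspecified failure */
    #define VIRTIO_GPIO_STATUS_EINVAL    2    /* bad line number or request */
    #define VIRTIO_GPIO_STATUS_EBUSY     3    /* line already claimed */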
>>> If I read the code correctly, this expects that there will be at
>>> most a single type of event processed at the same time. E.g. can the
>>> upper layer want to read from different lines in parallel? If yes,
>>> we need to deal with that.
>> @Linus @Bartosz: can that happen, or does the gpio subsys already
>> serialize requests ?
>> Initially, I tried to protect it with a spinlock (so only one request
>> may run at a time; other calls just wait until the first is
>> finished), but it crashed when the gpio cdev registration called into
>> the driver (fetching the status) while still in bootup.
>> I don't recall the exact error anymore, but it was something like an
>> inconsistency in the spinlock calls.
>> Did I just use the wrong type of lock ?
> I'm not sure, since I am not familiar with GPIO. But a question is: if
> at most one request is allowed, I'm not sure virtio is the best choice
> here, since we don't even need a queue (virtqueue).

I guess I should add locks to the gpio callback functions (where the
gpio subsys calls in). That way, requests are strictly ordered.

The locks didn't work in my previous attempts, but probably because I
had missed setting the can_sleep flag (now fixed in v3).

The gpio ops already wait for a reply of the corresponding type, so the
only bad thing that could happen is the same operation being called
twice (when coming from different threads) and the replies getting
mixed up between the first and the second call. OTOH I don't see much
of a problem with that. It can be fixed by adding a global lock.
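A rough sketch of what I mean by the global lock, assuming a
driver-private struct and a request helper (virtio_gpio_priv and
virtio_gpio_req_get are hypothetical names, not from the patch):

    #include <linux/gpio/driver.h>
    #include <linux/mutex.h>

    struct virtio_gpio_priv {
            struct gpio_chip gc;
            struct mutex lock;      /* serializes request/reply cycles */
            /* ... virtqueues, reply buffers, ... */
    };

    static int virtio_gpio_get(struct gpio_chip *gc, unsigned int offset)
    {
            struct virtio_gpio_priv *priv = gpiochip_get_data(gc);
            int value;

            /* mutex rather than spinlock, since we sleep while waiting
             * for the device's reply; requires gc->can_sleep = true */
            mutex_lock(&priv->lock);
            value = virtio_gpio_req_get(priv, offset); /* hypothetical helper */
            mutex_unlock(&priv->lock);

            return value;
    }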
> I think it's still about whether or not we need to allow a batch of
> requests via a queue. Consider you've submitted two requests, A and B,
> and B is done first: the current code won't work. This is because the
> reply is transported via rxq buffers rather than by reusing the txq
> buffer, if I read the code correctly.

Meanwhile I've changed it to allocate a new rx buffer for the reply
(done right before the request is sent), so everything should be
processed in the order it had been sent. Assuming virtio keeps the
order of the buffers within the queues.
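In code, the idea looks roughly like this (a sketch only; struct,
field, and message type names are invented):

    static int virtio_gpio_xfer(struct virtio_gpio_priv *priv,
                                struct virtio_gpio_msg *req,
                                struct virtio_gpio_msg *rsp)
    {
            struct scatterlist sg;

            /* queue the reply buffer on the rxq *before* the request
             * goes out, so the device always finds a buffer to write
             * the reply into */
            sg_init_one(&sg, rsp, sizeof(*rsp));
            virtqueue_add_inbuf(priv->rxq, &sg, 1, rsp, GFP_KERNEL);

            sg_init_one(&sg, req, sizeof(*req));
            virtqueue_add_outbuf(priv->txq, &sg, 1, req, GFP_KERNEL);
            virtqueue_kick(priv->txq);

            /* ... wait for the reply completion, check its status ... */
            return 0;
    }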
>> Could you please give an example of how bi-directional transmission
>> within the same queue could look ?
> You can check how virtio-blk did this in:
> https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html#x1-2500006

Hmm, I still don't see how the code would actually look (in qemu as
well as in the kernel). Just add the fetched inbuf as an outbuf (within
the same queue) ?
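To make the question concrete: is it supposed to look something like
this on the kernel side (sketch only, single queue, names invented)?

    static int virtio_gpio_xfer(struct virtio_gpio_priv *priv,
                                struct virtio_gpio_msg *req,
                                struct virtio_gpio_msg *rsp)
    {
            struct scatterlist req_sg, rsp_sg, *sgs[2];

            /* one descriptor chain on a single queue: the device reads
             * the request part and writes the reply part, virtio-blk
             * style */
            sg_init_one(&req_sg, req, sizeof(*req));
            sg_init_one(&rsp_sg, rsp, sizeof(*rsp));
            sgs[0] = &req_sg;       /* device-readable */
            sgs[1] = &rsp_sg;       /* device-writable */

            virtqueue_add_sgs(priv->vq, sgs, 1, 1, req, GFP_KERNEL);
            virtqueue_kick(priv->vq);

            /* ... wait until the chain comes back via the used ring ... */
            return 0;
    }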
>> Maybe add one new buffer per request, and one new buffer per received
>> async signal ?
> It would be safe to fill the whole rxq and do the refill, e.g. when
> half of the queue is used.

Okay, doing that now in v3: there's always at least one rx buffer, and
the requests as well as the intr receiver always add a new one. (They
get removed on fetching, IMHO.)
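For the refill-at-half-empty variant you suggested, I'd picture
something like this in the rx callback (sketch; field and helper names
are invented):

    static void virtio_gpio_rx_done(struct virtqueue *rxq)
    {
            struct virtio_gpio_priv *priv = rxq->vdev->priv;

            /* ... fetch the used buffers, dispatch replies/signals ... */

            /* top the rxq back up once half the buffers are consumed */
            if (priv->rx_buffers_queued < priv->rxq_size / 2)
                    virtio_gpio_fill_rxq(priv); /* hypothetical helper */
    }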
--mtx