I am not 100% sure I know what you mean by "notification prevention", but let me take a stab at it.

So like most of these kinds of constructs, I have two rings (rx + tx on the guest is reversed to tx + rx on the host), each of which can signal in either direction for a total of 4 events, 2 on each side of the connection. I utilize what I call "bidirectional napi" so that only the first packet submitted needs to signal across the guest/host boundary. E.g. the first ingress packet injects an interrupt, and then does a napi_schedule and masks future irqs. Likewise, the first egress packet does a hypercall, and then does a "napi_schedule" (I don't actually use napi in this path, but it's conceptually identical) and masks future hypercalls. So that is my first form of what I would call notification prevention.

The second form occurs on the "tx-complete" path (that is, guest->host tx). I only signal back to the guest to reclaim its skbs every 10 packets, or if I drain the queue, whichever comes first (note to self: make this # configurable).

The nice part about this scheme is it significantly reduces the number of guest/host transitions, while still providing the lowest-latency response possible for single packets. E.g. send one packet, and you get one hypercall, and one tx-complete interrupt as soon as it queues on the hardware. Send 100 packets, and you get one hypercall and 10 tx-complete interrupts, as frequently as every tenth packet queues on the hardware. There is no timer governing the flow, etc.

Is that what you were asking?

> As you point out, 350-450 is possible, which is still bad, and it's at least
> partially caused by the exit to userspace and two system calls. If virtio_net
> had a backend in the kernel, we'd be able to compare numbers properly.
> :)

But that is the whole point, isn't it? I created vbus specifically as a framework for putting things in the kernel, and that *is* one of the major reasons it is faster than virtio-net... it's not the difference in, say, IOQs vs virtio-ring (though note I also think some of the innovations we have added, such as bi-dir napi, are helping too, but these are not "in-kernel"-specific kinds of features and could probably help the userspace version too).

I would be entirely happy if you guys accepted the general concept and framework of vbus, and then worked with me to actually convert what I have as "venet-tap" into essentially an in-kernel virtio-net. I am not specifically interested in creating a competing pv-net driver... I just needed something to showcase the concepts, and I didn't want to hack the virtio-net infrastructure to do it until I had everyone's blessing. Note to maintainers: I *am* perfectly willing to maintain the venet drivers if, for some reason, we decide that we want to keep them as is. My ideal, though, would be to collapse virtio-net and venet-tap together, and I suspect our community would prefer this as well.