This adds multiqueue support to the virtio_net driver. The feature is negotiated through VIRTIO_NET_F_MULTIQUEUE.

The driver expects the number of rx/tx queue pairs to equal the number of vcpus. To maximize performance with these per-cpu rx/tx queue pairs, some optimizations were introduced:

- Txq selection is based on the processor id in order to avoid contending a lock whose owner may exit to the host (a sketch follows this list).
- Since the rxq/txq pairs are per-cpu, affinity hints are set to the cpu that owns each queue pair.
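As a rough sketch of the first point, picking the txq that belongs to the current cpu could look like the following; the helper name virtnet_select_txq and the modulo fallback are illustrative assumptions, not code taken from this patch:

static int virtnet_select_txq(struct virtnet_info *vi)
{
	/* Safe only because the tx path runs with bottom halves
	 * disabled, so the cpu cannot change under us. */
	return smp_processor_id() % vi->num_queue_pairs;
}

Selecting by processor id keeps each cpu on its own tx lock, so a vcpu that exits to the host while holding its lock cannot stall transmission on the other cpus.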

 	/* If all buffers were filled by other side before we napi_enabled, we
 	 * won't get another interrupt, so process any outstanding packets
 	 * now.  virtnet_poll wants re-enable the queue, so we disable here.
 	 * We synchronize against interrupts via NAPI_STATE_SCHED */
-	if (napi_schedule_prep(&vi->napi)) {
-		virtqueue_disable_cb(vi->rvq);
+	if (napi_schedule_prep(&rq->napi)) {
+		virtqueue_disable_cb(rq->vq);
 		local_bh_disable();
-		__napi_schedule(&vi->napi);
+		__napi_schedule(&rq->napi);
 		local_bh_enable();
 	}
 }
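The hunk above moves from a single device-wide NAPI context (vi->napi) to one per receive queue. A minimal sketch of the per-queue state these hunks imply is below; the field names follow the diffs, but the struct layout and the back pointer are an illustrative reconstruction, not copied from the patch:

struct receive_queue {
	struct virtqueue *vq;		/* rx virtqueue for this queue */
	struct napi_struct napi;	/* per-queue NAPI context */
	struct delayed_work refill;	/* per-queue oom refill work */
	struct virtnet_info *vi;	/* back pointer (assumed) */
};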

 	/* In theory, this can happen: if we don't get any buffers in
 	 * we will *never* try to fill again. */
 	if (still_empty)
-		queue_delayed_work(system_nrt_wq, &vi->refill, HZ/2);
+		queue_delayed_work(system_nrt_wq, &rq->refill, HZ/2);
 }
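With refill now embedded in each receive_queue, the work callback can recover its queue via container_of; the hunk above is the tail of that callback. A hedged sketch of how the whole thing might read after the conversion (the napi_disable/enable bracketing is an assumption carried over from the single-queue driver):

static void refill_work(struct work_struct *work)
{
	struct receive_queue *rq =
		container_of(work, struct receive_queue, refill.work);
	bool still_empty;

	napi_disable(&rq->napi);
	still_empty = !try_fill_recv(rq, GFP_KERNEL);
	virtnet_napi_enable(rq);

	/* In theory, this can happen: if we don't get any buffers in
	 * we will *never* try to fill again. */
	if (still_empty)
		queue_delayed_work(system_nrt_wq, &rq->refill, HZ/2);
}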

-	/* Make sure we have some buffers: if oom use wq. */
-	if (!try_fill_recv(vi, GFP_KERNEL))
-		queue_delayed_work(system_nrt_wq, &vi->refill, 0);
+	for (i = 0; i < vi->num_queue_pairs; i++) {
+		/* Make sure we have some buffers: if oom use wq. */
+		if (!try_fill_recv(vi->rq[i], GFP_KERNEL))
+			queue_delayed_work(system_nrt_wq,
+					   &vi->rq[i]->refill, 0);
+		virtnet_napi_enable(vi->rq[i]);
+	}
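For symmetry, teardown would walk the same per-queue array. The sketch below of a matching close path is an illustrative reconstruction, not a hunk from this patch; the point is the ordering, where pending refill work is cancelled before NAPI is disabled so refill_work cannot re-arm it:

static int virtnet_close(struct net_device *dev)
{
	struct virtnet_info *vi = netdev_priv(dev);
	int i;

	for (i = 0; i < vi->num_queue_pairs; i++) {
		/* Stop refill_work first so it cannot re-enable napi. */
		cancel_delayed_work_sync(&vi->rq[i]->refill);
		napi_disable(&vi->rq[i]->napi);
	}

	return 0;
}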