>> And only do it if there's other tasks running on this CPU or so.
>
> What would happen if there weren't? I'd guess the task would
> continue running (but with a warped vruntime)?

We don't want that warping to occur - we just want to go back and burn CPU time in VM context. The problem is (as Peter pointed out) that this hw facility is incomplete: it gives us neither an event (an interrupt) nor an event key (the address we are waiting for).

So the next best thing to do is to go back to the guest, because that is most likely where we make progress, and that is where we want to be in order to make progress immediately, with the shortest latency.

( Perhaps we could also increase vruntime beyond the standard latency value to make sure any freshly woken task gets executed first if we are still looping. )
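Something like this minimal sketch, just to illustrate the idea (the hook name is made up, it would be called from the pause-filter exit path while the vcpu task is current, and it ignores vruntime's load-weight scaling for simplicity):

#include <linux/sched.h>

/*
 * Hypothetical hook: push the spinning vcpu task's vruntime beyond the
 * standard latency window, so that any freshly woken task on this
 * runqueue gets placed (and thus runs) before us while we keep looping.
 *
 * A real implementation would have to do this under the rq lock, via a
 * proper CFS interface, not by poking the sched_entity directly.
 */
static void vcpu_ple_defer_to_wakeups(void)
{
	current->se.vruntime += sysctl_sched_latency;
}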

>> _That_ would be pretty efficient, and would do the right thing when
>> two (or more) vcpus run on the same CPU, and it would also do the
>> right thing if there are repeated VM-exits due to pause filtering.
>>
>> Please dont even think about using yield for this though - that will
>> just add a huge hit to this task and wont result in any sane behavior -
>> and yield is bound to some historic user-space behavior as well.
>>
>> A gradual and linear back-off from the current timeline is more of a
>> fair negotiation process between vcpus and results in more or less
>> sane (and fair) scheduling, and no unnecessary looping.
>>
>> You could even do an exponential backoff up to a limit of 1-10 msecs
>> or so, starting at 100 usecs.
>
> Good idea, it eliminates another variable to be tuned.
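For reference, a rough sketch of what that exponential back-off could look like (all names are invented; the resulting delay would be added to the vcpu task's timeline on every repeated pause-filter exit, and cleared once the guest makes progress again):

#include <linux/kernel.h>
#include <linux/types.h>

#define PLE_BACKOFF_START_NS	 100000ULL	/* 100 usecs */
#define PLE_BACKOFF_MAX_NS	5000000ULL	/* pick something in the 1-10 msecs range */

struct vcpu_backoff {
	u64 delay_ns;
};

/* Called on every pause-filter exit: start at 100 usecs, double up to the cap. */
static u64 vcpu_ple_next_backoff(struct vcpu_backoff *b)
{
	if (!b->delay_ns)
		b->delay_ns = PLE_BACKOFF_START_NS;
	else
		b->delay_ns = min_t(u64, b->delay_ns * 2, PLE_BACKOFF_MAX_NS);

	return b->delay_ns;
}

/* Called once the vcpu has run for a while without pause exits. */
static void vcpu_ple_backoff_reset(struct vcpu_backoff *b)
{
	b->delay_ns = 0;
}

(The reset is what keeps the scheme fair: a vcpu that only occasionally hits the filter never accumulates a large penalty.)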

It could be made fully self-tuning, if the filter threshold can be tuned fast enough (an MSR write? a VM context field update?).

I.e. the 3000-cycles value itself could be eliminated as well (with just a common-sense max of, say, 100,000 cycles enforced).
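A rough sketch of such a self-tuning threshold (constants and helper names are made up, and the grow/shrink factors would need experimenting with):

#include <linux/kernel.h>

#define PLE_WINDOW_MIN		  1000U		/* arbitrary floor, in cycles */
#define PLE_WINDOW_MAX		100000U		/* common-sense maximum */

/*
 * Grow the window after a pause-filter exit (the guest spun past the
 * current threshold without getting anywhere), shrink it again while
 * the vcpu runs without triggering exits. The new value would then be
 * pushed into the VM context's pause-filter threshold (or whatever
 * update mechanism turns out to be fast enough) on the next guest entry.
 */
static unsigned int ple_window_grow(unsigned int window)
{
	return min(window * 2, PLE_WINDOW_MAX);
}

static unsigned int ple_window_shrink(unsigned int window)
{
	return max(window / 2, PLE_WINDOW_MIN);
}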