It seems lately there's been a bit of confusion as to how the new validator works, so I'm making this post to help explain everything to everyone.

All workunits are initially generated with a quorum of 1, so they all go through a first pass of the validator. On this first pass, the validator checks whether the result will be inserted into one of the populations of our evolutionary algorithms. If so, we'll be using it to generate new workunits, so we need to verify that the result is a good one. The validator then sets the quorum to 2 and waits for another result, and your result is set to "Completed, validation inconclusive." This doesn't mean your result was invalid or that anything was wrong; it just means the server is waiting for another result to validate it against.

If your result won't improve one of our populations, there's still a chance we'll validate it anyway. This is to make sure people aren't using bad applications or scripts to scam the server for credit. In that case, the server again raises the workunit's quorum to 2, and your result is set to "Completed, validation inconclusive." Again, nothing is wrong with your result; it's just that the server is waiting for another result to validate against.

Finally, some results are validated without being checked at all: if a result won't be used to improve our search populations and wasn't picked for extra validation, it is simply marked valid and awarded credit. Previously this happened to every result that didn't improve our searches, which is why you didn't see many results being verified.
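The three cases above can be sketched as a single decision function. This is only an illustrative sketch -- the function names, return strings, and the 50% spot-check rate are my own stand-ins, not the project's actual validator code (which lives in the BOINC server):

```python
# Hypothetical sketch of the first validator pass described above.
import random

EXTRA_CHECK_RATE = 0.5  # assumed fraction of non-improving results spot-checked


def first_pass(improves_population, rng=random):
    """One result's first trip through the validator.

    improves_population: True if the result would be inserted into one of
    the evolutionary-search populations (and so must be verified).
    """
    if improves_population:
        # Will be used to generate new workunits -> raise the quorum
        # and wait for a second result to compare against.
        return "quorum=2 (Completed, validation inconclusive)"
    if rng.random() < EXTRA_CHECK_RATE:
        # Random spot check, to catch bad applications or credit scripting.
        return "quorum=2 (Completed, validation inconclusive)"
    # Not used for the search and not spot-checked:
    # marked valid and awarded credit immediately.
    return "valid (credit granted)"
```

In all three branches the result eventually gets credit if it's good; the only difference is whether a second result is requested first.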

So that's how the new validation works. Again, seeing "Completed, validation inconclusive" doesn't mean you won't be getting credit; it just means the result went through the validator once and we're waiting for another result to compare it against. After that (if it's a valid result) it will be awarded credit.

If anyone has any other questions about the new validation system, please post in this thread and I'll be happy to answer them.

Thanks for the clarification. I was not that worried about it, but, heck, I am here for the science ...

However, I am not quite sure why you make the distinction between cases 2 and 3...

Is the primary driver for the selection between those two cases only triggered when the validator "suspects" that there is scripting/cheating going on?

Well, the reason we needed to do it is that the number of workunits that improve the search populations tends to be very low. It's higher at the beginning of a search, but once things settle down it's probably under 5% of results. Because of this, quite a lot of results weren't being checked at all.

Right now, we're validating 50% of the results that aren't inserted into the populations -- this is just to update everyone's error rate (which the BOINC server calculates as a running average of the percentage of results that validate correctly against a quorum, with a 10% minimum). Once these values settle down, we'll switch to adaptive validation: your workunits will be validated at a rate equal to your error rate. So if all your results come back valid, only 10% of them will be verified. If you start returning invalid results, your error rate will climb until all of your results are verified.
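A rough sketch of that adaptive scheme, assuming the error rate is kept as an exponentially weighted running average with a 10% floor. The smoothing constant here is an arbitrary illustrative value, not BOINC's actual one:

```python
# Illustrative sketch of adaptive validation; constants are assumptions.
MIN_ERROR_RATE = 0.10  # the 10% floor mentioned above
SMOOTHING = 0.05       # weight given to the newest result (assumed value)


def update_error_rate(error_rate, result_was_valid):
    """Running average of the fraction of results failing validation."""
    observed = 0.0 if result_was_valid else 1.0
    new_rate = (1 - SMOOTHING) * error_rate + SMOOTHING * observed
    return max(new_rate, MIN_ERROR_RATE)


def validation_probability(error_rate):
    """A host's results are verified at a rate equal to its error rate."""
    return error_rate
```

Under this model, a host that only returns valid results decays down to the 10% floor, so about 1 in 10 of its results gets double-checked; a host returning invalid results sees its rate (and therefore its check frequency) climb toward 100%.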

"Completed, waiting validation" means it hasn't gone through the validator yet. I'll see if I can change the PHP that writes "Completed, validation inconclusive" to something a little more appropriate.

Is the error rate tracked per user or per computer? Is it possible to add this metric to the appropriate section so that it is visible to us? With the quick purge rate, specific task errors can disappear from sight before we notice them. (Probably a change request for the BOINC dev team, but worthwhile, since this figure is an important part of our contributions and how we manage our systems.)

It's on a per-host basis.

It also shouldn't be too hard to make your error rate visible, I'll see what I can do.

Um, it should really be per computer, per GPU class... sad to say, some of us have mixed ATI/Nvidia systems running and may see different error rates on the different GPU classes (see the 58xx issues from a while ago)...

Of course, we have been asking for DCF to be tracked at the application level for, like, forever, and it is still not there yet...

...and people keep ignoring my idea of a Homogeneous Redundancy-like scheme for GPUs... That would keep the various classes of GPUs separate, the way HR (as best as I understand it) does for CPUs... :shrug:

HR as implemented still doesn't always work well... VP has long had problems where BOINC insists that 64-bit systems are the same as 32-bit systems and assigns them the same tasks, and (historically) 50% or more of the time they fail to compare. Usually the people running a 64-bit OS lose out, because the tie-breaker is more likely to be another 32-bit system...

The problem, of course, is that the systems aren't quite close enough in the last digits of the returned numbers to meet the project's comparison needs. I think the guy running the project, after a year of pulling his hair out, has given up trying to solve the issue...

I only have one 64-bit system at the moment, though if I get two more copies of Win7 they will be 64-bit. I'll make the switch this week, leaving me with only one 32-bit system. Then I'll have to watch whether I get the high failure rate I saw a couple of years ago with a 64-bit XP system (a dual-Xeon Dell)...

With GPUs it will be worse, because they'll likely have to track chip families to prevent situations like what happened at MW this past month with the 58xx-series versus 48xx-series ATI cards...

It is a good idea, though... but like most good ideas, it won't get past DA...