...All those people testing these Apps at Beta and no one picked this up? Nevermind.

This far into the noise floor, there is no shame, only things to learn. Nobody's been here before.

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions

I decided to see if I could get my new Windows install working with the Benchmark. I haven't run it in some time...ever since WinNSA came out. I didn't have any trouble running the Benchmark after chasing down all the files, and installing all those updates. I ran the same task above on the machine with two 1050s;

How there can be a "best" signal that isn't worth reporting (when there are apparently 3 inferior signals that are) is beyond me, but that's apparently the standard. :^)

For best there is a check, added ~2011, of the ChiSq fit (i.e. 'Gaussian-ness'), in addition to the score used for reporting. My cursory reading suggests the Best may be reportable, maybe not, though I have yet to do a full line-by-line analysis. The suspected variation is in the multiple different implementations of that logic in the different branches, though that doesn't rule out other bugs or cumulative error.
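
To make the distinction concrete, here is a minimal Python sketch of how such logic could end up with a non-reportable "best". All names, thresholds, and data here are hypothetical, not taken from the actual client source: it assumes reportability is decided by a score threshold, while "best" is picked by the ChiSq goodness-of-fit over all candidates (and it assumes lower ChiSq means a better fit, which may not match the client's convention).

```python
# Hypothetical sketch: why the "best" Gaussian can differ from the
# reportable ones. Names and thresholds are illustrative only, not
# from the actual SETI@home client code.

REPORT_SCORE_THRESHOLD = 1.0  # assumed: signals above this score get reported

def analyze(gaussians):
    """gaussians: list of dicts with 'score' and 'chisq' keys."""
    # Reportable signals must clear the score threshold.
    reportable = [g for g in gaussians if g["score"] > REPORT_SCORE_THRESHOLD]
    # "Best" is chosen by goodness of fit over ALL candidates,
    # reportable or not (e.g. for screensaver display). Assumes
    # lower ChiSq = better fit.
    best = min(gaussians, key=lambda g: g["chisq"], default=None)
    return best, reportable

signals = [
    {"score": 1.3, "chisq": 1.9},  # reportable, poorer fit
    {"score": 1.1, "chisq": 1.7},  # reportable, poorer fit
    {"score": 0.6, "chisq": 0.8},  # not reportable, but best fit
]
best, reportable = analyze(signals)
# Here "best" is the low-score signal: most Gaussian-looking, yet
# not good enough to report -- exactly the situation described above.
```

Under those assumptions, separate criteria for "best" and "reportable" make the puzzling outcome entirely possible.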

Previous tests indicate only the Apps compiled using the SoG path have this *feature* of giving the wrong signal as Best. All the other Apps that don't use the SoG compile path, that I've seen, give the correct Best Gaussian. That can be seen in the Results I posted at Crunchers Anonymous, the previous posts in this thread, and the recent test against the ATI Apps here: http://setiathome.berkeley.edu/workunit.php?wuid=2586601005 In that test my CPU agreed with zi3v; SETI@home v8 v8.22 (opencl_ati5_SoG_cat132) windows_intelx86 failed, while SETI@home v8 v8.22 (opencl_ati_nocal) windows_intelx86 must have agreed with zi3v, as zi3v was given the *canonical result - 5833121549*. At least that time zi3v wasn't robbed of canonical. I suspect my Inconclusives would be lower if the SoGs were reporting the correct Best Gaussian.

Fortunately, there are Non-SoG builds readily available for Most Apps, https://setiathome.berkeley.edu/apps.php except maybe Windows nVidia...the most used. There is a Mac Non-SoG ATI App at Beta that's been there as long as the 8.20 SoG build. The Mac nVidia SoG App never existed as I couldn't get it to work correctly using the SoG path.

How there can be a "best" signal that isn't worth reporting (when there are apparently 3 inferior signals that are) is beyond me, but that's apparently the standard. :^)

For best there is a check, added ~2011, of the ChiSq fit (i.e. 'Gaussian-ness'), in addition to the score used for reporting. My cursory reading suggests the Best may be reportable, maybe not, though I have yet to do a full line-by-line analysis. The suspected variation is in the multiple different implementations of that logic in the different branches, though that doesn't rule out other bugs or cumulative error.

I mean, I can understand situations where the "best" signal, of any type, still wouldn't be good enough to "report" as worthy of further investigation. However, it seems to me that if one or more signals do achieve that reportable threshold, the "best" signal should be one of those. If it's not, it just seems really screwy to me. Out of sync, I guess. Perhaps the dictionary the scientists use has a different definition of "best" than the one most of us common folk use. ;^)

How there can be a "best" signal that isn't worth reporting (when there are apparently 3 inferior signals that are) is beyond me, but that's apparently the standard. :^)

For best there is a check, added ~2011, of the ChiSq fit (i.e. 'Gaussian-ness'), in addition to the score used for reporting. My cursory reading suggests the Best may be reportable, maybe not, though I have yet to do a full line-by-line analysis. The suspected variation is in the multiple different implementations of that logic in the different branches, though that doesn't rule out other bugs or cumulative error.

I mean, I can understand situations where the "best" signal, of any type, still wouldn't be good enough to "report" as worthy of further investigation. However, it seems to me that if one or more signals do achieve that reportable threshold, the "best" signal should be one of those. If it's not, it just seems really screwy to me. Out of sync, I guess. Perhaps the dictionary the scientists use has a different definition of "best" than the one most of us common folk use. ;^)

Certainly something worthy of bringing up with Eric IMO. He may well examine the stock CPU code and say 'That's not what was intended', or say 'that's correct'. In terms of purpose, the 'best' is used for Screensaver display, so it would entirely make sense to me if the intent is to choose the most 'Gaussian-ey' looking signal to display, whether reportable or not (i.e. marketing). Naturally I can also see the point of view that if the score wasn't good enough to report, then why store it at all? Unfortunately the ChiSq and null hypotheses aggravate a part of my brain that burned out on statistics long ago (as I was too good at it and fried that area of my brain), therefore I don't have definitive answers on what's meant to happen in this particular case.

Certainly something worthy of bringing up with Eric IMO. He may well examine the stock CPU code and say 'That's not what was intended', or say 'that's correct'.

Yeah, that's definitely the key determination needed before any "fixing" gets done to whichever is not the "correct" path. Alternatively, I suppose, he could alter the validator so that it ignores Best Gaussian differences if all other signals match.
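
That alternative could be sketched roughly as below. This is purely an illustration of the idea (compare every other signal class, optionally skip best-Gaussian mismatches); the function name, the result layout, and the signal-class keys are all invented for the example and bear no relation to the real BOINC/SETI@home validator code.

```python
# Illustrative sketch of a validator tweak that ignores best-Gaussian
# differences when every other signal class matches. Hypothetical
# structure, not the actual validator.

def results_match(a, b, ignore_best_gaussian=True):
    """a, b: dicts mapping signal class -> list of signal values."""
    classes = set(a) | set(b)
    if ignore_best_gaussian:
        # Skip the display-only best Gaussian when comparing.
        classes.discard("best_gaussian")
    return all(a.get(c, []) == b.get(c, []) for c in classes)

# Two results that agree on everything except the best Gaussian:
r1 = {"spike": [42.0], "gaussian": [3.1], "best_gaussian": [3.76]}
r2 = {"spike": [42.0], "gaussian": [3.1], "best_gaussian": [3.70]}
# With the tweak they validate; a strict comparison would still fail.
```

That would clear the Inconclusives without touching either compute path, though it papers over rather than fixes the underlying divergence.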

In terms of purpose, the 'best' is used for Screensaver display, ... (i.e. marketing).

Oh, goody. Perhaps they could also use that feature to hawk some limited edition SETI@home toasters to help fund the project! ;-P

Still waiting on my Seti Toaster :(

@SETIEric If a 'best Gaussian' looks more 'Gaussianey' than the reportables, why may it not necessarily be reportable ?

Heh, I suppose it's just my old age, but I tend to have a hard time keeping a straight face when I read that somebody "Tweeted" something. It always seems about as frivolous as, oh I dunno, flying toasters perhaps! ;^D

@SETIEric If a 'best Gaussian' looks more 'Gaussianey' than the reportables, why may it not necessarily be reportable ?

Heh, I suppose it's just my old age, but I tend to have a hard time keeping a straight face when I read that somebody "Tweeted" something. It always seems about as frivolous as, oh I dunno, flying toasters perhaps! ;^D

Oh you'd be surprised [as I was]. The immediacy bypasses all sorts of tradition and other impediments. Eliminates the old 'Chinese Whispers' (aka Fake news)

Oh you'd be surprised [as I was]. The immediacy bypasses all sorts of tradition and other impediments. Eliminates the old 'Chinese Whispers' (aka Fake news)

TraDITION! Oh great, first Flying Toasters and now Fiddler on the Roof flashbacks. I think it's past my bedtime.

*opens beer* ... Guess my work here is done :)

Ah, but I guess I have one more post to make before cutting some ZZZs. And back on topic, too. After 14+ hours my Windows CPU (setiathome_8.00_windows_intelx86) bench of 23se08ac.6875.22968.6.33.135 just finished. TBar had indicated that he thought this WU was a bit of a problem case. He may have been right. My original post:

Ah, but I guess I have one more post to make before cutting some ZZZs. And back on topic, too. After 14+ hours my Windows CPU (setiathome_8.00_windows_intelx86) bench of 23se08ac.6875.22968.6.33.135 just finished. TBar had indicated that he thought this WU was a bit of a problem case. He may have been right. My original post:

Well, it seems that in this case the "gold standard" agrees with SoG:
<best_gaussian>
<peak_power>3.7621715068817</peak_power>

G'night.

Exactly. Note the Higher ChiSq. Therefore the Cuda 8 special one looks more 'Gaussianey' than the 8.22 SoG one. Hence my Tweet/Query to Eric.