Welcome to the 10th annual Tour de Primes. 2 is the first prime number...and the only even one. This makes it unique among prime numbers. Therefore, February is declared Prime month...being the 2nd month of the year. :) And there's no better way to pay homage to a prime number than to go out and find one. :) More precisely, a Top 5000 prime.

For the month of February, an informal competition is offered. There are no challenge points to be gained... just a simple rare jersey at the end of the month to add to your badge list. No pressure or stress other than what you put on yourself. :)

For 2018, we're adding four new badges you can win -- and these are available to everyone!

Red Jersey -- discoverer of largest prime

Yellow Jersey -- prime count leader (tiebreaker will be prime score)

Green Jersey -- points (prime score) leader

Polka-dot Jersey -- on the 19th of February we'll have a "Mountain Stage" and award the Polka-dot Jersey to the one who finds the most primes on that day (tiebreaker will be prime score for that day).

Prime badge -- awarded to everyone who finds an eligible prime during the month of February. This is a counter badge, so if you find more than one prime it will show how many you've found, up to 99.

Mega prime badge -- awarded to everyone who finds a mega prime during February. This is a counter badge.

Mountain Stage prime badge -- awarded to everyone who finds an eligible prime during the Mountain Stage. This is a counter badge.

Mountain Stage mega prime badge -- awarded to everyone who finds a mega prime during the Mountain Stage. This is a counter badge.

As with the last few years, for all primes (BOINC and PRPNet) we're using the new reporting system whereby the prime's date of discovery determines whether it's eligible for the Tour de Primes. Prior to 2014, the date of verification for BOINC primes was used while the discovery date was used for PRPNet primes. The current system is more intuitive and fairer.

Note that SGS-LLR and GFN-15 are too small to be reported to the Top 5000 primes list and are therefore not eligible for the 2018 Tour de Primes.

Currently, the fastest opportunities to find Top 5000 primes are with the PPSE (LLR) and GFN-16 (65536) projects. Of course, should someone find a prime in the mega-prime searches, this would certainly give them a good shot at the green jersey. Not a guarantee, however, as in 2017 there were several mega primes found in the Tour de Primes. Overall, in 2017 we averaged more than one mega prime per week for the entire year, so you might need more than "merely" a mega prime to take home green. In 2017 there were 10 mega primes found during Tour de Primes.

At the current time, PRPNet is not running. If that changes, all ports in PRPNet would be available for the competition.

To participate in BOINC PPSE (LLR), GFN-16, or any other eligible LLR or Genefer project, all you have to do is select it in your PrimeGrid preferences. AP27 sequences are not reportable at T5K, so are not eligible for Tour de Primes.

Tip #1: He (or she) who finds the prime FIRST is the discoverer of the prime. It's a competition between you and your wingman. While having a fast computer helps, your computer is only useful when it's running a task. If you have a cache of tasks sitting on your computer waiting to run, chances are your wingman will return the task before you've even started it. Setting both BOINC cache settings to "0 days" is strongly recommended. People with slow computers find primes all the time because their wingman downloaded the task yesterday but won't start running it until tomorrow. Set your cache to 0 days!

Your mileage may vary. What works for me may not work for you. Before TdP starts, take some time and experiment and see what works best on your computer.

If you have an Intel CPU with hyperthreading, either turn off the hyperthreading in the BIOS, or set BOINC to use 50% of the processors. (But see below for exceptions.)

If you're using a GPU for other tasks, it may be beneficial to leave hyperthreading on in the BIOS and instead tell BOINC to use 50% of the CPUs. This will allow one of the hyperthreads to service the GPU.

Use LLR's multithreaded mode. It requires a little bit of setup, but it's worth the effort. Follow these steps:

Create an app_config.xml file in the directory C:\ProgramData\BOINC\projects\www.primegrid.com\ (or wherever your BOINC data directory is located). For a quad core CPU, the file should contain the following contents. Change the two occurrences of "4" to the number of actual cores your computer has. The example below is for PPSE. Change the app name (2 places) to whatever LLR app you're running. The app names are listed on your task selection page.
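A sketch of what that file might contain, for a 4-core machine running PPSE ("llrPPSE" is used here as an assumed app name; take the exact name from your task selection page):

```xml
<app_config>
   <app>
      <name>llrPPSE</name>
      <fraction_done_exact/>
   </app>
   <app_version>
      <app_name>llrPPSE</app_name>
      <cmdline>-t 4</cmdline>
      <avg_ncpus>4</avg_ncpus>
   </app_version>
</app_config>
```

"-t 4" tells LLR to run 4 worker threads, and <avg_ncpus> tells BOINC to budget 4 CPUs for the task; those are the two "4"s to change.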

After creating the file, click on "Options/Read config files". You should then restart BOINC or reboot.

The first time BOINC downloads (in this example) a PPSE-LLR task, it may act a little strange and download 4 tasks instead of 1. The run times on this first set of tasks may look a bit strange too. This is normal. This will also occur anytime BOINC downloads more than one task at a time. This can be avoided by setting "Use at most [ 1 ] % of the CPUs" before you download PPSE tasks. After one task has been downloaded, increase the percentage.

Some people have observed that when using multithreaded LLR, hyperthreading is actually beneficial. I don't use hyperthreading myself, but I encourage you to experiment and see what works best for you.

Tips for GFN:

Only run GFN on a GPU. Use your CPU for LLR tasks where it will be much more efficient.

Unless you have a really slow GPU and a really fast CPU, leave a CPU core free to service the GPU. You'll want the more powerful GPU running at full speed, even if it slows down the CPU somewhat. A hyperthread should be sufficient if your CPU supports hyperthreading. For example, on a 4 core CPU (without hyperthreading), you could set BOINC to "use 75% of the CPUs" to reserve one core for the GPU.

____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

The first time BOINC downloads an SoB task, it may act a little strange and download 4 tasks instead of 1. The run times on this first set of tasks may look a bit strange too. This is normal. This will also occur anytime BOINC downloads more than one task at a time.

For this tip, I thought an example might be helpful. So...

Example:

Computer A (an i5 running 100% of the CPUs) is set to use 4-core multi-thread LLR for ESP tasks. The first time the config file is read, Computer A will download 4 ESP tasks and begin working on 1 task on 4 cores while the other 3 tasks wait (one would want to abort these three extra tasks for the TdP).

Computer A is then switched to PPSE tasks. As the ESP task completes, it will download and run 4 PPSE tasks all at the same time (assuming that PPSE has NOT been set to run 4-core multi-thread).

Computer A is then switched back to ESP. In this case, Computer A will again download 4 ESP tasks and begin a single task with 4-core multi-thread while the other 3 wait.

As Mike noted above, one can avoid the extra downloads with his suggestion. Of course, another option is to have all applications that will be used running the same multi-thread settings (i.e., in the above example, if PPSE were set to run 4-core multi-thread, then switching between it and ESP would not result in extra task downloads).

Right now (well, not right now but in about 10 days), what is the most likely place to get one of these? PPSE or GFN 16? By checking the "Newly reported primes" it looks too close to call...

Edit: they are both at the opening post of this thread and I believe the answer might depend on the hardware you have. The question still stands: assuming I can crunch the same number of tasks of each sub-project, which one has been producing more primes per 100 000 tasks in the past few months?
____________
676754^262144+1 is prime

Right now (well, not right now but in about 10 days), what is the most likely place to get one of these? PPSE or GFN 16? By checking the "Newly reported primes" it looks too close to call...

Edit: they are both at the opening post of this thread and I believe the answer might depend on the hardware you have. The question still stands: assuming I can crunch the same number of tasks of each sub-project, which one has been producing more primes per 100 000 tasks in the past few months?

Unless you've managed to build a computer with a powerful GPU and no CPU at all, I'd run both.

But to answer your question, PPSE right now is about 40K digits smaller than GFN16, so it should be somewhat easier to find primes with PPSE.

We do not keep the necessary data for PPSE to tell what the ratio of primes to composites is, but on Genefer 16 it's about 1 in every 16500 candidates.
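To put that 1-in-16,500 figure in perspective, here's a rough sketch (my own, treating each candidate as an independent trial, which is approximately true):

```python
# Chance of finding at least one GFN-16 prime in k tasks, assuming the
# ~1 in 16,500 prime-to-candidate ratio quoted above and treating each
# candidate as an independent trial.
def p_at_least_one(k: int, ratio: float = 1 / 16500) -> float:
    return 1.0 - (1.0 - ratio) ** k

# Roughly: 1,000 tasks give about a 6% chance; you need ~11,400 tasks
# for a coin-flip (50%) chance of at least one prime.
```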
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

Unless you've managed to build a computer with a powerful GPU and no CPU at all, I'd run both.

I have 2 mid-tier GPUs and CPUs. My only megaprime popped during TdP 6 years ago. I'm planning to run 2 sub-projects, one for small primes and another one for mega, in the hope of being lucky again. What I haven't decided is which sub-project to run on what (e.g. GFN16 on GPU and PPS Mega, or GFN17 Mega/18 and PPSE).

Thanks for your answer and good luck everyone.
____________
676754^262144+1 is prime

Unless you've managed to build a computer with a powerful GPU and no CPU at all, I'd run both.

I have 2 mid-tier GPUs and CPUs. My only megaprime popped during TdP 6 years ago. I'm planning to run 2 sub-projects, one for small primes and another one for mega, in the hope of being lucky again. What I haven't decided is which sub-project to run on what (e.g. GFN16 on GPU and PPS Mega, or GFN17 Mega/18 and PPSE).

Thanks for your answer and good luck everyone.

I just realized you may have asked the wrong question.

You asked, in essence, "What's the ratio of prime to composite candidates?"

I think the better question is "How many primes were found?"

The important metric isn't "primes/tests", it's "primes/time". And that's exactly what the "prime score" is supposed to represent, i.e., how hard it is to find a prime. It's always easier to find smaller primes, all other things being equal.

But things are definitely not equal here, since GFN runs on the GPU. Assuming GFN16 runs faster on your GPU than PPSE does on your CPU, GFN16 probably gives you the best shot of finding a prime. You need to take into account not only speeds, but also the number of CPU cores available, when making that determination. Whichever allows you to run the most tests over time is the one you should do in order to have the best shot at the TdP Prime badge.

I'm running both. The chance of me finding any prime is small. The chance of a mega prime is much smaller. I'll maximize my chances of finding the smaller prime. Maybe I'll run a PSP and a GFN21 on the 19th. :)

____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

Am I wrong, or in past years was there a specific date in February that gave a special badge?
____________93*10^1029523-1 REPDIGIT MEGA PRIME :) :) :)
57*2^3339932-1 MEGA PRIME :)
10994460^131072+1 GENERALIZED FERMAT :)
31*332^367560+1 CRUS PRIME :)
Proud member of team Aggie The Pew. Go Aggie!

I never really paid attention to mountain stage as I had no hope of getting the polkadot jersey. However now that many more people can earn the special badge, details matter more. Does mountain stage count only primes returned during that period or must the task resulting in a prime have been issued as well as returned during that 24 hour period? That would be a significant detail for slower machines.

I never really paid attention to mountain stage as I had no hope of getting the polkadot jersey. However now that many more people can earn the special badge, details matter more. Does mountain stage count only primes returned during that period or must the task resulting in a prime have been issued as well as returned during that 24 hour period? That would be a significant detail for slower machines.

Primes returned -- they don't have to be issued during the mountain stage.

Don't write off your chances for the polka-dot jersey -- that jersey is largely a matter of randomness (aka "luck"). Last year I found three primes to take the mountain stage jersey, and in the previous two years two primes took the mountain stage jersey. A single PPSE or GFN-16 prime could very well take it.
____________

I never really paid attention to mountain stage as I had no hope of getting the polkadot jersey. However now that many more people can earn the special badge, details matter more. Does mountain stage count only primes returned during that period or must the task resulting in a prime have been issued as well as returned during that 24 hour period? That would be a significant detail for slower machines.

As with the entire month-long TdP, the criteria for counting towards the Mountain Stage is the time the prime is reported to the server. It does not matter when the task was sent.

I'm not the least bit worried about people "hoarding" tasks to return during the Mountain Stage because A) the vast majority of tasks won't be prime, and, most importantly, B) if you hoard tasks that just means your wingman will most likely be the prime finder and you'll be the double checker. You have to be the prime finder for TdP. Double checkers get nothing.
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

I'm not the least bit worried about people "hoarding" tasks to return during the Mountain Stage because A) the vast majority of tasks won't be prime, and, most importantly, B) if you hoard tasks that just means your wingman will most likely be the prime finder and you'll be the double checker. You have to be the prime finder for TdP. Double checkers get nothing.

that brings me to an idea for a new rating: the most unfortunate double checker
shortest time between prime reporting and double check reporting
____________
Sysadm@Nbg
my current lucky number: 3651*2^1521717+1
PSA-PRPNet-Stats-URL: http://u-g-f.de/PRPNet/

that brings me to an idea for a new rating: the most unfortunate double checker
shortest time between prime reporting and double check reporting

At least once I have had to check the web server logs to get millisecond-resolution timing to determine who was the prime finder. BOINC only records time in seconds, and both wingmen reported the tasks in the same second.
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

Since only the initial finder gets anything, there is just one option for increasing the chance of being first :) People can grab many results before the start, process them, and send them in after the challenge has started. That first wave can get some initial advantage, but the challenge is for the whole month, not for one day :)
My tactic will be: mega on CPU and GFN 16 on GPU. To earn at least one badge. It will be fair after all these years on PrimeGrid :)
____________93*10^1029523-1 REPDIGIT MEGA PRIME :) :) :)
57*2^3339932-1 MEGA PRIME :)
10994460^131072+1 GENERALIZED FERMAT :)
31*332^367560+1 CRUS PRIME :)
Proud member of team Aggie The Pew. Go Aggie!

My tactic will be: mega on CPU and GFN 16 on GPU. To earn at least one badge. It will be fair after all these years on PrimeGrid :)

I wouldn't be so optimistic about megaprimes. I crunched MEGA + GFN-17 (Mega) in December. No MEGA was found for 22 days. No GFN-17 (Mega) was found for 4 months. Now we have had 2 GFN-17 (Mega) and 1 PPS-MEGA within 12 days in January.

I'm going to put my hope/hardware into PPSE and maybe GFN-16, since I don't want to go mad. :D
I would like to solve the issue of OpenCL using 100% CPU, though.

My tactic will be: mega on CPU and GFN 16 on GPU. To earn at least one badge. It will be fair after all these years on PrimeGrid :)

I wouldn't be so optimistic about megaprimes. I crunched MEGA + GFN-17 (Mega) in December. No MEGA was found for 22 days. No GFN-17 (Mega) was found for 4 months. Now we have had 2 GFN-17 (Mega) and 1 PPS-MEGA within 12 days in January.

I'm going to put my hope/hardware into PPSE and maybe GFN-16, since I don't want to go mad. :D
I would like to solve the issue of OpenCL using 100% CPU, though.

Last year I discovered 1 PPSE prime in February without knowing about TdP.

All you say is right: but since (I expect) the number of crunched candidates will rise by a factor of 2 or 3 in February, more WUs will be crunched in the same amount of time as in ordinary months.
And since pure luck is the only part that is always sure: I will rely on luck :)
Of course PPSE is the "better choice", but I did some initial tests and cannot get 100% CPU on such short tasks when running them multithreaded. Maybe having each of my computers run a different Proth project is a better option :)
I have 10 days to find a better strategy or to confirm this one :)
____________93*10^1029523-1 REPDIGIT MEGA PRIME :) :) :)
57*2^3339932-1 MEGA PRIME :)
10994460^131072+1 GENERALIZED FERMAT :)
31*332^367560+1 CRUS PRIME :)
Proud member of team Aggie The Pew. Go Aggie!

I hate luck. She's not fair. :(
Maybe there will be more megaprimes, but I still would not be the discoverer. :)
I agree that wishing luck and crunching is better than not crunching.

I observed that PPSE (and also SGS) multithreaded tasks are very, very slow. I would crunch without multithreading.

They are slow, and SGS does not even count as a prime (since it is too small for T5K), but slowness is not the problem: the most important thing is to be first :)
____________93*10^1029523-1 REPDIGIT MEGA PRIME :) :) :)
57*2^3339932-1 MEGA PRIME :)
10994460^131072+1 GENERALIZED FERMAT :)
31*332^367560+1 CRUS PRIME :)
Proud member of team Aggie The Pew. Go Aggie!

I have just realized that instead of setting CPU usage in BOINC to 50%, I can modify app config to tell BOINC that it uses more CPUs - e.g. set app to use 4 threads (-t4), and max/avg CPUs to 8. This should have the same effect as changing CPU usage to 50%, and would save me some preparation work before challenges.
____________

...I can modify app config to tell BOINC that it uses more CPUs - e.g. set app to use 4 threads (-t4), and max/avg CPUs to 8.

Just set <avg_ncpus>.

The "max_ncpus" tag is not used in app_config. It will be ignored if you put it in there. The computer won't care, but any human looking at that file, or anyone with whom you share the file, might get the mistaken idea that the tag does something.
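The earlier poster's approach would then look something like this (app name and numbers illustrative): BOINC budgets 8 logical CPUs per task while LLR itself runs only 4 threads, leaving the remaining hyperthreads idle, much like setting "use 50% of the CPUs".

```xml
<app_config>
   <app_version>
      <app_name>llrPPS</app_name>
      <cmdline>-t 4</cmdline>
      <!-- BOINC reserves 8 logical CPUs per task, but LLR spawns only
           4 worker threads, so half the hyperthreads stay idle -->
      <avg_ncpus>8</avg_ncpus>
   </app_version>
</app_config>
```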
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

With TdP starting in a day, I looked back at 2016 and 2017 TdP for GFN16.

During TdP 2016, GFN16 b advanced from ~6M to ~10M, a huge 4M range back then.
During TdP 2017, GFN16 b advanced from ~17M to ~22M, a 5M range.

We are at ~29.5M, let's say ~30M.
I wouldn't be surprised if we reach b ~36M at the end of TdP 2018.
This may give us on average about 2 GFN16 primes a day.

Recent GFN16 primes are about position 2050 within T5K.
Recent PPSE primes are about position 2775 within T5K.
With SGS pushed off the T5K limit last year, PPSE primes are safe...for a considerable time. And GFN16 primes are safe from PPSE for a couple more years, I guess.

But even if we reach 36M as the upper limit, a GFN16 prime will not have 500K digits (it will have around 495,617 digits at that upper limit).

But even if we reach 36M as the upper limit, a GFN16 prime will not have 500K digits (it will have around 495,617 digits at that upper limit).

I hope that one day GFN 16 will pass 500K

If I did the math right, b=42,598,524 is where we get to 500K digits. That's not all that far away.

The answer would be roughly the same as the starting b for GFN17-mega. However, since you did not do the math right (see dukebg/JeppeSN), does that mean GFN17-mega also omitted a few potential mega candidates? The proper starting point of GFN17-mega would be 42597774 -- was that the one used?
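The digit arithmetic behind these last few posts is easy to check: a GFN-16 candidate is b^65536 + 1, which has floor(65536·log10(b)) + 1 decimal digits (the "+1" term never carries at these sizes). A quick sketch (function name mine):

```python
from math import floor, log10

def gfn_digits(b: int, n: int = 65536) -> int:
    """Decimal digits of the generalized Fermat number b^n + 1 (b >= 2).

    n = 65536 is GFN-16; pass n = 131072 for GFN-17.
    """
    return floor(n * log10(b)) + 1
```

By this formula, b around 30M gives roughly 490K digits, and the 500K-digit boundary for GFN-16 falls in the 42.59M-42.60M range, consistent with the corrected figure above.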

With the start of the 2018 Tour de Primes only minutes away, it's time for a...

* * * PUBLIC SERVICE ANNOUNCEMENT * * *

Since TdP is all about finding primes, it's a good idea to set up your prime reporting preferences now, if you haven't already done so.

To set up your prime reporting preferences, click on Your account, located on the left side menu under "Returning Participants". Then click on PrimeGrid preferences, under "Preferences". Scroll down a little bit, and click on "Primary (default) preferences: Edit PrimeGrid preferences".

Now fill in the section under "Reporting primes to the Prime Pages". Click both check boxes, and enter your REAL first and last names. They're required. Entering Daffy Duck or John Q. Public will only delay getting your primes reported. Finally, scroll all the way to the bottom of the page and click the "Update preferences" button.

Alternatively, if you wish to remain anonymous, you can write "report anonymously" in the name field, or "give to DCer" and we'll report the prime in the name of the double checker.
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

There's a variety of steps that need to be taken before we can display a prime to the public. For small primes, this usually takes less than a day. For large primes, it can take a week or more.

It also depends on whether the person who discovered the prime has set their prime reporting preferences. If they haven't done so, or have not granted us permission to report for them, we have to wait for them. This can delay reporting the prime by more than a month in the worst case scenario.

While a prime discovered during February counts as of the time of its discovery, we may not be able to display the prime until later. That's what I mean by "hidden prime".
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

I see.
And I assume all these steps are taken only after a wingman has double checked the result.

You know what they say about "assume". :)

As long as we don't have reason to suspect a false prime, i.e., that an error caused the computer to incorrectly say the candidate is prime, we'll start some of those steps as soon as we see the first prime result.
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

... As long as we don't have reason to suspect a false prime, i.e., that an error caused the computer to incorrectly say the candidate is prime, we'll start some of those steps as soon as we see the first prime result.

Thanks for the clarification.

Michael Goetz wrote:

Eudy wrote:

I see.
And I assume all these steps are taken only after a wingman has double checked the result.

You know what they say about "assume". :)

I had to Google for this one, had no idea.
I suppose / imagine / believe / think you're referring to this.
LOL
It's never too late to learn something new !

I had to Google for this one, had no idea.
I suppose / imagine / believe / think you're referring to this.
LOL
It's never too late to learn something new !

You never heard that one? It's a great way to teach people never to assume anything, and to verify everything. Stops you from making a lot of mistakes.
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

Anyone seeing issues with GFN 16-17 MEGA not getting tasks ATM?
Looks like only one of my rigs is not getting any PG GPU tasks.
Other BOINC GPU tasks run fine. Going to remove and re-add PG to see if that fixes it.
Thanks for the Reply ^^

Anyone seeing issues with GEN 16-17 MEGA not getting Tasks ATM?
Looking like only One of my Rigs are not getting any PG GPU Task.
Other BOINC GPU Tasks run fine. Going to remove and add PG see if that fixes is.
Thanks for the Reply ^^

Could be this (this was my issue). Running fine now.
<no_alt_platform>1</no_alt_platform>
____________
Crunching@EVGA The Number One Team in the BOINC Community. Folding@EVGA The Number One Team in the Folding@Home Community.

Anyone seeing issues with GEN 16-17 MEGA not getting Tasks ATM?
Looking like only One of my Rigs are not getting any PG GPU Task.
Other BOINC GPU Tasks run fine. Going to remove and add PG see if that fixes is.
Thanks for the Reply ^^

Could be this (This was my issue) Running fine now.
<no_alt_platform>1</no_alt_platform>

To be clear to anyone else reading this, you do NOT want this tag. Its presence was causing the problem. Don't add this to your setup!
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

For example, you can parse hostid results with a programming language. Not great for the server, because you have to open one new page per workunit, but that's it.
It would be better to parse XML files than HTML pages, if the server supplied them.

For example, you can parse hostid results with a programming language. Not great for the server, because you have to open one new page per workunit, but that's it.
It would be better to parse XML files than HTML pages, if the server supplied them.

If you want to use a screen scraper to gather data, that's fine, as long as it's single threaded. I.e., don't start scraping a second page until you're finished with the first page.

____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

For example, you can parse hostid results with a programming language. Not great for the server, because you have to open one new page per workunit, but that's it.
It would be better to parse XML files than HTML pages, if the server supplied them.

If you want to use a screen scraper to gather data, that's fine, as long as it's single threaded. I.e., don't start scraping a second page until you're finished with the first page.

It is sufficient to parse the workunit pages. They have the timing data for all the results for each workunit and even say with the "canonical result" which one is the winner.

There's also a strategy of aborting sure losers to gain an edge, i.e., less effort spent on tasks that will not win means more time spent on tasks that may win. But it requires you to sit there and monitor each work unit in progress. It could be done effectively with software. I don't think many people are doing that.
The chances of finding a prime are unchanged, merely the chance of being the discoverer.
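As a sketch of what such monitoring software could look like while staying polite to the server (URL pattern and function names are my own illustration, following the single-threaded scraping rule stated elsewhere in this thread):

```python
import time

def scrape_workunits(wu_ids, fetch, delay=1.0):
    """Fetch workunit status pages strictly one at a time.

    `fetch` is any callable that takes a URL and returns the page body,
    e.g. a thin wrapper around urllib.request.urlopen. The URL pattern
    below is illustrative; adjust it to the actual site layout. The
    per-request delay keeps the load on the server low.
    """
    pages = {}
    for wu in wu_ids:
        url = f"https://www.primegrid.com/workunit.php?wuid={wu}"
        pages[wu] = fetch(url)  # finish this page completely...
        time.sleep(delay)       # ...then pause before requesting the next
    return pages
```

Deciding to abort a sure loser would happen after parsing each returned page; the key point is that the requests are strictly sequential, never parallel.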

I've got some tricks that have me winning a large majority of my races. (Except probably the GFN ones.)

Mind to share ?

Yes. :) Somebody's gotta be
<-- that guy. Better you than me! ;)

Ken_g6 has won 79.2% of the TdP races he has participated in, so far. Man, that took a long time to load: 1661 validated results (1517 of them from the challenge; 1202 winners). I should add an optional starting date to limit the work for measuring challenge data.

You virtually had that one in the bag.
Your task took 63 seconds less time to run than Scott's, and you received the task more than 6 minutes before he did.

Maybe, but it is Scott, and I dare say nothing.
For some reason mine was not reported in time; that is how it is.
____________
Crunching@EVGA The Number One Team in the BOINC Community. Folding@EVGA The Number One Team in the Folding@Home Community.

If this is true, then this is all a lie or a cheat.
EDIT: Oops, is that supposed to be a high performance secret?
____________
Crunching@EVGA The Number One Team in the BOINC Community. Folding@EVGA The Number One Team in the Folding@Home Community.

There's also a strategy of aborting sure losers to gain an edge, ie. less effort spent on tasks that will not win means more time spent on tasks that may win. But it requires you to sit there and monitor each work unit in progress. It could be done effectively with software. I don't think many people are doing that.
The chances of finding a prime are unchanged, merely the chance of being the discoverer.

Consider this strategy: Every time you notice that the other person has already handed in some well-formed result (now pending validation by you), you just abort that job and pick a new one.

That would be destructive to the community. Seen from a global perspective, a lot of computation would be wasted. And if too many people used that strategy, we would never have enough validations, so we would end up with an ever increasing number of work units (tasks) that would be in limbo and waiting for their second result indefinitely.

There's also a strategy of aborting sure losers to gain an edge, ie. less effort spent on tasks that will not win means more time spent on tasks that may win. But it requires you to sit there and monitor each work unit in progress. It could be done effectively with software. I don't think many people are doing that.
The chances of finding a prime are unchanged, merely the chance of being the discoverer.

Consider this strategy: Every time you notice that the other person has already handed in some well-formed result (now pending validation by you), you just abort that job and pick a new one.

That would be destructive to the community. Seen from a global perspective, a lot of computation would be wasted. And if too many people used that strategy, we would never have enough validations, so we would end up with an ever increasing number of work units (tasks) that would be in limbo and waiting for their second result indefinitely.

/JeppeSN

Similar to the tragedy of the commons. The scarce resource is validations.

EDIT: If this becomes a problem, the solution is for the server to hide the status of all other tasks of a workunit from a participant who has a task in progress.

This policy should be applied universally, stopping the problem from ever occurring. Hide all other tasks of a workunit from a participant who has a task in progress for that workunit. However, this could be circumvented by a confederation of participants advising on each other's tasks. So I'm not sure what to do about it.

That's way too tedious, though. The beauty of boinc is being fully transparently automated for people, so they don't have to bother with anything. Don't expect 80% of people to have any custom settings at all. Doing something this actively, with this much involvement – I doubt even 1% would go for it.

As an example: there are 344045 crunchers according to the front page (although maybe that is all ever registered, not active users, I don't know). Manual Sieving has only 154 unique people that have done it. Just 0.04% of all users.

There's also a strategy of aborting sure losers to gain an edge, ie. less effort spent on tasks that will not win means more time spent on tasks that may win. But it requires you to sit there and monitor each work unit in progress. It could be done effectively with software. I don't think many people are doing that.
The chances of finding a prime are unchanged, merely the chance of being the discoverer.

Consider this strategy: Every time you notice that the other person has already handed in some well-formed result (now pending validation by you), you just abort that job and pick a new one.

That would be destructive to the community. Seen from a global perspective, a lot of computation would be wasted. And if too many people used that strategy, we would never have enough validations, so we would end up with an ever increasing number of work units (tasks) that would be in limbo and waiting for their second result indefinitely.

/JeppeSN

I really doubt that strategy would be destructive for the community. More likely, the community would not notice that strategy at all.
Why? The answer is given below: only 0.0x% of all users would do that.
In the past (and, I can assume, right now) there have been a few hosts that produce 1-second errors, and those hosts, in combination with a big cache, can do many invalid WUs per day. Did that behavior do anything (wrong or bad) to PrimeGrid? It did not in the past and will not in the future.
____________93*10^1029523-1 REPDIGIT MEGA PRIME :) :) :)
57*2^3339932-1 MEGA PRIME :)
10994460^131072+1 GENERALIZED FERMAT :)
31*332^367560+1 CRUS PRIME :)
Proud member of team Aggie The Pew. Go Aggie!


That's a reasonable argument under ordinary (past) conditions. However, when there's a bot available to do all the work of aborting units, the proportion of people using it could rise significantly because of the badge incentive.


A bot could do this, but someone would need to write it and install it. The future is coming, and sooner or later it will become reality. Even in that case, you can't do anything about it, and again, what percentage of PrimeGrid users would do that?
The target interest of those users is obviously prime finding, so they know something about math. Given that all the programs needed for sieving and processing are freely available, who would do such extra work for nothing?
Users like me (I am a prime-hunting user, not a badge/score-hunting user) would make their own sieve, pick their own range, and process it at home. In that case, PrimeGrid would be the last place they'd visit.

Similar to the tragedy of the commons. The scarce resource is validations.

Yes.

There are various degrees of "selfishness" that come from wanting to optimize the chance of being the discoverer (as opposed to wingman) instead of optimizing the credit.

Choosing multi-threading in a case that lowers the overall throughput, is a milder example.

(Concrete example: Suppose you can choose between running four tasks simultaneously, each task on its own processor core, and running one task at a time (multi-threaded, so all cores participate in that single task). For the sake of this example, assume that in the first setup you can finish the four tasks in half an hour, while in the multi-threaded setup the one task finishes in ten minutes. If you choose the first configuration, you contribute a total of 8 tasks per hour, but it takes 30 minutes for each task, so you will often be a wingman. If you choose the other configuration, where each task is returned after ten minutes, you will more often be the discoverer, but you contribute only 6 tasks per hour. This is why choosing the second option may be considered somewhat selfish, in some people's opinion.)
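The arithmetic in that example can be checked in a few lines:

```python
# Throughput numbers from the example above.
turnaround_single = 30  # minutes per task when 4 tasks run in parallel
turnaround_multi = 10   # minutes per task when one multithreaded task runs

tasks_per_hour_single = 4 * (60 / turnaround_single)  # 4 cores, 2 batches/hour
tasks_per_hour_multi = 1 * (60 / turnaround_multi)    # 6 sequential tasks/hour

print(tasks_per_hour_single)  # 8.0 tasks/hour, but 30-minute turnaround
print(tasks_per_hour_multi)   # 6.0 tasks/hour, with 10-minute turnaround
```

So the parallel setup contributes more total work, while the multithreaded setup returns each result three times sooner.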

"Rewards" that encourage optimizing the overall throughput: The credit system (wingman gets same credit) and all badges that depend on credit score.

"Rewards" that may encourage "selfish" behavior: Associating more "fame" to the discoverer than to the double checker (Top 5000 proof-codes), badges that are awarded (for particular prime finds) to the discoverer but not to the wingman, and challenges in the style of Tour de Primes.

It would be a tragedy if all the users did so. That's not our case; there are a lot of random double checkers.
Besides, it would be destructive for everyone not to double-check conjecture primes; we would lose a lot of points. I think that strategy would be useful for slower hosts. My host completes almost 500 tests every day (I have to calculate my first-reporter ratio). I don't care if a dozen of them had already been completed when they were sent to my host.

I agree about multithreading selfishness at small FFT sizes, but we don't pay for their electricity. They could object that there are people running tests on inefficient/old computers, or that our energy produced from coal is worse. They would be right too.


You use the word "selfish", but I don't like that word. Why? If all users had the same hardware (and that will never happen), then it would truly be selfish.
But many here have old CPUs and slow GPUs, and yes, as has happened many times before, those hosts can be and sometimes are faster than the "faster" hosts, but such cases are extremely rare.
So users with old CPUs and GPUs have a very low chance of being the initial finder.
Every one of us wants, and it is natural to want, the fame of the initial discoverer. There is nothing wrong with that. So the multi-threaded option is not selfish.

I don't know of any initial discoverer who looks at the wingman's host and, on concluding that the host is old and slow, transfers the "fame" of initial discoverer to him. Do you?
____________
93*10^1029523-1 REPDIGIT MEGA PRIME :) :) :)
57*2^3339932-1 MEGA PRIME :)
10994460^131072+1 GENERALIZED FERMAT :)
31*332^367560+1 CRUS PRIME :)
Proud member of team Aggie The Pew. Go Aggie!


Agreed, except that some computers with large cache have a superscalar effect, so that multithreading actually increases the throughput for large tasks like SoB by reducing the RAM bandwidth requirement. I have one such machine.

v7.8 introduces that tag in app_config.xml; it existed in cc_config.xml much earlier. Are you sure you are talking about app_config.xml?

The script is trying to solve the issue of the client downloading the next task too early, so that tasks have a better chance of winning a race. There's probably another way to do that.

Sorry, I did not notice app_ instead of cc_. You are right.

Of course. That's the most important issue to me.
Just saying that reporting immediately doesn't work when your client has started, for some reason (e.g. no internet connection for a while), to wait X minutes before communicating with the project server again.

except that some computers with large cache have a superscalar effect, so that multithreading actually increases the throughput for large tasks like SoB by reducing the RAM bandwidth requirement. I have one such machine.

Absolutely; I was not referring to that situation. In that case, choosing multithreading clearly optimizes the total throughput, the amount of "utility" you contribute to the "community".

The script is trying to solve the issue of the client downloading the next task too early, so that tasks have a better chance of winning a race. There's probably another way to do that.

Of course. That's the most important issue to me.
Just saying that reporting immediately doesn't work when your client has started, for some reason (e.g. no internet connection for a while), to wait X minutes before communicating with the project server again.

I think you are talking about delayed reporting. That's a different issue than I am trying to solve. With a constant internet connection, the BOINC client may download a new task minutes before it starts to execute. That's enough delay on short tasks like GFN16 and PPSE to lose a first-reporter race against similar-speed machines.

That's way too tedious, though. The beauty of BOINC is that it's fully and transparently automated for people, so they don't have to bother with anything. Don't expect 80% of people to have any custom settings at all. Doing something this actively, with this much involvement, I doubt even 1% would go for it.

As an example: there are 344045 crunchers according to the front page (although maybe that counts everyone ever registered, not active users, I don't know). Manual Sieving has had only 154 unique participants. Just 0.04% of all users.

I love the fine print LOL. I guess you were trying to make a footnote, but it comes across on my system like the tiny tiny swindle notes you see at the bottom of a contract.


Just saying that reporting immediately doesn't work when your client has started, for some reason (e.g. no internet connection for a while), to wait X minutes before communicating with the project server again.

The fix for that is a daemon that listens for network connection events and causes the BOINC client to update if there are any tasks in completed state waiting to be returned. I would have to do some research to figure out the first part. I think udev events on Linux can do that, but I would have to play around with those.
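A minimal sketch of such a watcher, using simple polling rather than udev events, and assuming only that `boinccmd` is on the PATH with its standard `--project <URL> update` operation; the project URL and polling interval are illustrative, not a real design:

```python
# Sketch of a daemon that nudges the BOINC client to contact the
# project whenever the network is up, so tasks sitting in "ready to
# report" state get reported promptly. Illustration only.
import socket
import subprocess
import time

PROJECT_URL = "https://www.primegrid.com/"  # placeholder project URL

def online(host="8.8.8.8", port=53, timeout=2.0):
    """Crude connectivity check: try to open a TCP connection."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def main():
    while True:
        if online():
            # "boinccmd --project URL update" makes the client contact the
            # scheduler, which reports any finished tasks immediately.
            subprocess.run(["boinccmd", "--project", PROJECT_URL, "update"])
        time.sleep(30)

# To run the watcher: main()
```

A udev- or NetworkManager-triggered version would react to connection events instead of polling, but the polling loop shows the idea.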

the BOINC client may download a new task minutes before it starts to execute

I know. My hosts are not affected by that. ;)

How in the world did you manage to do that?

One option I have been playing with is telling BOINC that it has only 2 CPUs (via the % CPU option) and setting app_config.xml so that each app requires 1 CPU (even the GPU app); the "-t" command-line option is unchanged. All this solved was the initial download of many multithreaded tasks on startup. But then BOINC estimates that multithreaded CPU tasks will take many times longer to run than they actually do (like 1 hour 20 minutes instead of 5 minutes 30 seconds).
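For illustration, the workaround described above might look something like this in app_config.xml. The app name `llrPPS` is a placeholder (the real name is in client_state.xml), and `-t 4` is just an example thread count:

```xml
<app_config>
    <!-- Hypothetical fragment: declare the app as needing 1 CPU while
         the -t option still makes it run 4 threads. -->
    <app_version>
        <app_name>llrPPS</app_name>
        <avg_ncpus>1</avg_ncpus>
        <cmdline>-t 4</cmdline>
    </app_version>
</app_config>
```

The mismatch between the declared 1 CPU and the actual 4 threads is exactly what throws off BOINC's runtime estimates, as noted above.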


Before version 7, even in version 6, that "bug" appeared; before that, BOINC downloaded work only a few seconds before the old task finished. I tried, I asked, but all answers deferred to the BOINC developers. And of course I never got an answer as to why BOINC must download work even two minutes before a task will be finished.
____________
93*10^1029523-1 REPDIGIT MEGA PRIME :) :) :)
57*2^3339932-1 MEGA PRIME :)
10994460^131072+1 GENERALIZED FERMAT :)
31*332^367560+1 CRUS PRIME :)
Proud member of team Aggie The Pew. Go Aggie!

With constant internet connection, the BOINC client may download a new task minutes before it starts to execute. That's enough delay on short tasks like GFN16 and PPSE to lose a first-reporter race against similar speed machines.

I know. My hosts are not affected by that. ;)

Do you mind sharing how you do it? Even with the additional cache set to 0, BOINC will download a new GFN-16 several minutes before the previous one is completed.

Some of you have been around long enough to remember when a change was made in BOINC that permits me to set <report_results_immediately> FROM THE SERVER SIDE.

It's always turned on for PrimeGrid tasks no matter what you do. You shouldn't be able to turn it off. I won't swear it's impossible to turn off because I've never tried, but certainly, if you do nothing, it's ALWAYS ON. You don't need to put it in cc_config. You don't need to put it in app_info. You don't need to put it in app_config.

____________
Please do not PM me with support questions. Ask on the forums instead. Thank you!


Mike, I agree with you, but whether it's set off or on, nothing changes: that is not the problem. The problem is downloading a task before the old one is finished. On other projects a minute or two is not a "problem"; here it is a "big problem" :)
____________
93*10^1029523-1 REPDIGIT MEGA PRIME :) :) :)
57*2^3339932-1 MEGA PRIME :)
10994460^131072+1 GENERALIZED FERMAT :)
31*332^367560+1 CRUS PRIME :)
Proud member of team Aggie The Pew. Go Aggie!


One of my GPUs takes 3 minutes and 10 seconds to crunch a GFN-16. BOINC will download a new task around 1 minute after the previous one has started crunching, so a bit over 2 minutes before it is needed. This increases the time between download and upload to well over 5 minutes. Crawling through some of my tasks I've seen a few wingmen who don't seem to have this issue.
____________
676754^262144+1 is prime


Someone on the Discord server dug through the BOINC client code and found that 3:00 minutes is hard coded into the software. It affects everyone, unless you do one of two things:

* Recompile the BOINC client yourself to remove that code.

* Set up PrimeGrid as a backup project, i.e., 0 share. BOINC won't download a new task until the previous task finishes. This means down time between tasks when the GPU will be idle until the next task is downloaded and ready to run, so your throughput will go down and you'll be running fewer tests overall.

The number of people doing either of those is going to be small, so while that delay might be undesirable, it affects everyone, so everyone is on a level playing field.

Since most people don't optimize the last second out of their systems, the hardest part is finding the prime. I've got five computers running, and I'll be lucky to find even one prime before the month ends. I'm more worried about that than I am about being the finder vs. the DC.

____________
Please do not PM me with support questions. Ask on the forums instead. Thank you!


The hardest part is finding the prime.
While preparing for TdP, I got one PPSE on 2018-01-29 and one GFN16 on 2018-01-31; a couple of hours early, you may say.
Considering this, my worry was whether I had run out of luck and would come up dry during TdP, be it as prime finder or wingman.
But I understand that finally finding a huge prime and being the double checker may feel like a missed opportunity.

EDIT: On a side note, I was always ready to be the double checker for very old WUs and challenge cleanup with some of my slower computers (like a 4-year-old server), IF there were a way for those cleanup boxes to always get the oldest tasks.
____________
My stats. Badge score: 1*1 + 5*1 + 7*1 + 8*8 + 9*7 + 11*1 + 12*3 = 187

Someone on the Discord server dug through the BOINC client code and found that 3:00 minutes is hard coded into the software.

Second time I've been referred to on the forums just as "someone on the Discord", hmm

Here's the full message, for history:

every now and then (every WORK_FETCH_PERIOD = 60 seconds, plus "piggybacking" on result reporting and on a manual "update project" by the user), BOINC checks whether the work buffer is saturated according to the preferences (the WORK_FETCH class in the code). That is, whether there is enough work for the next work_buf_min() + work_buf_additional() seconds.
Here is how those functions look

As you can see in the first function, BOINC does not "respect" work buffers smaller than 180 sec = 3 minutes. Thus, when the estimated remaining time of a task goes below 3 minutes, it will think there is spare room in the work buffer and will fetch the next task from whatever project it deems necessary (there are internal measures of how much has been "done" for each project and what its share is). It also has the clause that Michael described (from compute_rsc_project_reason()).

This clause exempts the zero_share project from filling the 3-minute work buffer (and downloading next task 2-3 minutes early), but also prevents filling the work buffer specified in preferences with tasks of that project. It gets a task to run when idle, but once it got it – it's no longer idle and won't fill the buffer with next ones.

The reasoning behind the 180 seconds of minimum buffer is quite valid for general projects, IMHO. Tasks can take some time to download, so doing it a few minutes in advance is good thinking on the developers' side. Since BOINC is open source, you can always change the constant to a smaller one and build your own "faster" version, if you're familiar with how that is done and deem it necessary.
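Since the actual client functions aren't reproduced here, the clamping behavior described above can be paraphrased in a short sketch. This is an illustration of the described logic, not the BOINC source; the names mirror the ones mentioned in the post:

```python
# Paraphrase of the described work-fetch behavior: the requested buffer
# is floored at 180 s, so a new task is fetched once the estimated
# remaining time drops below ~3 minutes, even with the cache set to 0.
MIN_BUFFER_SECONDS = 180  # hard-coded floor in the client
WORK_FETCH_PERIOD = 60    # seconds between work-fetch checks

def work_buf_min(pref_days):
    """Effective minimum buffer: preference in days, floored at 180 s."""
    return max(pref_days * 86400, MIN_BUFFER_SECONDS)

def wants_more_work(remaining_seconds, pref_days=0.0):
    """True when the client considers the buffer under-filled."""
    return remaining_seconds < work_buf_min(pref_days)

print(wants_more_work(300))  # False: more than 3 minutes of work left
print(wants_more_work(170))  # True: under the 180 s floor, fetch early
```

With a 0-day cache preference, the 180-second floor is what wins, which is why setting the cache to 0 doesn't prevent the early download.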


Official promotion:

By royal proclamation, ye art awarded, and shall from this point forward, in all means and communications, be officially referred to as:

"Someone on the forums"

Seriously, though, thanks for digging that up. I had always thought it was a 2 minute lead time, based on mk1 eyeball observations. I appreciate the research.

Point of note: Anything I say or do in the morning before the first cup of coffee should be considered unreliable and the product of a diseased mind.
____________
Please do not PM me with support questions. Ask on the forums instead. Thank you!

Seriously, though, thanks for digging that up. I had always thought it was a 2 minute lead time, based on mk1 eyeball observations. I appreciate the research.

No problem, I was bored at work at the time!

I initially eyeballed it to be 2 minutes myself too, and searched the code for constants like 120 :]

Since it only runs the check every 60 seconds (barring user intervention or result reporting), the lead time will be random, between 2 and 3 minutes. That means that if we randomly open the BOINC window, see the new task already downloaded, and grunt under our breath, the most likely remaining time on the previous task is 1 to 1.5 minutes. Including the standard deviation, that's 0.5 to 2 minutes. So most likely we would convince ourselves that the value is 2 minutes.
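That estimate is easy to sanity-check with a quick simulation, under the assumption that the lead time is uniform between 120 and 180 seconds and that we glance at the window at a uniformly random moment before the running task finishes:

```python
# Monte Carlo check of the "observed remaining time" reasoning above.
import random

random.seed(1)
observed = []
for _ in range(100_000):
    lead = random.uniform(120, 180)           # actual lead time, seconds
    observed.append(random.uniform(0, lead))  # remaining time when we look
mean = sum(observed) / len(observed)
print(round(mean / 60, 2))  # mean observed remaining time, about 1.25 minutes
```

The expectation is 150/2 = 75 seconds, i.e. 1.25 minutes, right in the 1-to-1.5-minute range described above.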

Also, I just wanted to note composite's message above: if BOINC is fooled into believing that the estimated remaining times are larger (specifically, larger than 3 minutes), it won't do the early download either (its sense of the work buffer becoming empty only kicks in on result reporting).

The number of people doing either of those is going to be small, so while that delay might be undesirable, it affects everyone, so everyone is on a level playing field.

It affects everyone, but we don't crunch the same WU at the same time on the same hardware :)
But as you wrote, the solution is here, so let's play :)
____________
93*10^1029523-1 REPDIGIT MEGA PRIME :) :) :)
57*2^3339932-1 MEGA PRIME :)
10994460^131072+1 GENERALIZED FERMAT :)
31*332^367560+1 CRUS PRIME :)
Proud member of team Aggie The Pew. Go Aggie!

Also, just wanted to note composite's message above. If boinc is fooled to believe that the estimated remaining times are larger (we're looking: larger than 3 minutes), it won't do the early download too (it's sense of work buffer becoming empty only coming in on the result reporting).

I don't understand why my host always downloads 2 GFN-16 tasks when the work buffer is empty, even though the estimated time is 6:38 (398 s). The project server should send only 1 task.

composite wrote:

I could calculate that for you in several minutes if you unhide your computers. Or maybe you have that capability already.

Yes, I can. Maybe later.

composite wrote:

How in the world did you manage to do that?

One option I have been playing with is telling BOINC that it has only 2 CPUs (via the % CPU option) and setting app_config.xml so that each app requires 1 CPU (even the GPU app); the "-t" command-line option is unchanged. All this solved was the initial download of many multithreaded tasks on startup. But then BOINC estimates that multithreaded CPU tasks will take many times longer to run than they actually do (like 1 hour 20 minutes instead of 5 minutes 30 seconds).

Something similar.

"Usucapio Libertatis" wrote:

Do you mind sharing how you do it? Even with the additional cache set to 0, BOINC will download a new GFN-16 several minutes before the previous one is completed.

No. :)

My script doesn't work with GFN-16 at all. My host gets 2 new tasks every time. From my perspective I don't know if it's better to receive 2 tasks and immediately start the first one, or to receive 1 task and start each task after 1 minute. My GPU is a 750 Ti and takes ~394 s per task.

I did not mean start a big discussion on this but thank you for reviewing this.

Who has this bot that we could all use, if it is the correct thing to do?

Watching our computers and aborting any tasks waiting to start seems a little over the top. I have 4 tasks running and 4 waiting, and I cannot sit there aborting tasks every 5 seconds just to make sure my completed task uploads before someone else's; that sounds pointless.
This seems a little extreme to me just to get a prime number and a jersey.

Scott is the god when it comes to primes and this BOINC project, that is the truth.
Maybe he can lend a hand here?
____________
Crunching@EVGA The Number One Team in the BOINC Community. Folding@EVGA The Number One Team in the Folding@Home Community.

I did not mean start a big discussion on this but thank you for reviewing this.

As I understand it, the PrimeGrid server is configured to make clients report results immediately whether or not you set this. So it doesn't matter here.

Thanks, and too funny on your avatar.

Under version 7.6.31 I get this anyway:
PrimeGrid: Notice from BOINC
Unknown tag in app_config.xml: report_results_immediately/
Sat 03 Feb 2018 10:30:29 AM MST
____________
Crunching@EVGA The Number One Team in the BOINC Community. Folding@EVGA The Number One Team in the Folding@Home Community.
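For reference, the client-side placement that older clients do recognize is cc_config.xml rather than app_config.xml (and, as noted earlier in the thread, PrimeGrid forces this server-side anyway, so setting it is redundant):

```xml
<!-- cc_config.xml: report_results_immediately belongs inside <options>.
     Redundant on PrimeGrid, where the server already enforces it. -->
<cc_config>
    <options>
        <report_results_immediately>1</report_results_immediately>
    </options>
</cc_config>
```

Putting the tag in app_config.xml instead is what produces the "Unknown tag" notice on clients older than v7.8.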

For all the excitement/strategizing/angst/planning/paranoia about squeezing out every last second to make sure you are not the double checker, I give you this tidbit...

A person just found a PPSE prime, using an elapsed time of 1175 seconds. (19:35) That's not the interesting part. The fun part is that the computer returned the task 14 hours, 26 minutes, and 54 seconds after the task was sent. No sign of the double checker yet...

I've looked at the primes already showing on the TdP leaderboard. Below are the times between when the prime finder received the task and when the double checker returned it, i.e., how much time he really had to return the task and still be credited with finding the prime.

The second number is how long the prime finder's task actually took to run. The difference between the two is how much extra time he had available.

"Close" is less than a 3 minute margin
"Sort of close" is less than a 10 minute margin
Every other task had more than 10 minutes of leeway

My conclusion is that everyone who is worrying about being the double checker because they were a few seconds too slow is letting fear rather than facts guide their decisions. The data doesn't support those fears.
____________
Please do not PM me with support questions. Ask on the forums instead. Thank you!

"My conclusion is that everyone who is worrying about being the double checker because they were a few seconds too slow is letting fear rather than facts guide their decisions. The data doesn't support those fears."

Not a big deal, and thank you. I have no fears; it was only a question.
To note, I was a few seconds faster, not slower, but let's let this end; what is done is done.
At least that is what I understood from composite about my time.
Thanks
____________
Crunching@EVGA The Number One Team in the BOINC Community. Folding@EVGA The Number One Team in the Folding@Home Community.


You're but one of a multitude for whom I intended my analysis. Lots of people seem to be, in my opinion, overly concerned that an extra minute or second will make the difference. Is that possible? Certainly. But it's not likely.

Avoiding being the double checker is the easy part. If you set your cache to 0, you will usually be the prime finder. (And maybe use multi-threading.) It's actually finding the primes that's difficult.
____________
Please do not PM me with support questions. Ask on the forums instead. Thank you!

"My conclusion is that everyone who is worrying about being the double checker because they were a few seconds too slow is letting fear rather than facts guide their decisions. The data doesn't support those fears.
"
Not a big deal, and thank you. I have no fears; it was only a question.
To note, I was a few seconds faster, not slower, but let's let this end; what is done is done.
Thanks

You're but one of a multitude for whom I intended my analysis. Lots of people seem to be, in my opinion, overly concerned that an extra minute or second will make the difference. Is that possible? Certainly. But it's not likely.

Thanks for looking at this, Michael.
Avoiding being the double checker is the easy part. If you set your cache to 0, you will usually be the prime finder. (And maybe use multi-threading.) It's actually finding the primes that's difficult.

Yes, and thank you. My cache is set to 0, to note, and I'm using multi-threading for CPU tasks.
Though I'm not sure how to run multi-threading on a GPU task.
We just have to play our best poker hand, and that is about it.

Thank you for looking at this, Michael.
____________
Crunching@EVGA The Number One Team in the BOINC Community. Folding@EVGA The Number One Team in the Folding@Home Community.

To make it clear, I was not complaining in my previous post. I just remembered that back in the 2014 TdP, I found 3 primes on the mountain stage, but was the double checker on 2 of them. As far as I remember, on at least one of those 2, the difference in reporting times was just a couple of seconds. Hence, having the option to reduce the time a task stays on my hosts seemed worth considering.
As you've said, the hard part is to actually find a prime :)

You can't. Not only is it something you can't do -- it's something that doesn't even make sense. :)

A modern CPU typically has 4 cores (not counting Hyper-Threading).

LLR single threaded is running one thread.

LLR multithreaded is running a few threads.

A high-end GPU, however, is a massively parallel supercomputer with THOUSANDS of cores. When you're running a GPU app, the program is running on the GPU, not the CPU. The GPU's got thousands of threads running at once. The CPU's job is merely to keep data flowing to and from the GPU, and it only needs a fraction of a single core for that. There's no point in running multiple threads on the CPU.

(Were you instead asking how to run multiple tasks simultaneously on a GPU?)
____________
Please do not PM me with support questions. Ask on the forums instead. Thank you!

"Close" is less than a 3 minute margin
"Sort of close" is less than a 10 minute margin
Every other task had more than 10 minutes of leeway

don't take it serious, life's too mysterious !!

to come back to my wish: a badge of the yearly closest doublechecker
LL as lucky loser or something - and yes, if it comes down to that, you'd have to dig into web server logs and load balancing to decide who it is ;-)
____________
Sysadm@Nbg
my current lucky number: 3651*2^1521717+1
PSA-PRPNet-Stats-URL: http://u-g-f.de/PRPNet/

As far as I remember, at least in one of these 2, the difference in reporting times was just a couple of seconds.

It certainly happens from time to time, but it's the exception, not the rule.

Us hoity-toity admin types have a page that shows us all the primes that have only one result so far and are waiting for the double checker to come in. About an hour ago I looked at that page and it had NINE PPSE primes waiting for the double checker. We're talking hours, not seconds.

And one of them is yours, by the way. Congratulations!!! :)

____________
Please do not PM me with support questions. Ask on the forums instead. Thank you!

"Close" is less than a 3 minute margin
"Sort of close" is less than a 10 minute margin
Every other task had more than 10 minutes of leeway

don't take it serious, life's too mysterious !!

to come back to my wish: a badge of the yearly closest doublechecker
LL as lucky loser or something - and yes, if it comes down to that, you'd have to dig into web server logs and load balancing to decide who it is ;-)

and if you find any prime, this disqualifies you from the badge
so the luckiest "I got nothing but double checks" will be honoured ...
____________
Sysadm@Nbg
my current lucky number: 3651*2^1521717+1
PSA-PRPNet-Stats-URL: http://u-g-f.de/PRPNet/

With a constant internet connection, the BOINC client may download a new task minutes before it starts to execute. That's enough delay on short tasks like GFN-16 and PPSE to lose a first-reporter race against machines of similar speed.

I know. My hosts are not affected by that. ;)

Do you mind sharing how you do it? Even with the additional cache set to 0, BOINC will download a new GFN-16 task several minutes before the previous one is completed.

I don't know if you got your answer, but when I used the PrimeGrid PREFERENCES to set the RESOURCE SHARE to 0%, I only got one GPU job instead of 2. When the job completes, there is a short GPU idle time while it downloads another job.

You can then play with the cache and other Computing Resources settings without interfering with the single GPU job.

I won 87.3% of my TdP races. I faced "multithreaded" hosts 66 times and won only 49.5% of the races against them.

P.S. My MT condition is "runtime < cputime".

Your condition for detecting MT is faster than scanning the task's stderr output for the command line option.

Other metrics:

"weighted average" number of threads used = cputime / runtime
"weighted average > 1" is equivalent to "runtime < cputime"

Weighted average threads doesn't tell you the efficiency of those threads relative to a single thread unless you know how many cores have been applied with BOINC's %CPU setting, and this info isn't public on PrimeGrid. However, the CPU's thread capacity is in the host information, and the task sent and received times give enough info to determine how many PrimeGrid tasks were running concurrently.

"delay" time = "task received time" - "task sent time" - runtime. This is what we are trying minimize to help win races.

I don't know if you got your answer, but when I used the PrimeGrid PREFERENCES to set the RESOURCE SHARE to 0%, I only got one GPU job instead of 2. When the job completes, there is a short GPU idle time while it downloads another job.

Also, I just wanted to note composite's message above. If BOINC is fooled into believing that the estimated remaining times are larger (here: larger than 3 minutes), it won't do the early download either (its sense of the work buffer becoming empty only kicks in when the result is reported).

I don't understand why my host always downloads 2 GFN-16 tasks when the work buffer is empty, although the estimated time is 6:38 (398 s). The project server should send only 1 task.

I also have the problem of downloading too much GPU work when its work buffer is empty. One approach to fixing that is to tell BOINC that the GPU task needs more CPU power than it really uses; for example, specify in app_config.xml that the GPU task requires 0.6 CPUs.
Maybe the problem is that the BOINC client doesn't check GPU times at all and relies only on CPU time when filling the work buffer. Has anyone checked into that?
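A minimal sketch of that app_config.xml approach, for anyone who wants to try it. The application name below is a guess, not verified; check client_state.xml or the project's applications page for the exact GFN-16 app name on your host:

```xml
<!-- app_config.xml sketch (goes in the project directory, e.g.
     projects/www.primegrid.com/). The app name is a placeholder;
     look up the exact GFN-16 app name in client_state.xml. -->
<app_config>
  <app>
    <name>genefer16</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>0.6</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After editing, tell the client to re-read config files (or restart it) for the change to take effect.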

I did not mean to start a big discussion on this, but thank you for reviewing it.

As I understand it, the PrimeGrid server is configured to send results immediately whether or not you set this. So it doesn't matter here.

A "result" is BOINC's name for a task. When the server sends a result, it's a task going out for execution on the participant's computer with the BOINC client. Upon finishing the task the client reports it back to the server, and this tag controls whether the client's report is immediate or deferred. We would like the report to be immediate, and the mere existence of this tag implies that the default state of the BOINC client is to defer reporting.

So I'd like you to confirm whether I understand your last assertion. Since this is a BOINC client-side setting, you are saying that the PrimeGrid server causes the client to report the result immediately regardless of the tag's absence in app_config.xml. Is my interpretation of your statement correct, and if so, can Mike confirm that the PrimeGrid server does this? (Not that it matters much in light of the foregoing discussion of the client's work buffer.)
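For completeness, this is roughly where the tag would live on the client side (a cc_config.xml sketch; per the earlier statement that the PrimeGrid server already forces immediate reporting, setting it there should be redundant):

```xml
<!-- cc_config.xml sketch (goes in the BOINC data directory).
     Shown only for completeness: if the server forces immediate
     reporting, this client-side setting changes nothing. -->
<cc_config>
  <options>
    <report_results_immediately>1</report_results_immediately>
  </options>
</cc_config>
```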

Who has this BOT that we could all use, if it is the correct thing to do?

We have been trying to say that a BOT is NOT the correct thing to do, but getting there isn't straightforward.

So far as I have seen, only Luigi R. and I have admitted to having independently developed our own BOINC client controllers and/or PrimeGrid scrapers. I posted complete source code for a couple of these things I wrote in the last few days, in this forum thread and on this one, although I didn't post my code for race analysis. I gather that Luigi's does the same thing, slightly differently, but I don't know what language he used. You will note that my stuff is for Linux, and will possibly work on Mac and Windows (the Perl script is portable, and the Bash script might work with MinGW with some changes) - I don't develop for those platforms so I wouldn't know the details. I shan't be handing out more code; these examples are ample starters for writing your own if that's what you want to do.

So I'd like you to confirm whether I understand your last assertion. Since this is a BOINC client-side setting, you are saying that the PrimeGrid server causes the client to report the result immediately regardless of the tag's absence in app_config.xml. Is my interpretation of your statement correct, and if so, can Mike confirm that the PrimeGrid server does this? (Not that it matters much in light of the foregoing discussion of the client's work buffer.)

I've already said this is true.

It doesn't matter whether you set this on your computer. It doesn't matter if you put it in cc_config.xml. It doesn't matter if you put it in app_config.xml. All PrimeGrid tasks are forced to "report_results_immediately" by the server.
____________
Please do not PM me with support questions. Ask on the forums instead. Thank you!

I've written bot code to minimize WU start delays. I haven't written code to abort WUs where I'm the wingman yet, but I'm not ruling it out.

I think a reasonable but suboptimal approach is to flush out all the work (by processing it) before getting more, as long as you are successful at fetching only one CPU and one GPU task at a time. It guarantees there is no delay before starting to work on tasks.

This approach would work well on my system because PPSE and GFN-16 tasks finish within a few seconds of each other. For others, YMMV.

I also have the problem of downloading too much GPU work when its work buffer is empty. One approach to fixing that is to tell BOINC that the GPU task needs more CPU power than it really uses; for example, specify in app_config.xml that the GPU task requires 0.6 CPUs.
Maybe the problem is that the BOINC client doesn't check GPU times at all and relies only on CPU time when filling the work buffer. Has anyone checked into that?

I tried 0.99 and 4.00 CPUs. It still downloads 2 tasks.

I gather that Luigi's does the same thing, slightly differently, but I don't know what language he used. You will note that my stuff is for Linux, and will possibly work on Mac and Windows (the Perl script is portable, and the Bash script might work with MinGW with some changes) - I don't develop for those platforms so I wouldn't know the details. I shan't be handing out more code; these examples are ample starters for writing your own if that's what you want to do.

PHP.
It would work for everyone if I uploaded it to a web server. Then it would need countermeasures to limit usage to one user at a time.
Actually, it wouldn't work at all because of the 30 s execution limit on free web servers. A user could set start/end offsets to take less than 30 s. Input is via GET parameters.