This project is LIVE! Please see the second post for the beta information that applied prior to launch.

This search is for primes of the form b^(2^n)+1. The numbers F(b,n) = b^(2^n)+1 (with b and n integers, b greater than one) are called generalized Fermat numbers. In the special case b=2, they are called Fermat numbers, named after Pierre de Fermat, who first studied them.
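As a quick illustration (not part of the project's software), the definition can be checked for tiny cases in a few lines of Python; the b=2 column reproduces the five known Fermat primes:

```python
# Sketch: generalized Fermat numbers F(b, n) = b^(2^n) + 1.
# Trial division is enough for the tiny examples below; real GFN
# testing uses specialized PRP software (Genefer), not this.

def gfn(b, n):
    """Return the generalized Fermat number b^(2^n) + 1."""
    return b ** (2 ** n) + 1

def is_prime(m):
    """Naive trial division -- fine for small demonstration values."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# The classical Fermat numbers are the b = 2 case:
for n in range(5):
    print(n, gfn(2, n), is_prime(gfn(2, n)))  # F(2,0)..F(2,4) are all prime

# F(2,5) = 4294967297 = 641 * 6700417 (Euler), so the pattern stops there.
```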

The original Generalized Fermat Prime Search by Yves Gallot was very active from 2001-2004. It was a premier project ranking second only to GIMPS in organization and size of primes found. In 2009, PrimeGrid, through its PRPNet, revitalized the search thanks in large part to David Underbakke, Mark Rodenkirch, and Shoichiro Yamada, each of whom provided the necessary software updates to get the project moving again. Now with Michael Goetz's native BOINC port of a modified GeneferCUDA, it's time to AWAKEN the potential of this project.

The search will concentrate on N=4194304 (n=22) which, if successful, has the potential to discover the world's largest known prime number. Due to the size of the work units at this N, this search is a GPU-only project. WUs will get longer with time, but a rough estimate is that one WU will take 8 days to crunch on a GTX 460. Values of b up to approximately 490k are within the testing limits of GeneferCUDA. As of January 2012, a prime found with a b >= 1248 would produce a new world record. (FYI, 1248^4194304+1 is composite. It's not going to be that easy!)
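That threshold can be sanity-checked with a short digit-count calculation (a sketch; 12,978,189 is the digit count of the then-record Mersenne prime 2^43112609-1):

```python
import math

EXP = 2 ** 22                 # n = 22  ->  exponent 4194304
RECORD_DIGITS = 12_978_189    # digits of 2^43112609 - 1, the record in early 2012

def gfn_digits(b, exp=EXP):
    """Decimal digits of b^exp + 1 (adding 1 never changes the count for b > 1)."""
    return math.floor(exp * math.log10(b)) + 1

# b = 1248 yields roughly 13.0 million digits, comfortably past the record.
print(gfn_digits(1248), RECORD_DIGITS)
```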

GeneferCUDA is currently only available for Windows and Linux clients. GenefX64 (CPU) is available for MacIntel.

An Nvidia GPU with double precision floating point hardware is required for this project. The following GeForce GPUs will work:

Many Tesla and Quadro GPUs also will work. They must be CC 1.3 or higher. Check Nvidia's documentation for a comprehensive list.

GeneferCUDA is particularly sensitive to overclocking, and since this project requires 100% correct operation of the entire GPU over an extended period of time, overclocking is not recommended. If you would prefer to crunch smaller WUs and/or use a CPU for crunching, the PRPNet GFN project is searching at N values of 32768, 65536, 262144, and 524288.

If you have the resources, the drive, and the desire to be the finder of THE LARGEST KNOWN PRIME, then this is your project.

A search such as this obviously needs a big sieve effort. If you would like to help out with the manual sieving effort, please see the instructions in the GFN Prime Search Sieving thread. It's available for 64-bit Windows CPUs ONLY...but can be run under Linux in a virtual machine.

For more information about generalized Fermat numbers and primes, please visit these links:

This project is still in the beta testing phase. There are several differences in the project at this stage, and some additional information you need to know.

Currently as of 1/22/2012, testing is being done at N=262144. At this size WU, CPU crunching is feasible, and there is a MacIntel CPU client available in addition to the Windows CUDA client.

There is a problem running GeneferCUDA on GTX 550 TI GPUs.

The problem is not well understood, and it is unknown if it affects every GTX 550 Ti, or just some of them.

It's hypothesized that the problem may be related to the unusual "mixed density" memory architecture on this GPU, and if true, that would mean that a small number of other GPUs that also have this architecture might also be affected.

The problem manifests itself by having WUs fail with "MaxErr Exceeded" errors. This occurs at random times during processing.

There has been some success with mitigating this problem by lowering the memory clock to 1700 while leaving the core and shader clocks at their normal values.

The latest software release (1.05, which should be in production soon, or 1.05 beta 3 which you can run now with app_info) has a tuning parameter that you can set from the PrimeGrid preferences web page. Although initially designed as a tool for dealing with potential screen lag problems, it turns out that you may be able to achieve a small performance improvement using this feature. Please see GeneferCUDA Block Size Setting for more information.

Other known problems:

There is an intermittent problem with the stderr output that shows up on the result webpage either being truncated or missing. This does not appear to affect the outcome of the WUs, which still get validated correctly.

Screen lag. Like any GPU program, GeneferCUDA has the potential for interfering with the GUI on your computer. At lower N values this isn't usually very severe, although there are some circumstances where it's more intrusive. Certain programs (e.g., Windows Live Mail), activities (e.g., screen dimming during UAC dialogs), and full-screen video are affected more than others. At least one user has reported extreme lag, at all times, due to GeneferCUDA. It is expected that lag will get worse at higher N values as we move towards full production. It may be necessary to uncheck the "Use GPU when computer is in use" box in the Boinc preferences.

Until just recently, the WUs sent out by the Boinc server had a GFLOPS rating that was too low. This caused bad things to happen to the scheduler in your Boinc client. Now that this has been corrected on the server side, the following steps can correct the problem on your client. (This assumes you're using Boinc 6.12.34.)

Question: Why does GFN use the same progression (20K, 200K, 1M, 2M, 4M) as sieves rather than the LLR tasks (10K, 100K, 500K, 1M, 2M)? I would think it would use the same progression as all the other Boinc primality projects.

If the answer is "Because it's a GPU project.", then my next question is why does the TRP Sieve use the same progression as the PPS and GSW sieves?

I don't actually care which it is; I'd just like to understand the thinking behind the rules.

Speaking of badges, I'd like to give a shoutout to the folks over at GPUGRID -- I really like what they did with their new badge system. Besides the badges based on the number of credits you've racked up, they also award badges based on the percentile of your contribution to their published papers. That's a nice touch.
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

Question: Why does GFN use the same progression (20K, 200K, 1M, 2M, 4M) as sieves rather than the LLR tasks (10K, 100K, 500K, 1M, 2M)? I would think it would use the same progression as all the other Boinc primality projects.

Highly optimized apps deserve higher goals.

In the past, only the sieves offered optimized apps - namely 64 bit vs 32 bit. The GPU optimization came quite a bit afterwards. The first project to benefit was AP26. It was highly optimized over its 32 bit CPU version. It made sense to apply the "higher" badge system then and it still does today.

Yes, llrCUDA is on the horizon, but certainly not as significant an optimization over 32 bit as other applications. However, this will definitely muddy the waters...but we're not there yet. ;)

Times have changed considerably since the badges were introduced but the spirit of accomplishment remains the same.
____________

Question: Why does GFN use the same progression (20K, 200K, 1M, 2M, 4M) as sieves rather than the LLR tasks (10K, 100K, 500K, 1M, 2M)? I would think it would use the same progression as all the other Boinc primality projects.

Highly optimized apps deserve higher goals.

In the past, only the sieves offered optimized apps - namely 64 bit vs 32 bit. The GPU optimization came quite a bit afterwards. The first project to benefit was AP26. It was highly optimized over its 32 bit CPU version. It made sense to apply the "higher" badge system then and it still does today.

Yes, llrCUDA is on the horizon, but certainly not as significant an optimization over 32 bit as other applications. However, this will definitely muddy the waters...but we're not there yet. ;)

Times have changed considerably since the badges were introduced but the spirit of accomplishment remains the same.

If this is the case, then how are you calculating credit? If you are comparing optimized apps such as PPS Sieve for badges, you would think to award similar credit per minute, BUT since this is a primality test it shouldn't be that high. I think that half the credit per minute (or second) of PPS Sieve would be appropriate. On my OC GTX 460 I finish the PPS Sieves in about 26 minutes. That's about 130 credits per minute (rounded). Half of that is 65, times the 72 minutes (average) a 262144 WU takes, would be 4680.

As of now, even though I REALLY want to find a GFN, I am still drawn to the sieves for better credit for my electricity-guzzling space heater(s) (with guzzling A/C to counteract it/them). Short term for all the testing, and occasional runs later on, is what I see myself doing, as would many others at the current per-minute rates. If you REALLY want to draw crunchers to spend more of their precious GPU cycles on what will eventually be like SOB on a GPU, you need a little more incentive. Some don't care at all about credit, some care only about credit, and then there is what I think is the majority: people who want to find primes AND get fair credit while doing it. That is where I stand, in the middle. I DON'T want to start a debate, just offer some insight into where a lot of people's psyche seems to be. When testing progresses to 524288, maybe a per-minute recalculation? I know it's just testers for now, but it would be a fairer amount even for us 'lowly' testers (and better for our RACs).

I agree: if you want to get a big number of GPU crunchers, the credit will have to be raised, especially when you realise how long the units will take to finish. You need an incentive to get people off the sieves and onto these very long LLRs.
But that will probably prove itself when it is released to the wider public.
____________

I agree: if you want to get a big number of GPU crunchers, the credit will have to be raised, especially when you realise how long the units will take to finish. You need an incentive to get people off the sieves and onto these very long LLRs.
But that will probably prove itself when it is released to the wider public.

I think the current rate is selected for the current length of a WU. If the n=4194304 units also get 3600 credits... well, that would be extremely stingy methinks and therefore unlikely :) (450 cr/day, 18.75 cr/hour)
____________PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)

And the lack of (pictures including) a huge parade, cheerleaders and several kegs of beer also mean that there hasn't been a release party yet ;)
____________PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)

I know it's still beta, but I noticed that there are many lower-end to mid-speed cards testing, but only two 570s and one 580, and no 590s, unless I missed someone in the threads. All the fastest cards are still on PPS Sieve, and I would think you would want more top cards beta testing, so a better incentive might be needed. The current credit is not drawing enough of the big guns in as of now, especially someone with a GTX 590 (or a pair), as there may be unknown issues with those dual-chip cards.

My example of different credit for the current n=262144 was merely a suggestion, and I know that as n increases so will the credit, but I was thinking that establishing a baseline "credits per minute/second/GFlops" number now, to be the production number, would let it be extrapolated to bigger n's. By doing that now you may draw more fast cards into beta testing. This all assumes that more fast cards are wanted/needed for testing, and eventually for longer-term crunching, through a higher credit incentive. And as Mike said in another thread, doubling n doesn't translate into doubling WU time; it's more than double, so just doubling 3600 (or whatever number) when testing goes to 524288 won't work. Any other suggestions that may be helpful? Or is this all a cart-before-the-horse thing?

The question of how credit is determined to begin with was a serious but curious one. Although, if Pandora's box is how it's done, then John or Rytis (or both) must know Pandora pretty well, what with all the different sub-projects. ;)

I know it's still beta, but I noticed that there are many lower-end to mid-speed cards testing, but only two 570s and one 580, and no 590s, unless I missed someone in the threads. All the fastest cards are still on PPS Sieve, and I would think you would want more top cards beta testing, so a better incentive might be needed. The current credit is not drawing enough of the big guns in as of now, especially someone with a GTX 590 (or a pair), as there may be unknown issues with those dual-chip cards.

Here are my 2 cents:

For some reason, slower cards are more efficient on GeneferCUDA than the faster ones: slower (or smaller, if you prefer) cards are much closer to the bigger ones than they are in the sieves. For instance, I ran some Genefer tasks on a 590 at less than twice the speed of a 550 Ti; when sieving, the former would be at least three times faster. The double-precision throughput is the cause of this, I think. This could help explain the lack of high-end cards.

GFN262144 tasks in PRPNet are worth ~22000 PRPNet credits. At the current implicit conversion ratio (20:1), a task done there grants you 1100 Boinc credits in the PSA subproject. From this perspective, crunching Genefers in Boinc is over three times more rewarding than doing it in PRPNet.
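The comparison above, as plain arithmetic:

```python
# PRPNet credit divided by the implicit 20:1 conversion ratio,
# compared with the 3600 BOINC credits per GFN262144 task.
prpnet_credit = 22_000
psa_credit = prpnet_credit / 20     # -> 1100 BOINC credits via PSA
boinc_credit = 3600
print(boinc_credit / psa_credit)    # a bit over 3x, i.e. "over three times"
```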

We could not find primes without sieving. But sieving, unlike Genefer, will never give you the pleasure of saying "I was the one that found this prime" (actually, your cpu did, but ok), so it is fair that sievers are rewarded more generously on the credits side. I would gladly trade my 7 million or so sieving credits and badges for a top 5000 prime :)

I asked to run this project, and as was stated earlier, it started out with no credits and then 1 credit. As it's still in beta, questions about the credit rating are valid to bring up so thought can be given. I'm pretty sure it will even itself out sooner rather than later, since PG is a very well-run operation.

For me, it's more about finding a big prime even though I do like credits.

On a side note, for those of you interested (Tim pointed this out): upgrading to the 951.51 beta Nvidia driver speeds up the jobs. I've gone from running a job in around 2440 seconds to 2412 seconds.

edit: wasn't sure if the option for genefer was available to everyone or if they still needed to ask to run it.
____________
@AggieThePew

There are two sides to this credit debate. Well, lots of sides, but consider this counter-argument to "You need more credit to draw people off the sieves".

Mind you, this counter-argument is NOT good for PrimeGrid, but it does have a certain appeal for the people who are reading this board.

One of us is possibly going to be the world record holder in the not so distant future. This person may end up holding that record for more than a few years.

Would you rather that person be someone who is interested in finding primes, or someone whose only interest was in getting to the top of the credit leader board?

Now, of course, the goal is for SOMEONE to find it, and the odds of that happening go up with more people crunching. And do not forget that this IS a race, and speed is very important: I'm not certain where GIMPS is searching right now, but if they find another Mersenne before we find a GFN at N=4194304, their new prime may be larger than anything we can find at that N. So we have a window of opportunity here, and that window might close.

So, that's my counter-argument AND my counter-counter-argument. :)
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

How do I go about running the GeneferCUDA part? I've tried connecting to the servers, but it says something along the lines of "no work is available". Maybe I've configured it totally wrongly, but it appeared to be working for the other parts I tried, PPS & SGS.

It also prompts me that cudart_32_32_16.dll is missing; should I just download this and retry?

How do I go about running the GeneferCUDA part? I've tried connecting to the servers, but it says something along the lines of "no work is available". Maybe I've configured it totally wrongly, but it appeared to be working for the other parts I tried, PPS & SGS.

It also prompts me that cudart_32_32_16.dll is missing; should I just download this and retry?

At this moment, there appears to be no work on the server. I don't know what the story is with that.

This is a BOINC project, and BOINC should download everything automatically for you. If you're running with an app_info file, you'll need to update it appropriately.
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

Apologies, I was trying via the prpclient-5.0.4 command line; in BOINC I don't see the option to subscribe, probably because I haven't requested it.

How do I request to get it added or do I just add the lines to the app info file, and if so what would the lines be?

Cheers

You have to PM John to get added to the beta, although I suspect the beta will be going open in the near future.

To run this code with PRPNet (as opposed to the version that comes with PRPNet), download the executable file here. That's the latest version which will be running on Boinc in the near future.

Place that in the directory you want to run it in. If you're going to run it under PRPNet, rename it to GeneferCUDA.exe.

If you need the cuda3.2 DLLs, you can download them from the PrimeGrid download directory. That directory listing is alphabetical, so just scroll down until you get to "cu...". They should go in the same directory where you put the executable.
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

Does the due date have to be so short (~24 hours)? If your queue size is set to anything greater than 1 day, it over-downloads. (Yes, I already checked my DCF, it is currently 1.1.)
____________
Dublin, CA

Are you planning to add a section for the GFN sub-project on a user's home screen?
E.g., where you can see how many tasks you've computed and how many credits you've earned for this sub-project, and where you can click through to your prime list, etc.?

Are you planning to add a section for the GFN sub-project on a user's home screen?
E.g., where you can see how many tasks you've computed and how many credits you've earned for this sub-project, and where you can click through to your prime list, etc.?

The project's still in beta. Slowly, they've been adding in all the webserver parts:

The badges are in.

The stat export is in, so GFN shows up on the free-DC sub-project stats.

The application page lists the Genefer applications.

EVERYTHING necessary to make GFN actually work! :)

I think this is the only remaining part to go, and it will be done.
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

Are you planning to add a section for the GFN sub-project on a user's home screen?
E.g., where you can see how many tasks you've computed and how many credits you've earned for this sub-project, and where you can click through to your prime list, etc.?

The project's still in beta. Slowly, they've been adding in all the webserver parts:

The badges are in.

The stat export is in, so GFN shows up on the free-DC sub-project stats.

The application page lists the Genefer applications.

EVERYTHING necessary to make GFN actually work! :)

I think this is the only remaining part to go, and it will be done.

In our admins we trust! :)

As always, take your time. I'd rather have an app that works (damn near) perfectly without stats than stats and a borked app that wastes my C/GPU time! :)

As always, take your time. I'd rather have an app that works (damn near) perfectly without stats than stats and a borked app that wastes my C/GPU time! :)

You should be able to get both, because the admins putting the server together and the people doing the de-borking are different people, so we can work in parallel! :)
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

If temperature affects the results, then it's not a software problem. The sensitivity to overclocking is definitely a hardware issue.

This pattern is also observed in Folding@home and GPUGRID. These projects are likewise sensitive to memory overclocking (incompatible frequency/timing settings). For example, the first WU heats up the card and gets an error; the next WU errors out almost immediately. link to image

If temperature affects the results, then it's not a software problem. The sensitivity to overclocking is definitely a hardware issue.

One mystery solved.

Again, thanks.

Actually, it isn't really a "hardware" issue either, since the same hardware at the same overclock would likely function stably in another environment with lower temps. The hardware part comes into play in that there is no set temp at which each chip/card combo becomes unstable. For example, I have one card running quite stably at exactly 82C.

Given that I have several different cards of various speeds/shaders/memory/etc., I'd offer the following guidelines when it comes to overclocking on the GFN subproject (*note: much of this advice applies to some degree to the other GPU subprojects as well as to overclocking in general, and as always, YMMV):

1) Keep it under 80C...while I have had some success with cards that run hotter than this (even without an overclock), generally speaking, it is once one exceeds the 80C range that stability issues start emerging in my experience. Some cards/chips can exceed this, but if you are running that hot, you need to keep a very close eye on things (and be prepared to downclock or buy another card eventually...excessive heat will shorten the card's lifespan).

2) Is that a laptop GPU you are overclocking?...Don't do it! Frankly, it is not the best idea to run such work on a laptop, and many would advise against it in general. I have been successfully running CPU and GPU work on laptops for years, however, without issue. I do so by taking all the necessary precautions: running in a relatively cool environment with a laptop cooler (with active fans) and a regular cleaning schedule of canned air at least every month (including less frequent opened-case cleanings). Even with those precautions, I would not overclock anything. Doing so risks running into other issues (see #1 above).

3) Go up in small increments when overclocking. For the most part, the shader and core clocks are linked for GFN work (the exception is for GTX 2xx cards and related Quadro/Tesla cards built on these chips). When you find a clock that is unstable, back off a couple of overclock steps and you should be fine (assuming #1 above isn't an issue).

4) The GFN project appears to be particularly sensitive to memory overclocks. These don't gain you greatly in overall speed increases anyway, so I wouldn't recommend playing with them at all. Indeed, if you are not overheating and haven't done much to the shader clocks but are still getting stability issues, you might consider downclocking the memory (similarly to the workaround for the GTX 550 Ti cards).

5) Remember that overclocking necessarily uses more power. This in turn puts more stress on your power supply, thereby increasing overall heat. This leads to several possible issues including A) over stressing the power supply resulting in shortened PS life span, possible shorts in other parts of your system such as the motherboard, and non-GPU instability; B) hotter running GPUs and CPUs (I have reduced heat in some systems just by installing a more powerful/efficient PS); and C) problems with GPUs that do not have extra power connectors (i.e., the PCIe slot is limited to 75W...overclocking some cards without external power can exceed this limit, and running at the limit can produce instability in some systems).

6) That GT530 you bought is never going to be a GTX 560 Ti (or fill in whatever card comparison you like). That is, you are not going to make a mid-range card out of an entry-level card nor are you going to make a top-end one starting with a mid-range. You may find that you can take a card from a particular series and overclock it successfully to perform at or near the stock clocked card from the next higher series (e.g., I have my wife's superclocked EVGA GTX 550 Ti performing about as well as a stock clocked GTX 460), but you are not going to be able to do better than that 99% of the time (*note: there may be rare exceptions). If you really want a top-end card, buy one...you aren't going to overclock your way there with something else.

Overclocking can be a very useful tool to increase performance for your equipment. This can be a Win-Win situation where the user gets more credit and increases their chance of finding a prime and where the project benefits from the overall increased work completion. But it is a Lose-Lose if you don't do it very carefully and responsibly.

Are you planning to add a section for the GFN sub-project on a user's home screen?
E.g., where you can see how many tasks you've computed and how many credits you've earned for this sub-project, and where you can click through to your prime list, etc.?

The project's still in beta. Slowly, they've been adding in all the webserver parts:

The badges are in.

The stat export is in, so GFN shows up on the free-DC sub-project stats.

The application page lists the Genefer applications.

EVERYTHING necessary to make GFN actually work! :)

I think this is the only remaining part to go, and it will be done.

Rytis added GFN stats to the account page. :)

I just realized I forgot one (make that two, um, three...) item(s) from my list:

I am never one to complain about getting too many WUs LOL, but I am getting way too many WUs to do in the time given to do them, so I changed my computing preferences to 1 day of work and it is not working. I got 49 WUs to do by 8:17:29 tomorrow night. I have lots of others to do before that.

My GPU is a GTX 560, I am averaging around 20 WUs a day, and I have Vista, if that helps.

Is there any way of controlling the amount of WUs I am getting? I really hate to abort them because I can't do them in time.

I am never one to complain about getting too many WUs LOL, but I am getting way too many WUs to do in the time given to do them, so I changed my computing preferences to 1 day of work and it is not working. I got 49 WUs to do by 8:17:29 tomorrow night. I have lots of others to do before that.

My GPU is a GTX 560, I am averaging around 20 WUs a day, and I have Vista, if that helps.

Is there any way of controlling the amount of WUs I am getting? I really hate to abort them because I can't do them in time.

Thank you for your time in answering ~smiles~

You can reduce the additional buffer in Boinc preferences. Setting it to 0 (or 0.1, for instance) would do the trick (assuming you have a permanent connection to the server).

You could also change the DCF (duration correction factor) in the file client_state.xml inside the primegrid folder.
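A minimal sketch of that DCF edit, assuming (as in BOINC 6.x) the factor lives in a <duration_correction_factor> element inside the project's block in client_state.xml. It operates on a sample fragment here; stop the BOINC client before touching the real file:

```python
# Hypothetical DCF tweak via plain text substitution on a sample fragment.
import re

sample = """<project>
    <master_url>http://www.primegrid.com/</master_url>
    <duration_correction_factor>8.500000</duration_correction_factor>
</project>"""

def set_dcf(xml_text, value):
    """Replace the duration_correction_factor value in a client_state fragment."""
    return re.sub(
        r"<duration_correction_factor>[^<]*</duration_correction_factor>",
        f"<duration_correction_factor>{value:.6f}</duration_correction_factor>",
        xml_text,
    )

print(set_dcf(sample, 1.0))
```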

I am never one to complain about getting too many wus LOL but I am getting way to many wus to do in the time given to do them, so I changed my computing preferences for 1 days of work and it is not working. I got 49 wus to do by 8:17:29 tomorrow night. I have lots of others to do before that.

My GPU is a GTX560, I am averaging around 20 wus a day and I have Vista if that helps.

Is there anyway of controlling the amount of wus I am getting? I really hate to abort them because I can't do them in time.

Thank you for your time in answering ~smiles~

Since this is still in beta, they have the deadlines set to 24 hours. You need to have your buffer set to less than that or you will have problems. Once we go up to a larger N they will most likely raise the deadline, if for no other reason than the WUs will be substantially longer. (We're currently at n=18. At n=19, the WUs are about 3 times longer, and at n=20, they're about 12 times longer. Our eventual target is n=22, where the WUs are 130 to 250 times longer.)

I think we're ready to move up to n=20 -- and the deadlines will have to be longer for that. With longer deadlines, you can make your buffers larger and Boinc will be able to handle it.
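Those multipliers can be loosely rationalized with a crude cost model (my own back-of-envelope, not the project's numbers): a PRP test of b^(2^n)+1 takes about 2^n modular squarings, each costing roughly 2^n·n transform work, so total work scales on the order of 4^n·n relative to n=18:

```python
# Crude scaling sketch, not the project's actual timing model.  It
# overestimates the reported multipliers (3x, 12x, 130-250x) but lands
# in the right ballpark.

def relative_work(n, base=18):
    """Work at level n relative to the base level, under a 4^n * n model."""
    return (4 ** (n - base)) * (n / base)

for n in (19, 20, 22):
    print(n, round(relative_work(n), 1))
```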
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

And if (for now) a WU on my card needs 58 minutes to finish, and I get 3600 credits for one: how many will I get for a WU that is a few times longer?
____________93*10^1029523-1 REPDIGIT MEGA PRIME :) :) :)
57*2^3339932-1 MEGA PRIME :)
10994460^131072+1 GENERALIZED FERMAT :)
31*332^367560+1 CRUS PRIME :)
Proud member of team Aggie The Pew. Go Aggie!

(We're currently at n=18. At n=19, the WUs are about 3 times longer, and at n=20, they're about 12 times longer. Our eventual target is n=22, where the WUs are 130 to 250 times longer.)

I think we're ready to move up to n=20 -- and the deadlines will have to be longer for that. With longer deadlines, you can make your buffers larger and Boinc will be able to handle it.

Are we just going to jump right over n=19 or do you just mean that the project is debugged enough to be able to move to n=20?

It's not my choice, but my opinion would be that there's no reason to do n=19 on the Boinc side since it's been crunched for over a year at that level on PRPNet.

N=18 made sense as a stepping stone because A) the WUs are actually useful, and B) they're not trivially short the way the earlier WUs were. But now that we have some experience running at decent lengths, I think the next step should be to an "n" that hasn't been searched before. As a beta test, I don't think there's anything to learn by doing n=19 here.

Perhaps we go straight to n=22, or maybe we crunch n=20 and then n=21, perhaps hoping to find a prime at those levels before moving on. I don't think any decision has been made.

There is a reason to stay at n=18, however: with all the GPU power available on the Boinc side, there's a possibility for finding some nice primes during the Tour de Primes. The winner of the green jersey this year, in my opinion, will probably be someone using a GPU crunching GFN at n=18.
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

There is a reason to stay at n=18, however: with all the GPU power available on the Boinc side, there's a possibility for finding some nice primes during the Tour de Primes. The winner of the green jersey this year, in my opinion, will probably be someone using a GPU crunching GFN at n=18.

Yes, finding prime even at n=18 would be nice.

There is another reason to stay at n=18 for a while longer...we are not quite finished with sieving for n=20,21,22.

Has anyone seen a probability calculation of how many primes we can expect at each n?
Maybe putting the primes at lower n's in a table would give us an idea of what to expect.
Primes at higher n's are rare...but how rare?
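One standard back-of-envelope heuristic (my own sketch, not a project calculation): by the prime number theorem a random integer near b^(2^n) is prime with probability about 1/(2^n·ln b), and candidates surviving a sieve to depth P are more likely to be prime by roughly a factor e^γ·ln P (Mertens' theorem). The inputs below are illustrative, not real sieve-file statistics:

```python
# Heuristic expected-prime count, back-of-envelope only.
import math

EULER_GAMMA = 0.5772156649

def expected_primes(n, b_lo, b_hi, survivors, sieve_depth):
    """Rough expected number of primes among `survivors` candidates with
    b in [b_lo, b_hi], sieved to `sieve_depth`."""
    b_mid = (b_lo + b_hi) / 2
    p_prime = 1.0 / (2 ** n * math.log(b_mid))           # PNT baseline
    boost = math.exp(EULER_GAMMA) * math.log(sieve_depth)  # Mertens factor
    return survivors * p_prime * boost

# Illustrative inputs: 1 million survivors at n=20, b up to 500k, sieved to 2^50
print(expected_primes(20, 2, 500_000, 1_000_000, 2.0 ** 50))
```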
____________My stats --- Badge score: 1*1 + 5*2 + 8*9 + 9*6 + 12*3 = 173

There is a reason to stay at n=18, however: with all the GPU power available on the Boinc side, there's a possibility for finding some nice primes during the Tour de Primes. The winner of the green jersey this year, in my opinion, will probably be someone using a GPU crunching GFN at n=18.

Unless someone has a finding at n=19 on PRPNEt side as you did before :)

On the debugging process, looking at my wingman results, I keep seeing a lot of tasks aborted (probably due to DCF issues), but also a high rate of "errors while computing" and "marked as invalid". This may be acceptable with tasks that take less than two hours on a low end card, but might mean a big waste of crunching power at higher n.

The positive side is that most "errors while computing" seem to happen at the beginning (looking at reported times) and are much more frequent than invalid tasks (from other causes). My sample is small though. Not sure if it represents the whole Genefer-BOINC universe.

There are a lot of invalid and errored workunits. Some of the users are running a mix of sieves and genefer, and I'm guessing they have o/c'd cards for the sieves, which causes issues with GC as we know. Almost every one of my units seems to have been run 3-5 times, but then we are still in beta mode so I kind of expect this.

There is another reason to stay at n=18 for a while longer...we are not quite finished with sieving for n=20,21,22.

We're not even close to being finished with the sieving -- but sieving is a "diminishing returns" endeavor; we have probably already eliminated about 95% of the total candidates that would be removed if the sieve were complete. For example, we're roughly 1/8th of the way through the n=22 sieve, and have eliminated about half of the 50 million candidates in the sieve file. For the remaining 7/8ths of the sieve, a paper-napkin calculation shows us eliminating maybe 1 million more candidates. 88% of the sieving remains, but it's only going to find 4% of the factors.

In other words, if we start crunching now, we're running at only 96% efficiency compared to if we waited. Of course, while we're waiting, we're running at 0% efficiency. At the speed the sieve is progressing, it looks to be a long wait for that last 4%.
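To put numbers on the paper-napkin argument (just re-deriving the figures quoted above, nothing official):

```python
# Rough n=22 sieve arithmetic as described above.
total_candidates = 50_000_000   # candidates in the sieve file
removed_so_far   = 25_000_000   # ~half removed in the first 1/8th of the sieve
removed_later    = 1_000_000    # napkin estimate for the remaining 7/8ths

remaining_now   = total_candidates - removed_so_far   # tests if we start now
remaining_final = remaining_now - removed_later       # tests if we wait

# 24M/25M = 96%: starting now "wastes" only ~4% of the crunching.
print(f"{remaining_final / remaining_now:.0%}")  # 96%
```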

Is it worth sieving? Yes, because it's still more efficient at removing candidates. But it's not, in my opinion, a reason not to start crunching for PRP. I'll probably start crunching at N=22 on my own if we're not at least at n=20 by the end of the month.

At n=20 and n=21 we're not as far along as we are at n=22, but because of the diminishing returns, in terms of factors, the equations are fairly similar.

As for "how many primes..." I'll leave that to others to answer.
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

On the debugging process, looking at my wingman results, I keep seeing a lot of tasks aborted (probably due to DCF issues), but also a high rate of "errors while computing" and "marked as invalid". This may be acceptable with tasks that take less than two hours on a low end card, but might mean a big waste of crunching power at higher n.

The positive side is that most "errors while computing" seem to happen at the beginning (looking at reported times) and are much more frequent than invalid tasks (from other causes). My sample is small though. Not sure if it represents the whole Genefer-BOINC universe.

This is what I see looking at my wingmen:

Lots of "aborted by user". This may be due to DCF problems. Most people probably did not bother to use my procedure for fixing the DCF, so even though the estimated GFLOPS was corrected, it will take a while for the DCF to drop to a reasonable value.

There are a LOT of errored WUs, with a lot of causes: some I can guess at, some I can't. There's a heck of a lot of ways to make a GPU task go bad. Most of them cause the WU to fail at initialization, or shortly thereafter. That's not so bad.

There's a lot of people who are getting what appear to be overclocking-related errors. Most people overclock their GPUs. We're quickly learning that the hardware circuits, either the FPU or the video RAM, are highly susceptible to errors when overclocked.

So lots of people are going to get errors until they lower the clocks on the GPU. Then they'll get FEWER errors. Fewer errors isn't so bad on 90 minute WUs, but it's going to be a killer with 200 hour workunits.

My recommendation therefore is, was, and probably always will be to run GPUs at stock clocks when running Genefer. What's the point of running 10% faster if you never complete a WU?

Experience on the 550 Ti seems to indicate that it's the memory clock that's most important in reducing the error rate. There's not a whole lot of data to back that up, however, and it's only a theory at this point.
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

I have a lot of errored units. It doesn't show me which card ran the unit, but I suspect it's the 8800 and not the 460 that is having the errors. I don't recall any errors before adding in the 8800 to test it out, since I got it used. It's not only these units I'm seeing errors on; the GPU tasks for MilkyWay have also been erroring out since adding the 8800.

Is there a way to set the units to check if the card is compatible before trying to run? It won't be an issue for me much longer; I'm waiting on a few more pieces to come in for a new system and the 8800 is going in that. But for those who are running more than one card, with a possibly incompatible card....
____________
771*2^1354880+1 is prime

I have a lot of errored units. It doesn't show me which card ran the unit, but I suspect it's the 8800 and not the 460 that is having the errors. I don't recall any errors before adding in the 8800 to test it out, since I got it used. It's not only these units I'm seeing errors on; the GPU tasks for MilkyWay have also been erroring out since adding the 8800.

Is there a way to set the units to check if the card is compatible before trying to run? It won't be an issue for me much longer; I'm waiting on a few more pieces to come in for a new system and the 8800 is going in that. But for those who are running more than one card, with a possibly incompatible card....

V1.06 was intended to diagnose your exact problem.

What happens is that A) the WUs are configured on the server to only be sent to computers with an appropriate (Compute Capability 1.3 or higher) GPU, and B) GeneferCUDA checks that it's running on an appropriate GPU.

What seems to be happening is that the server correctly sends you the WUs, but the Boinc client doesn't seem to be smart enough to understand that they can only run on one of your two GPUs. Once GeneferCUDA starts running, it immediately detects that it's running on the wrong GPU and aborts.

If you look at the result page for the failed WUs, you'll see this in the stderr output:

If you are running a boinc client 6.13.x or higher, you can use this construct in cc_config.xml to tell the client not to use the 8800 for geneferCUDA:

<exclude_gpu>

Don't use the given GPU for the given project. If <device_num> is not specified, exclude all GPUs of the given type. <type> is required if your computer has more than one type of GPU; otherwise it can be omitted. <app> specifies the short name of an application (i.e. the <name> element within the <app> element in client_state.xml). If specified, only tasks for that app are excluded. You may include multiple <exclude_gpu> elements. New in 6.13
<exclude_gpu>
<url>project_URL</url>
[<device_num>N</device_num>]
[<type>nvidia|ati</type>]
[<app>appname</app>]
</exclude_gpu>
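As a concrete (hypothetical) example, assuming the 8800 shows up as device 1 and guessing at the app's short name -- check client_state.xml for the real values -- the whole file would look something like:

```xml
<cc_config>
  <options>
    <exclude_gpu>
      <url>http://www.primegrid.com/</url>
      <device_num>1</device_num>  <!-- assumption: the 8800 is device 1 -->
      <type>nvidia</type>
      <app>genefer</app>          <!-- assumed short name; verify in client_state.xml -->
    </exclude_gpu>
  </options>
</cc_config>
```

Re-read the config file from the BOINC manager (or restart the client) for the change to take effect.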

I'll try it out when I get a chance. Right now I'm just going with having another GPU application sending work. Since the 460 is faster than the 8800, it picks these up before the 8800 has a chance, as the 8800 runs the longer GPUGrid tasks in about 24-30 hours....

It's not perfect, but it ensures the 8800 isn't always running and erroring a Genefer task....

And if it helps with anything, my 460 hasn't errored any units and it's oc'd to 865/1730/1950 with the voltage set at 1 V.... it stays at a constant 68C crunching a Genefer unit with the fan at 90%....
____________
771*2^1354880+1 is prime

I'll try it out when I get a chance. Right now I'm just going with having another GPU application sending work. Since the 460 is faster than the 8800, it picks these up before the 8800 has a chance, as the 8800 runs the longer GPUGrid tasks in about 24-30 hours....

It's not perfect, but it ensures the 8800 isn't always running and erroring a Genefer task....

And if it helps with anything, my 460 hasn't errored any units and it's oc'd to 865/1730/1950 with the voltage set at 1 V.... it stays at a constant 68C crunching a Genefer unit with the fan at 90%....

You know, based on what everyone's saying I'm thinking it might be more of a problem with the temperature than with the clock.

It might be simple as just pegging the fan at 100%.
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

Are you planning to add a section for the GFN sub-project under a user's home screen?
E.g. where you see how many tasks you computed, how many credits for this subproject, and where you can click on your prime list, etc.?

The project's still in beta. Slowly, they've been adding in all the webserver parts:

The badges are in.

The stat export is in, so GFN shows up on the free-DC sub-project stats.

The application page lists the Genefer applications.

EVERYTHING necessary to make GFN actually work! :)

I think this is the only remaining part to go, and it will be done.

Rytis added GFN stats to account page. :)

I just realized I forgot one (make that two, um, three...) item(s) from my list:

Are you planning to add a section for the GFN sub-project under a user's home screen?
E.g. where you see how many tasks you computed, how many credits for this subproject, and where you can click on your prime list, etc.?

The project's still in beta. Slowly, they've been adding in all the webserver parts:

The badges are in.

The stat export is in, so GFN shows up on the free-DC sub-project stats.

The application page lists the Genefer applications.

EVERYTHING necessary to make GFN actually work! :)

I think this is the only remaining part to go, and it will be done.

Rytis added GFN stats to account page. :)

I just realized I forgot one (make that two, um, three...) item(s) from my list:

The first two I forgot because those pages are currently disabled for the challenge. They may, in fact, already be done.

The third is something out of sight, and it may well already be done.

Also 'Pending credits' page doesn't support GFN.

Yes it does. Just the amount of credit that's pending is borked. But that's always the case with workunits for which you always get the same amount of credit.
____________PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)

And if it helps with anything, my 460 hasn't errored any units and it's oc'd to 865/1730/1950 with the voltage set at 1 V.... it stays at a constant 68C crunching a Genefer unit with the fan at 90%....

I've also got a 460, but I run it at stock (1350). 1730 is... a lot faster!

Last night I clocked my card to 1630. Everything ran hotter, and faster. That cut 15 to 20 minutes off the WU.

With the big WUs, overclocking like that will mean cutting a day or more off the run time. Very tempting -- but I probably still won't do it and opt for an extra margin of stability.
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

Are you planning to add a section for the GFN sub-project under a user's home screen?
E.g. where you see how many tasks you computed, how many credits for this subproject, and where you can click on your prime list, etc.?

The project's still in beta. Slowly, they've been adding in all the webserver parts:

The badges are in.

The stat export is in, so GFN shows up on the free-DC sub-project stats.

The application page lists the Genefer applications.

EVERYTHING necessary to make GFN actually work! :)

I think this is the only remaining part to go, and it will be done.

Rytis added GFN stats to account page. :)

I just realized I forgot one (make that two, um, three...) item(s) from my list:

And if it helps with anything, my 460 hasn't errored any units and it's oc'd to 865/1730/1950 with the voltage set at 1 V.... it stays at a constant 68C crunching a Genefer unit with the fan at 90%....

I've also got a 460, but I run it at stock (1350). 1730 is... a lot faster!

Last night I clocked my card to 1630. Everything ran hotter, and faster. That cut 15 to 20 minutes off the WU.

With the big WUs, overclocking like that will mean cutting a day or more off the run time. Very tempting -- but I probably still won't do it and opt for an extra margin of stability.

1730 is the max stable clock on my ASUS TOP 460 card...the gain is almost exactly 20 minutes per unit. Heat doesn't exceed 68C. Above 1730 (really 1728 in actual clock), I get immediate errors, which is likely due to actual hardware issues (clock timings, etc.) since heat doesn't go up much per clock bump in my case.

And if it helps with anything, my 460 hasn't errored any units and it's oc'd to 865/1730/1950 with the voltage set at 1 V.... it stays at a constant 68C crunching a Genefer unit with the fan at 90%....

I've also got a 460, but I run it at stock (1350). 1730 is... a lot faster!

Last night I clocked my card to 1630. Everything ran hotter, and faster. That cut 15 to 20 minutes off the WU.

With the big WUs, overclocking like that will mean cutting a day or more off the run time. Very tempting -- but I probably still won't do it and opt for an extra margin of stability.

1730 is the max stable clock on my ASUS TOP 460 card...the gain is almost exactly 20 minutes per unit. Heat doesn't exceed 68C. Above 1730 (really 1728 in actual clock), I get immediate errors, which is likely due to actual hardware issues (clock timings, etc.) since heat doesn't go up much per clock bump in my case.

I actually lowered it because of heat issues during the challenge. I had it stable at 900/1800/2000 with the voltage at 1.025 V, but the fan would go back and forth between 80% and 100%. I haven't tested those settings on Genefer, but it's stable at 875/1750/1950. It's in the range where GPU usage isn't a constant 99%, but it crunches without issues and no screen lag. If I go higher I have to increase the voltage, and I really don't want to do that with both cards in, since it seems to set the voltage for both and the 8800 isn't oc'ed. I've been a little iffy on running the fan at 100% for too long, even though it was more likely coincidence that setting my 295 to 100% fried it 5 minutes later....

The answer is technically yes, since the OSX app is very similar to the Linux app.

However, my understanding is that there are very few compatible Apple computers out there, which is why this is a low priority. Those computers that ship with Nvidia GPUs generally ship only with single-precision GPUs.

Looking at your computers, you have one computer running Darwin with 2 8800 GT GPUs (those are single precision), another with a GT 120 GPU (also single precision), one that has a GT 130 (single precision), and some more that don't have GPUs.

If you have an OSX computer that has a compatible GPU, there would be more incentive to make a build for OSX.
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

I know it's still beta, but I was noticing that there are many lower-end to middle-speed cards testing, but only two 570s, one 580, and no 590s, unless I missed someone in the threads. All the fastest cards are still on PPS Sieves, and I would think you would want more top cards beta testing, so a better incentive might be needed. The current credit is not drawing enough of the big guns in as of now, especially someone with a GTX 590 (or a pair), as there may be unknown issues with those dual-chip cards.

My example of the different credit for the current n=262144 was merely a suggestion, and I know as n increases so will the credit. I was just thinking that determining a baseline "credits per minute/second/GFlops" number now, which can then be used to extrapolate to bigger n's, might draw more fast cards to beta testing. This is all assuming that more fast cards are wanted/needed for testing, and eventually for longer-term crunching, through a higher credit incentive.

And as Mike said in another thread, doubling n doesn't translate into doubling WU time; it's more than double, so just doubling 3600 or whatever number when testing goes to 524288 won't work. Any other suggestions that may be helpful? Or is this all a cart-before-the-horse thing?

The question of how credit is determined to begin with was a serious but curious one. Although, if Pandora's box is how it's done, then John or Rytis (or both) must know Pandora pretty well, what with the different sub-projects and all. ;)

NeoMetal*

I have a GTX590 and just crunched a couple of WU's.
The card is stock and the run times were 3285 seconds.
Are these times ok?... The block size is set to zero.

Thanks for the response. At the time I asked, I was considering upgrading my Mac Pro to a GTX 285. But I have since learned that my 2006 vintage does not support it. Ah well.

I have the same mac pro, 2006 model. Do you know of any Nvidia DP card for this model?

I do not know if your specific model will handle it (power supply issues, etc.), but a Quadro FX 4800 was made for the Mac. It is DP capable (basically the same thing as a GTX 260), and it is a fairly big card so it needs a tower case.

Thanks for the response. At the time I asked, I was considering upgrading my Mac Pro to a GTX 285. But I have since learned that my 2006 vintage does not support it. Ah well.

I have the same mac pro, 2006 model. Do you know of any Nvidia DP card for this model?

I do not know if your specific model will handle it (power supply issues, etc.), but a Quadro FX 4800 was made for the Mac. It is DP capable (basically the same thing as a GTX 260), and it is a fairly big card so it needs a tower case.

Yes, I had/have my preferences set to 0.5 days, and the CUDA GFN WUs still want to run in priority mode. Some of my boxes will have as many as 5 or 6 of the GFN WUs in some stage of completion even though there are only 1 or 2 GPUs in the box. I was watching one box yesterday, and 2 or 3 of the WUs finished a few minutes over their deadline because other, later-deadline WUs ran first ...
____________

Yes, I had/have my preferences set to 0.5 days, and the CUDA GFN WUs still want to run in priority mode. Some of my boxes will have as many as 5 or 6 of the GFN WUs in some stage of completion even though there are only 1 or 2 GPUs in the box. I was watching one box yesterday, and 2 or 3 of the WUs finished a few minutes over their deadline because other, later-deadline WUs ran first ...

This problem still happens if you are crunching PPS LLRs while running GFN WUs, because their GFlops estimates are way off themselves. The PPS LLRs will raise your DCF to around 9, which will screw with the GFN WUs (and most others as well), which have now been corrected to near 1.0. As a PPS finishes, it will raise the GFN estimates to 9x what they are supposed to be, throwing BOINC into high-priority mode IF you have more than 2-3 GFN WUs waiting to run. They keep changing the ranges of the PPS WUs (sometimes running more than one range at a time) and keep forgetting to set the proper GFlops. I know they have a lot going on all the time, BUT this shouldn't happen CONSTANTLY! Aaaahh, we can dream of a day when ALL sub-projects' DCFs are at (or near) >>>[[1.0]]<<< so we can end all this BOINCing crazy Hi-P.

I haven't been running any other PrimeGrid Wu's since the Last Challenge ended ...

Your DCF number goes up instantly but goes down very slowly, so you may not have crunched enough other WUs, including GFNs. To get a DCF down from 9 to 1 will take 20-30 proper-GFlops WUs. If you have gone through more than that AND not crunched even one PPS, then something else that I can't think of at the moment is happening to cause Hi-P.
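To illustrate the "up instantly, down slowly" behavior with a toy model (my own sketch, not the actual BOINC client code: assume DCF closes a fixed 10% of the gap to the correct value with each completed result):

```python
def results_until_dcf_recovers(dcf, target=1.5, step=0.1):
    """Count completed results needed for DCF to relax from `dcf`
    down to `target`, closing `step` of the gap to 1.0 per result.
    A simplified model, not BOINC's exact update rule."""
    count = 0
    while dcf > target:
        dcf += step * (1.0 - dcf)   # move 10% of the way back toward 1.0
        count += 1
    return count

# From 9 down to within 0.5 of the correct value:
print(results_until_dcf_recovers(9.0))  # 27
```

Under these assumed parameters the count lands in the same 20-30 WU ballpark quoted above.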

I haven't been running any other PrimeGrid Wu's since the Last Challenge ended ...

so you may not have crunched enough other WUs, including GFNs.

NeoMetal*

lol, yeah I need to crunch more ... ;) ... I've been running the GFNs for 4-5 days now at least; the WUs take under an hour each, so I think I've done at least 30 on each box by now ... I've also been running WCG & the SIMAP challenge non-stop since the PG challenge ended, so I think I've run enough other WUs too ... :)
____________

Regardless of a DCF problem, 24 hours deadline is way too short. Anyone who keeps even 2 days of queue will be constantly over-downloading and late to return results. And it won't play nice with other cuda projects.
____________
Dublin, CA

AS of UTC time 2012-02-07 05:30:45 (local 23:30)
I am trying to get Genefer to limit WUs to what can be processed before
local time 2012-02-07 23:30:45, out of 53 WUs (avg of 2.5 hrs elapsed time per WU).

Have boinc setting set to 0 days additional, and still get slammed.

Also processing llr-TRP's - no overload on them

Thought the thread said Genefer now had 3-day deadlines.

Isn't so in Oklahoma anyway.

Same here. We may have to work through the WUs that were already in the pipeline before the deadline was increased before we start seeing WUs with a 3-day deadline; don't know about that fer sure though ...

Same here. We may have to work through the WUs that were already in the pipeline before the deadline was increased before we start seeing WUs with a 3-day deadline; don't know about that fer sure though ...

This is correct. I should have clarified that the 3-day deadline was only for new tasks that weren't already in the buffer. Once those clear out, you'll see 3 days. Apologies for the delay.
____________

Are you allowing 3 days for those of us that got 3 days worth of 1 day wus?

I don't think they can do that, but I wouldn't mind being proven wrong! The best thing to do is to abort enough WUs so that the remaining ones aren't at risk of missing the deadline. There's not much downside to aborting WUs that haven't started yet; they just get sent out again to other computers.

It's good Boinc etiquette to abort WUs you can't finish on time: it's better for the project than letting them miss the deadline, and it's better for your wingmen, too.

____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

Are you allowing 3 days for those of us that got 3 days worth of 1 day wus?

I don't think they can do that, but I wouldn't mind being proven wrong! The best thing to do is to abort enough WUs so that the remaining ones aren't at risk of missing the deadline. There's not much downside to aborting WUs that haven't started yet; they just get sent out again to other computers.

It's good Boinc etiquette to abort WUs you can't finish on time: it's better for the project than letting them miss the deadline, and it's better for your wingmen, too.

that's why I asked... have already aborted quite a few that were about to go out of date... but I have known projects that have extended deadlines in the past when something like downtime intervened...
____________

I suspect that yes, those hits count for everyone who finds the prime. There could have been a Prime found recently that wasn't announced yet, or Free-DC could have recorded everyone who "found" that first prime before the validator was fixed. That was at least 3 people, and might have been 4. That could also be the reason for Free-DC showing 4 hits.

Maybe the other one was not confirmed prime.

Definitely not.

The way I understand it, PRP in Genefer is the same as "proof" in other programs. Apparently Yves Gallot was rather stringent in what he considered "proof". With the exception of a known condition where small powers of two yield false positives, I doubt anyone will ever see a Genefer PRP that is composite. The odds of a "PRP" from a GFN of that size being composite are approximately 1:1000000....0000000 (that last part has 3 million zeros in it). Winning the lottery a thousand times in a row is much more likely than one of these PRPs being composite. :)
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

The way I understand it, PRP in Genefer is the same as "proof" in other programs. Apparently Yves Gallot was rather stringent in what he considered "proof". With the exception of a known condition where small powers of two yield false positives, I doubt anyone will ever see a Genefer PRP that is composite. The odds of a "PRP" from a GFN of that size being composite are approximately 1:1000000....0000000 (that last part has 3 million zeros in it). Winning the lottery a thousand times in a row is much more likely than one of these PRPs being composite. :)

Unfortunately, the math to show that his test proves primality doesn't exist.

Unfortunately, the math to show that his test proves primality doesn't exist.

Maybe someone here should work on that...

GeneFer does a base-2 PRP test (not even an SPRP test). It doesn't prove primality. Of course PRPs are exceedingly rare, especially at large sizes. But you still have to prove them.

Which brings me to a thought I have been having: it is possible for the GeneFer code to be adapted to help with primality proving. By using different base(s) for the PRP testing, it is possible to implement a rigorous primality test.
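For anyone unfamiliar with the terminology, the base-a Fermat PRP test under discussion is simple to state. This is only a toy sketch on small numbers; Genefer of course uses FFT-based arithmetic to run the same test on multi-million-digit candidates:

```python
def gfn(b, n):
    """Generalized Fermat number b^(2^n) + 1."""
    return b ** (2 ** n) + 1

def fermat_prp(p, a=2):
    """Base-a Fermat PRP test: a prime p (not dividing a) satisfies
    a^(p-1) == 1 (mod p). Some composites pass too, hence 'probable'."""
    return pow(a, p - 1, p) == 1

print(fermat_prp(gfn(2, 4)))       # 65537, a Fermat prime: True
print(fermat_prp(gfn(2, 4), a=3))  # re-check with a second base: True
```

Passing several independent bases raises confidence, which is the idea behind re-testing a reported PRP with a different base.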

Iain can probably build an up to date version if there's a desire for it and the admins want to provide it. If they gave the go ahead for a Linux version, I don't see why they wouldn't want a Windows version.

(I don't currently have an environment where I can build the assembler parts of the x64 and x87 CPU versions on my Windows box, but Iain has it set up in a Windows VM on his Mac. Yeah, I know. How ironic is that?)

Because of the inefficiency of working with such "small" numbers (the chunks of work being given to the GPU are so small that more time is being spent preparing the work for the GPU than actually doing the computations on the GPU), GenefX64 is almost as fast as GeneferCUDA.
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

GenefX64 has now been released for 64 bit Linux. The GFN524288 search is now open to hosts with 64 bit Linux.

Run time on an Intel Core i5 @ 2.8 GHz with 4 GB RAM is ~36 hours.

Any plans for a Windows version?

BOINC development is ongoing for the following MacIntel, Windows, and Linux applications:

GeneferCUDA (recommended for N>=262144)

GenefX64 (recommended for 131072<=N<=1048576)

Genefer (recommended for N=131072)

Genefer80 (available for N=131072; required for N<=65536)

All at some point can become available in BOINC. However, with only two GFN projects in BOINC now, the two applications you'll see are GeneferCUDA and GenefX64.

NOTE: even Genefer80 has a maximum b limit. To complete testing up to a desired b level, pfgw will have to be used. Each application is significantly slower than the previous one, so as b increases it quickly becomes increasingly inefficient to search for GFN primes. The only practical path forward is better hardware and/or better software.
____________

Because of the inefficiency of working with such "small" numbers (the chunks of work being given to the GPU are so small that more time is being spent preparing the work for the GPU than actually doing the computations on the GPU), GenefX64 is almost as fast as GeneferCUDA.

Yeah, a known characteristic.

Would it be possible to share some parts and do several small tests on the GPU at once?
There might be a common part (preparation of the data or whatever) when N is the same for each test.
Or is it a totally crazy idea?
____________My statsBadge score: 1*1 + 5*2 + 8*9 + 9*6 + 12*3 = 173

Because of the inefficiency of working with such "small" numbers (the chunks of work being given to the GPU are so small that more time is being spent preparing the work for the GPU than actually doing the computations on the GPU), GenefX64 is almost as fast as GeneferCUDA.

Yeah, a known characteristic.

Would it be possible to share some parts and do several small tests on the GPU at once?
There might be a common part (preparation of the data or whatever) when N is the same for each test.
Or is it a totally crazy idea?

Feel free to grab the source code and do whatever you want to it. :)

I won't, for one REALLY good reason: GeneferCUDA is useless at that N because b would be too high for it to process. There's absolutely zero benefit to making the program run faster at low N when there's no possibility of it being used. Plus the numbers at that N are pretty small by today's standards and wouldn't be anywhere close to getting on the top 500 list.

If anyone wants to put that kind of effort into improving software, I can think of a lot better places to expend that effort!
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

Interesting little glitch. My modest nVidia GTS 450 has been running a Genefer World Record WU for quite a while. I had to suspend the computation due to a really nasty thunderstorm. When I restarted the computation, the time to completion went from approximately 60 hours to over 200 hours. Well, I'm now over the "end date". I'm letting it run to the end.
I noticed there were 4 other "wingmen" with computation errors on this WU. I am assuming that if mine computes properly (before the next "wingman" completes the WU), I will get some credit?

My other 2 rigs have very high end ATI/AMD graphics cards, shame you can't utilize them on this part of PrimeGrid.

Interesting little glitch. My modest nVidia GTS 450 has been running a Genefer World Record WU for quite a while. I had to suspend the computation due to a really nasty thunderstorm. When I restarted the computation, the time to completion went from approximately 60 hours to over 200 hours. Well, I'm now over the "end date". I'm letting it run to the end.
I noticed there were 4 other "wingmen" with computation errors on this WU. I am assuming that if mine computes properly (before the next "wingman" completes the WU), I will get some credit?

My other 2 rigs have very high end ATI/AMD graphics cards, shame you can't utilize them on this part of PrimeGrid.

First of all, that WU's deadline is TOO SHORT. It's 5 days, and that's just not enough. So it's definitely not your fault. Current WUs (which run about twice as long) have a deadline of 3 weeks. So you should be fine on the next WU.

I'm guessing the total run time for your card will be around 150 hours, so there's no way you could have made the deadline anyway. Yes, you should keep crunching: first of all, with the high error rate right now, there's a decent chance you'll return the WU before any other wingman does. But even if they do return it before you, as long as you return yours before the WU gets purged from the database, you will still get full credit, provided your result is correct.

I'll let the admins know about this.

____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

The preferences page should be updated to include the Linux CPU app. It is running quite well: less than 24 hours with a 2500K in a VM, which is not that bad compared to some slow GPUs, considering that a GPU runs a single WU while a CPU can run as many WUs as it has cores.
Is there any news on genefer64 for Windows on BOINC?
____________
676754^262144+1 is prime

The build is done, we're just in the process of implementing the new builds on BOINC.
____________Twitter: IainBethune
Proud member of team "Aggie The Pew". Go Aggie!
3073428256125*2^1290000-1 is Prime!

Running my first Genefer on my new GTX 560 Ti, installed only a few hours ago. It's the first time I have run a GPU, so I'm kind of disappointed that the first WU has taken over 4 hours already. I was under the impression that using a GPU would cut the run times down dramatically.
The WU is currently about 56% complete, so it looks like it will take another 3-plus hours to finish.

Is there any information on how to optimise GPUs for running PrimeGrid in general?

Settings for the Genefer are CUDA Short WUs.

Am I missing something, especially after reading earlier in this thread about these WUs taking only about 3994 seconds to complete?

Look forward to your replies.

Kind regards

The Knighty NI
____________
The art of flying is throwing yourself at the ground and missing.

The crunch time has been cut down significantly! Using a modern CPU, it would take somewhere between 30 and 40 hours to finish such a unit, if I'm not mistaken. :)

To respond to your question about the earlier units: yes, you are missing something. They were much smaller and therefore of course took a lot less time to check.
____________PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)

For the short WUs, my 460 would complete 280658^524288+1 in 5:13:08. That's the current leading edge for the short WUs (i.e., the longest short WU sent out so far), and the time estimate should be accurate to about 1% or better.

I'm not sure off the top of my head how the 560 Ti compares to the 460, but unless the 560 Ti is a slower card, 8 to 9 hours seems too long.

Just read your second post: the GPU shouldn't be turning on and off like that. You should check your BOINC settings, under Tools-->Computing preferences...:

On the Processor Usage tab:

Make sure "while processor usage is less than ## percent" is set to 0.

Also make sure "Use at most ##% of CPU time" is 100.00

Other settings on that tab will turn off computing, but that may be desirable depending on how you use the computer.
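For headless machines (or if you prefer not to use the GUI), the same two settings can be sketched in a global_prefs_override.xml file in the BOINC data directory. The element names below are taken from BOINC's preferences documentation, but verify them against your client version before relying on this:

```xml
<!-- global_prefs_override.xml (BOINC data directory) - a sketch.
     suspend_cpu_usage = 0 means "never suspend because of other CPU load";
     cpu_usage_limit = 100 means "use at most 100% of CPU time". -->
<global_preferences>
   <suspend_cpu_usage>0</suspend_cpu_usage>
   <cpu_usage_limit>100</cpu_usage_limit>
</global_preferences>
```

After saving the file, restart the client (or tell it to re-read local preferences) so the override takes effect.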

____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

It could be a driver issue (266.58), a normal hiccup, or something inside the OS...
I have something like that with Collatz on my GTS 450. In the past it used only low CPU time/usage, but yesterday the same app used a full CPU core, with run time = CPU time.
The only difference is the driver; my Lubuntu 11.10 installation updated to 295.40...

Every CUDA-enabled card has a "Compute Capability" (or CC) level, and all cards with CC 1.3 or above can be used. The CC for each card is listed in the specifications for the card.
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

I have a Genefer 1.07 CUDA32_13 WU that has been running on my computer for 174 hours with 73 hours left. Is this a Generalized Fermat Prime Search WU? If so, why did I get it? I have disabled the long search; my preference is set to short tasks only! The WU was due on 11/16 at midnight, so it will be long past that when it's finished, and all my other GPU WUs for other projects are now behind as well. What good is setting my preferences if the project ignores them? I was gone for 4 days, or I would have aborted this WU. I will be really pissed if this WU fails <angry face>

Nope, that's just a regular short Genefer WU. It's taking such a long time because the 600-series NVidia cards are really bad at the double precision arithmetic that Genefer needs. Even the top-of-the-line cards are, if I'm not mistaken, slower than the fastest cards from the 500 series. Seeing as you have one of the lower-end 600s, yours is particularly slow.

Basically, what can be concluded from this, if nothing else is hindering your speed, is that running Genefer on low-end 600s is best avoided, as even an average CPU seems to be faster.
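For anyone curious what these WUs actually compute: Genefer runs a probable-primality test on generalized Fermat numbers b^2^n+1. Here is a toy sketch in Python, just the textbook Fermat test on tiny numbers, not Genefer's actual implementation; the real program does the same kind of modular exponentiation on numbers with millions of digits, using floating-point FFT multiplication, which is exactly where GPU double precision throughput matters:

```python
# Toy sketch (NOT Genefer's algorithm): a Fermat probable-prime test
# applied to small generalized Fermat numbers F(b, n) = b^(2^n) + 1.

def gfn(b, n):
    """Return the generalized Fermat number b^(2^n) + 1."""
    return b ** (2 ** n) + 1

def is_fermat_prp(N, base=3):
    """N passes the Fermat test if base^(N-1) == 1 (mod N)."""
    return pow(base, N - 1, N) == 1

# 2^(2^4)+1 = 65537 is the famous Fermat prime F4.
print(is_fermat_prp(gfn(2, 4)))  # True
# 5^(2^1)+1 = 26 = 2*13 is composite.
print(is_fermat_prp(gfn(5, 1)))  # False
```

A passing result only says "probably prime"; real projects follow up a PRP hit with a deterministic proof.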

If you still want to use your GPU for PrimeGrid, I would suggest switching it over to PPS Sieve. That project only requires single precision arithmetic, something at which the 600 series is very good.

Oh, and with regard to this unit: you will still get credit if you manage to get it done before it gets purged from the system. I'm not entirely sure when that will happen, but I'm sure someone else will share that info :)
____________PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)

Indeed, the GT 620 is a very slow GPU -- not just for double precision programs like Genefer, but for all programs. It's one of the "value" or "entry level" GPUs in the 600 series.

That being said, a 250 hour run-time for that work unit is surprisingly long -- the fastest GPUs can complete those about 100 times faster.

I'm wondering if there's something that's somehow interfering with the GPU and keeping it from running full speed?
____________Please do not PM me with support questions. Ask on the forums instead. Thank you!

I did have the screensaver running, which is now shut off. By slow, which specs should I be looking at for speed? The GT 620 has a core clock of 700 MHz and the GTX 560 Ti has a 900 MHz core; that doesn't seem like much of a difference. The memory speeds are quite different, though.

The GT 620 is not a Kepler card; it is a rebranded 520, as I recall (i.e., a Fermi card). The most important thing to look at for CUDA performance differences is the number of shaders (and then the shader clock after that). As I recall, a GT 620 has 48 shaders. Compare that to a mid-range Fermi card like the GTX 460 (336 shaders) and you can see how substantial the performance difference is.
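To put rough numbers on that comparison, a crude scaling model is shaders times shader clock. The shader counts come from the post above; the clock figures are approximate reference specs (an assumption, since actual cards vary), and the model ignores memory bandwidth and architectural differences:

```python
# Crude CUDA throughput model: shaders x shader clock.
# Shader counts are from the post above; clock values are approximate
# reference specs (assumptions). Ignores memory bandwidth/architecture.

def relative_throughput(shaders, shader_clock_mhz):
    return shaders * shader_clock_mhz

gt620  = relative_throughput(48, 1400)   # GT 620: 48 shaders (rebranded 520)
gtx460 = relative_throughput(336, 1350)  # GTX 460: 336 shaders, mid-range Fermi

print(f"GTX 460 is roughly {gtx460 / gt620:.1f}x a GT 620 by this measure")
```

Even this back-of-the-envelope estimate puts the GTX 460 at roughly 7x the GT 620, which goes a long way toward explaining the run times seen in this thread.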

Better projects for a card like yours would be PPS Sieve on BOINC; on PRPNet, the card would also do okay with the Wieferich or Wall-Sun-Sun prime searches.