Probably your end. Look in results.dat - have you got a single result in there with something like 800 million Mpts (and a yield of about -1.99)? Because that's what turned up here! I've added a filter to stop silly results like that.

Think I found it in the results.dat on the last possible machine (of course): <snipped> Any clue to the problem?

You gave 5 full simulation outlines there, so I'm going to run them myself, just to check. The first one has given a result in the 1.25-1.28 range so it's not the simulations.

Yes, but not out of the ordinary. I dialed it back some though. Tests stable. Also, seems to me we had a power outage about that time, so the machine undoubtedly rebooted at least once, maybe more. My UPSes aren't able to handle the wattage load of the dual GPUs any more so they're just plugged into the surge protection side.

If power surges were a problem, I'd have massively off results all the time (I think I've had 6 power outages longer than an hour so far this year). Overclocking can produce erroneous results in calculations ("tests stable" doesn't mean much unless you're using a chip analyser to test every transistor).

There was also the theory that it was an overclocked machine that returned a result on a project (I think it was RC5-64) where, because it was OC'd, it messed up the 'correct' unit, meaning the project went to 107% before finding it.

It does have a lot going on at any given time. Two cores are running DPAD and the other 4 cores are currently doing a combination of NFS, Spinhenge and Leiden, plus there's a GTX460 running PrimeGrid PPS Sieve and an HD5870 running 2 instances of MilkyWay. I did have an NFS WU error a while ago, but that's normal given the huge memory footprint of v1.09 (a percentage of the v1.09 WUs are bad, so they award that version extra credit to make up for the bad ones). I did lower the OC after the DPAD glitch and have noticed no other problems since. DPAD seems to play very nicely with the other projects, and I've noticed that GPU output has increased since I've been running it on a couple of cores of the x4 and x6 machines. It seems to release timeslices to the GPUs better than most BOINC projects (running in "b" mode).

Not running any of the BOINC screensaver junk. The GPU tasks all use a certain amount of CPU time, and if anything makes them wait for the CPU it will slow the GPU task noticeably. Since the GPUs put out a huge amount of work I like to let them run as freely as possible. I tried the command-line version of DPAD first and it works well, but the background mode is even better. According to Task Manager it's still getting its full 2 cores even in "b" mode.

The other thing it might be is that Muon1 makes copious calls to Sleep(0) while it's waiting for the (in your case) 2 worker threads to finish each timestep. That call is a pure release of time back to the OS, telling it to run through all its other processes' time slices before coming back to Muon1. Since Muon1 timesteps are often going by at 10 per second or more, I'd guess there are lots of opportunities for task switching.
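For anyone curious what that looks like in code, here's a minimal sketch in C++ (my own illustration, not Muon1's actual source - the function names, the thread-per-timestep structure and the worker count are all made up): a coordinator polls a "workers finished" counter and calls Sleep(0) on every pass to hand its remaining timeslice back to the scheduler.

Code:

#include <windows.h>   // Sleep()
#include <atomic>
#include <thread>
#include <vector>

std::atomic<int> workersDone{0};   // bumped by each worker when its share of the timestep is done
const int kWorkers = 2;            // hypothetical: the 2 worker threads mentioned above

void workerTimestep() {
    // ... this thread's share of the physics for one timestep would go here ...
    workersDone.fetch_add(1);
}

void runTimestep() {
    workersDone.store(0);
    std::vector<std::thread> workers;
    for (int i = 0; i < kWorkers; ++i)
        workers.emplace_back(workerTimestep);

    // Poll until both workers report in, yielding the rest of our quantum on every pass
    // so the scheduler can run other ready threads (e.g. GPU feeder threads) in between.
    while (workersDone.load() < kWorkers)
        Sleep(0);

    for (auto& w : workers) w.join();
    // ... combine the workers' results and advance to the next timestep ...
}

int main() {
    for (int step = 0; step < 10; ++step)   // timesteps often tick over at 10+ per second
        runTimestep();
    return 0;
}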

In the 8-and-a-bit years since v4.3 came out, featuring a lattice system, there have been 44 lattices made public (plus two more closed scan projects). So I thought it would be a good idea to release a series of videos showing the best design for each lattice, and who did it.

To date, there have been 4, with a new one each Friday, and you can get them at the following links.

Edited to add: Maybe I misunderstood you, and you misunderstood me. I'm doing the videos in lattice order, chronologically, so the only order is the order the lattices were released in. The first one with an [ARS] member is PhaseRotEd1.

You've pretty much single-handedly kept us in this project. We need more crunchers to help out. This would be a good focus for a front-page article if someone will step forward and do it. I'm sure a lot of us can help with ideas, wording, etc., but we need a point person...

Congrats to Whizbang for making 100 million, and commiserations to Dave for being overtaken (again) after all these years. (I still feel that he has been the mainstay of the team, as he has been pumping them out longer than anyone, including me.)

With the extra crunching provided by Whizbang, CanoeBeyond & 7im, there's a good chance of pegging Team Norway unless they have been keeping some powder dry.

Anyway, we have overtaken team Free-DC, and that must stick in their craw a bit, as I remember a few of the old "Ha Ha's" coming our way when they passed us several years ago.

Wow, nice dumpage! It's good to see you and Dave still at it. You're right, that man has been carrying the banner here longer than I can remember. You've both been cranking away since before me, and long after I lost the plot. I still see my name in the stats, and see 7im has passed me, so congrats to him who is so wise in the ways of science.

I've had DPAD running no-net for a while on an office machine, so it's built up some results. I've also now got DPAD on another couple of machines that struggle to meet BOINC deadlines, so my participation should go up a little.

I remember reading before that DPAD is basically a genetic algorithm, rather than having individual work units, and that it was possible to essentially build your own branch by keeping your results and not downloading the best ones, or something like that. Can anyone remember exactly what to do and what files to edit? I fancy going my own way.

Yes indeed.

OK, in config.txt make sure this line is set as follows:

Code:

Download sample results file after a number of hours (0=don't, min. 6): 0

All done :-)

<edited to add this bit> I don't use samplefiles in my clients, and I'm on my own optimisation pathway. You can see it quite clearly in this yield plot for Linac900Ext9Xc2. The bright green line at the bottom is my own pathway, and while it's not as developed as the others, it's also the result of just ONE computer (a 3-year-old Q6600 at that, which is also used to edit and process videos, such as the best-design-for-each-lattice videos - more on that tomorrow). <end of edit>

Actually, after a LOT of badgering from me, and going through the results from the last few years, we've noticed a 'trend': 90% of the work follows the samplefiles extensively, meaning everyone focuses on the current best result and doesn't spread out more. So now the samplefile will only be updated once a week, rather than 8 times a day.

The best explanation I've had is this (and it helps if you know British geography, since both Stephen and I are British): imagine trying to find the highest point on Earth. You start off in England and soon find the Pennines. Everyone rushes there and looks for the highest point there, meanwhile ignoring everywhere else. Then someone comes across the Welsh mountains, and one or two look there (since that's now in the samplefiles, and it's more mountainous), and eventually everyone's rushed off to find Snowdon (the highest Welsh mountain). Eventually someone finds the Grampians, looks there, calls everyone over to search, and they finally find Ben Nevis. That's now 3 regions, all fairly close together, that have had everyone's attention while the rest of the planet has been ignored, all to find Ben Nevis (1,344 m / 4,409 ft), and every client is now looking for better results around it (because that's what the predominance of their own results are). Meanwhile, no-one's found the Alps or the Rocky Mountains, which have peaks 3x higher (Mont Blanc at 4,810 m [15,781 ft] and Mt Elbert at 4,401 m [14,440 ft] respectively), let alone the Himalayas and Everest at 8,848 m (29,029 ft).

The idea in the end is that we each do more searching individually, and then share the best once a week, rather than the current scramble to be 'where the best is'.
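Purely as a toy illustration of that point (my own C++ sketch, nothing to do with Muon1's actual files or algorithm): a handful of simulated 'clients' hill-climb a bumpy one-dimensional landscape, and the sharePeriod value stands in for how often the samplefile gets published. Set it to 1 and everyone piles onto the first decent peak found; set it higher and the clients explore more of the landscape on their own before converging.

Code:

#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// A bumpy "yield" landscape: several local peaks, with the best one near x = 8.
double landscape(double x) { return std::sin(x) + 0.1 * x; }

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> start(0.0, 10.0);
    std::normal_distribution<double> mutate(0.0, 0.2);

    const int clients = 8, generations = 200;
    const int sharePeriod = 50;   // how often the "samplefile" (global best) is published

    std::vector<double> x(clients);
    for (auto& xi : x) xi = start(rng);   // each client starts searching somewhere different

    for (int g = 0; g < generations; ++g) {
        // Each client mutates its own design and keeps the change if it improves the yield.
        for (auto& xi : x) {
            double trial = xi + mutate(rng);
            if (landscape(trial) > landscape(xi)) xi = trial;
        }
        // Periodically publish the global best and let every client reseed from it.
        if ((g + 1) % sharePeriod == 0) {
            double best = x[0];
            for (double xi : x) if (landscape(xi) > landscape(best)) best = xi;
            for (auto& xi : x) xi = best;
        }
    }
    for (double xi : x)
        std::printf("client finished at x=%.2f, yield=%.3f\n", xi, landscape(xi));
    return 0;
}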

Yes, I realise I need to avoid downloading sample results. But how do I use the results found on machine A (which has never downloaded sample results) to seed the search on machine B? I have a vague memory of being able to edit results files from machine A into the lattice files or sample files on machine B to start it off on its own way.