[TAM] Team Atomic Milkshake - DPAD Perpetual Thread

Like in EON & POEM, it's a day of thirteens. Biggles dumped an admirable 13 million+ points, and of course Me@Home and Dave Peachey are cranking out major credits on a daily basis. For help from above in this project: as always, we trust in GOD.

I managed to download the project, but now that I'm running it, the cmdline client tells me it cannot download latticelist.txt and goes through a timeout countdown before going ahead and calculating something. After completion it is unable to locate servers.csv and starts the entire loop over again.

I'm trying to get back into distributed computing after being away and the ARS DC thread recommended this project as one that needs extra power.

All is OK for me, and the server status page indicates that the servers themselves are all good. All that seems to have happened is that the servers.csv file, which contains the list of servers the project returns results to, is missing or wrongly named on the project server. I tested an older one there and it worked OK.

You can paste the following lines into a text editor and save the file as servers.csv, and it should all work OK for you.

Another option: if you run the config_nonet batch file, you can just save results and dump them whenever you want, and you'll no doubt find the file issue has been resolved in the next day or so. I always find it a bit more fun to drop a bunch without warning :-D

Great to have you back crunching!

I think there is something wacky between me and the server on the internet, as I can't load the status page you linked above. I copied your servers.csv and it's no longer giving that error, but it still cannot retrieve latticelist.txt at this time. It seems to be happily producing results, though, as results.txt is growing larger.

Just looking at the stats, I'd agree with you that something is amiss. Free-DC reports no teams returning any work today, and most teams were way off their normal production yesterday. The nice thing with this project is that you don't need to contact the server to keep cranking away.

Stephen posted on the forums yesterday that he'd be unable to fix the servers if something went wrong, because he's at a conference in Beijing. Six hours later, the stats server stopped running (the last run was at 9am UTC on the 16th). Now the servers aren't responding much at all.

Keep crunching, it'll be back up next week.

And the problem is that, being based out of a secure government facility (RAL), I can't connect in and restart things. And even if I could, it's all custom code written by Stephen; I wouldn't know where to start in finding the issue.

... been a bit quiet here lately, although there are still some intrepid crunchers (Dave, me@home, whizbang & Galuvian) continuing on with the good work.

Unfortunately, I have had to reduce my output considerably, because my main farm moved from a place with free power to one where power has to be paid for. Another farm bit the dust (the client was sold to a new owner). That put a huge dent in my crunch power (I had long ago stopped crunching at home for power-cost reasons).

@GOD For a project that's been around for so many years, your 60% of the team total is a huge testament to your loyalty. Thanks for laying the rubber down while you had it. I'm sure having a smaller farm is going to feel odd, but it's a numbers game: as long as everyone keeps a minimum of one processor running, as a group we'll provide that combined 40%.

Ain't that the truth, @GOD; you've kept us in a good place at DPAD. I mostly pulled out after it became clear that TN wouldn't be caught off guard again. Still run some through Yoyo, though...

Yes, I've been focusing exclusively on the "nosample" lattice (hence the high output) and was surprised/delighted to get the resulting muon percentage above 4%, especially as I don't engage in any "tweaking" of results for improved outputs.

I'll stay on this one for as long as possible, until Stephen B decides it's reached its inevitable conclusion; I'm not going anywhere at the moment.

The rate overall has gone down since Stephen (and the project) moved from the Rutherford Appleton Laboratory (just outside Oxford, UK) to Brookhaven National Laboratory (Long Island, New York).

It's stayed fairly constant at around 100 trillion particle-timesteps a week since the move, whereas it was doing 300 trillion/week in mid-2013. (I run weekly stats most Tuesdays, which I post to the project's Twitter, Facebook, and Google+ pages.)

So about three years (and one measly page on this thread) back, I had the idea of creating my own "branch" based on combining results.dat files that I'd been saving. I can't entirely remember my train of thought from back then, but I'm resurrecting that idea somewhat. Maybe extend it to make it a TAM branch or something.

Anyway, my thinking was: if I pick just one lattice (say Linac900Ext7Xc2) and run with no samplefile, while keeping 5 MB of the best results, then after a while I could cherry-pick the top 100 results to create a samplefile of my own (as long as I remembered not to overwrite it with a new one after a few hours). In theory I could then share this samplefile with other teammates, so we'd start working our own branch, so to speak. Is that feasible? Are there any other considerations?
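For what it's worth, the cherry-picking step itself is trivial to script. Here's a rough Python sketch; note that the semicolon-separated key=value layout and the "#best=" score field are my assumptions, not the actual Muon1 results.dat format, so adapt it after looking at a real file:

```python
# Sketch: keep the top N results from a results.dat-style file.
# ASSUMPTION: each line is semicolon-separated key=value fields and
# carries its score in a "#best=" field -- a hypothetical name, not
# necessarily what the client actually writes.

def score(line: str) -> float:
    """Pull the score out of a result line; 0.0 if it has none."""
    for field in line.strip().split(";"):
        if field.startswith("#best="):
            return float(field.split("=", 1)[1])
    return 0.0

def top_results(path: str, n: int = 100) -> list[str]:
    """Return the n highest-scoring non-blank lines of the file."""
    with open(path) as f:
        lines = [ln for ln in f if ln.strip()]
    return sorted(lines, key=score, reverse=True)[:n]
```

Then writing `top_results("results.dat")` out to a new file would give you the candidate samplefile to share.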

Is it possible to convert my results into a samplefile? I know I'd need to take out the hash, but is there any significance to the order of the parameters? Do the #runs and #gen bits mean anything? If so, can I calculate them myself? Failing that, I guess the best I could do towards creating an Ars "branch" would be to distribute a results.dat file based entirely on the one lattice, and have anybody using it make sure they use it as their starting point and disable samplefiles themselves.

It's just an idea, but I like the idea of breaking out of the convergence that the standard samplefiles give.

Take out the hash, yes, but the order of the values means nothing. runs is how many times it has been run (runs=5 means it was run 5 times, generally as a quarantine recheck after an initial 'best' result; it also means the MPTS value will be 5x higher, or however many runs it did). The gens parameter says what method was used to generate the design; the only one I know is gens=0, which is "random".
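So the conversion is basically "drop the hash, keep everything else". A minimal Python sketch, assuming semicolon-separated key=value fields with the hash in a "#hash=" field (both the layout and that field name are guesses on my part, so check against a real line first):

```python
# Sketch: turn a results.dat line into a samplefile line by dropping
# the hash. ASSUMPTION: fields are semicolon-separated key=value pairs
# and the hash lives in a "#hash=" field; both are guesses at the layout.

def strip_hash(line: str) -> str:
    fields = [f for f in line.strip().split(";") if f]
    kept = [f for f in fields if not f.startswith("#hash=")]
    # Order of the values reportedly doesn't matter, so keep it as-is;
    # #runs= and #gen= survive untouched (gens=0 means a "random" design).
    return ";".join(kept) + ";"

print(strip_hash("s1l=120;s1f=44;#runs=5;#gen=0;#hash=deadbeef;"))
# -> s1l=120;s1f=44;#runs=5;#gen=0;
```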

I've always run my own branch of results on every lattice back to solonoidsto15cm. I don't like working with the pack, and sometimes I get lucky with a manual design.

Really going to need to script some of this, due to the vast number of results files I have kicking about (most, sadly, 95% duplicates, with just a handful of new results). Still, there have got to be a few new results in there somewhere...

EDIT: realised the line I posted had #gen=5; right at the end of it, so I used that instead of the #gen=0 at the beginning of the line.

I get an email notice when there's a new comment.

Just pull the data from your results files, or even just your results.txt files.

As you could probably tell from the Facebook and Twitter accounts, I stopped about a year ago. I'd put almost 15 years in on it, and things weren't really moving on. I think I've spoken to Stephen once in the past 18 months (and then on Twitter), where it used to be almost daily. Part of that is because he's been busy with his new job (he's been at Brookhaven National Lab in NY state, rather than RAL in Oxfordshire) in that time, and I've been busy getting my books written, and so on.

I still run it now and then, but I have so much work these days, and it often takes a lot of CPU power (video editing, for instance).