I noticed an issue where the datafiles for v0.18 were missing a parameter, which was leading to some WUs not validating and some errors on the server side. I've updated the application to v0.19, which should fix this issue, although it may cause some of the WUs generated for v0.18 to error out. Sorry about that!

http://csgrid.org/csg/top_hosts.php is soon going to get "fun".
As far as I can see, most computers have only 32 gigabytes of memory, so once they finish the old work and download the new, RAM-hungry 0.19 application, most machines attached to this project will start to fail and become unusable because of the lack of RAM.
It now needs approximately 1.5 GB per core.
I saw one of my hosts, with 32 cores and 32 GB of RAM, freeze completely (and it has an SSD as its system drive). So it's a complete waste of work: to unfreeze it I had to detach from the project and attach to another one, or disable the project. I don't plan to add more RAM to this host, so I won't be able to finish that work later, for example.
Maybe you could share an app or config to limit the number of cores used by the project until this situation is resolved.
Or, better, implement a feature like Amicable Numbers already has:
↑ You can set CPU core limits and fine-tune the GPU here ↑ in your preferences.
Best of all would be a quick per-host "cores per project" setting in the project's web preferences, to avoid that waste of work.
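For what it's worth, BOINC already has a client-side workaround along these lines: an app_config.xml file placed in the project's directory can cap how many of this project's tasks run at once. A minimal sketch (the project directory name and the limit of 16 are assumptions; on a 32 GB host, 16 tasks at ~1.5 GB each stays around 24 GB):

```xml
<!-- Save as app_config.xml inside the BOINC data directory, in the
     folder for this project (directory name below is an assumption). -->
<!-- Example path: .../projects/csgrid.org_csg/app_config.xml -->
<app_config>
    <!-- Run at most 16 of this project's tasks concurrently:
         16 tasks x ~1.5 GB = ~24 GB, leaving headroom on a 32 GB host. -->
    <project_max_concurrent>16</project_max_concurrent>
</app_config>
```

After saving the file, select "Read config files" in the BOINC Manager (no client restart needed) for the limit to take effect.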

Well, it's not quite that bad. I'm currently running 4 different searches:

http://csgrid.org/csg/exact/overview.php

Two of those are using the MNIST dataset (which should use the same RAM as before), and two are using the CIFAR-10 dataset (which will need 1+ GB of RAM). So only half the WUs being generated will require more RAM.

What I can do is modify the memory requirements in the workunits, if that would help the BOINC scheduler schedule things better?
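For those curious what that would involve: the per-workunit memory requirement the BOINC scheduler sees is the rsc_memory_bound field in the workunit's input template. A sketch of what raising it for the CIFAR-10 searches might look like (the value here is illustrative, ~1.5 GB in bytes; the other template fields are omitted):

```xml
<!-- Fragment of a BOINC workunit input template (illustrative values). -->
<workunit>
    <!-- Upper bound on RAM the task may use, in bytes.
         1.5 GB = 1.5 * 1024^3 = 1610612736 bytes. -->
    <rsc_memory_bound>1610612736</rsc_memory_bound>
</workunit>
```

With that bound set, the client should not fetch (or start) more of these tasks than the host's available memory can accommodate.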

"I'm currently running 4 different searches:" but work unit only from one.. this morning only from exact2 0,19.
i am just pointing on incomming situation , no problem with big ram app, just we need prepare before :-)

"I'm currently running 4 different searches:" but work unit only from one.. this morning only from exact2 0,19.
i am just pointing on incomming situation , no problem with big ram app, just we need prepare before :-)

So workunit names look something like:

exact_genome_1487653029_21_46088_0

This is basically:

exact_genome_<search start time>_<search id>_<genome id>_<attempt>

Workunits with a search id of 9 or 10 are running the larger CIFAR-10 dataset, and workunits with a search id of 11 or 12 are running the MNIST dataset. The search ID corresponds to the searches listed on the overview page.
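Based on that naming scheme, a small script can tell which dataset a task belongs to. This is just a sketch: the field order and the search-id-to-dataset mapping are taken from this post, and the mapping will change as new searches start (the workunit names below are made up for illustration).

```python
def dataset_for_workunit(name):
    """Guess the dataset from a workunit name of the form
    exact_genome_<search start time>_<search id>_<genome id>_<attempt>."""
    parts = name.split("_")
    # parts: ['exact', 'genome', start_time, search_id, genome_id, attempt]
    search_id = int(parts[3])
    if search_id in (9, 10):
        return "CIFAR-10"   # larger dataset, ~1+ GB of RAM
    if search_id in (11, 12):
        return "MNIST"      # similar RAM use as before
    return "unknown (search id %d)" % search_id

# Example (hypothetical workunit names):
print(dataset_for_workunit("exact_genome_1487653029_9_46088_0"))
print(dataset_for_workunit("exact_genome_1487653029_11_12345_0"))
```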

Thanks for the explanation of the Work Units.
That helps us to understand the activity we are seeing on our computers.

Any description of the project(s) to inform us about the productive science being done is also welcome.

Here's the latest version of a paper I've been working on which describes some of the most recent results and the algorithm I'm using:

https://arxiv.org/abs/1703.05422

The newly updated code adds some improvements to how I'm doing backpropagation, and it is also evolving the hyperparameters that control how backpropagation trains the networks.

I'm also evaluating things on the CIFAR-10 dataset (https://www.cs.toronto.edu/~kriz/cifar.html), which is more challenging than MNIST and uses color images, so if things work well on that, we'll be well suited to start working on Wildlife@Home images.

If you have any questions about the paper (it may be too academic) just ask away!