This makes it a royal PAIN trying to figure out which machine has gone down, especially since Moo Wrapper (and BOINC in general) never seems to identify the GPUs in a machine correctly to the various stats pages.

8)
Message boards :
Number crunching :
Multiple GPUs on same WU
(Message 7681)
Posted 20 Jun 2017 by QuintLeo
It wouldn't be practical on Moo Wrapper to run one work unit on multiple GPUs even if BOINC specifically supported it. Moo is nothing more than a "wrapper" for the Distributed.net client, and that client does not support running one "block" on more than one GPU at a time.

It's possible to have AMD and Nvidia GPUs mixed on a Linux machine - but an AMD GPU MUST be the one used for video output, you MUST install the AMD drivers first, THEN install the Nvidia drivers from the .run package MANUALLY using the "--no-opengl-files" command line switch - otherwise the Nvidia OpenGL files overwrite the AMD OpenGL files and completely bork the AMD drivers to the point the AMD GPUs flat out won't work at all.
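
For reference, the install order boils down to something like this - a sketch only; the .run filename below is illustrative, and the exact AMD driver install method varies by distro and driver package:

```shell
# 1. Install the AMD driver FIRST (method depends on your distro/package).
# 2. THEN install the Nvidia driver from the .run package, telling it to
#    skip its OpenGL files so it doesn't clobber the AMD OpenGL libraries:
sudo sh ./NVIDIA-Linux-x86_64-xxx.xx.run --no-opengl-files
```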

In theory you can hand-install the applicable files but that's a royal pain at best and will leave you with an unusable system if you miss even ONE of them, or use ONE wrong file.

Mixed GPUs are one of the very few things Windows does better than Linux, and IMO one of only 2 things it does a LOT better (the other being game support).

As I recall from the rebadge between the HD 7xxx series and the R9 2xx series, your 270X was the HD 7870 in the previous generation - I've got *2* of those cards that have been running Moo for quite a while now, under Windows in the past and currently in a LINUX machine.

Any of the versions between 15.7 and 15.12 should WORK though - if you're still getting failing work units, you have a configuration issue somewhere or your card might be dying.

Windows 10 is NOTORIOUS for "upgrading" working drivers to BROKEN newer ones - it's one of the reasons I only have one Win 10 machine (for playing DirectX 12 games on) and REFUSE to allow that broken PoS joke of an OS near any other machine I will ever own.

150 MKeys/sec on HD 630 graphics (Pentium G4600) if you're not doing ANYTHING else on the system, commonly more like 90 if you ARE using the CPU for anything - my generations-old A10-5700 manages a bit over 400 WITH the CPU doing stuff.

For reference, you have to go to the higher-end HIGH COST workstation CPUs from Intel to get a "better" iGPU than the 630 (it's the same iGPU that's on the Kaby Lake i7-7700k among other Intel offerings).

15)
Questions and Answers :
Unix/Linux :
opencl problem in dual gpu(amd and nvidia mix)
(Message 7621)
Posted 17 Apr 2017 by QuintLeo
BTW - mixing an AMD APU and an AMD discrete card is trivial, if you have a motherboard with an updated-enough BIOS to recognise the AMD discrete card *OR* if you have a recent-enough APU. I've had SEVERAL such systems running at times, though I do recall having issues with my A10-5700s not being usable when I installed a discrete card - until I upgraded the BIOS on the motherboards in question.
Sadly, those motherboards STILL won't let me use the GPU on my A10-5700s and my RX 470s at the same time - but they work fine with my older HD 7xxx and R9 2xx cards now.

The main limit though is that you can only get display output from ONE brand of card on the same machine, AFAIK, without doing some sort of "switcher" application that reconfigures things then reboots the machine.

You CAN get both to crunch at the same time - best to use an AMD card as your display output card though, as there is an option on the Nvidia driver installer to skip installing OpenGL, so it does NOT end up overwriting some of the files the AMD driver installs - the AMD driver doesn't have that option.

Unfortunately, I can't find my note with the command-line option to use (I moved recently and am still getting stuff unpacked and sorted out) - I DO recall you have to manually install the Nvidia driver AFTER you have installed the AMD driver.

A10-7860K or A10-7890K (512 shaders, running the GPU overclocked a bit at 800 MHz, GCN) both achieve a hair over 600 Mkeys/sec (which happens to be the same figure I see out of my HD 7750 cards that are also 512 shaders at their stock 800 MHz clock).

I have been curious about what kind of keyrate the Intel graphics-on-the-CPU stuff was capable of. I'm afraid I have to say that I am not impressed, though given the other competitive benchmarks I've seen I'm not exactly shocked.

Intel should just give up on trying to compete with AMD or Nvidia on graphics, they've never been close and usually aren't even in the same ballpark.

Try driver version 15.12 - that seems to be the best-performing, most stable driver for any AMD GCN card prior to the Fury.

I suspect however that you have a configuration issue somewhere.

19)
Message boards :
Number crunching :
AMD RX 480
(Message 7617)
Posted 17 Apr 2017 by QuintLeo
Distributed.net RC5-72 speeds have almost ZERO resemblance to any other benchmark in existence, so don't bother trying to compare.
The only benchmark I have ever seen that is even CLOSE to similar is Bitcoin mining, which is also a form of cryptographic work at its root - but it still involves a lot more data movement, so memory throughput affects it a lot more than it does RC5-72 work.

The keyrate of any AMD GPU that is capable of running Dnet at all is almost 100% proportional to the product of the number of shaders/cores on the card times the core clockrate.
The RC5 CODE is very simplistic - it does a few rotates and a few adds and not much else - so comparing it to a benchmark that does a LOT more complicated code and needs a lot of data to/from memory doesn't work.

I don't have an exact number, but it works out to 1 Megakey/sec per shader/core at a little under 700 MHz clock - and it doesn't matter if it's a current GCN card or the older TeraScale stuff like the HD 76xx series and older (including the applicable GPU on AMD A-series APUs).
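
That rule of thumb is easy to turn into a quick estimate - a sketch only; the helper name and the clock figures are mine for illustration, not anything from the client:

```shell
# Rough RC5-72 keyrate estimate: ~1 Mkey/sec per shader at ~700 MHz.
# Hypothetical helper; real rates vary a bit by architecture and driver.
estimate_keyrate() {
  shaders=$1; clock_mhz=$2
  awk -v s="$shaders" -v c="$clock_mhz" \
    'BEGIN { printf "%.0f Mkeys/sec\n", s * c / 700 }'
}
estimate_keyrate 512 800    # HD 7750 / A10-7860K GPU at 800 MHz
estimate_keyrate 2304 1120  # RX 480 at its 1120 MHz base clock (assumed)
```

The 512-shader case lands a bit under the "hair over 600" I see in practice, which is about what you'd expect from a back-of-the-envelope number.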

The memory usage is TINY - all the "core" code AND all the data fits easily in cache memory, even on the older CPU-based clients, since at least the AMD K6 and Intel Pentium II/Celeron series which had cache memory sizes in the several KILObyte range.
It is entirely practical to clock your memory on a GPU to the LOWEST it will go and have ZERO effect on keyrate, while saving a bit of power and heat.

RX 480 has 2304 cores, the R9 290X has 2816 (22% more) - the RX 480 DOES clock higher but it is AT BEST going to have similar keyrate - it comes down to can you clock the RX 480 more than 22% higher than you managed on your R9 290X.
On the POSITIVE side, the RX 480 uses a LOT less power to manage similar keyrate.
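
The shader-count arithmetic above is simple enough to check directly:

```shell
# R9 290X has 2816 shaders, RX 480 has 2304 - the ratio tells you how much
# higher the RX 480 must clock to match keyrate (illustrative arithmetic).
awk 'BEGIN {
  ratio = 2816 / 2304
  printf "RX 480 needs %.0f%% higher clock for the same keyrate\n", (ratio - 1) * 100
}'
```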

I don't personally have a R9 290X or a RX 480, but I DO have several R9 290 and a pair of RX 470 - both slightly downsized versions of your cards - and they show the same "about the same keyrate due to more cores but slower clock" comparison you see.

20)
Message boards :
Number crunching :
Multiple GPUs on same WU
(Message 7616)
Posted 17 Apr 2017 by QuintLeo
As far as I know, MooWrapper specifically runs each work unit on an individual GPU, even when you have multiple cards of the same model in the machine, and there is no viable way around this (technically there IS a potential work-around via editing the "master" copy of the .ini file, but it messes up the wrapper's work-assignment code if you try it).

If you're using a mixed-card machine, this actually makes a LOT of sense: it prevents one WU from tying up all your GPUs when there is only enough work left for one of them to do, while having little if any downside on a machine where all the GPUs are the same.

The base DNet client itself does not assign one block of keys to multiple GPUs to work on at the same time - even when running in a multi-GPU setup, it still assigns each block to a single GPU - so there is no actual benefit to trying to force MooWrapper to assign one WU to all the GPUs in your system.