Has anybody successfully set up a rig with 5+ GPUs (or do you know someone who has) with the following mobos:

ASRock ATX Z270 Killer SLI/BR (LGA 1151)

Gigabyte GA-990FX-GAMING (AMD AM3+)

They have a good price where I live (the Gigabyte AM3+ is an option because I have an old Phenom II CPU around, so it would save some more money).

Thanks

THE "Z" BOARDS NEED UPDATED BIOS--

They can be tricky for 5+ GPUs. I don't have one. I do have several Gigabyte 990FXA boards; they need an updated BIOS and can be tricky. Try the ASRock H81 Pro BTC v2.0 for a mining board. It has an Intel LGA 1150 CPU socket.

The Gigabyte 990FXA boards were very difficult with Ubuntu 14.04.1, but loaded Win 7 or 8 with no problem. Later versions of Ubuntu (14.04.4+) were able to load. They will work with Sempron CPUs or better, and can unlock an AMD CPU for more cores. One of my Semprons unlocked to an AMD Athlon XII, the other did not. I just upgraded that board to an AMD 4350; it mines 24/7 on Win 7 x64 with 5 GTX 960 GPUs. Getting the 6th GPU to work was too much trouble. My other GB 990FXA board has 6 GPUs, Win 7 x64, nVidia 750ti GPUs and an AMD 4350 CPU. I will try 6 GPUs again on my GTX 960 rig when I get my first GTX 1060.

I don't know if the "6th GPU" problem was heat or lack of CPU power. The Sempron 145 is single-core; it worked in 2013-2014 for early mining algorithms. Newer algorithms may need more CPU power.

--scryptr

Thanks for your answer scryptr! So, do you think the ASRock Z270 Killer SLI would work with a BIOS update? I couldn't find many reports of mining rigs with this board, but I found one or two (and also a few complaining it didn't work). Its BIOS has the "TOLUD" setting, which I think is similar to the "Above 4G Decoding" setting necessary for 3+ or 4+ GPUs. The Asus Prime Z270-A I know is a sure shot, but it's much more expensive here...

IF YOU HAVE AN AMD CPU, USE IT--

I don't have either board that you are looking at. I'd go for saving money with the AMD board, because you already have a CPU that is enough for mining.

If you want to mine, buy a mining board. The BioStar and ASRock boards designed for mining will give you less trouble with 6+ GPU rigs. Remember that you will need a huge PSU for 6 GPU cards, unless you are mining on GTX 750ti or RX 460 GPUs.
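As a rough sanity check on PSU sizing, something like the back-of-the-envelope estimate below can help. The wattage figures are assumed ballpark TDPs for illustration, not measured values; check your card's spec sheet and add more headroom if you overclock:

```python
# Rough PSU sizing estimate for a multi-GPU rig.
# TDP figures below are assumed ballpark values, not measurements.
GPU_TDP_WATTS = {
    "GTX 750 Ti": 60,
    "RX 460": 75,
    "GTX 960": 120,
    "GTX 1060": 120,
}

def required_psu_watts(gpu_model, gpu_count, base_system_watts=100, headroom=1.25):
    """Estimate PSU wattage: GPU load plus mobo/CPU/risers, with 25% headroom."""
    load = GPU_TDP_WATTS[gpu_model] * gpu_count + base_system_watts
    return int(load * headroom)

print(required_psu_watts("GTX 960", 6))     # 6x GTX 960 rig
print(required_psu_watts("GTX 750 Ti", 6))  # 6x 750ti rig needs far less
```

With these assumed numbers, a 6x GTX 960 rig wants roughly a 1000W+ supply, while a 6x 750ti rig fits comfortably on a 600W unit, which matches the point above about smaller cards.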

You seem to know about some of the BIOS settings that are required. That varies per motherboard, and after the nightmare I had trying to load an early version of Ubuntu 14.04 on the 990FXA, I started buying the less expensive H81 boards with 4 PCIe slots. A sturdy 4 card rig was easier to manage; the motherboard cost $50 plus a $40 Celeron.

I have a problem with loading the OS: it tells me "xorg PROBLEM DETECTED", then reboots and shows: error: unknown filesystem, followed by a grub rescue> prompt.

What can it be, and how can I solve it? I used the flashing tools as described and tried at least twice. I am using an ASRock H110 and, at the moment, just one Manli P106-100 card, so I can test whether I can install the OS before installing all 13 cards.

I need one or two of the:

P106-100

to test and ensure nvOC will properly support these GPUs. A number of members have had problems using these GPUs. If someone is willing to sell me 1, or preferably 2, please PM me.

I have solved this issue by editing the line "XORG FAIL" to "XORG OK", which worked and the system started. Now I have a different problem:
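The flag edit described above can be scripted so it survives re-flashing. This is only a sketch of the mechanics; the demo file below is a temporary stand-in, not the real nvOC script path:

```python
# Sketch of the fix described above: flip the "XORG FAIL" flag to "XORG OK".
# The demo file is a placeholder; substitute the actual nvOC script path.
from pathlib import Path
import tempfile, os

def patch_xorg_flag(path):
    """Replace the failure flag with the OK flag in-place; return True if changed."""
    text = Path(path).read_text()
    if "XORG FAIL" not in text:
        return False
    Path(path).write_text(text.replace("XORG FAIL", "XORG OK"))
    return True

# Demo against a temporary file standing in for the real script:
fd, demo = tempfile.mkstemp(suffix=".sh")
os.close(fd)
Path(demo).write_text('STATUS="XORG FAIL"\n')
patch_xorg_flag(demo)
print(Path(demo).read_text().strip())
os.remove(demo)
```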

When I run more than 6-7 cards on the board, no matter in what miner, I either get error 15 ("cannot get current temperature") or the system just freezes after about half an hour or a bit longer, and the only thing that helps is to turn the power socket off and on... I have updated the drivers and am now waiting for it to freeze on 6 cards, since that works the longest of all the setups I tried. As soon as I try 10 or more GPUs, I either get error 15 or a frozen system. I don't think it's one of the cards, since I mixed them a couple of times, and what I noticed is that it crashes faster as I add more GPUs.
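One way to narrow down the "cannot get current temperature" symptom is to poll the GPU temperatures yourself and flag any card whose sensor stops responding. A minimal sketch, assuming output in the shape of `nvidia-smi --query-gpu=index,temperature.gpu --format=csv,noheader` (the sample text below stands in for a real call):

```python
# Sketch of a watchdog check: parse nvidia-smi-style CSV output and flag
# any GPU whose temperature field is missing or unreadable. The sample
# string stands in for real nvidia-smi output.
def unhealthy_gpus(csv_text):
    """Return indices of GPUs whose temperature field is not a number."""
    bad = []
    for line in csv_text.strip().splitlines():
        index, temp = [f.strip() for f in line.split(",")]
        if not temp.isdigit():
            bad.append(int(index))
    return bad

sample = "0, 61\n1, 64\n2, [Unknown Error]\n3, 58\n"
print(unhealthy_gpus(sample))  # index of the card the watchdog should flag
```

Run periodically (e.g. from cron), this can at least tell you whether the same physical slot or riser is involved each time the sensor drops out.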

Do you have any idea where the problem could be? I also have 2 PSUs connected together: one 600W Zalman powering the motherboard, and a 2400W HP server PSU which was remade for GPU mining and powers the risers and GPUs. Both of those seem to be working fine, so I have no idea where to look for a solution.

If anyone has had this problem, please let me know; it would be greatly appreciated... I'm even willing to donate to somebody whose advice helps.

PS. The cards are in stock mode, so it's not OC.

UPD: Been pretty stable on 6 cards; it has been running for 8 hours now. I will probably try to add a couple after the 16-20 hour mark...

If you have another rig, I would try testing each of the GPUs and verifying they work individually. If they all do, then I would get a pico for your server PSU and try using only the server PSU. If you are using an ATX PSU for the mobo and a server PSU for everything else without joining them, this can cause problems.

Thanks for your answer.

Unfortunately this is the test rig, and I don't have another rig to try each card by itself. I've been swapping cards in different orders, and the error seems to come up with different cards all the time and at different intervals: for example half an hour, just a couple of minutes, or even more than 10 hours. I've also tried different risers at different times; they seem to have no effect on when the error comes up. The PSUs are connected with the board adapter; I've tested each pin connector on each PSU with a voltmeter and detected no problem. Sometimes the system freezes straight away and sometimes it takes a couple of hours, which, as I said, doesn't seem to depend on the setup, as I've tried around 50 different rotations of risers/video cards. With the same cards and risers, the error can come up at different times. I also can't use just the server PSU, as it doesn't have 2 molexes to connect to the motherboard for additional power, only a PICO and an 8-pin for CPU power. I tried unplugging the connector between the 2 PSUs, and again it had no effect on when the error comes up.

Could this be a software-related problem, maybe drivers or something else? Also, can you advise on overclocking these cards? No method I could find (trying to attach a "fake" monitor to each GPU using the console and tweaking the system) would make any difference to the OC.

PS. The PSUs are connected to one power plug adapter which turns them on simultaneously anyway, so there shouldn't be any problem with synchronization, as the server PSU turns on with the ATX at the same time through the WiFi plug I have.

UPD. It ran for about 8-9 hours with 7 cards and still froze after that time... I'm not really sure what went wrong again, as I wasn't present at the time, but I suspect it was the same error.

I experienced something similar when I was building 2-PSU rigs, except in my case it would be the same GPUs causing the problems; if I took all the problematic cards out and built a smaller rig (1 PSU) just out of the "unstable" GPUs, they gave no issues and were stable for days. My conclusion was that the problem was the 2-PSU setups. In the end I managed to get one of the two double-PSU rigs stable with 11 cards; the other I simply swapped for one 1200W PSU and the problems went away. All the other 1-PSU rigs built from what I thought were defective cards are stable and "set it and forget it".

Were you using 2 ATX PSUs, or was one a server PSU like in my setup? I don't have quick access to new PSUs, so swapping them is going to take about 5 days to a week. It's very strange how this error seems to appear at completely unpredictable times. I also worked out a strange thing: when I disconnect the PSUs from each other, it seems more stable than with the relay connector synchronizing them. When they are connected, it takes a couple of minutes to freeze; otherwise it can work for a couple of hours even with more than 8 cards (using 11 atm; it seems to be impossible for some reason to plug in all 12 without something crashing).

Yes, those relays are not very stable; out of 3 of them, only one seems to work reliably for me. I can't speak to the actual causes of the crashes, since there are many variables (PSU, motherboard/BIOS/controllers, GPU manufacturers, etc.), but after battling with this issue for a few weeks I'm running stable with only one dual-PSU system. That system has all the cards on one PSU and the CPU/board/PCI-bus molexes on the other PSU.

Hm, do you mean that one PSU powers the CPU/board/additional PCI power molexes and the other one the risers and cards? I've tried again today and have been on for 3 hours with 11 cards; it seems the 12th one causes problems all the time. This looks like a new problem now: as soon as I install the 12th card, the drivers crash. I tried different risers and positions, but still no result; the card itself is okay, though, so I'm not sure what the cause is...

I personally don't really need that relay connector, since the server PSU turns itself on as soon as the power is on anyway, so I don't have a problem turning them both on at the same time.

I am trying to find somebody near my location to solder 2 additional molexes onto my server PSU, to try to power the board and cards off a single PSU instead of using 2, but I am thinking it's not going to be easy to find a reliable guy for this job...

After the change there are two processes in System monitor, but CPU usage is still the same (100% one core, 20% second core).

Is it possible that System monitor doesn't show the correct CPU utilization? Will it help if I change the CPU from a G3900 to an i3, or perhaps an i5?
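On Linux you can cross-check what System Monitor reports by computing per-core utilization straight from `/proc/stat` yourself. A minimal sketch; the two sample lines below stand in for real reads of `/proc/stat` taken a second or so apart:

```python
# Sketch: compute per-core CPU utilization from two /proc/stat snapshots
# of the same cpuN line, as a cross-check on System Monitor's numbers.
# The sample strings below stand in for real reads of /proc/stat.
def core_utilization(stat_line_a, stat_line_b):
    """Percent busy between two /proc/stat samples of the same cpuN line."""
    a = [int(x) for x in stat_line_a.split()[1:]]
    b = [int(x) for x in stat_line_b.split()[1:]]
    total = sum(b) - sum(a)
    idle = (b[3] + b[4]) - (a[3] + a[4])  # idle + iowait fields
    return round(100.0 * (total - idle) / total, 1)

before = "cpu0 100 0 50 800 50 0 0 0 0 0"
after_ = "cpu0 180 0 90 820 60 0 0 0 0 0"
print(core_utilization(before, after_))
```

If the numbers computed this way agree with System Monitor, the load really is on the CPU and a faster chip (or fewer GPUs per rig) is the answer.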

Fullzero, I see you have 13 GPUs on an ASRock H110 Pro BTC (I am using the same motherboard). What kind of CPU do you have so that everything works smoothly?

Using a kabylake i5;

I'm making another 13x rig soon with a g4560; I'll let you know if it works well as well.

Thanks for the info. I will buy an i5; it looks like the G3900 is not strong enough. Regarding the g4560: is the Intel Hyper-Threading bug fixed for the ASRock H110 Pro BTC by default?

And thanks for all the work on nvOC and great support by you and all the other forum members.

g4560 works as well.

I don't believe the H110 chipset has the HT bug, but I could be wrong. I also haven't tested with a Skylake CPU. If one is buying a new CPU for this mobo, I recommend a Kabylake g4560 or higher, as it will unlock 2400 for the RAM.

Hi, same problem here with CPU usage: a rig with a Skylake G3900 on an H110 + 13 GTX 1060 GPUs, using Claymore. If I disable some GPUs (keys 0-9, so only 10 can be disabled), the CPU usage goes down almost to normal. The system is CentOS 7 + GNOME, but I tested with nvOC and got exactly the same behavior. With OC the CPU load is bigger, very heavy; with no OC it gets a lot better, but is still too heavy. By the way, I'd like to know how nvOC starts on bootup. I checked cron and the Unity startup scripts etc. but found nothing.

First off, THANK YOU SO MUCH for this!!!! My ONLY issue: I'm running the MSI Z170A GAMING M5 and I have 8 cards connected. (I've tested it on a machine that had 5 cards, then added the 6th and it picked it up.) I took it to our server rack, where each machine has 8 cards running, and it's only picking up 7. Is there a quick command fix I can do for this, or should I start from a fresh copy on a thumb drive?

Any help would be greatly appreciated. If I can get all 8 rigs working with 8 cards, I'd absolutely be in debt to you.

Thanks again!

denellum

When I tested this mobo I was only able to get 7x GPUs to work correctly.

A while back a member reported having an 8x rig with this mobo, but I don't remember who it was.

While mining SIGT, ccminer (https://github.com/krnlx/ccminer-skunk-krnlx) exits every 30 minutes or so. Because the script runs in screen, I can't see the real error. I added the -S flag to ccminer to log to syslog, and now I see a weird thing: every time ccminer restarts, I see the following lines in syslog right at that time. Does anyone have a clue what they are and why they affect ccminer? Thanks.
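When a miner dies inside screen, a simple wrapper that appends its output and exit code to a log file makes the failure visible without relying on syslog. A sketch, where the stand-in command below is a placeholder for the real ccminer invocation:

```python
# Sketch of a restart wrapper: capture the miner's output and exit code
# to a log file so the reason for each restart isn't lost inside screen.
# The demo command is a placeholder for the real ccminer invocation.
import subprocess, sys, time

def run_with_restarts(cmd, logfile, max_restarts=3):
    """Run cmd, appending output to logfile; restart on exit, return exit codes."""
    codes = []
    with open(logfile, "a") as log:
        for attempt in range(max_restarts):
            log.write("--- start attempt %d at %s ---\n" % (attempt, time.ctime()))
            proc = subprocess.run(cmd, stdout=log, stderr=subprocess.STDOUT)
            codes.append(proc.returncode)
            log.write("--- exited with code %d ---\n" % proc.returncode)
    return codes

# Demo with a stand-in command that prints and then exits nonzero:
print(run_with_restarts(
    [sys.executable, "-c", "print('mining'); raise SystemExit(1)"],
    "miner.log", max_restarts=2))
```

Running the wrapper itself inside screen still works; the difference is that the last lines the miner printed before exiting end up in the log file rather than vanishing.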


Hi! Since the Intel H110 is a Skylake chipset, would it work more stably with a Skylake CPU (like an Intel Core i3-6100, for example)?

The 13x ASRock does away with many ancillary devices usually found on mobos, which also use PCIe bandwidth. It also splits the PCIe ports 4 ways: the 16x slot is direct to the CPU, then each group of four 1x PCIe ports has its own bus. This is about as optimized as it can get. Although most Linux distros will support 16 GPUs, 13 is very close.

I would also have to test using a Skylake on another system before I could offer any useful comparison between Skylake and Kabylake with these mobos. There is no substitute for actual testing; I have another one of these mobos on the way, and I will try a Skylake with it. However, both my Kabylake 13x builds are stable (one is using an M.2 SSD and the other a USB key).

I'm trying to mine ZCOIN on Mining Pool Hub. I keep getting "reject reason: low difficulty share of ________". Is there any way to fix this?

Thanks so much!

I haven't used Mining Pool Hub enough to remember off the top of my head whether it has this feature: some pools allow you to set the default difficulty with an argument, or by using a password that indicates the desired difficulty.
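For pools that use the password convention, the usual form is a `d=N` token in the stratum worker password. This is a sketch of that convention only; the exact syntax is pool-specific and may not apply to Mining Pool Hub at all, so check the pool's help page:

```python
# Sketch of the password-based static-difficulty convention some pools
# support. The "d=N" token syntax is an assumption here; pools differ,
# and some ignore the password entirely.
def worker_password(static_diff=None, extra="x"):
    """Build a stratum worker password, optionally pinning difficulty."""
    return "d=%g,%s" % (static_diff, extra) if static_diff else extra

print(worker_password())      # default: no fixed difficulty
print(worker_password(0.05))  # pin difficulty, e.g. for low-hashrate workers
```

Pinning a lower difficulty is one common response to persistent "low difficulty share" rejects, though a vardiff pool will normally converge on a suitable difficulty by itself.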

I haven't found a way to actually tell which. There are some programs which provide that info for system RAM, but not for GPU RAM on Linux.

I suspect the way this is done with Windows programs is by comparing the GPU memory info against a data structure containing all known memory types, and returning the matching result. Without that, or without reconstructing a similar data structure, I don't see it happening on Linux.
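The lookup-table approach described above is straightforward to sketch. The IDs and vendor pairings below are made-up placeholders to show the shape of the idea, not real memory strap or vendor codes:

```python
# Sketch of the lookup-table approach: map a memory identifier reported
# by the GPU to a vendor name. These IDs are made-up placeholders, not
# real vendor codes.
MEMORY_VENDORS = {
    0x1: "Samsung",
    0x2: "Hynix",
    0x3: "Micron",
    0x6: "Elpida",
}

def memory_vendor(vendor_id):
    """Return the vendor name for a reported memory ID, if known."""
    return MEMORY_VENDORS.get(vendor_id, "unknown (id 0x%x)" % vendor_id)

print(memory_vendor(0x2))
print(memory_vendor(0xF))
```

The hard part on Linux is not the table but obtaining the raw ID in the first place, which is exactly the gap described above.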

I have been building new rigs and troubleshooting existing ones at a mine. I would try IAmNotAJeep's suggestion; I have also had dual joined PSUs cause problems before.

I have seen many similar problems (even with 6-GPU rigs) and have also had them disappear when I remove all the trouble GPUs and make another rig out of them. This has been the most time-effective way of resolving GPU problems; however, it has a high cost in terms of additional components.