Evidently I haven't updated my old thread in so long that it had to be locked so here's a fresh thread.

I ran another scan today and a few things are new: the RX 570s and RX 580s are on the charts now. Surprisingly, they aren't running quite as fast as the RX 480s on the chart. It might be luck, but there are over a dozen hosts and over 2,000 tasks counted for each, so I'm not sure this is just sampling error (in contrast, the RX 480 stats in this scan covered almost 150 hosts and over 19,000 tasks, so I'm pretty confident in those).

There aren't enough Vega parts in circulation for them to qualify for stats - I'll run another scan in a month or so and see if there are enough then.

One possible explanation occurred to me for the new RX 580 scores being lower than expected: if these cards recently replaced an older, slower card, my scripts might misattribute results from the old card to the RX 580. To ease pressure on the SETI servers I fetch host information and only the summary of the task stats (digging into each task to handle this would mean 20x more server queries).
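To illustrate the pitfall: the function and field names below are made up for illustration (the real scripts and server responses differ), but the shape of the problem is the same — per-host summary totals get attributed to whatever GPU the host reports *now*.

```python
# Hypothetical sketch of the misattribution pitfall described above.
# fetch_host / fetch_task_summary and the dict keys are invented names,
# not the real SETI server API.

def average_credit_per_hour(host_id, fetch_host, fetch_task_summary):
    """Attribute a host's lifetime summary stats to its currently listed GPU."""
    host = fetch_host(host_id)             # reports only the *current* GPU model
    summary = fetch_task_summary(host_id)  # totals accumulated over ALL past tasks
    # Pitfall: if the host recently swapped in an RX 580, these totals still
    # include tasks crunched by the old, slower card, dragging the RX 580's
    # apparent rate down. Avoiding that would require per-task queries.
    return host["gpu_model"], summary["total_credit"] / summary["total_hours"]
```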

I might also dig through and see if I can find some Vega parts for some preliminary results since I'm curious.

Hi Shaggie76, I recently installed a waterblock on my Vega FE. There should be about 1 day of results since then; tasks run prior to that would have been thermally throttled. The host is: 8341269


Rick, I looked at your stderr output and the only thing I might offer is increasing -tt to 1500. I don't know anything about the ATI cards, but they might respond to that parameter like the Nvidia cards do. It gives the GPU more time to process data before switching out of the kernel. From the docs ...

-tt F: Sets desired target time for kernel sequence. That is, how long a kernel/kernel sequence can execute w/o interruption and w/o switching
to another task like GUI update. F is a floating point number in milliseconds. Default is 15ms. App will try to adapt kernels (currently
implemented for PulseFind kernels) to run the designated amount of time. To increase performance try to increase this value. High values
could result in GUI lags. If use_sleep is active try to use target times divisible by the sleeping time quantum for your particular system.
For example at least some AMD-based systems have a 15ms sleep quantum. That is, Sleep(1) will actually sleep 15ms instead of 1ms.
Has no effect in iGPU build (USE_OPENCL_INTEL path).
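If you want to try that, the extra arguments go in the app's command-line file in the SETI@home project directory; the exact filename depends on which app build you have installed (the OpenCL MultiBeam builds ship an mb_cmdline-style text file), so treat this as a sketch rather than a recipe:

```
-tt 1500
```

Per the docs above, back the value off if GUI lag appears, and if use_sleep is active keep it divisible by the 15 ms sleep quantum (1500 is).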

I'd have expected it to be a bit higher but maybe it's too soon to tell?

Me too! With a 60% increase in clock speed, I expected it would be significantly better than my Fiji-based cards. I ran a bench test on a WU between Fiji and Vega and only saw a 10% improvement. I have not tried tweaking command-line arguments at all, so there may be some unrealized potential.


Hi Keith, thanks for the recommendation. I will give this a try today. I probably need to redo some of the DOE work I did on Fiji to optimize the command-line options.

I was hoping for more Vega parts in circulation by now: I require a certain number of completed work units per card, and then enough separate computers running that card, before it shows up on the charts. It's close, but not quite there yet:

Simple - the Nvidia offerings are far better at number crunching than the AMD offerings at (just about) every price point.

Fanboyism and trolling aside, we all know that historically AMD cards have (almost) always had better raw computational power in the consumer market. The professional offerings are almost neck and neck (drivers and software support not taken into account).

A GTX 580 (stock clocks) has ~1.6 TFLOPS of single-precision compute power. The RX 580, on the other hand, has ~6.2 TFLOPS. That is nearly four times (!) the raw power.
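For the record, here's the arithmetic on those numbers (these are approximate published FP32 peaks, not measured SETI throughput):

```python
# Approximate peak FP32 throughput in TFLOPS, from published specs.
gtx_580 = 1.58   # Nvidia GTX 580 at stock clocks
rx_580 = 6.17    # AMD RX 580 at reference clocks

ratio = rx_580 / gtx_580
print(f"RX 580 / GTX 580 peak FP32 ratio: {ratio:.1f}x")
```

That ratio says nothing about real task throughput, which is exactly the puzzle with the chart.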

Do you really believe the alt-coin miners would buy hundreds of AMD cards if they had a way to make the GTX 580 profitable?

There's no way Nvidia's GTX 580 crunches more numbers than an RX 580, all other factors aside. That means the key is in those "other factors", e.g. CUDA vs. OpenCL, or other cruncher optimizations. Maybe the workload is just too atypical and the AMD cards have no shortcuts for crunching it? Or maybe I'm simply misinterpreting the chart?

Anyhow, yesterday I got an email from S@H about how much more processing power is needed for the new telescopes and projects. Maybe, just maybe, if some skilled individual(s) spent some time further optimizing the code for AMD cards, we'd get some of that needed power for "free"? Bear in mind, however, that I don't really know how many AMD owners are contributing to SETI@home with the mining craze currently raging. It might not be worth optimizing further for just a few AMD cards, which would be really sad for me, since my RX is crunching for SETI most of the time.