I was wondering if anyone else was experiencing a drop in RAC recently?

My machines are pretty much on all the time, yet the average is consistently falling. I don't know if it is a coincidence, but this seemed to start happening as the new, larger WUs came online. Is it just that my units are pending validation, or do the new units offer less credit?

Here's my desktop machine (Win10-DELL) which peaked at 6,600 and is now falling through 3,400.

You don't seem to be as NVidia-GPU-dependent as I am, but my RAC wavers up or down depending on what proportion of Arecibo to Green Bank work is being distributed; since Tuesday it's been falling as there has been more Green Bank GUPPI VLAR and less Arecibo. (Are Intel GPUs similarly affected? AMDs are not.)

I've been experiencing what Hal described, mostly. I think it only depends on the mix of BLC/Arecibo tasks coming off the splitters and the luck of the draw in matching up with fast or slow wingmen. The recent spate of "noisy" bombs didn't help either. I got a ton of them.

I'll see if it levels off over time, then. I'd just never come across such a large swing before.

You probably had a falsely elevated RAC due to the WOW! event. Since it ended last month, most people's RAC has been declining, as the rate of validation fell when all those computers left. If you were fortunate enough to be matched with fast wingmen, that benefited your RAC.

I assume you are talking about this https://setiathome.berkeley.edu/results.php?hostid=8282565 host.
My own system, as well as the systems of several of my wingmen that I looked at, all had slightly more valid tasks than pending tasks. Your system, however, has fewer than half as many valid tasks as pending tasks.
This may be caused by wingmen being late returning work, but it may also have to do with your own system.
One thing to check is this task: https://setiathome.berkeley.edu/result.php?resultid=5952168692. It was sent to your computer on August 18, but you have not yet returned it. Two things I can think of that may have caused this: 1. you have a ghost task (sent by the server, but never actually received), or 2. the task is somehow stuck and using one of your CPU cores. The second option might explain your falling RAC.

That task is not in my task list at all. I have 14se08ab but not 14se08ac. It's possible that some tasks got burned when I upgraded to BOINC 7.8.2 - and I think I added the Lunatix apps to support my graphics card at some point. That might have trashed some units too. But the Windows boot of this machine is pretty stable.

I don't think I have any stuck units - is it possible to check wingmen? I've never had a problem like that before - my rig is hardly the quickest in the world, so I would imagine most users are waiting for me to finish a unit.

I'm not sure how the embedded Intel GPU core is seen by BOINC. I would think it counts just like a regular external GPU, so it should be allotted 100 tasks. That, plus the CPU's 100, nets you 200 tasks. So I don't know how tasks in progress shows 301 on one of your machines. To me that indicates at least 101 "ghosts".

The embedded Intel GPU is seen as just another GPU from the point of view of the 100 tasks per GPU limit.

Ghosts are tasks that were sent to your computer but, for whatever reason, never got there, and the server still thinks they are there.

In Details it shows as an i5-7200U CPU with an Intel(R) HD Graphics 620 graphics core in it. It should be allotted 200 total tasks in its cache, yet it shows 301 tasks in progress. So by my reckoning, 101 of the tasks that the server sent to you are "ghost" tasks which you in reality never received.
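The arithmetic behind that ghost estimate can be sketched in a few lines of Python. The 100-tasks-per-device limit and the single-GPU count come from the posts above; the function name and defaults are just illustrative:

```python
# Rough ghost-task estimate: the server-side cache limit is 100 tasks for the
# CPU plus 100 per recognised GPU (the embedded Intel GPU counts as one GPU).
def estimated_ghosts(in_progress, n_gpus=1, per_device_limit=100):
    allotted = per_device_limit * (1 + n_gpus)  # CPU + each GPU
    return max(0, in_progress - allotted)       # anything above the cap is a ghost

# i5-7200U + HD Graphics 620 host showing 301 tasks in progress:
print(estimated_ghosts(301))  # → 101
```

Anything at or under the 200-task cap would report zero ghosts, so the estimate only flags hosts that are over their allotment.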

The procedure for recovering "ghosts" is simple, but it does require some specific timing, and some time to recover the tasks, which are only resent by the servers in 20-task bunches. So you would have to run through the procedure 5 times to recover those 100 missing tasks. A good portion will probably have been cleared out by other crunchers by now.

First, you have to make sure you have room in your cache for any new tasks. You would need to set NNT for a while to drop below your allotted 200 tasks, so that you have something like 180 tasks in progress.
1) Set No New tasks in the Projects tab for SETI.
2) Open up the window for the Event Log and keep it open.
3) Then open up the Activity tab and hover your mouse cursor over the Suspend Network Activity selection.
4) While watching the printout in the Event Log window, wait for the 5 minute countdown to end and watch for an entry - Sending scheduler request: To fetch work.
5) Immediately click the Suspend Network Activity selection. You have to get the timing correct. If you see - Scheduler request completed you missed it and you will have to wait out the next 5 minute cycle.
6) If you did the suspend correctly, you should see in the Event Log, and in the Projects tab, a message that the scheduler request is in progress. It can't progress any further because you turned off network communication.
7) Wait out the next 5 minute cycle after suspending your work request. Exit BOINC completely for at least 1 minute. That means both the Manager and Client can't be running.
8) Start BOINC back up. Open the Projects tab again and now set Allow New Work. Look for the confirmation in the Event Log window.
9) Now, re-enable Network Communication in the Activity tab. You should now see the client completing the pending work request in the Event Log window.
10) You should see the client upload and report all the tasks completed since you shut off network communication.
11) Then you should see the server communication that it is resending 20 "Lost tasks".

Rinse and repeat for another batch of 20 ghosts to clear them out.
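For anyone who prefers the command line, the scriptable parts of that procedure can be sketched with boinccmd, the CLI that ships with BOINC. This is only a sketch under assumptions: the project URL is a guess at how the host is attached, the Event Log watching and the 5-minute timing in steps 4-7 still have to be done by eye, and it defaults to a dry run that just prints the commands instead of executing them.

```python
import subprocess

PROJECT_URL = "http://setiathome.berkeley.edu/"  # assumption: URL the host is attached with
DRY_RUN = True  # set to False to actually invoke boinccmd

# The steps boinccmd can cover; watching the Event Log for
# "Sending scheduler request: To fetch work" is still manual.
STEPS = [
    ["boinccmd", "--project", PROJECT_URL, "nomorework"],     # 1) No New Tasks
    ["boinccmd", "--set_network_mode", "never"],              # 5) suspend network activity
    ["boinccmd", "--quit"],                                   # 7) exit the client completely
    # (restart BOINC by hand here)
    ["boinccmd", "--project", PROJECT_URL, "allowmorework"],  # 8) Allow New Work
    ["boinccmd", "--set_network_mode", "auto"],               # 9) re-enable network
]

def run_steps(steps, dry_run=True):
    for cmd in steps:
        if dry_run:
            print("would run:", " ".join(cmd))  # preview only
        else:
            subprocess.run(cmd, check=True)     # actually issue the command

run_steps(STEPS, DRY_RUN)
```

Run it once per batch of 20 ghosts, just as with the Manager procedure; the dry-run default makes it safe to eyeball the sequence first.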

Here's my desktop machine (Win10-DELL) which peaked at 6,600 and is now falling through 3,400.

The run times for that system's CPU processing are extremely poor.
My old, overloaded, Core2 Duo with a lower clock speed is able to process WUs faster than your i5 system.
Me, approx. 5-6 hrs for an Arecibo VLAR. You, over 9 hours.

I'd run Task manager, or better yet Process Explorer, and see if there are any other programmes eating up your CPU time.
Have you got the system set to always process work? You haven't set a CPU% limit for non-BOINC work? Have you recently enabled the iGPU, whereas before it wasn't being used?

Yes, that does sound like the well-known problem of an enabled onboard graphics unit competing for CPU time. I would try disabling the graphics core, running a CPU task by itself, and seeing if the run time improves dramatically.

I've suspended GPU on both my i5 machines to see if they pick up their speed - I'll give it a couple of days and check back.

A quick check is to look at the processing times.
WUs that are presently taking 9+ hours should drop down to around 3-4 hours to complete.

Times are looking good - uploads occur overnight so, wingmen depending, I may see a change over the next few days. It looks like the Lunatix GPU element eats up too much CPU on my rig. I'll hold off on making that a firm conclusion until I have results from after suspending the GPU tasks.