and sure enough - I went back and let LatencyMon run, and within a few minutes the values start to rise ... it now says the machine is not suitable for audio, and the Highest Reported DPC and ISR values are above 3 ms.

I guess I'll need to try to hunt down what is causing it - assuming it is one thing. It says the 'culprit' is NDIS.sys (the network driver), suggesting I turn off the wifi - it's not on ... and Wdf01000.sys, the Kernel Mode Driver Framework runtime, for the DPC (it says to check CPU settings).

I'll do a Google search on these - if anyone knows up front (wouldn't that be fortunate) what to look for, I (and others, I'm sure) would appreciate it.

Gary

still can't say that I've ever heard any anomalies re the audio though.

Gary, I'm not aware that there are any published "benchmarks" showing what others are seeing in LatencyMon (LM). There are published benchmarks associated with Passmark, but those measure overall system and sub-component speed, which is not the same as latency. As far as where you are good to go in terms of latency: obviously, the lower, the better. But if LM indicates you are capable of running real-time audio, and you are not seeing audio drops and pops, you're there. I am not presently seeing any drops or pops even though LM indicates occasional spikes associated with the NVIDIA driver. I am currently using Windows Performance Analyzer (WPA) and the related Recorder to try to track that down. It's a very complex program but it is giving some useful, and surprising, data. More on that later.

I did not realize that this thread had gone onto page 2 when I posted my last response, so I did not see Scott's or your last response before I posted. Sorry to hear that you have the dreaded latency issue. However, if it's not causing drops or pops, resolution is more a matter of seeking perfection than necessity. But I have latency OCD, so I'm not one to suggest that anyone shouldn't chase this thing down and put a stake through it. I think you will be disappointed with the info you get from Google. Other than updating your drivers, most of the information is so generic and unspecific to your build and software environment as to be basically worthless.

As mentioned, I am currently using Windows Performance Analyzer (WPA) to trace this. It's a time-consuming and complex process. You can't run the recorder too long, as the files become huge and unmanageable. So I had to keep running the recorder while all potential problem software was running, as well as LatencyMon (LM), and dump the data if I didn't see an interesting event within one minute. About 10 minutes in on 7/19, DPC latency jumped to early red and I captured it in about 31 seconds of data. I processed it and will show you in some screenshots what I found. Again, every build and software environment will probably show something different, but if you download and run WPA, you can get a pretty good idea what your specific issues are. The first screenshot is of the entire 31 second recording. I arranged the data to show GPU and CPU usage in detail. I could see from the other graphs that nothing interesting was happening in storage, power, or memory. The event occurred at about 18.5 seconds in and is pretty obvious in the graph.

ScrnShot 2.jpg (445.2 KiB)

The second shot shows the mouse hovering at the center of the event, generating a horizontal line through both graphs, which indicates that (a) Firefox was the process involved and (b) it did not cause a spike in CPU usage, only in GPU activity under rendering. That wasn't a huge surprise since I am using a Xeon E5-2699 v4 with 22 cores, so it takes a serious anomaly to get its attention.

ScrnShot 3.jpg (448.93 KiB)

The next shot shows (in the blue spikes) that an "unknown" process is generating some GPU activity/latency due to "memory transfers." I have no idea exactly what that means, but I note that there is no unusual system RAM data, so I assume that is in the graphics GDDR.

ScrnShot 4.jpg (442.63 KiB)

The next shot shows the fairly large amount of time that was involved in rendering in Firefox at the inception of the event. It is about 60% of all the rendering time used by Firefox in the entire 31 seconds of the recording, so A LOT! Hence, the DPC spike.

The last shot shows the comparatively largest amount of time involved in the mystery "memory transfer" issue. It suggests that, whatever process is causing it, it is not the problem.

So, what do I know now that I didn’t before? Microsoft Edge will become the only browser up when I’m running HPSDR.

I did a search and didn't find any dedicated thread on using and interpreting LatencyMon ... I know it is mentioned in this thread ... please let me know if I need to move it ...

so I have a new computer - it scores in the 12,500 range on the Passmark CPU test (96th percentile) and 91st percentile for memory, but is way behind in graphics (using the onboard Intel 630 for now, card not installed yet) and in disk (no SSD yet; researching whether it will do an NVMe or needs just a normal SSD) ...

LatencyMon is better than on my currently used i5-4460 ... but it still says unacceptable, typically within an hour or so of running - always tcpip.sys. So how does one go about finding out whether or not I have the latest version (and then how do you upgrade it if you don't)? I have upgraded everything on the computer from the "brand new out of the box" state using HP's facilities ... that included BIOS, graphics, audio ... nothing else is being reported as needing an upgrade ... my suspicion is, since Windows was the first thing it upgraded (to the latest), it has the latest tcpip.sys ... so I think this is a red herring and I'll not be able to do a great deal (read: anything) about it ... it is currently using wifi as it's in the dining room and not connected to ethernet as it will be when in the shack.

This is the main LatencyMon topic. It is the only topic with the title "LatencyMon...", which a quick search easily reveals. I have moved your posts here.

I am by no means an expert on how to solve LatencyMon reported problems. I have had good luck updating drivers, and finally built a new, very fast, computer. I use a combination of Windows Update, PC manufacturer update utilities, and specific, individual component (e.g. NIC card or NIC chip) update utilities.

One very good idea is to disable devices and/or services that are giving you problems to confirm they are the source, such as Wi-Fi NICs, etc. After that it tends to be a lot of googling to see if anyone else has lucked onto the solution. Mostly you find the best info on gamer and DAW websites, since those are really the only other demographics that are badly affected by these problems.

I don't think I'm writing anything you don't already know. Unfortunately it is not always an easy problem nor an exact science to make LatencyMon happy.

Well, hmmmh. Soooo, let me see if I got it: 20 some pages later, MS is still working on it ("we feel your pain"), the thread finally gets into the 21st century, a few posted highly meaningful, good-looking LM results for exactly 6 seconds (exaggeration, kinda), traffic on the issue died out (most likely out of frustration), and still no apparent generic solution. Distilling what seemed pertinent, I decided to experiment with some NIC power settings, although I am no longer experiencing any audio glitches of any kind. I have long since found and disabled every possible power saving feature in the BIOS and Win 10. However, I found and disabled the "Energy Efficient Ethernet" option on the Advanced menu of the dedicated NIC in Properties, and did notice that when I powered HPSDR back up, I had "0" under/over flows and "0" OOOPS from the get-go. I have never seen that before. Usually there is some initial instability and then it becomes highly stable. Time will tell if that is meaningful or another rabbit hole. Stay tuned.

So I ran my current shack computer, an i5-4460, on LatencyMon again - ran it for over an hour - never fell out of "suitable"! This new super fast i7 would fall out within 30 minutes ... frustrating. Then I found a setting I hadn't seen before. Under CONTROL PANEL | SYSTEM | Advanced System Settings | Advanced, then the top line "Performance, settings" selection ... it was defaulted to "let the computer decide"; I changed it to max performance and restarted the computer. Now the i7 seems better - it has run for as much as 90 minutes without falling out ... but eventually it will. Now it fails more on the second line, "highest measured interrupt to process latency" ... and there is no 'chief offender' listed for this one ... searching on this one I've not found, surprise surprise, a solution or even a good method to track it down.

I've got a few days before I run out of time to return this computer if I decide to do so ... so far, other than Passmark, it's not been an overwhelming standout. Maybe just having a very fast processor isn't enough.

so apparently I lucked out and hit some of the items that need to be (re)set. I added another one after that last one - I don't remember where it was ... the new computer has now run for over 6 hours and still is "acceptable" ... although I suspect JUST so, as the one category it previously failed at is sitting at 860 µs and I suspect 1 ms is the trip point ... which got me thinking - I assume that no one knows the relevance of the numbers - the actual worst case numbers that are posted and their impact on real world results re clicks/pops and ticks ... ? The numbers are just very slightly better than the i5 system ... but would the pure processing power of the new system (the Passmark of one is 200% of the other) make other potential 'issues' better? For example, I cannot run above 192K on the i5 ... it really acts up in strange ways ... now for the most part I (almost) never use above 96K, which is perfectly suited to my needs (except when working split on 40 when I need to go to 192K) ... but I'll be curious whether the i7 works in this area ... also, hopefully, the reason I bought the i7 was to future proof - hopefully if and when we have an extra VAC (and/or one dedicated to I/Q for CW Skimmer) and I use Skimmer, it runs fine ...

I ask about the values because, thinking about it simplistically, a 1 ms latency in response to an interrupt would mean a huge loss of data for a 96 kHz stream ... or at least so it seems. Yet that isn't the case - so I wonder how the numbers relate overall.
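For what it's worth, here's a quick back-of-the-envelope way to see why a 1 ms stall doesn't automatically drop data: a stall only loses samples if it outlasts the buffering sitting in front of the stream. The buffer sizes below are illustrative assumptions, not anything specific to PowerSDR's internals.

```python
# Back-of-the-envelope: how many samples pile up during a DPC stall,
# and whether a given buffer depth can ride it out.

def samples_accumulated(sample_rate_hz: float, stall_s: float) -> float:
    """Samples that arrive while the CPU is stalled and can't service audio."""
    return sample_rate_hz * stall_s

def buffer_headroom_s(buffer_samples: int, sample_rate_hz: float) -> float:
    """How long a buffer of this depth can absorb a stall."""
    return buffer_samples / sample_rate_hz

stall = 1e-3      # 1 ms worst-case DPC latency
rate = 96_000     # 96 kHz stream

backlog = samples_accumulated(rate, stall)   # 96 samples
print(f"{backlog:.0f} samples accumulate during a {stall*1e3:.0f} ms stall")

for buf in (64, 128, 256, 512):
    headroom = buffer_headroom_s(buf, rate)
    ok = "absorbed" if headroom >= stall else "DROPPED"
    print(f"{buf:4d}-sample buffer = {headroom*1e3:.2f} ms headroom -> {ok}")
```

So at 96 kHz, a 1 ms stall backs up only 96 samples; any buffer of 128 samples or more absorbs it invisibly, which would explain why a worst-case number near 1 ms doesn't necessarily produce audible drops.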

As for where LatencyMon and DPC Latency Checker set their pass/fail limits, I have no idea.

I'd suggest, however, that anything that stalls the primary DSP processing thread in PowerSDR (or Thetis) for longer than half, or certainly a full, DSP buffer time period is likely to generate bad results. I'm guessing quite a bit, but if we use the DSP buffer size in Setup > DSP > Options as a guide, then consider that 64 IQ samples at a 192 kHz sample rate = 333 µs.

I'd further suggest that the real issue is time spent away from the primary DSP thread, not problems with VAC. The audio runs at a sample rate of 48 kHz. With a buffer size of 256 samples that's 5.3 ms, which is a long, long time.
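The two buffer times quoted above are just samples divided by sample rate; nothing here is PowerSDR-specific:

```python
# buffer period (microseconds) = samples / sample_rate * 1e6

def buffer_period_us(samples: int, rate_hz: int) -> float:
    return samples / rate_hz * 1e6

dsp = buffer_period_us(64, 192_000)    # DSP buffer: 64 IQ samples at 192 kHz
vac = buffer_period_us(256, 48_000)    # audio buffer: 256 samples at 48 kHz

print(f"DSP buffer: {dsp:.0f} us")     # ~333 us
print(f"audio buffer: {vac:.0f} us")   # ~5333 us, i.e. 5.3 ms
```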

Just as an aside, for me an important one, I had changed that setting:

CONTROL PANEL | SYSTEM | Advanced System Settings | Advanced then the top line "Performance, settings" selection ... it was defaulted to "let the computer decide"

and this improved or reduced the latency issues on the new computer ... with this setting alone it went for 10 hours without 'tripping' ... I did the same setting on my current radio computer, the i5 ... and after having shut it down and back up again I noticed that the fonts were woefully dim, thin, and ill-defined ... it was this setting! So the trade-off is performance for a poor visual display ... a real shame.

I've recently been able to reduce my settings to the absolute minimum with rock solid performance. My Ring Buffers are 512, 512. I am running with Primary audio setting at 64 and VAC at 128. I have Buffer Latency at zero. I run HOURS with no uptick in overflows/underflows or OOOPs now. I typically see only the last digit of the "Var Ratio" changing while running with the next to last sometimes changing. Here is how I managed this...

1) I went to the advanced settings on my network card that attaches to my Anan 8000 board and set virtually everything OFF/Disabled. I increased both receive and transmit buffers as high as they would go, 2048. I did leave "Interrupt moderation" enabled and the moderation rate is set to "Adaptive". I also have "Packet Priority" enabled.

2) I then found that under DSP Options, SSB/AM TX buffers MUST be higher than 64. I tried 128 then settled on 256 and that seemed best for me. I set all the other TX buffers at 128 for CW, FM, and Digital.

3) I use VoiceMeeter Banana and have the Behringer UMC202HD. My buffer settings on the UMC202HD are 128 samples, Safe mode unchecked. In VoiceMeeter I have "Buffering ASIO" set to default. All my sound settings are as Scott suggested: 48 kHz. I use 24 bit. I'm sure 16 bit is probably fine too.

All of the above had to be done to get to the levels I'm now running.

My CPU is an i7-3770. Not super fast. It has a Passmark system rating of 4674 and a CPU Mark of 9552. I typically run Firefox with multiple screens, including radar tracking, DXMaps, and my local security cameras, plus all the DXLab apps, JTDX, JTAlert, and email (Thunderbird). So lots of apps running.

I don't know if any of the above settings might help you but thought I would provide what I had done here just in case.

Thanks for the post - unfortunately some of this doesn't relate directly to my setup, where I am using, what is it, WDS or something like that under VAC ... I'm using VAC-M (Eugene's) ... I'm not sure how I'd edit the network 'card' ... for me it's going through a 1 Gbit/s switch and then to the main computer, which has motherboard networking. So I don't know if that switch has a bearing on it, or even where I'd edit the internal (computer) network connection, if that is what you mean. Sorry for my ignorance on these issues ...

I can try setting the buffers - although I'd have thought I'd be setting the DIGITAL one and not SSB. Also the CW/FM have no bearing on this I'd guess and they're working fine so don't want to mess with them. Possibly you use the VAC connection for all and thus it might be germane for your setup.

I notice that WSJT doesn't decode all that well sometimes ... signals are clearly there and nothing decodes ... and I wonder if this has something to do with the radio settings. I do know that on TX I see these burps and bubbles and splatter (as a general term for it) that is at times only 35 dB down from the peak signal (using DUP; not there without DUP) ... Scott wants me to post a picture of it - I'll need to do a video as it's only there for the briefest of moments and there is no "freeze" function for the display (it's on my list of "wants" turned in to the development team, probably very low priority) ... one of these days I'll get it done. I've been busy working new ones on 6 ... started the season with around 56 countries - now at 94!

Thanks for the info though - I'll see what I can get out of it on my setup. Have a great day.

I'm curious ... even though I'm an embedded designer both HW and SW I'm not familiar with the details of how a PC handles interrupts. So I'm throwing this out there...

I have an old program that is used to download firmware to the product that I sell ... and for it to work reliably I have to set its affinity to one processor. I was wondering if anyone has played with this, either 'shunting' any offending drivers/exes and/or redirecting PowerSDR. So if a core handles interrupts - a big unknown for me - and you could focus an offending program to not use, say, core 6, and set up PSDR to only use core 6, it might eliminate the effects of interrupt latency, at least for that (most) offending program. I don't know that there is any way to test this using LatencyMon, as by default it is looking at everything ... but if someone had issues with pops/clicks, for example, they could use LatencyMon to find the worst case offenders ... if they can figure out how to set the affinity for each of these, they could keep the offenders off of, say, 2 of the 6 cores (if the processor had 6 total) ... and then set up PSDR to only use those 2 cores ...
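For anyone who wants to experiment with this, the affinity mask Windows uses is just one bit per logical core. A rough sketch (the core choice is illustrative; SetProcessAffinityMask is the real Win32 call, reached here via ctypes, and only attempted on Windows):

```python
# Build a CPU affinity mask and (on Windows only) apply it to the
# current process via the Win32 SetProcessAffinityMask API.
import sys

def affinity_mask(cores) -> int:
    """Bitmask with one bit set per logical core, e.g. {6} -> 0x40."""
    mask = 0
    for c in cores:
        mask |= 1 << c
    return mask

mask = affinity_mask({6})                # pin to core 6 only
print(f"mask for core 6: 0x{mask:X}")    # 0x40

if sys.platform == "win32":
    import ctypes
    kernel32 = ctypes.windll.kernel32
    handle = kernel32.GetCurrentProcess()
    kernel32.SetProcessAffinityMask(handle, mask)
```

The same mask (in hex, without the 0x) can be handed to cmd's launcher, e.g. `start /affinity 40 PowerSDR.exe`, which avoids having to reset affinity in Task Manager after every restart.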

I supplied the info for all the modes the radio operates, but certainly digital is your focus, so I would try setting that to 128 buffers on the filter page, if it is at 64, and see if that makes any difference. I also run digital modes at 48K rather than the 192K where I run all other modes. This seems to work much better for that mode, and certainly anything above 48K is unnecessary on digital modes.

I noticed this year you guys to the East of us here in the Atlanta area seemed to get more of the "E" cloud than we did. Congrats on the big pickup in 6m countries! I picked up quite a few but nothing like you did.

If I were you, I would try connecting my NIC directly to the Anan rather than going through a switch/router just as a test. Obviously there is more latency going through a switch/router path than a direct connection to a NIC from the radio. For this test I would also turn off the different advanced options on the NIC as I discussed above before assuming it makes no difference initially.

I tried playing with Affinity some time in the past but discovered it didn't "stick" and you would have to reset it every time you started the App and some Apps changed it even while they were running so I had no luck with this setting. It sounded like a good idea as I noticed that most apps don't use all cores; most use just the first one or maybe the first two. My thinking was I could assign HPSDR to cores 7 & 8 to get it away from most other programs but it just didn't work out for me to solve the latency problems I had then.

Also, as Scott has stressed many times, everything in the audio chain MUST be set to identical levels; 48K and either 16 or 24 bit. If not, problems are sure to occur as I discovered when I missed one. Missing one is not hard to do as there are so many in different areas to keep up with. Such a pain particularly when Microsoft deems it necessary to undo what you've done frequently with updates. They seem to love to screw with the audio settings with almost every update.

Your mentioning of the "burps and bubbles" on transmit reminds me of what I saw on SSB (and have seen on FT8) with PureSignal OFF: running a two tone test and seeing what looked like an "irregular-shaped cloud" popping up from below on the waterfall frequently. This happened even with VAC OFF. The problem causing this turned out to be the TX buffer size settings on the "DSP Options" page, as I mentioned earlier. I had it set to 64 and found it had to be at least 128 to mostly get rid of this problem. Even at 128 I would occasionally see a burp, so I settled on 256 and it seemed to completely eliminate it. This was why I mentioned I set all other modes to 128. Some were set to 64 previously. Those may need to go to 256 also, but I haven't tested them all yet. This "cloud" looked very similar to the display anomaly caused when running PureSignal with the noise blanker on in previous versions of HPSDR, which made me think it was just a display issue, but when I ran PureSignal with the buffers at 64, I saw a glitch in the "Amp View" display of PureSignal every time the "cloud" popped up, so my thought is it's a real problem and not just a display anomaly.

I thought I would update you on my fooling around with affinity today. I decided to play with it to see what I could discover. I assigned PowerSDR.exe to cores 5, 6, & 7. I assigned all the Firefox instances, JTDX, and JTAlert to cores 0 thru 4, as these were the major users of CPU cycles. I then opened Resource Monitor and just watched the different cores, and I did notice a distinct difference in cores 5, 6, & 7 compared to cores 0 thru 4. There was indeed less total activity. This didn't really help me, as I don't have issues anymore, but it did clearly show that most other traffic was off of cores 5, 6, & 7. Here is the picture of the cores, and as you can see, the majority of usage is on cores 0 thru 4 now.

and I tried changing PSDR to just one core - core 6 on my i5 (6 cores total) ... there was no obvious change. The CPU usage was the same as I'd expect ... note my thought on this is that if one were to do this, you'd want the programs you're concerned with, for example WSJT, set to the same core ... the purpose being to reduce the chance that it lands on a core that has something with long latency and as a result gets held up by it ... of course, without knowing what that program is (although LatencyMon tells us the worst case one(s)) we can't eliminate it, but we might reduce the frequency.

As you can see in my picture, cores 5, 6, & 7 are obviously not following cores 0 thru 4 and are running at a lower cpu rate so the Affinity change worked for me.

If you only moved the programs you wanted to core 6 and did not prohibit other programs from using core 6, that may be why you didn't see a change. You can't just move PowerSDR (and your other stuff) to core 6 without also removing other programs from core 6; otherwise you haven't accomplished anything, and in fact you may have hurt the apps you moved, as they now have only one core to operate on along with all the other programs that are using core 6 as well as every other core.
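The partitioning described above can be sketched as two complementary masks - the key property being that the radio set and the "everything else" set never overlap. The core count and split here are illustrative assumptions, not a recommendation:

```python
# Partition the cores: radio apps on one set, everything else on the
# complement, so the two groups can never contend for the same core.

def mask(cores) -> int:
    m = 0
    for c in cores:
        m |= 1 << c
    return m

TOTAL_CORES = 8
radio_cores = {5, 6, 7}                           # PowerSDR, WSJT, etc.
other_cores = set(range(TOTAL_CORES)) - radio_cores

radio_mask = mask(radio_cores)   # 0xE0
other_mask = mask(other_cores)   # 0x1F

assert radio_mask & other_mask == 0                       # disjoint sets
assert radio_mask | other_mask == (1 << TOTAL_CORES) - 1  # covers all cores

print(f"radio apps:  start /affinity {radio_mask:X} PowerSDR.exe")
print(f"other apps:  start /affinity {other_mask:X} firefox.exe")
```

Launching each group through cmd's `start /affinity <hexmask>` is one way to make the split repeatable at startup rather than redoing it in Task Manager every session.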

Agreed ... however, there are LOTS of things running ... I don't know how programs that are not multi-threaded (if that is the right term for actually using multiple cores by design) end up assigned to a core ... if they typically sit on one core, then unless there is some randomness in how they're assigned, I'd think there might be a tendency to overload the lower core numbers - that was my assumption. So I selected the last core (which, by the way, on this i5 is actually core 3, not 6). That's what I was going with. But first and foremost I need to figure out a way to see my cores, as Task Manager is only showing the overall CPU results - I know I used to be able to see cores - not sure why they removed/moved it in Windows 10.

Also apologies for not being explicit... when I said it didn't change I meant there was no obvious bad result - it ran as normal with no visible (or audible) effect.

I have since played with it a good bit more - this time looking at turning on the resampler, which previously never resolved and was always hunting. With both PSDR and WSJT on the same core - nothing else changed - I was able to see what appears to be an improvement in the resampler: there were times, with certain settings (changing the buffer size and the buffer latency value/manual(auto)), where it would settle down - only a single digit changing for a good couple of minutes ... but ultimately it too would 'run away' with a sudden rapid change in the ratio and huge jumps in the OF/UF numbers ... so I don't know if that is 'better' or just different ... it was indeed different ... I realized later that I really need to put VAC-M on the same core as well - but I've no clue where to find it in the Services or Details tab in order to change its affinity. There's nothing obvious - I could use help here if someone knows what its 'details' would be.

This is just an experiment - I agree with you that moving things off of the "radio system" core would be ideal - but unless there is a way to automate that we're just adding to the (already) many tasks required at computer/radio startup time ... I'm currently looking at a slightly less onerous set of changes to see if that improves things noticeably.

so that new computer that I have ... it is an i7-8700 at 3.2 GHz ... it ran a 14,800 on the CPU/Passmark test 2 weeks ago or so ... today when I run it, it is anywhere from 11,500 (tops) down to 10,000! WHAT CHANGED? I can't figure out why it would drop so much. That is a huge difference, and it's now running about as fast as my 5 year old i7-4770. The "nominal" for this new CPU is 15,200. The only thing I can think of that changed is that I installed a Samsung NVMe SSD (it is blazingly fast). I probably installed some programs since then - I'm sure I did - but I've disabled any that come up in STARTUP and see no change in the results - still 11K-ish. I have searched the internet to see if anyone has had Passmark results change this drastically - nothing found. Has anyone seen this (and hopefully resolved it)?

I have seen similar changes in the Passmark ratings of my PC. It has never been as fast as the first week I had it running. It would seem that as Windows gets clogged up with the detritus of normal, daily operations, the system does tend to slow a bit. However my changes are not nearly so dramatic as yours.

I'd "accept" that except that when I tested this computer, the 5+ year old i7-4770, its rating is spot on for this processor as shown by PassMark (showing hundreds of results). I didn't do anything to this computer before I ran the test other than start it up. Also, we're talking 19 days since I ran that first test, not months ...

frustrating.

I'm tempted to return it, the result I'm getting is in the bottom 0.1% of all submissions!