GPU 'upgrade' not an upgrade?

I recently decided to upgrade my graphics card since a lot of my work involves multi-cam edits, and I've recently added a 4K track to the timeline. Playback suffered, which made cutting quite difficult and inaccurate.

I moved the project from my HDD to my SSD to check if this was a factor, and it wasn't.

I'm running on an Intel® Xeon® E3-1246 v3 @ 3.50 GHz with 32GB RAM.

My previous GPU was a GTX 570, and I just upgraded to an R9 390. I'm not super knowledgeable on these things, and so rely on whatever info I can garner from the web. I was confident I was making a decent decision.

The results: playback is as choppy as it was before, and where the GTX 570 made use of OpenCL to speed up renders, the R9 390 shows no difference using OpenCL or CUDA compared with the CPU only.

So, am I doing something wrong? Or was I wrong to expect any improvement?
What exactly is the bee's knees in terms of GPUs that makes Vegas glide along these days?

That wouldn't favour either the old GPU or the new. It just improves general render times. What I'm wanting to know is why what seems like an upgrade (as in newer and more expensive) is in fact yielding sometimes poorer results when dealing with Vegas.

Actually that is a VERY important aspect when dealing with preview! I had the same question as to why I had no better preview after a GPU upgrade, and John R. here on the forums instructed me to make sure that was disabled, and my previews have been way better. I edit 4K and below with smoothness I never had before.
The question of why you are not getting better results is likely the older code Vegas is using for preview. I have great luck previewing 4K on a Radeon 290X; likely the GPU architecture is newer than what Vegas can utilize.

Here is a clip from that thread I mentioned...
[Scott Francis] "I have just started to disable resample on the 60p clips. That does seem to help a bit with preview. "
It helped me more than a bit. I went from 6 fps back up to 29.97 fps during transitions. If you don't disable resample, Vegas Pro will try to blend frames on the fly to conform to the project's frame rate. Other NLEs would force you to conform your video first. This isn't a bad idea. Just because Vegas Pro lets you throw mixed frame rate clips on the timeline doesn't mean that you should. ;-)
[Scott Francis] "So my main question is WHY is preview still so CPU heavy when we are utilizing GPU preview? This doesn't seem to be logical (at least in my mind) as I have been chasing this end of the rainbow since SVP11!!!"
It's about striking a balance. The GPU doesn't work on its own. It is a co-processor, which means the main processor (CPU) needs to feed it work. Perhaps an 8-core can feed the same GPU better than a 4-core can and keep it busy. There is a lot of setup and breakdown of data before and after the GPU does its work, and the CPU needs to do all of that. Think of it this way... have you ever been so busy and someone says to you, "why don't you delegate that to someone else" and you say, "it would take me longer to explain to them what to do than to do it myself"? That's the relationship between the CPU and GPU. The CPU needs to do work to use the GPU. It's not free from overhead, and sometimes it can even be slower because the overhead is greater than the computation.
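That overhead relationship can be sketched with a toy timing model. All the numbers here are illustrative assumptions, not measurements of Vegas or any real GPU; the point is just that a fixed setup cost makes offloading a loss for small jobs and a win for large ones:

```python
# Toy model of CPU-vs-GPU offload cost (illustrative numbers only).
# gpu_time includes a fixed setup/transfer overhead the CPU must pay
# before the GPU can start computing.

def cpu_time(work_units, cpu_rate=1.0):
    """Seconds to do the job entirely on the CPU."""
    return work_units / cpu_rate

def gpu_time(work_units, gpu_rate=10.0, overhead=5.0):
    """Seconds to offload: fixed setup/transfer cost, then faster compute."""
    return overhead + work_units / gpu_rate

# Small job: the overhead outweighs the GPU's speed advantage.
small = 2.0
print(cpu_time(small) < gpu_time(small))   # CPU wins

# Large job: the overhead is amortised and the GPU pulls ahead.
large = 100.0
print(cpu_time(large) > gpu_time(large))   # GPU wins
```

With these assumed rates the crossover sits where `overhead` equals the compute time saved, which is why light preview workloads often show no GPU benefit at all.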
[Scott Francis] "Also, how useful is multicam editing in SVP vs the way I am handling it?"
Multicam in Vegas Pro assumes that everything is synced up correctly before you start. We have tools at VASST like infinitiCAM and Ultimate S Pro that take a different approach, using markers for camera switches. Here is a video of infinitiCAM that you can download and view (sorry, it's in WMV format). This will show you the workflow.

Thanks Scott,
I've always disabled resample - but more just to reduce the motion-blur-type look I get sometimes. On this occasion I hadn't, and so I gave it a try. Unfortunately I saw no improvement in preview. Going from 2 fps up to 15 occasionally and back down to 2/3/4.

Gotcha, I was just wondering if you did....since that had been an issue for me prior.

I don't know what (other than the aforementioned new GPU thing) that could be going on.
Is the one you are using able to do OpenCL and CUDA?
I thought only nVidia cards did CUDA, and AMD OpenCL... I am not familiar with the one you mentioned.

Do you have GPU-accelerated processes applied to the events containing the footage you are playing back for testing? The GPU only takes on processes that are supported by GPU acceleration, such as many of the effects in Vegas. Otherwise, your CPU will take on the rest of the work.

I just built a new computer for 4K editing using an Asus X99 A 2 motherboard, 32 GB of DDR4 memory, and an Asus R9 390, along with an Intel 5960X. The drive is a Samsung 1 TB SSD, and it plays back any 4K footage I have put on the timeline without a stutter, along with any transitions I have used. The render times are better than real time as well. To say I am more than satisfied with this system would be a gross understatement. I also have the ability to scroll around the timeline in full resolution using 4K settings on everything. I also decided to go with Windows 7 Pro and haven't had any issues. When rendering and going into Task Manager, it is awesome to see 16 threads running at 116 percent or better... WOO WOO WOO! It took about three hours to put together, and during the burn-in process the power supply failed. After replacing the power supply, the system has been awesome.

The Intel® Xeon® E3-1246 is more like a Gen4 i7 and is pretty much meant to be a server chip for 1U rack systems. The 5960X is more like a dual-proc Xeon with the way the X99 chipset is laid out. The comparison is apples and oranges.

Intel® Xeon® E3-1246
16 lanes of PCIe 3.0
2 channels of DDR3

vs

5960X
40 lanes of PCIe 3.0
4 channels of DDR4

Even with 32GB of RAM on the Intel® Xeon® E3-1246, the memory bandwidth is likely half or less that of the 5960X/X99. The reduced number of PCIe lanes on the E3 CPU is why you have to make sure the GPU has full bandwidth and is not sharing PCIe lanes with other motherboard devices. Keeping system latency down - memory frequency, x16 on the GPU, and no bad drivers causing DPC issues - is a must with reduced hardware.
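The "half or less" claim is easy to sanity-check: peak DRAM bandwidth is roughly transfer rate x 8 bytes per channel x channel count. Assuming typical module speeds for each platform (DDR3-1600 on the E3, DDR4-2133 on the X99 board - both assumptions, check what you actually have installed):

```python
# Theoretical peak memory bandwidth: MT/s * 8 bytes per transfer * channels.
def peak_bw_gbs(mt_per_s, channels):
    return mt_per_s * 8 * channels / 1000  # decimal GB/s

e3  = peak_bw_gbs(1600, 2)  # DDR3-1600, dual channel -> 25.6 GB/s
x99 = peak_bw_gbs(2133, 4)  # DDR4-2133, quad channel -> ~68.3 GB/s
print(e3, x99, x99 / e3)    # the X99 platform has well over 2x the bandwidth
```

Real-world numbers from "winsat mem" will land below these theoretical peaks, but the ratio between the two platforms holds.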

Keep in mind that Vegas operates in RGB space, so all footage is converted from H.264 to uncompressed RGB, and with four multicam streams this is of course 4x the bandwidth needed. 4K uncompressed 8-bit RGB is somewhere around 750MB/s, times 4 streams, so your CPU/memory/GPU need to be able to sustain those rates in a very solid fashion.
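The ~750MB/s figure checks out from first principles, assuming UHD 3840x2160, 8-bit RGB (3 bytes per pixel), and roughly 30 fps:

```python
# Uncompressed 8-bit RGB UHD data rate, then x4 for four multicam streams.
width, height, bytes_per_pixel, fps = 3840, 2160, 3, 30
per_stream = width * height * bytes_per_pixel * fps / 1e6  # decimal MB/s
print(round(per_stream))      # ~746 MB/s, i.e. "around 750MB/s"
print(round(per_stream * 4))  # ~2986 MB/s across four streams
```

Four streams is roughly 3 GB/s of pixel data in flight, which is why memory bandwidth and PCIe lanes matter so much here.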

"Winsat mem" from an admin command console will give you the memory bandwitdh that windows/vegas sees.

My guess is your GPU driver set and OpenCL may not be functioning correctly, so you are not getting the benefits of the GPU for playback. Keep in mind that with the way OpenCL works, you are sharing the GPU with general display bandwidth capabilities. Here again the 5960X has more PCIe lanes, so a dual-GPU setup with one for display and another dedicated to compute would be optimal.

Also sounds like you may not be running the latest build of VP13-453.

It is unlikely an i7 Gen4-or-older system will ever push enough decompression of H.264-encoded content, mux it, and stream the results to the monitor in a smooth fashion. If you look under the hood, Vegas would like 8 threads for AVC decompression alone and another 16 for rendering; Vegas itself will run around 70+ threads. All of that is in contention with the other software you have running in the background, with only 8 CPU threads to service them all in a time-share fashion.

So yeah, I uninstalled Vegas and re-downloaded and re-installed. Surprisingly, the preview is working considerably better. Aaron, you mentioned I maybe wasn't running the latest build - so maybe this re-install is a newer build and that's the solution.

I still feel that at some point I will upgrade to a 4790K (the best i7 I can get that remains compatible with my motherboard) as the preview still drops to 20-ish at times (which is generally enough to work with - certainly better than 3!)

The R9 390 is still not giving me any rendering acceleration though, so I'm returning it and reverting to my GTX 570.

I've run all sorts of monitoring and can see nothing causing a bottleneck. My RAM, CPU threads, GPU etc were all still well short of 100% usage. So it looks like something in the software itself had been causing the lag.

Strange. I had seen a guy's YouTube video at one point where he mentioned re-installing and then disabling updates. At that time there may have been a bad update, and perhaps the newest builds have rectified it.

When rendering, try going into Task Manager and seeing how much of your resources are being used; that might give you an idea of where your bottleneck is.

Here is a short video I shot to test the DJI Osmo. It was shot in 4K and rendered down to 1920x1080 in less than 2 min. I also did the same output in full 4K, and that also rendered out in less than 2 min. I also use a Toshiba 4K P55tB52 laptop as a mobile device and find it does a very good job of rendering and playing back 4K footage. The only problem I have with the 15-inch screen is the scaling issues on the 4K display.

Well, you can go back to the 570, but you are trading a GPU capable of 5000 GFLOPs for one capable of 1400 GFLOPs. OpenCL in Vegas would prefer the 5000 over the 1400 for sure. But if your only goal is to use the MC mp4 encoder with GPU, it makes sense.

For detailed help, post a screenshot of the Speccy overview and details of your workflow.

Workflow would mean:

Project settings screenshots.
"MediaInfo" (it's an app) details on your source material.
Then render settings, along with what your final purpose is for the output.

Rendering to Sony AVC, or the frame server thing to Handbrake that others have been doing, would be alternatives. Mainly with Sony AVC, your GPU will help with decoding source material and timeline frame work, and then a very light amount of calculation with the encoder. The source frame decoding depends on the source codec, however.

So I tried a render to Sony AVC and found significantly better render speeds, which is good. But are there any drawbacks to rendering to this format rather than MainConcept AVC?
Still, it's not that Sony AVC utilises the GPU better, because the render time is just as fast when I choose CPU only.

I requested a return from Amazon a few days ago, and still haven't posted it back. I want to experiment. What's the point in returning it just to blindly choose another one?

I decided to check performance again since I re-installed Vegas. Now I get full playback in 'Preview', but the CPU maxes out at any 'Good' setting and the playback rate drops dramatically.
A CPU upgrade would certainly help that, but in fairness 'Preview' is adequate to edit to.

So it's really just the render speeds now. I was happy with the results on my GTX 570, and the R9 390 will probably go back.

I've been testing with the exact same timeline section and other components. I've uninstalled old drivers. I've re-installed drivers.

I checked my parts' performance during render testing. I tested with CPU only, OpenCL, and CUDA. On no occasion did the GPU usage rise above 1%, but in every case the CPU usage was in the mid to high 90s.

Update: On a longer render, I could see that the GPU render had no added effect over 'CPU only' in MainConcept, but in Sony AVC it had a (very) marginal improvement over 'CPU only'. But nowhere near the GPU acceleration I was enjoying on my GTX 570.

So if anybody comes across this post while researching GPU, it's a thumbs down for the R9 390 if you are a Vegas Pro user.

John Laird posted earlier that his setup was lightning quick and included an R9 390. I'd like to see his comparisons between CPU-only and GPU renders. I suspect he may just be enjoying life with a superior CPU, and his graphics card has little to do with it.

That statement is based on what? You offer no insight into your testing process, media types or project settings, so there is no way for anyone to know what you are doing.

I can assure you that the 390 will outperform the 570 with the proper use case. A use case like AVC, ProRes, Cineform, XDCAM, or HDCAM content with layers of timeline text, compositing, or effects like GBlur, Rays, SoftCon, Glow, cookie cutter (power windows), and Min & Max.

A go-to test for OpenCL enhancement is to use a 1-minute clip of AVC, ProRes, Cineform, XDCAM, or HDCAM encoded content, then apply a "Sony Min and Max" with a "Sony Defocus" effect. Test playback speed and render times using both CPU Only and Auto settings in the render template. Monitor GPU utilization with GPU-Z or "AMD System Monitor." If you get any "..." after your Frame counter, then your system is dropping massive amounts of frames to keep up, even at the low speed it is showing.

Most people will also render their finals in 32-bit video levels; this is another case where the enhanced floating-point calculation speed will be noticed.

There is more to the rendering pipeline than just MC encoder acceleration.

I spend most of my time editing, not rendering, so I would not give up the improvement in calculation speed that the GPU offers. If your use case for Vegas is just an elaborate video file converter, then you probably will not see much improvement with OpenCL.

"I decided to check performance again since I re-installed Vegas. Now I get full playback in 'Preview' but the CPU maxes at any 'Good' range and playback rate drops dramatically.
CPU upgrade would certainly help that, but in fairness 'Preview' is adequate to edit to."

You may already know this ...
I assume you are using proxy files for 4K ...

Check out page 102 in the manual. Only Draft and Preview use the proxy file; if you use Good or Best, then you are previewing the full 4K file alone, which is obviously more difficult for your PC.