
There are some files that play back painfully slowly while using 150-200% CPU in Kdenlive. If I play them in MPlayer or Avidemux with VDPAU, the Quadro in my machine takes over and playback is smooth. I can even perform other CPU-intensive tasks while editing video in Avidemux with VDPAU, and it doesn't skip a beat.
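For reference, this is the kind of invocation I mean for VDPAU playback in MPlayer (the filename is just an example, and the codec list assumes H.264 content):

```shell
# Decode H.264 on the GPU via VDPAU; the trailing comma after the
# codec name lets mplayer fall back to software decoding if the
# file is not H.264.
mplayer -vo vdpau -vc ffh264vdpau, clip.mp4
```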

The problem is that Avidemux is very limited and, for some of my intended uses, not usable at all.

Is there a way to get Kdenlive to work with VDPAU so the graphics card can do some of the video decoding? This isn't for exporting final videos; it's just for playback of the video I am editing within the software.

I'm not using any effects whatsoever, just straightforward playback and scissor cuts, and that's it. Is there any way to get the GPU to do some of this?

Do you have an idea when it's expected to be completed? Would a donation to the developers speed it up? There aren't any options on Linux that do everything well, even paid ones. I wouldn't mind paying; good software that does what you want isn't always free, so I'm happy to put my money where my mouth is to get what I want! Please do let me know.

MLT can use VDPAU when libavcodec and MLT are explicitly configured to build with it. However, I strongly discourage distro packagers from enabling it in MLT packages because the combination is not stable enough in a multitrack scenario. Quite some effort was put into stabilizing it, but it is still problematic enough to create support headaches. And even when it does work, MLT must pull the video images from GPU RAM back to CPU RAM in order to apply effects or encode the result, negating many of the performance gains compared to pure media playback applications. Ultimately, I cannot justify continuing to give much of my attention to a single OS+vendor technology.
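For anyone who wants to experiment anyway, the build steps look roughly like this. This is a sketch only: the exact MLT configure switch name varies by version (check `./configure --help`), and the source directories are placeholders:

```shell
# 1. Build FFmpeg/libavcodec with VDPAU support enabled.
cd ffmpeg
./configure --enable-vdpau --enable-shared
make && make install

# 2. Build MLT against that libavcodec, turning on the VDPAU
#    code path in the avformat module (switch name may differ
#    in your MLT version; see ./configure --help).
cd ../mlt
./configure --avformat-vdpau
make && make install
```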

Does this mean hardware-accelerated video is most likely never coming to Kdenlive? That would make it nearly impossible to use going forward if we ever move to higher-resolution projects. Encoding proxy clips wastes time, and they don't look that good to work with. Is it a hardware issue, or is Linux just a bad platform for professional video editing?

For the heck of it, I tried a different machine: Ivy Bridge, Core i7-3770, 16 GB of RAM. I feel like I was getting greedy with the GPU decoding thing; this is a fairly powerful machine. It does fine until you apply one Affine effect, and then it chokes completely. It tops out at 75% CPU usage on a quad-core processor and becomes laggy as hell.

Is there a way to get multicore support for effects, if not GPU decoding? I don't mind using the other machine for this instead of the laptop if I can make use of more than one core. Even with proxy clips it's awfully laggy, most likely from using just one core and no GPU for everything.

This would probably cost $20K in donations instead of the paltry $500-$1K I had in mind, so feel free to tell me this is not realistic and to STFU with the peanut-gallery suggestions.

Thanks for the explanations again, and for entertaining my silly comments.

Ah, you are using the Affine transition. I checked your posts before, but there was no hint of that. The Affine transition is well known to cause choppy playback for some reason. Maybe you can move to the Composite transition, which has fewer options but runs much faster in preview.

There was another recent thread enquiring about multi-core: viewtopic.php?f=265&t=122140

The Affine effect is quite CPU-intensive partly because there is no MMX/SIMD/SSE assembly for it, whereas Composite does use SSE2. The bulk of the Affine code is thread-safe and could run in parallel if that were enabled. There is also a library for GPU-based effects called Movit that has a good start at being integrated into Kdenlive; however, that is not yet ready. GPU effects are much more portable across OSes and chipset vendors since they use OpenGL, providing more value and less fragmentation than GPU decoding, but they do suffer from interoperability and integration-complexity issues.
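As an aside, MLT's consumers expose a `real_time` property that controls parallel processing during preview. A quick way to experiment from the command line (the filename is a placeholder):

```shell
# Preview with 4 parallel processing threads. A negative value
# (e.g. real_time=-4) also uses 4 threads but disables frame
# dropping, so playback may stutter instead of skipping frames.
melt clip.mp4 -consumer sdl real_time=4
```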

I hear you on not wanting to support a closed architecture, and I have been reading up on Movit over the past day.

I am new to video editing. What I need is a way to have a window in a window, and not much else. My goal is to make educational how-to videos of professional repair work that is not documented anywhere online in video format, with a microscope camera image sitting either on top of (after being shrunk) or alongside another video from a standard camcorder. This way there is a split view: what I am doing from a macro perspective via the camcorder, and a view of what I am doing under the microscope. I am too new to video editing to know if there is a better way to do this.
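For what it's worth, that picture-in-picture layout maps directly onto the Composite transition suggested above. A rough melt command-line sketch, where the filenames and geometry values are made-up examples:

```shell
# Overlay the microscope feed, shrunk to 30% of frame size, in
# the top-right corner of the camcorder footage.
# geometry is "x/y:widthxheight", here given as percentages.
melt camcorder.mp4 -track microscope.mp4 \
  -transition composite geometry="68%/2%:30%x30%" a_track=0 b_track=1
```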

I did enable multithreaded decoding and selected 4 threads in the clip properties, but that didn't do much. I will try the Composite transition tomorrow. Thank you very much for the advice, again!