I am trying to dig through the code and find the section where streaming out to the device (a Sony Bravia in this case) happens. I suspect a bandwidth limitation somewhere. Can anyone give me a pointer?

If I play a 720p video, very busy scenes will stutter. The buffer is full, CPU usage is low, and the network interface is nowhere near saturated. If I pause for just a second, the buffer in the TV seems to refill and it plays fine for a few more seconds, then begins to stutter again.

Perhaps I am looking in the wrong direction though... I would expect, during busy scenes (often large outdoor shots with lots of texture), to see network utilization climb closer to what the network itself can handle, or at least to what is necessary to keep the TV supplied.

Interestingly, if I set PMS to scale the video down to 800x-1 or so, the stuttering decreases and average network usage drops correspondingly. But when the stuttering does start, usage never even reaches the unscaled average. In other words, network usage while stuttering with scaling is significantly lower than network usage while stuttering without scaling, and regular unscaled streaming averages more bandwidth than the scaled stream uses while stuttering. I would expect usage to approach the unscaled average before stuttering begins.
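That pattern (fine for a few seconds after a pause or refill, then stuttering) is what you would see if the supply rate is capped somewhere below the peak scene bitrate: the TV's buffer drains at the difference between the two rates. A back-of-the-envelope sketch, with all numbers invented purely for illustration:

```java
public class BufferDrain {
    public static void main(String[] args) {
        // All numbers are hypothetical, just to show the shape of the problem.
        double bufferBytes = 8_000_000;     // whatever the TV buffers internally
        double supplyRate  = 1_500_000;     // bytes/s actually reaching the TV (capped somewhere)
        double demandRate  = 2_500_000;     // bytes/s a busy 720p scene needs

        // A full buffer drains at the difference between demand and supply.
        double secondsUntilStutter = bufferBytes / (demandRate - supplyRate);
        System.out.printf("stutter after ~%.0f seconds%n", secondsUntilStutter);  // ~8
    }
}
```

If something like this is going on, pausing refills the buffer and buys you another few seconds, which matches the symptom exactly, and scaling the video down lowers the demand rate, which matches the reduced stuttering too.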

And one last thing: PMS.sh, as of my latest SVN checkout (revision 710), still has what I assume to be a bug, and it has been there since I started using PMS a year or more ago. At the very bottom, when starting the JVM: exec "$JAVA" $JAVA_OPTS -Xmx768M -Xss16M -Dfile.encoding=U....

I think -Xss16M should be -Xms16M, which would set the initial heap size to 16M instead of setting the thread stack size to 16M. Maybe some people need a 16M thread stack, but it has caused problems for me.

Not to change my own topic, but should 'ss' be that large? Given the original value of 16M it makes sense to me that the intention was to reduce the minimum heap size and leave the stack size at the default.

At least that was my thinking when I changed it to 'ms' on my system...

Like I said in the commit message, it's backported from pms-mlx, which spanks concurrency much harder than the official build (when querying TMDb for media library data). That's the smallest size that doesn't cause "java.lang.OutOfMemoryError: unable to create new native thread" errors on my system (Ubuntu 10.10; Sun Java 1.6.0_26).

Given the original value of 16M it makes sense to me that the intention was to reduce the minimum heap size and leave the stack size at the default.

According to this, the default for the Sun JDK on Red Hat Linux (in 2007) is 512 KB. According to this, it's 8192 KB.

Yes, exactly: like I said above, 16M is the smallest stack size that avoids the thread-creation OutOfMemoryError on my system.
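To see why a large -Xss value can bite, a bit of address-space arithmetic helps. The thread count below is invented; the two per-thread stack sizes are the ones mentioned in this thread:

```java
public class StackReservation {
    public static void main(String[] args) {
        // Illustrative numbers only: thread count is invented,
        // stack sizes are the two figures discussed in this thread.
        long threads = 300;
        long defaultStack = 512L * 1024;          // 512 KB, one reported Sun JDK default
        long bigStack = 16L * 1024 * 1024;        // 16 MB, what -Xss16M reserves per thread

        System.out.println("default stack: " + threads * defaultStack / (1024 * 1024) + " MB reserved");
        System.out.println("-Xss16M:       " + threads * bigStack / (1024 * 1024) + " MB reserved");
    }
}
```

With 300 threads that is 150 MB of stack reservation at the default versus 4800 MB at 16M. On a 32-bit JVM with only 2-3 GB of usable address space, the latter simply cannot fit, and the failure shows up as exactly the "unable to create new native thread" error.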

Many thanks for the informative reply. I saw the note about pms-mlx but I have not heard of it before now. I'll have to take a look at those different builds.


mazey wrote: the new buffer code needs a bigger buffer; getting only 50 MB isn't enough for 1080 HD transcodes on some scenes.

Frankly, I don't believe that.

In fact, I think that the code in "BufferedOutputFile.java" simply picks some random big figures with plenty of zeroes, hoping that more zeroes will lead to better buffering throughput.

While that may sound true to the layman, it really is not. Adding more zeroes to your buffer will not increase your network throughput!

There are a couple of things to consider here:

1) The transcoder reads a file to transcode (possibly over the network)
2) The transcoder fills up the pipe with transcoded movie
3) PMS reads the pipe data into a buffer
4) PMS sends the buffer data to the DLNA client via the network
5) The DLNA client decodes and displays the movie
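The middle of that chain (steps 2-4) can be sketched with Java's piped streams. This is a toy stand-in with invented names and sizes, not the actual PMS code:

```java
import java.io.*;

public class PipeSketch {
    public static void main(String[] args) throws Exception {
        // Step 2: a fake "transcoder" writes into a pipe.
        final PipedOutputStream transcoderOut = new PipedOutputStream();
        PipedInputStream pmsIn = new PipedInputStream(transcoderOut, 64 * 1024);

        Thread transcoder = new Thread(() -> {
            try {
                byte[] frame = new byte[8192];               // pretend 8 KB of transcoded data
                for (int i = 0; i < 100; i++) transcoderOut.write(frame);
                transcoderOut.close();
            } catch (IOException ignored) {
            }
        });
        transcoder.start();

        // Steps 3-4: "PMS" drains the pipe into a buffer it would then send to the client.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = pmsIn.read(chunk)) != -1) buffer.write(chunk, 0, n);
        transcoder.join();

        System.out.println("buffered " + buffer.size() + " bytes");  // 819200
    }
}
```

Note that the pipe here is only 64 KB; the reader keeps up with the writer regardless. The end-to-end rate is set by the slowest party, not by how big any intermediate buffer is, which is the point being made above.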

The slowest bits in this chain are probably steps 2 and 4. This has nothing to do with the size of the buffer, but with the speed of transcoding and the speed of the network.

For comparison: TCP/IP packets typically have an MTU (maximum transmission unit) of 1500 bytes. Yes, that's right: that 50,000,000-byte buffer is chopped up into 1500-byte chunks when it is sent to the DLNA client. The channel that HttpServerPipelineFactory.java uses defaults to a chunk size of 8192 bytes, if I remember correctly.
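For scale, here is that arithmetic spelled out (buffer size from above; MTU and chunk size as just described):

```java
public class ChunkMath {
    public static void main(String[] args) {
        long buffer = 50_000_000L;   // the 50 MB buffer discussed above
        long mtu = 1500;             // typical Ethernet MTU
        long httpChunk = 8192;       // the default chunk size mentioned above

        // Ceiling division: how many pieces the buffer is split into on the wire.
        System.out.println("MTU-sized packets: " + (buffer + mtu - 1) / mtu);             // 33334
        System.out.println("8 KB HTTP chunks:  " + (buffer + httpChunk - 1) / httpChunk); // 6104
    }
}
```

So regardless of how many zeroes the buffer size has, the network only ever sees a stream of small pieces.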

I'm not saying that having a buffer is useless. The buffer can even out transmission lag and it can compensate for when the transcoder is having a hard time. Or, it can make sure all data for one chunk of movie is complete before sending it to the client. Or it can be used to do bit magic like the methods in the lower regions of "BufferedOutputFile.java".

But seeing PMS request a contiguous block of 491 MB worth of memory just to buffer piped data is plain ridiculous.

IMHO the buffering badly needs an overhaul. The code should be able to work correctly with much smaller buffer sizes, say 10K or so, and it should work correctly for multiple devices streaming concurrently. Right now it cannot, and watching movies breaks horribly if you try.
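For what it's worth, a buffer doesn't have to be one contiguous allocation at all. A toy sketch of the idea (this is not how BufferedOutputFile.java actually works, and the chunk size is arbitrary):

```java
import java.util.ArrayList;
import java.util.List;

/** Toy append-only buffer backed by small chunks instead of one contiguous block. */
public class ChunkedBuffer {
    private static final int CHUNK = 64 * 1024;      // arbitrary small chunk size
    private final List<byte[]> chunks = new ArrayList<>();
    private int posInLast = CHUNK;                   // "full" so the first write allocates
    private long size = 0;

    public void write(byte[] data, int off, int len) {
        size += len;
        while (len > 0) {
            if (posInLast == CHUNK) {                // last chunk full: allocate another 64 KB
                chunks.add(new byte[CHUNK]);
                posInLast = 0;
            }
            int n = Math.min(len, CHUNK - posInLast);
            System.arraycopy(data, off, chunks.get(chunks.size() - 1), posInLast, n);
            posInLast += n;
            off += n;
            len -= n;
        }
    }

    public long size() { return size; }
    public int chunkCount() { return chunks.size(); }

    public static void main(String[] args) {
        ChunkedBuffer b = new ChunkedBuffer();
        byte[] block = new byte[100_000];
        for (int i = 0; i < 10; i++) b.write(block, 0, block.length);
        // 1,000,000 bytes land in 16 small chunks; no huge contiguous allocation needed.
        System.out.println(b.size() + " bytes in " + b.chunkCount() + " chunks");
    }
}
```

Growing on demand in small pieces would avoid ever asking the JVM for a 491 MB contiguous block up front.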

Yeah, the problem I am having is the buffer draining during scenes with a lot of information. I notice it because most of the time the buffer is full; then a whole bunch of trees, bright sunlight and action comes on, and the buffer starts to drain until it hits 0, with the CPU working its ass off to keep the buffer up. I do agree that anything that can be improved upon is a great step.

On a bit of a side note, I'm just wondering how efficient MEncoder is, and whether it is getting more or less efficient as the builds go on. I know it has been a bit of a patch-and-fix job for many years now. Will there come a time when we can replace it with something that just eats through the transcodes and displays them with very little buffer? I hope that day comes..

The main reason the buffer exists is so that if something ties up or bottlenecks a bit, there is that extra data on hand. I don't think a 10K buffer would ever work, though it would be nice if it did..