For problems related to the HTML 5 media elements (<video> and <audio>) -- including WebM, MP4, MSE and EME issues. This would also typically include decoding problems in the codecs themselves (e.g. VP8, VP9, H.264, AAC) experienced during playback.

Couple of things:
1. I'm fairly certain the data types I'm using in the attribute itself are screwed up. As far as I could tell, the spec isn't too specific on this front.
2. As you will notice in the patch, I left a little "thisWillNotPass" hint for the Ogg stuff as I wasn't sure how to approach this from that end since it makes calls to this same function.
Anyway, any suggestions/criticisms are welcome! Let me know how badly I screwed up :p

Comment on attachment 616157 [details] [diff] [review]
Initial PlaybackJitter Patch
Review of attachment 616157 [details] [diff] [review]:
-----------------------------------------------------------------
You've made a good start, but there's still plenty to do, keep it up! :)
Are you going to attempt to implement the other metrics (like bytesReceived etc)? You'd be best to tackle one at a time though.
::: content/media/VideoFrameContainer.cpp
@@ +17,5 @@
>
> void VideoFrameContainer::SetCurrentFrame(const gfxIntSize& aIntrinsicSize,
> Image* aImage,
> + TimeStamp aTargetTime,
> + PRInt64 aIntendedDuration)
aIntendedDuration should be a TimeDuration.
@@ +38,5 @@
> mImageSizeChanged = true;
> }
>
> + if (!lastPaintTime.IsNull() && !aTargetTime.IsNull()) {
> + mPlaybackJitter += (aIntendedDuration - (aTargetTime - lastPaintTime).ToMilliseconds());
So playback jitter is defined at http://wiki.whatwg.org/wiki/Video_Metrics#playbackJitter as:
playbackJitter = sum(abs(Ei - Ai))
where:
Ei = Desired duration of frame i spent on the screen (to nearest microsecond?)
Ai = Actual duration frame i spent on the screen (if the frame is never presented to the user, then Ai == 0).
(On most platforms we care about, timers are limited to millisecond resolution, so don't worry about microsecond precision.)
Also aTargetTime is the TimeStamp at which we should paint aImage, whereas lastPaintTime is the time at which the ImageContainer's *previous* image was painted. So the duration which the *previous* image was on screen for is (TimeStamp::Now() - lastPaintTime), and the previous image's duration was passed in last time this function was called.
So I think you should:
1. Add a new field to VideoFrameContainer, TimeDuration mLastFrameIntendedDuration.
2. When you enter SetCurrentFrame(), if we have a non-null mLastFrameIntendedDuration, then the jitter for the previous frame was:
TimeDuration frameJitter = (TimeStamp::Now() - lastPaintTime) - mLastFrameIntendedDuration;
Calculate this and add it to mPlaybackJitter (which can stay as a double, we return it to JS as a double).
3. Set mLastFrameIntendedDuration to aIntendedDuration.
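A minimal sketch of those three steps, using std::chrono types as stand-ins for mozilla::TimeStamp/TimeDuration (which aren't available outside the tree); the field names follow the suggestion above, the rest is illustrative:

```cpp
#include <chrono>
#include <cmath>

using Clock        = std::chrono::steady_clock;
using TimeStamp    = Clock::time_point;
using TimeDuration = std::chrono::duration<double, std::milli>;

struct VideoFrameContainer {
  double mPlaybackJitter = 0.0;              // running sum(|Ei - Ai|), in ms
  TimeStamp mLastPaintTime{};                // when the previous frame was painted
  TimeDuration mLastFrameIntendedDuration{0};

  void SetCurrentFrame(TimeStamp aNow, TimeDuration aIntendedDuration) {
    // Step 2: if we know how long the previous frame was *supposed* to be on
    // screen, compare that with how long it actually was, and accumulate the
    // absolute difference (the wiki's sum(abs(Ei - Ai))).
    if (mLastPaintTime != TimeStamp{} &&
        mLastFrameIntendedDuration.count() > 0) {
      TimeDuration actual = aNow - mLastPaintTime;
      TimeDuration frameJitter = actual - mLastFrameIntendedDuration;
      mPlaybackJitter += std::abs(frameJitter.count());
    }
    // Step 3: remember this frame's intended duration for the next call.
    mLastFrameIntendedDuration = aIntendedDuration;
    mLastPaintTime = aNow;
  }
};
```

Note that mPlaybackJitter stays a plain double, as suggested, since it's ultimately handed back to JS as one.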
::: content/media/VideoFrameContainer.h
@@ +43,5 @@
> NS_ASSERTION(mImageContainer, "aContainer must not be null");
> }
> // Call on any thread
> void SetCurrentFrame(const gfxIntSize& aIntrinsicSize, Image* aImage,
> + TimeStamp aTargetTime, PRInt64 aIntendedDuration);
aIntendedDuration should be a TimeDuration, not a PRInt64.
@@ +48,5 @@
> // Time in seconds by which the last painted video frame was late by.
> // E.g. if the last painted frame should have been painted at time t,
> // but was actually painted at t+n, this returns n in seconds. Threadsafe.
> double GetFrameDelay();
> + //
Need a meaningful comment.
@@ +76,5 @@
> // The delay between the last video frame being presented and it being
> // painted. This is time elapsed after mPaintTarget until the most recently
> // painted frame appeared on screen.
> TimeDuration mPaintDelay;
> + // TODO: Come up with a good comment.
Yes, you totally should. ;)
A link to the wiki/spec would be good to include.
::: content/media/nsBuiltinDecoderStateMachine.cpp
@@ +1879,5 @@
> return;
> }
>
> + // Assume the times retrieved from the VideoData are the intended times for the frame
> + PRInt64 intendedDuration = aData->mEndTime - aData->mTime;
I'd rather you used TimeDuration.
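For illustration, here's the shape of that conversion with a std::chrono duration standing in for TimeDuration, and a minimal stand-in for VideoData's microsecond timestamps (mTime/mEndTime are in microseconds in the decoder):

```cpp
#include <chrono>
#include <cstdint>

// Stand-in for the relevant VideoData fields (microsecond timestamps).
struct VideoData { int64_t mTime; int64_t mEndTime; };

using TimeDuration = std::chrono::duration<double, std::milli>;

// Build the frame's intended duration directly as a duration type rather
// than carrying around a raw PRInt64 of microseconds.
TimeDuration IntendedDuration(const VideoData& aData) {
  return std::chrono::microseconds(aData.mEndTime - aData.mTime);
}
```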
::: content/media/ogg/nsOggReader.cpp
@@ +272,4 @@
> container->SetCurrentFrame(gfxIntSize(displaySize.width, displaySize.height),
> nsnull,
> + TimeStamp::Now(),
> + thisWillNotPass);
Hmm, we'll need a way to not include video frame durations from frames that are displayed while we're not playing (paused, seeking etc). Otherwise they'll skew the jitter.
How about when we call SetCurrentFrame() we pass in a boolean parameter designating whether we paused since the last paint? That way we know whether we were playing for the entire duration the previous frame was displayed; if we weren't, we shouldn't include its jitter in the jitter sum. You'll then need to figure out how to track this in the nsBuiltinDecoderStateMachine.
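A hypothetical sketch of that flag, reduced to plain doubles for brevity (the struct and method names here are made up for illustration, not existing tree code):

```cpp
// The state machine sets a flag when playback stops; the jitter
// accumulation then skips any frame whose on-screen interval overlapped
// a pause, so paused/seeking intervals don't skew the sum.
struct JitterTracker {
  double mPlaybackJitter = 0.0;
  bool mPausedSinceLastPaint = false;  // set from StopPlayback()

  void StopPlayback() { mPausedSinceLastPaint = true; }

  // aActualMs: how long the previous frame really was on screen.
  // aIntendedMs: how long it should have been on screen.
  void AccumulateJitter(double aActualMs, double aIntendedMs) {
    if (!mPausedSinceLastPaint) {
      double d = aActualMs - aIntendedMs;
      mPlaybackJitter += (d < 0 ? -d : d);
    }
    // Either way, the next frame starts with a clean slate.
    mPausedSinceLastPaint = false;
  }
};
```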

Comment on attachment 616157 [details] [diff] [review]
Initial PlaybackJitter Patch
Review of attachment 616157 [details] [diff] [review]:
-----------------------------------------------------------------
::: content/media/ogg/nsOggReader.cpp
@@ +272,4 @@
> container->SetCurrentFrame(gfxIntSize(displaySize.width, displaySize.height),
> nsnull,
> + TimeStamp::Now(),
> + thisWillNotPass);
Argh, I was a bit trigger-happy on the review for this. What I meant to say is that you need to figure out if we were paused at some point during the last frame, and I think tracking whether we paused (i.e. whether nsBuiltinDecoderStateMachine::StopPlayback() was called) since the last call to SetCurrentFrame() is an easy way to achieve this.

> Are you going to attempt to implement the other metrics (like bytesReceived
> etc)? You'd be best to tackle one at a time though.
Yep, we plan on finishing them all :) I have bytesDecoded, bytesReceived, and droppedFrames (this one only sort of) working. Do you want me to put those up for review, or wait until I get the others done as well?