At PLASA 2011 I was talking with some of the manufacturers in our industry and received complaints that the resolution of CITP MSEX video streams is not adequate.

I recently made a posting on the LightNetwork regarding this resolution and concluded that streaming JPG frames at 50% quality should allow a resolution of about 600 x 400 pixels (http://en.wikipedia.org/wiki/JPEG), as that should fit within the 65K StreamFrame message limit.
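As a back-of-envelope check (assuming roughly 2 bits per pixel for a 50%-quality JPG - the real figure varies heavily with image content, so treat this only as an estimate):

```python
# Rough size estimate for a 600 x 400 JPG frame at ~50% quality.
# BITS_PER_PIXEL is an assumed figure, not a guarantee.
LIMIT = 65_536            # StreamFrame payload limit in bytes
BITS_PER_PIXEL = 2        # assumed rate for medium-quality JPG
width, height = 600, 400

estimated_bytes = width * height * BITS_PER_PIXEL // 8
print(estimated_bytes, estimated_bytes <= LIMIT)  # 60000 True
```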

I realize now that these companies may have missed the ability to stream as JPG, but I would like to know if anybody feels the need for resolutions higher than that?

An update here - during ProLight+Sound in Frankfurt this year, MA Lighting suggested to me sending fragmented StFr messages as a solution to two problems:

- Being able to send stream frames larger than 65K

- Being able to send several smaller (than 65K) frame fragments for traffic shaping

I think this should go straight into the spec; there is only one thing to decide, and that is whether it's an addendum to MSEX 1.1 and/or 1.2, or part of a new 1.3. MA Lighting are in favour of version 1.3. Opinions?

Hey Lars, I've been smashing several streams of 720x576 and higher @ 25fps, with JPEG quality around 70-99%, into MA2 within the 65K limit, by dynamically recompressing frames that exceed the limit to maintain the best quality (and by using short stream names!). I'm using MSexLord - part of the TimeLord media player (http://timelord-mtc.com). But my point is: why would anyone want something greater than SD video on their console? MA2, for example, has a 1280x800 UI with a smaller window space for CITP video viewers, which scale the video anyway (yes, you can have bigger external displays etc.). I think the biggest limitation on the system is MA's 30Mbit/s limiter - I realise it's there for traffic management, but it's a bit restrictive if you're not in network sessions. Also, they could accept CITP connections on the eth1.
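The recompression approach described above can be sketched roughly like this (a minimal sketch only - `encode_jpeg` here is a fake stand-in for a real JPEG encoder such as libjpeg or Pillow, with a simulated output size, and the quality step values are illustrative):

```python
FRAME_LIMIT = 65_536  # MSEX StreamFrame payload limit

def encode_jpeg(frame, quality):
    # Stand-in for a real JPEG encoder (libjpeg, Pillow, ...).
    # The returned size is simulated as roughly proportional to quality.
    return b"\xff" * (len(frame) * quality // 100)

def compress_to_fit(frame, quality=99, step=5, floor=30):
    """Step the JPEG quality down until the encoded frame fits in 65K."""
    while quality >= floor:
        data = encode_jpeg(frame, quality)
        if len(data) <= FRAME_LIMIT:
            return data, quality
        quality -= step
    raise ValueError("frame does not fit even at minimum quality")

# With this fake encoder, a 100 KB raw frame settles at quality 64.
data, quality = compress_to_fit(bytearray(100_000))
```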

My idea - and here is a rough spec for your 1.3 version - is a message type "TcFb", or Touch Feedback. It facilitates realtime manipulation of the stream source from the client, i.e. pointer information: warp or move a video, or interact with video effects live, etc.

Messages are sent from controller to server (vis clients are optional), and only when the pointer information changes. The client may wish to center the pointer, with no LeftState or RightState information, when the stream is first requested. No mutual exclusion is needed: you simply yell at the person on another station to stop changing the stream, or the source stream shows a padlock icon or similar to prevent changes. The stream server is responsible for doing whatever it likes, if anything, with this information.
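One possible wire layout for such a TcFb message (the field names, types and sizes here are an illustration of the idea, not a finalised spec):

```python
import struct

# Hypothetical TcFb layout (an illustration, not part of any MSEX version):
#   cookie    4 bytes  "TcFb"
#   sourceId  uint16   stream source being manipulated
#   posX      float32  pointer X, normalised 0.0-1.0 over the frame
#   posY      float32  pointer Y, normalised 0.0-1.0
#   left      uint8    LeftState (0 = up, 1 = down)
#   right     uint8    RightState
def pack_tcfb(source_id, x, y, left=0, right=0):
    return struct.pack("<4sHffBB", b"TcFb", source_id, x, y, left, right)

msg = pack_tcfb(source_id=3, x=0.5, y=0.25, left=1)
# 16 bytes per message with this layout - cheap to send on every change
```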

Hippy wrote: But my point is why would anyone want something greater than SD video on their console?

That's a valid point and I don't disagree. However, the discussion of higher resolutions is not for consoles, but for visualizers. As for the specifics of MA's bandwidth limitations, that's for them to handle.

The touch feedback message is quite interesting though! Are you visiting PLASA in London this fall? If so, I'd love to meet up and have a chat about it.

Rather than adding a new message type to any of the existing MSEX versions, or adding yet another version only for the purpose of accommodating fragmented streams, here is what I will move forward with unless there are objections:

- We introduce two new stream formats, "fJPG" and "fPNG", short for "fragmented JPG/PNG". This can be done easily and in line with all current MSEX versions, without breaking any compatibility or having to introduce new messages.

- "fJPG" and "fPNG" stream frame buffer data is defined as today, but with the addition of a fragment preamble:
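For illustration, one plausible preamble layout and the matching fragmentation logic (the exact fields below are an assumption for the sketch, not quoted from the proposal):

```python
import struct

# One plausible fragment preamble for "fJPG"/"fPNG" frame data (assumed):
#   frameIndex     uint32  increments per source frame, ties fragments together
#   fragmentCount  uint16  total fragments in this frame
#   fragmentIndex  uint16  0-based index of this fragment
#   byteOffset     uint32  offset of this fragment's payload within the frame
PREAMBLE = struct.Struct("<IHHI")

def fragment_frame(frame_index, frame_data, max_payload=60_000):
    chunks = [frame_data[i:i + max_payload]
              for i in range(0, len(frame_data), max_payload)]
    return [PREAMBLE.pack(frame_index, len(chunks), i, i * max_payload) + c
            for i, c in enumerate(chunks)]

# A 150 KB encoded frame splits into 3 fragments under the 65K limit.
frags = fragment_frame(7, b"\x00" * 150_000)
```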

I had kind of assumed, following the spec, that MSEX, like any CITP message, would use the CITP header to sequence fragmented message transmission.

So I wrote my CITP/MSEX stack to handle any fragmented CITP messages, using the CITP message count/parts/length fields for any oversized frames. I pack as much as possible into every packet within the UDP limitations; if a message needs multiple UDP frames, the message count, part number and length fields kick in.
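That approach can be sketched like this (the 20-byte header layout below is my reading of the CITP base header - treat the exact field order, and the choice of what messageSize means per part, as assumptions):

```python
import struct

# CITP base header, as I read the spec (20 bytes):
#   "CITP", verMajor, verMinor, requestIndex,
#   messageSize, messagePartCount, messagePart, contentType
CITP_HEADER = struct.Struct("<4sBBHIHHI")
MTU_PAYLOAD = 1_400  # conservative UDP payload budget per packet

def split_message(content_type, body, request_index=0):
    """Fragment an oversized message using only the base-header
    messagePartCount/messagePart fields."""
    parts = [body[i:i + MTU_PAYLOAD] for i in range(0, len(body), MTU_PAYLOAD)]
    packets = []
    for i, part in enumerate(parts):
        # messageSize here covers this part only - an assumption.
        size = CITP_HEADER.size + len(part)
        packets.append(CITP_HEADER.pack(b"CITP", 1, 0, request_index,
                                        size, len(parts), i, content_type) + part)
    return packets

# A 70 KB frame message splits into 50 sequenced packets.
pkts = split_message(struct.unpack("<I", b"MSEX")[0], b"\xff" * 70_000)
```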

With MA2 I get part of a beautiful HD picture, then blocks of grey where the JPEG data of the subsequent sequenced CITP messages is being lost.

Version-wise, this shouldn't break anything in current clients, since they either already drop it or can already handle it?

I don't mind either way really - new types could do it too. I just wonder if anyone else built their stacks to operate with fragmented CITP messages from the protocol headers?

Hippy wrote: I had kind of assumed following the spec that MSEX like any CITP message would use the CITP header to sequence fragmented message transmission...I don't mind either way really, new types could do it too, I just wonder if anyone else built their stacks to operate with fragmented CITP messages from protocol headers?

Unfortunately that construction is very old and... weird. It would have worked over MIDI or TCP (where it's not needed anyway), where message order is guaranteed, but over multicast you really want that counter as well, so that you know you're assembling the fragments of the same message.
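To make the point concrete, here is a minimal reassembly sketch keyed by a per-frame counter, so interleaved fragments of different frames never get mixed (the class and field names are just for illustration):

```python
# Minimal reassembly keyed by a per-frame counter, so fragments from
# different frames are kept apart even if packets interleave or drop.
class Reassembler:
    def __init__(self):
        self.pending = {}  # frame_index -> {fragment_index: payload}

    def feed(self, frame_index, fragment_count, fragment_index, payload):
        frags = self.pending.setdefault(frame_index, {})
        frags[fragment_index] = payload
        if len(frags) == fragment_count:          # frame complete
            self.pending.pop(frame_index)
            return b"".join(frags[i] for i in range(fragment_count))
        return None                               # still waiting

r = Reassembler()
# Fragments of frames 1 and 2 arrive interleaved; each frame still
# reassembles correctly because the buffer is keyed by frame index.
assert r.feed(1, 2, 0, b"AA") is None
assert r.feed(2, 2, 0, b"XX") is None
assert r.feed(1, 2, 1, b"BB") == b"AABB"
assert r.feed(2, 2, 1, b"YY") == b"XXYY"
```

Without the frame counter, the two interleaved streams above would corrupt each other, which is exactly the grey-block symptom described earlier in the thread.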