In bigger installations we use Squeezeboxes and CAT5 matrix switches to achieve this. 4x8 or 8x16 is the most used combination for us. This allows us to route any or all of 4 or 8 inputs to any of 8 or 16 outputs. Each output then drives a zone of a multi-zone amp, or localised in-ceiling amplifiers in each room. We use either rs232 or IP controllable Matrixes. This allows us to route one source to all outputs/zones and gives us perfect sync too. Conversely we can control the Matrix so that, say, several zones on the ground floor receive the same source while each zone upstairs receives a separate source.

Matrixes are expensive, but they provide an enormous amount of flexibility and they keep video/audio perfectly in sync. We'd rather do all this internally, and digitally, inside the system... but for now this seems a little out of reach.

In some installs we also use Matrixes for switching video sources... but that's another story.

All the best

Andrew

Sounds interesting. Can you give a real example of using a Matrix, please, if it isn't a secret of course?

Well my earlier post was referring to 'real' customer installation projects... so this is real and not hypothetical for us. 'Matrix switch' is not a brand; it's a technical term for a hardware device that can route multiple inputs to multiple outputs, either independently or simultaneously. These matrixes can switch audio, video or both depending on your application, and can be controlled by LinuxMCE using IR, rs232 or IP interfaces. They can get very expensive as you move to larger units with many inputs and outputs... and currently HDMI Matrix switches are the most expensive of all. We use units we have built direct from Taiwan/China, but you can purchase similar units easily from many suppliers - google 'HDMI Matrix Switch rs232' and you will get plenty of hits ;-)
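To make the control side concrete, here's a minimal sketch in Python of what driving an IP-controllable matrix can look like. The 'SET OUT x IN y' command syntax, the host and the port are entirely made up for illustration - every vendor has its own protocol, so check your unit's rs232/IP documentation:

```python
import socket

def route_command(source: int, zone: int) -> bytes:
    """Build a routing command. The 'SET OUT x IN y' syntax is invented
    for illustration -- real matrixes each define their own protocol."""
    return f"SET OUT {zone} IN {source}\r\n".encode("ascii")

def route(host: str, port: int, source: int, zone: int) -> None:
    """Send one routing command to an IP-controllable matrix switch."""
    with socket.create_connection((host, port), timeout=2) as sock:
        sock.sendall(route_command(source, zone))

# e.g. route input 1 (say, Squeezebox No.1) to every zone of an 8x16 unit:
# for zone in range(1, 17):
#     route("192.168.80.50", 4999, 1, zone)   # host/port are hypothetical
```

LinuxMCE would issue the equivalent commands through the matrix's device template rather than raw sockets, but the wire-level idea is the same.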

There is no secret at all... I have written about it openly here in the forum numerous times. I will attempt to write it up as an article on the Wiki over Easter... I should have the time then ;-)

When you set up a matrix like that, do you wire each MD to an input and then send each output back to specific devices? I'm assuming you must do this for video to sync, or is it done another way?

If done that way: --Wouldn't you lose the respective MD functionality on the MDs in different rooms unless you switch the respective Orbiter to the originating MD for control? --Would there then be a user training issue so they understand this (like my wife walks into a room and the orbiter/remote sitting there is not responsive because video from elsewhere is being shown, etc.)?

I probably wouldn't implement anything like this for a while, if at all, but I'm just interested in the knowledge in case I do.

When you set up a matrix like that, do you wire each MD to an input and then send each output back to specific devices? I'm assuming you must do this for video to sync, or is it done another way?

In a pure video matrix installation, where you might have say an 8x8 matrix at the centre with each MD located centrally too, each MD is hooked up to an input on the matrix (HDMI/USB), possibly with local connections to local amplification. Then we would use integrated HDMI-CAT5 & USB-CAT5 converters (you can get these as separate units too) to get the video signal and USB connection to the remote display's location (say in the bedroom). You might have, say, 4 x MDs and 4 x Squeezeboxes centrally located and routed in this manner (the Squeezeboxes would be audio only of course).

Then any of these 'sources' could be routed to any output by controlling the Matrix appropriately... if you wanted Squeezebox No.1's output delivered to all rooms you could do so. For more flexibility you might need additional outputs, so that you could have a separate audio-only feed and an MD fed to each room (you'd then probably have an asymmetric Matrix with, say, an 8x24 config).
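If it helps to picture it, the state such a matrix holds is really just a map from each output zone to its current source. A toy Python sketch, assuming the 8x24 config mentioned above (the zone/input numbering is mine, not anything LinuxMCE prescribes):

```python
class MatrixSwitch:
    """Toy model of an N-input x M-output audio/video matrix switch."""

    def __init__(self, inputs: int, outputs: int):
        self.inputs = inputs
        # zone number -> currently routed input (None = no source)
        self.routing = {out: None for out in range(1, outputs + 1)}

    def route(self, source: int, output: int) -> None:
        if not 1 <= source <= self.inputs:
            raise ValueError(f"no such input: {source}")
        if output not in self.routing:
            raise ValueError(f"no such output: {output}")
        self.routing[output] = source

    def route_all(self, source: int) -> None:
        """Send one source to every output -- e.g. whole-house audio."""
        for output in self.routing:
            self.route(source, output)

# 8 sources (4 MDs + 4 Squeezeboxes) feeding 24 zones:
matrix = MatrixSwitch(inputs=8, outputs=24)
matrix.route_all(5)   # say input 5 is Squeezebox No.1 -> every room
matrix.route(1, 3)    # ...then MD No.1 to zone 3 (the lounge) only
```

The real hardware keeps exactly this table internally; the rs232/IP control channel just mutates it.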

In a simpler installation you might locate all the MDs locally behind/under the displays in each room and then only use a Matrix to centrally manage the Squeezeboxes... giving you totally sync'd audio all over the property (in most cases our research has shown us that sync'd audio house-wide is the most common requirement).

Quote

If done that way: --Wouldn't you lose the respective MD functionality on the MDs in different rooms unless you switch the respective Orbiter to the originating MD for control?

No... all of the MDs have their own integrated Orbiters. You could in addition use separate Orbiters like Nokia N810s, ASUS Eee Tops or other touch-screen based devices. In this case the additional Orbiters would only be 'remote controls' and would not 'play' media directly themselves.

Quote

--Would there then be a user training issue so they understand this (like my wife walks into a room and the orbiter/remote sitting there is not responsive because video from elsewhere is being shown, etc.)?

Each mobile/fixed Orbiter has to be told which room it should be controlling... alternatively, use a plain old IR remote and it just controls whatever MD its IR signal reaches.

Quote

I probably wouldn't implement anything like this for a while, if at all, but I'm just interested in the knowledge in case I do.

Well an installation like this would need a lot of pre-planning and understanding of the concepts behind LinuxMCE... and also how you can 'bend & shape' those concepts to achieve what you or your customer requires.

I think I worded some of the questions badly for the info I was trying to get, but I was able to figure out what I wanted from the details in your answers.

I've set up a VGA-to-CAT5 converter at work in a conference room, but I never really considered that type of approach at home for various things. In retrospect, I wish I had looked at/discovered LMCE about 7 months ago. I purchased a home and gutted it, rewired all the electrical and ran CAT6/RG6 to every room. Now I wish I had run 2-4 CAT6 wires to each room. It would have been easier with the walls open; oh well. Still workable with a little wire fishing.


Have you guys thought about the suggestion I made of revamping xine/DCE into two separate xine DCE devices - client and server - in another topic (http://forum.linuxmce.org/index.php?topic=7657.0)? This would increase the flexibility of the system and make it more modular, but most importantly it would ensure that all video and audio is always perfectly in sync, without needing to go through that process of starting from the beginning and jumping to the right spot each time you split/moved the stream....

We're always thinking about streaming media and keeping it in sync ;-)

...and yes I have thought about that discussion. But I am still of the belief that, and this is particularly true of video, without special hardware it will be near impossible to achieve full quality and sync that scales from low-res up to full 1080p (Blu-ray quality) playback. Audio is definitely more achievable overall, as even the highest quality streams are not stressful to any of the software we already have access to... so I guess it would be audio where this might be worth some effort.

However for now we can get 'off the shelf' audio/video switching, with full control from inside the system, that delivers full quality for both audio & video... and the cost scales nicely both for Forum members who want to build down to a tight budget and for those who have a more 'money is less of an object' approach.

All the best

Andrew

Andrew - I understand your concern about the scalability. At the same time, a few points...

Whatever else is true, the method I propose would keep all media (audio/video/HD video) vastly closer to in-sync than the current method of relying on two or more streams happening to coincide because they were both started from the 1-second timestamp markers! Especially with the varying buffer sizes used in the playback hardware/software. 'Coincidental' is somewhat the key word here.

The other really critical point here is that I am talking about a fundamental change in the concept of how you deliver media. We currently "stream" using a reliable TCP session. The alternative approach is "real time" communications, and that is often misunderstood. Generally, people use a "reliable" TCP session, deliver media into a remote buffer, and consume the data non-real-time, for a specific purpose: to combat packet loss and variations in available bandwidth over, potentially, a very long, routed network path where these parameters are not guaranteed... e.g. the Internet. The buffer deals with bandwidth variations and TCP retransmit deals with packet loss.

Real-time communications usually uses UDP and zero buffers for a simple reason: the traffic is delivered "just in time" to be consumed, i.e. in real time. There is no point suffering the extra overhead of "reliable" communications through a TCP connection, because if a packet is lost it becomes useless anyway - its time has been and gone - so these packets simply get dropped. There is no point using a buffer, because the data is consumed immediately, so the buffer would be permanently starved; plus the buffer introduces unnecessary latency on the playback.

The concepts involved in real-time communications are critical and necessary to forms of real-time communication such as digital voice and video circuits; when these are interactive (telephone calls and video conferences), latency beyond the simple propagation time through the length of the circuit is intolerable. Consequently, the technologies used are QoS/CoS/ToS marking/enforcing, prioritisation, Low Latency Queuing, Strict Priority Queues, allocated queue bandwidth, UDP and other real-time protocols, etc. And obviously all this works very well indeed. In fact an analogy in video delivery could be terrestrial TV broadcasts... we don't worry about the transmission time or retransmits on the RF broadcast, do we? It is "real time" in the truest sense!

But is all that necessary? No! Remember, they are trying to get audio/video real-time streams to be reliable over very long distances, through very many network segments, routers, etc... sometimes even over the vagaries of the Internet, and they are still able to achieve this reasonably well. We are talking about a single, local, layer 2, switched network, directly connected and with a single subnet segment.

We don't need QoS, queuing, prioritisation, etc. We certainly don't need buffers when sending, say, a broadcast/multicast UDP real-time stream. None of this is even an issue until the tx-rings of the NIC start to experience network congestion, which with real-time streams, even on 100M ethernet, will take some doing (many simultaneous, different streams). And we need to remember that under the same network conditions the current approach wouldn't work either, no matter how big the buffer... indeed, with the additional TCP overheads and retransmits, you get even more congestion.

What is the upshot? Well, transmitting a real-time stream on the internal network, to be played in real time without buffers, means that 1) there is sub-millisecond propagation latency, as it is a local, switched network, 2) the serialisation latency is identical for all recipients, and 3) there is no additional latency deliberately introduced by buffering, yet delivery is still completely reliable, due to it being a local, switched network, right up until the network is saturated/congested - at which point our current approach would also fail! Result: even high-bandwidth video would always be in sync across multiple MDs to within a handful of milliseconds, compared with in sync to within a second or so!!
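The arithmetic behind points 1) to 3) is easy to check: on 100M ethernet a full-size 1500-byte frame serialises in 120 microseconds, and that figure is identical for every recipient on the segment:

```python
FRAME_BYTES = 1500          # a typical MTU-sized ethernet frame
LINK_BPS = 100_000_000      # 100M ethernet

# time to clock one frame onto the wire
serialisation_s = FRAME_BYTES * 8 / LINK_BPS
print(f"{serialisation_s * 1e6:.0f} us per frame")   # 120 us

# Even a generous 1 ms total budget (serialisation + switch + propagation)
# is roughly 1000x tighter than the ~1 s granularity of the current
# timestamp-based start-and-hope approach.
```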

The option is definitely there, and far from unconventional in the wider technology landscape... it just requires us not to be scared of real-time, unbuffered media streams on a local LAN. BTW, I'm not saying the communication necessarily needs to be over UDP. In a local LAN environment like ours, TCP effectively behaves almost like UDP for real-time traffic anyway... we just aren't using the reliability/retransmit features of the protocol. Only the ACK remains different, and there are plenty of real-time technologies out there that still use TCP without being overly concerned about ACK latency, particularly on a LAN.
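For anyone who wants to see the shape of the idea, here is a bare-bones, unbuffered multicast sketch in Python. The group address, port and pacing scheme are arbitrary choices of mine, not anything LinuxMCE or xine does today: the sender paces datagrams out at the media's real-time rate, and a receiver consumes each one immediately with no jitter buffer:

```python
import socket
import struct
import time

GROUP, PORT = "239.1.1.1", 5004   # arbitrary multicast group/port

def pace_interval(nbytes: int, bitrate_bps: int) -> float:
    """Seconds a chunk of this size represents at the media bitrate."""
    return nbytes * 8 / bitrate_bps

def sender(chunks, bitrate_bps: int) -> None:
    """Multicast chunks onto the LAN 'just in time' -- no remote buffer."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    for chunk in chunks:
        sock.sendto(chunk, (GROUP, PORT))
        time.sleep(pace_interval(len(chunk), bitrate_bps))  # real-time pacing

def receiver():
    """Join the group and yield each datagram as it arrives; a late
    packet is simply useless, so there is nothing to retransmit."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, _addr = sock.recvfrom(2048)
        yield data   # hand straight to the decoder -- no jitter buffer
```

A real implementation would of course carry timestamps/sequence numbers (RTP-style) rather than bare payloads, but the zero-buffer, pace-at-source principle is the whole argument above.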

Is this not something we should seriously be discussing, if the Xine libraries have this ability? Are we only turned off because of the unfamiliar real-time territory?

Colin, that all sounds great; I agree it should be looked into as it would solve a lot of my problems. I imagine this could also lead to wireless video/audio streaming, as all you would need to do is point your wireless laptop/cellphone at the existing stream.

Colin's points are very correct. What he is essentially talking about is a revisitation of IP multicast distribution techniques - essentially taking the RTP approach, but I think in this case something much simpler.

Thx guys! Now I just wish I had the capabilities to see this through! I am, however, doing some research on xine's --broadcast-port <port>; at the least I would like to see xine (just in a terminal session on a KDE desktop) broadcasting to xine on an MD.

Ooo! I just noticed that in broadcast mode xine (the command line version; dunno about the libs, but I assume they must implement it too!) accepts novideo and noaudio as options. Novideo means ignore the video part of the stream and play the audio, and vice versa. This would play well into another feature I have talked about before... being able to tell a media director to play video from one source and audio from another...

Edit: another thought - the novideo option could also be used in conjunction with updates to the slimserver/squeezecenter device... this device could subscribe to an AV stream with the #novideo option, then relay that stream to a SqueezeBox to play the audio component only...
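For reference, composing those MRLs is trivial - a tiny Python helper (the '#novideo'/'#noaudio' suffix syntax is as reported in this thread; verify it against your xine version before relying on it):

```python
def xine_mrl(path: str, novideo: bool = False, noaudio: bool = False) -> str:
    """Compose a xine MRL, optionally stripping one elementary stream."""
    mrl = path
    if novideo:
        mrl += "#novideo"   # audio only -- e.g. for relaying to a SqueezeBox
    if noaudio:
        mrl += "#noaudio"   # video only
    return mrl

# e.g. the slimserver relay case from the post above (path is hypothetical):
print(xine_mrl("/srv/media/concert.avi", novideo=True))
```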


Look, I would love to see fully sync'd IP-based media streaming... we worked damn hard with Aaron and his devs on this when we first started to work with Pluto. We wanted, and still want, that big time. We looked at using libvlc and libxine at the core of this research/effort, by the way (you can see this legacy even today... the original VLC-based multi-room solution was one of these).

Now I don't claim to be any kind of 'down to the metal' expert in this area, but Aaron is no slouch here, and he tried damn hard to make sync'd streams work... but it just never quite delivered what was needed. In the end the streams were not in sync 'enough' to be useful or enjoyable compared to doing it with multi-zone amps and/or a Matrix - and at the end of the day, unless it sounds & looks good enough there is no point in it. After all, the whole point is to have better quality media delivery than we have now internally in the system, and at the same time at least to replicate the quality we could achieve by a more 'traditional' approach. If we can't do that, then what is the point? Having multiple sound sources even marginally out of sync is no better than what we have now.

I recently had some chats with some Sonos techies and... even they admitted they can't quite compete in this area when compared to traditional systems... the real audiophile market will not accept a 'close miss' on this.

I agree that for your purposes this could turn out to be useless, but remember that not everyone has the luxury of multi-zone or matrix systems... and for them I think that even going from 1000-2000ms down to 20-200ms sync would be a significant improvement they would like. Plus, as I said, there are other side benefits!

But unfortunately the client side fails with a Floating Point Exception irrespective of the media file I use... dunno why. I've done some searches and can't find anything for my hardware, so I have logged a ticket with the Xine Project... hopefully they can advise.

IMHO in some cases using a Matrix switch is the best solution - for example, to have an audio zone in the bathroom or an outside area. It's a bit difficult to install a Squeezebox in the bathroom, but having an audio signal from the switch will be enough.

Colin, I would like to see this through, because I agree that for the average Joe this would be quite an upgrade from the 2-3 second delays that can exist now. Maybe start a wiki page with what you have learned so far and we can all pitch in to try and work out a solution. -Krys

Could LinuxMCE use PulseAudio sinks? PulseAudio has been incorporated into Ubuntu and implements a 'glitch-free', synchronous, multi-output network sound server system. It is the best sync for a networked sound system that I have heard. It's still not perfect. Just a thought.

I'll certainly keep looking into it, but until I can fix my xine-ui issue I can't do a proof of concept (personally I think it is just my setup - this is a standard feature of xine-ui, so I can't imagine it is fundamentally flawed, yet no one else has reported the problem!)