Topic: MPD integration

I've been after integrating Spotify into LinuxMCE for ages, even trying to write a full DCE/Spotify component (failed miserably).

I've come across an MPD implementation, Mopidy (http://www.mopidy.com/), that gives an MPD interface to Spotify, SoundCloud, Icecast etc., as well as local files as normal.

I've got it all set up on my 10.04 core (a royal PITA with its old version of Python, but I got it running eventually), and it's now lovely: I can use any MPD client to control and query Spotify streaming music alongside local files.
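For anyone unfamiliar with it, the MPD protocol that Mopidy speaks is plain text over TCP (port 6600 by default): the client sends one command per line, and the server replies with `key: value` lines terminated by `OK` (or `ACK ...` on error). A minimal sketch of querying a server, assuming one is running; the parser is a plain function so it works standalone:

```python
import socket

def parse_mpd_response(text):
    """Parse an MPD 'key: value' response body into a dict.

    The response ends with a line reading 'OK' (or 'ACK ...' on error).
    """
    result = {}
    for line in text.splitlines():
        if line == "OK":
            break
        if line.startswith("ACK"):
            raise RuntimeError(line)
        key, _, value = line.partition(": ")
        result[key] = value
    return result

def mpd_command(host, port, command):
    """Send one command to an MPD (or Mopidy) server and parse the reply."""
    with socket.create_connection((host, port)) as conn:
        f = conn.makefile("rwb")
        f.readline()                       # greeting, e.g. b"OK MPD 0.19.0"
        f.write(command.encode() + b"\n")
        f.flush()
        lines = []
        while True:
            line = f.readline().decode().rstrip("\n")
            lines.append(line)
            if line == "OK" or line.startswith("ACK"):
                break
        return parse_mpd_response("\n".join(lines))

# e.g. mpd_command("localhost", 6600, "status") would return
# a dict like {"state": "play", "volume": "80", ...}
```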

I'm wondering if anyone has considered MPD at all for a more general integration? It seems like it might be a good thing to delegate music handling to, especially something plugin-based like Mopidy that would let us leverage all the work they've put in.

Maybe a Mopidy server per user for them to control, which can then be streamed out to the various EAs? Dunno.

Go for it. My general plan is to integrate squeezeserver/squeezeslave for audio playback, as it is very adept at audio synchronization (needed for proper multi-zone audio playback), but this system is modular, so you can drop in support for whatever you feel like.

I do know you tried to do a Java implementation of DCE, but perhaps you should try using the C++ tools first, to see what's there?

Yeah, I'd like to get that Java/DCE work finished and running at some point. I did get basic demo devices implemented and running; it just needs some polishing and proper testing. TBH, actually finding a real problem that it would suit better than C++ is the real issue, which is why I never really went anywhere with it in the end. Something web-integrated would suit it well, I think.

It appears that Mopidy has a nice modular system of its own, so I strongly suspect that it could be turned into a standalone bi-directional DCE device via a couple of Python/C module extensions that interface with libdce.

It gives me a reason to brush up my Python, and being able to write DCE devices in Python might be a useful contribution as well. I'll see how it goes; it might not be feasible.

As a first cut, I'll make a simple DCE device that accepts the media control commands, and maybe issues events if I can track down the right ones.

Once I get there I expect to be lost for how to integrate into the system properly. I'll ask more questions then.

Also, I think it'd be ideal to have Mopidy as a logical audio source in its own right, so that it could then be streamed to the squeeze etc. devices and have them handle the audio distribution, rather than outputting audio directly itself. That strikes me as a more natural integration with the LMCE audio network as it stands right now, rather than just adding something that's useful but not totally compatible.

If you have any thoughts on that direction, I'd be glad to hear them: whether it's feasible, a good idea, etc.

Via its plugins, it appears to give access to the majority of the music streaming services through a single API, so it'd be a nice win for LMCE to have all of those integrated, and to be able to take advantage of any others added to Mopidy in the future.

There is a bug in Mopidy that prevents it working properly as a Shoutcast stream server via GStreamer: the stream gets closed at the wrong time. Using GStreamer to generate a plain audio source does seem to work fine though, and if the metadata is being distributed via DCE, that's good enough...

Have you made a device in C++ yet? I would do that, before making anything else, so you can understand the API that we have fully. Why are you avoiding this?

-Thom

:-)

I have done some experiments with C++ devices, yes. I do understand, mostly, the API and how it maps to the device definition and message passing; I did a fair amount of research to understand it all while building the Java DCE library.

As for why I did that, and why I'm proposing Python now? A few answers are possible.

I love to experiment and fiddle; re-implementing something is a nice way for me to understand how it works, so I enjoy the play time.

The most potentially contentious one is that C++ has a relatively high barrier to entry compared to Java, Python, and Ruby at the language level. The Ruby integration is fairly basic, single-threaded, and unsuitable for building full devices.

The Java DCE lib isn't really suitable for much, as it turns out: the Java environment is just too decoupled from any kind of automation for it to be worth it.

Python has quite tight integration via its C/C++ modules, so it is feasible to use it to write some real devices.

After all that, though, the real answer is that Mopidy is written in Python. Any extensions to it will also be in Python. Adding a DCE interface to it would mean either writing a separate DCE device to manage it, or writing a DCE layer within it. Having a DCE device in it would mean interfacing C++ and Python.

After a little playing, I wonder if I could ask a little advice on how to lay out the various bits.

So, my ideal world.

I'd like to have an MPD server for each user in the system and have the audio from that routed to whichever speakers they choose.

Control of MPD can be via the normal MPD interface/clients for now without any issue; ideally this could then be embedded into the orbiters and web interface, but that isn't something I'd like to tackle quite yet.

The audio handling/ routing is something I'm not sure about.

I have a few squeezeslaves running that work well for LMCE-based radio; I'd like to push the audio from Mopidy through those too. Any suggestions on where I should start looking to implement that?

Also, after thinking a little more, it kind of feels like there would only be a single device in charge of spinning up the Mopidy servers, probably on the core, and the audio would then be streamed from there. Does that sound reasonable? In that case, it may not be necessary for the Mopidy servers to be DCE devices at all, which renders the Python/DCE discussion somewhat moot.

Any new features go into the development trunk, which right now is 12.04, full stop. 10.04 will not receive any new features, and you shouldn't develop any new features on it.

Since this is a media plugin and player pair, you'll need to develop these in C++, full stop. No, don't argue. You'll have to, because the media plugins need to have access to class pointers of the other plugins in the router memory space to do infrastructure work.

The player portion seems to be expected to run on a media director. If there were to be only one of these per user, rather than per MD, with the audio routed to wherever they were, what do you think the best approach would be?

You need to do some serious study of the media plugin, and of the Xine Plugin and Player, as this is the most feature complete media player we have, which uses ALL of the functionality of the system.

You're overthinking things; this is what happens when you try to think things through without actually digging your hands into the code. Stop it.

The Media Plugin concerns itself with instances of MediaStream, which is merely a container for a given instance of media throughout the house. Right now, the media handlers look for media devices in an entertainment area (in the DeviceTemplate_MediaType table), and then cross-reference this with a vector that is populated when the different media plugins run their ::Register() methods. The two ends meet, and the relevant media plugin's CreateMediaStream is called, which, in the end, will take the subclassed media stream (e.g. MPDMediaStream) and return it back to the Media Plugin.

Once this is done, StartMedia is called, which does all the logic to figure out WHERE a stream needs to go. You're given a LOT of data in the MediaStream object (look in src/Media_Plugin/MediaStream.h), and you use this to cross-reference with the device tree to figure out ultimately where things need to go, ending either in a CMD_Play_Media() call, or a CMD_Start_Streaming() call for sending to multiple destinations.
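The register-then-dispatch flow described above can be pictured as a small registry that maps media types to stream factories. A toy Python model follows; every class and method name here is illustrative only, not the real LinuxMCE C++ API:

```python
# Toy model of the Media Plugin's registration/dispatch flow.
# All names here are illustrative, not the real LinuxMCE C++ API.

class MediaStream:
    """Stand-in for the MediaStream container (one instance of media)."""
    def __init__(self, media_type, targets):
        self.media_type = media_type
        self.targets = targets          # destination devices for this stream

class MediaHandlerRegistry:
    def __init__(self):
        self._handlers = {}             # media type -> stream factory

    def register(self, media_type, factory):
        """What each plugin's ::Register() conceptually contributes."""
        self._handlers[media_type] = factory

    def create_media_stream(self, media_type, targets):
        """Cross-reference the media type against registered plugins
        and hand back the subclassed stream, as CreateMediaStream does."""
        factory = self._handlers[media_type]
        return factory(media_type, targets)

registry = MediaHandlerRegistry()
registry.register("mpd-audio", MediaStream)   # would be an MPDMediaStream subclass
stream = registry.create_media_stream("mpd-audio", ["living room"])
```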

These individual calls are merely declarative control: you're not actually sending the streams down these DCE calls, you're just coordinating signalling to the target media player as need be. Your Player's job is to either EMBED mpd, or to control it, depending on the approach.

The Media Plugin does a _LOT_ of abstraction for you, including figuring out whether the media stream needs to be bifurcated or not (if you're sending an audio stream to a couple of media directors running Xine, a couple of Squeezeboxes, and a couple of MPD endpoints, then THREE separate media streams will be created; the media streams themselves will not talk to each other).

The Squeezebox support, for example, relies on a Slim Server Streamer that runs on the core and talks to the CLI interface that Logitech Media Server exposes for integration. The Squeezeboxes themselves are children of the Slim Server Streamer, and their configuration script merely sets the Controlled Via to be the Slim Server Streamer on the core. The Entertainment areas for each device are set appropriately, and they are placed in their appropriate rooms.
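For reference, that LMS CLI is a line-oriented text protocol (port 9090 by default): each command is a newline-terminated line of space-separated, URL-encoded tokens, usually starting with the player's MAC address. A hedged sketch of building such a line; the exact command vocabulary is LMS's, so check its CLI docs before relying on any particular verb:

```python
from urllib.parse import quote

def build_cli_command(player_id, *args):
    """Build one LMS CLI line: space-separated, URL-encoded, newline-ended.

    `player_id` is the target player's MAC address; `args` form the
    command, e.g. ("mixer", "volume", "50"). The colons in the MAC get
    percent-encoded, which is what the CLI expects.
    """
    tokens = [quote(str(player_id))] + [quote(str(a)) for a in args]
    return " ".join(tokens) + "\n"

cmd = build_cli_command("00:04:20:12:34:56", "mixer", "volume", 50)
# -> "00%3A04%3A20%3A12%3A34%3A56 mixer volume 50\n"
```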

So, what you need to figure out now is whether to, on the core, (1) embed the mpd libraries and call them from C++ (spawn a thread, etc.), or (2) talk to mpd over a socket from your player.
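Option (2) is cheap to prototype: the player just translates incoming media control commands into MPD protocol lines. A sketch of that thin layer; the DCE command names and their mapping are my assumptions, and the transport is injectable here so the mapping can be exercised without a live mpd/Mopidy instance:

```python
class MpdController:
    """Maps DCE-style media control commands onto MPD protocol commands.

    `transport` is any callable that delivers one protocol line to the
    server (e.g. a socket writer). Stubbed out here for illustration;
    the command names below are assumed, not LinuxMCE's actual list.
    """

    COMMANDS = {
        "CMD_Play": "play",
        "CMD_Pause": "pause 1",
        "CMD_Stop": "stop",
        "CMD_Skip_Fwd": "next",
        "CMD_Skip_Back": "previous",
    }

    def __init__(self, transport):
        self.transport = transport

    def handle(self, dce_command):
        mpd_cmd = self.COMMANDS[dce_command]
        self.transport(mpd_cmd + "\n")   # one newline-terminated MPD line
        return mpd_cmd

# Exercise the mapping with a list standing in for the socket:
sent = []
ctrl = MpdController(sent.append)
ctrl.handle("CMD_Play")
ctrl.handle("CMD_Stop")
# sent == ["play\n", "stop\n"]
```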

I'm probably not fully understanding, but one last question. Would it make sense for there to be a Mopidy plugin that controls Mopidy instances on the core, and for it to then instruct media directors to receive and play their streams? So there isn't a Mopidy player per se.