i was actually looking at his site the other day. it has a fairly sane api (even if the juicy bits are undocumented) so i might give it a go if nobody gets to it before me (might take a week or two as i am pretty busy at the moment, so feel free to jump in!)

"use any network sniffer and download the mix from the correct URL"...

could have posted that as well, but then only the experienced users would know what to do; my intention was to write something that users of all experience levels would be able to follow. also, if it is not apparent enough, that post was specifically written for people wanting to play mixcloud on apple devices rather than download the mixes. for that I have posted this: http://technolux.blogspot.com/2011/02/download-mixcloud-mixes-as-mp3-m4a-aac.html

the rest of their api (for searching and stuff like that) is documented here.

the only annoying thing is that some of the links return a 404 (their own player just keeps trying until it finds a real link!), so you need to test each link before passing it to the xbmc player.
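a minimal python 3 sketch of that test-before-playing idea (the function name is mine, this is not from any real addon): walk the candidate links with HEAD requests and hand back the first one that actually answers, mimicking the retry behaviour of their player without downloading any audio.

```python
import urllib.request
import urllib.error

def first_live_url(candidates, timeout=5):
    """Return the first URL that answers a HEAD request with 2xx, else None.

    HEAD fetches only the headers, so we never pull down the audio
    itself just to find out whether a link is dead.
    """
    for url in candidates:
        request = urllib.request.Request(url, method='HEAD')
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                if 200 <= response.status < 300:
                    return url
        except (urllib.error.HTTPError, urllib.error.URLError):
            continue  # 404 or unreachable - try the next candidate
    return None
```

whatever `first_live_url` returns could then be handed straight to the xbmc player.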

indeed, like you too have found out, not all the links are valid. I have no clue why; if you know, please let me know. what is the point in storing a useless link on the mixcloud server? explain that to me please.

in order to stop people finding useless links I have also included information on how to avoid them and, with the use of filters in url snooper, find the correct links. however, both posts are now outdated, and since I have not had the time to fiddle with sound or mixcloud I am not sure if the files are still being stored in several file formats and bitrates (I have recently seen, though, that the path has changed from mediaXYZ to streamXYZ). I assume they are. last but not least, giving the user such detailed options lets him/her choose what to download, e.g. the m4a file or the mp3, depending on the device the mixes should be played on.

t0mm0 Wrote:i will get to it eventually if nobody else cuts in, but i'm currently sidetracked with the urlresolver project.

cutting in, cutting in! however, I am more interested in a (greasemonkey?) script type addon/plugin that directly shows a download link on the mixcloud page, similar to what such scripts do on the soundcloud page.

just recently checked what offliberty does with mixcloud and unfortunately it also presents useless links!

so ideally, coming up with a script that would check the found links for being alive or dead (404) and, if possible, give the user e.g. the file extension and perhaps the bitrate etc., now THAT would be a class plugin. naturally, with such a thing the correct URL or correct list of URLs could be passed to xbmc et voilà, done.

so how can I help? where are you with this at the moment? writing/updating the network sniffer? so far I have not used xbmc, since I am happy with mpchc and vlc, but I am always up for trying out new things, especially if they can be adapted to my needs. I assume you do all the coding in python so far?

checked the documentation but could not find info on the player.. basically, how did you find that URL above? if you could explain that, thanks.

what you posted works for all other streams as well (just tested it on various cloudcasts), always giving 16 different solutions for each stream; each is a set of 4 different server locations, of which some are 404 and others work, for the various file formats and bitrates. so WHO gets to decide at what bitrate, in what format and at what location a mix will be stored? what is the procedure there?

Quote:Firstly, we need to know what has been listened to so that we can report usage, pay royalties and provide features such as "Suggested Cloudcasts" on the dashboard. Secondly, Mixcloud needs to pay the bills! We can't give away the audio for free outside of mixcloud.com simply because it costs us to host and stream the files.

That said, it's early days for Mixcloud and we will try to find ways in the future of opening up the audio streams so that we can start seeing new and exciting applications playing Cloudcasts.

So perhaps those 16 links might just become a few, perhaps only the working ones, once they open the streams?
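to make that 16-link arithmetic concrete, here is a toy python sketch. the host names, paths and variant labels below are entirely made up (the real ones are whatever the network sniffer reveals); it only illustrates how a few mirrors times a few encodings multiplies into the link list the api spits out.

```python
from itertools import product

# Entirely hypothetical placeholders - the real host names, paths and
# format/bitrate variants are whatever the sniffer actually shows.
MIRRORS = ['stream1', 'stream2', 'stream3', 'stream4']
VARIANTS = ['128.mp3', '320.mp3', '64.m4a', '256.m4a']

def candidate_links(cloudcast_path):
    """Every mirror x variant combination: 4 x 4 = 16 candidate links."""
    return ['http://%s.example.com/%s/%s' % (mirror, cloudcast_path, variant)
            for mirror, variant in product(MIRRORS, VARIANTS)]
```

only some of those 16 combinations will actually exist on a server, which would explain the mix of 404s and working links.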

t0mm0, any thoughts or ideas you have about all this, PLEASE, let me know, thanks..

xbbx Wrote:indeed, like you too have found out, not all the links are valid. I have no clue why; if you know, please let me know. what is the point in storing a useless link on the mixcloud server? explain that to me please.

i can only guess the same as you. i reckon they don't mirror all the files across all their servers and, for some bizarre reason, they don't keep track of which files are on which servers, so they just spew out all possible links and let the client do the work of finding the right url. seems like a weird thing to do, but maybe they have some historical architectural issue or something.

xbbx Wrote:cutting in, cutting in! however, I am more interested in a (greasemonkey?) script type addon/plugin that directly shows a download link on the mixcloud page, similar to what such scripts do on the soundcloud page.

this is an xbmc forum..... but i don't think it would be possible to check for 404 errors in javascript because of the same origin policy.

ah, a quick google suggests that greasemonkey somehow gets around the same origin policy. you'd want to send a HEAD http request (see the bottom of that page) so as not to download the whole of each file just to see if it exists.

xbbx Wrote:so ideally, coming up with a script that would check the found links for being alive or dead (404) and, if possible, give the user e.g. the file extension and perhaps the bitrate etc., now THAT would be a class plugin. naturally, with such a thing the correct URL or correct list of URLs could be passed to xbmc et voilà, done.

you can check whether the url exists in python too by sending a HEAD request (see here)
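for instance, a small python 3 sketch of that HEAD-request check (the function name is mine):

```python
import urllib.request
import urllib.error

def url_exists(url, timeout=5):
    """True if the server answers a HEAD request with a 2xx status.

    A HEAD request fetches only the headers, so a dead (404) link is
    detected without downloading any of the audio data.
    """
    request = urllib.request.Request(url, method='HEAD')
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return 200 <= response.status < 300
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False
```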

xbbx Wrote:so how can I help? where are you with this at the moment? writing/updating the network sniffer? so far I have not used xbmc, since I am happy with mpchc and vlc, but I am always up for trying out new things, especially if they can be adapted to my needs. I assume you do all the coding in python so far?

xbmc addons are written in python. as i said, i will get to this at some point, but at the moment i am working on the urlresolver stuff, which has a higher priority. if someone else wants to write a mixcloud plugin that would be great, just let me know so we don't duplicate effort.

xbbx Wrote:checked the documentation but could not find info on the player.. basically, how did you find that URL above? if you could explain that, thanks.

just used chrome to sniff traffic from their player page as you did. i always look for calls to api functions before looking for the files themselves as that is often more useful.

they have only documented the api calls that they want people to use, but that doesn't mean there aren't other calls that they use themselves and just didn't document.

xbbx Wrote:what you posted works for all other streams as well (just tested it on various cloudcasts), always giving 16 different solutions for each stream; each is a set of 4 different server locations, of which some are 404 and others work, for the various file formats and bitrates. so WHO gets to decide at what bitrate, in what format and at what location a mix will be stored? what is the procedure there?

Quote:Firstly, we need to know what has been listened to so that we can report usage, pay royalties and provide features such as "Suggested Cloudcasts" on the dashboard. Secondly, Mixcloud needs to pay the bills! We can't give away the audio for free outside of mixcloud.com simply because it costs us to host and stream the files.

That said, it's early days for Mixcloud and we will try to find ways in the future of opening up the audio streams so that we can start seeing new and exciting applications playing Cloudcasts.

So perhaps those 16 links might just become a few, perhaps only the working ones, once they open the streams?

3. then from the newly constructed URL we load that api call, and the 16 urls are extracted from the loaded document and perhaps put in a temp .txt file or something.

4. the script scans those 16 links, sorting the 404s from the alive links.

5. the script then spits out all the working links, sorted by file format and bitrate, in a little window, giving the user the possibility to download all files or just one.

done.
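steps 3-5 above could be sketched in python roughly like this; note the url regex and the grouping are my guesses at the shape of the api response, not its real format:

```python
import re
import urllib.request
import urllib.error

def link_is_alive(url, timeout=5):
    """HEAD the link; only a 2xx answer counts as alive (dead ones 404)."""
    request = urllib.request.Request(url, method='HEAD')
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return 200 <= response.status < 300
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False

def working_links(api_url):
    """Steps 3-5: load the api document, pull out the stream links,
    drop the dead (404) ones and group the survivors by file extension."""
    with urllib.request.urlopen(api_url) as response:
        document = response.read().decode('utf-8', 'replace')
    # crude extraction - a real script would parse the response properly
    candidates = re.findall(r'https?://[^"\'\s]+\.(?:mp3|m4a)', document)
    grouped = {}
    for link in candidates:
        if link_is_alive(link):
            grouped.setdefault(link.rsplit('.', 1)[-1], []).append(link)
    return grouped
```

the dict that comes back (extension mapped to live links) is the sort of thing that could feed the little chooser window in step 5, or be handed to xbmc.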

Does that sound like a doable procedure? All this done with python, or perhaps a greasemonkey script, or perhaps even autoit? I must tell you I am new to coding but happy to give it a go and see what can be done. So far I have done xhtml, css and js, and I am starting with java; perhaps you can help me with the one or other thing when I get stuck?

But before I start I need to know what language/tool this will be written in and what is most useful for users across various browsers and platforms without installing too much extra stuff to run the script.

Then once done, I guess parts of that script can surely be reused to pass the working urls to xbmc, I would assume quite easily?

Questions, questions, questions.. so if you find time, let me know what you think. Thanks, this is much appreciated and I look forward to doing the work on it.

turns out it's even simpler - check out the data-url attribute on the span tag for every play button on the site, it has the api call url in it!
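for anyone scripting this, a stdlib python sketch of scraping that attribute. it assumes the markup described here (span tags carrying data-url); a later post in this thread says the site changed, so treat this as historical.

```python
from html.parser import HTMLParser

class PlayButtonParser(HTMLParser):
    """Collect the data-url attribute of every <span> (the play buttons)."""

    def __init__(self):
        super().__init__()
        self.api_urls = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        if tag == 'span':
            for name, value in attrs:
                if name == 'data-url' and value:
                    self.api_urls.append(value)

def extract_api_urls(page_html):
    """Return every api call url found in the page's play buttons."""
    parser = PlayButtonParser()
    parser.feed(page_html)
    return parser.api_urls
```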

you need to decide whether you want to make an xbmc addon or a greasemonkey script. if you are making a greasemonkey script we should probably take this elsewhere as this forum is for xbmc.

if you want to make an xbmc addon check out the available tutorials (mine is here, and there are others available on the forums and wiki) and ask any questions in the addon development forum (this one is for user help/support).

hehe, looks like someone at Mixcloud read this thread: the data-url does not reveal the api call url any more. either that, or the change came automatically with the new design of the site (the Cloudcast player is completely gone). nevertheless, the api call URL path is still the same..

that blog is already discussed in this thread (in fact it was updated to use the api stuff i suggested in this thread, rather than the other way around - check the dates)

i also suggested how to fix the problem of the files that don't exist.

unfortunately i still don't have time to work on this as i am busy with urlresolver at the moment which is a huge task but i've explained enough in this thread to build this if anyone else wants to try before i get to it (which could be a long time).