Yes, the releases of the hosted version are just snapshots of the code that is used to run the public site (usually when there have been enough changes to warrant it and when it is stable). If you feel like living on the bleeding edge the code is live in SVN here: http://code.webpagetest.org/svn/webpagetest

I'll probably cut a new release just after Christmas.

There's no formal roadmap. Most of the enhancements/planning is kept in TRAC here: http://dev.webpagetest.org/webpagetest/report/6 but a lot of the development is driven by ad-hoc needs (and whatever I feel like working on at the time :-) ).

Yeah, that's what I've been doing since the beginning, but I'd like a way to get the XML results as an *.xml file.
I tried to parse the *.txt files in the folders, but the recursive search takes too much time.

I thought about a timer (about 5 minutes), but I don't have any way to get the results as a file on the HDD.

Getting everything in the same directory would make it much easier to parse.

Is there a reason you don't want to use the http api to get the xml files? It shouldn't be any harder than crawling the local directories. You also shouldn't need to do a recursive search (if you want to go the way of scanning for the files) - the test ID can be decoded directly to a file path where the results are stored on disk. You do still need to have the logic in place to wait for the test to complete before scraping the results.

If you want to move to more of a push model, you can modify work/workdone.php to do whatever you want when the test is complete (write the XML out to a specific location, for example, or kick off a script).

I use the HTTP API, but I haven't managed to get the XML onto the disk.
How would you modify workdone.php to export to a specific directory?

Actually, I would like to get the second XML file when the test is done (the one where you can find all the data).
For the moment I use LogParser to ping the page and start a test, and I get the URL that I want.
I collect all of them in a txt file.
Now I would like to parse that txt file, where each line is a URL from which I can get my data.
I think the best way is to download each URL as an XML file into one directory and parse them afterwards, instead of requesting the server each time.
As my tests are run four times a day (4 tests per day), I want the script to download each new test as an XML file into the directory ...
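A minimal sketch of that download step, assuming the public WebPageTest HTTP API shape: an xmlResult.php endpoint and a statusCode of 200 in the response once the test has finished. Both are assumptions to verify against your server.

```python
# Sketch: fetch the full result XML for each finished test and save it into
# one directory, so the files can be parsed locally later instead of hitting
# the server each time. Endpoint name (xmlResult.php) and the statusCode
# convention are assumptions based on the public WebPageTest HTTP API.
import os
import urllib.request
import xml.etree.ElementTree as ET

def is_complete(result_xml):
    """A statusCode of 200 in the response means the test has finished."""
    root = ET.fromstring(result_xml)
    return root.findtext('statusCode') == '200'

def save_result(server, test_id, out_dir='xml_results'):
    url = '%s/xmlResult.php?test=%s' % (server, test_id)
    data = urllib.request.urlopen(url).read().decode('utf-8')
    if is_complete(data):
        os.makedirs(out_dir, exist_ok=True)
        with open(os.path.join(out_dir, test_id + '.xml'), 'w') as f:
            f.write(data)
        return True
    return False  # still running; poll again later
```

Running save_result for each ID in the txt file on a timer would leave one *.xml per test in a single folder.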

You can certainly modify workdone.php to run the code that is in xmlResult and spit out an XML file to disk instead of to the browser. It's just not designed to do that as things stand, and it expects that if you're using the HTTP API to submit tests you will also be using the HTTP API to get the results. All of the information in the XML files is available in the raw CSV files (probably even easier to parse) that are already written to disk with the test results (*_IEWPG.txt and *_IEWTR.txt).
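For reference, those page-data files are tab-delimited with a header row, so a stock CSV reader handles them; a small sketch (the column name "Load Time (ms)" is an assumption, so check the header line of your own files):

```python
# Sketch: parse a tab-delimited *_IEWPG.txt page-data file into dicts keyed
# by the header row. Column names vary by version; inspect your own files.
import csv

def read_page_data(path):
    with open(path, newline='') as f:
        return list(csv.DictReader(f, delimiter='\t'))

# Example usage (file name and column name are hypothetical):
# rows = read_page_data('101228_ABCD_IEWPG.txt')
# for row in rows:
#     print(row.get('Load Time (ms)'))
```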

I tried to parse the *.txt files, but the problem is that I use Microsoft's LogParser and there's an issue with recursive parsing (it takes too much time).
The best way is to add a line to workdone.php to put the *.xml results in one folder; that will be easier to parse.
I'll try to develop a little bit of code to do that.

---
Edit :

I have done that; the process works locally, but when I use it in workdone.php nothing works. Do you have any idea where it could be wrong?
I pasted it at the end of the code, after the KeepMedianVideo() function.