Eh, it's an interesting format: having a synthetic voice read out space-related news articles. Quick production and low overhead. But the execution is very annoying; it looks and sounds like it was made by a 12-year-old.

Hmm, I didn't think it was that bad. Voice synthesisers are getting quite good, although you still get the odd squawk here and there. The editing and script writing were pretty well done. I do have the same issue with this as with all video, though: it's slow. I could have read the same amount of information on a website with photos in half the time it took to watch the video. But plenty of people seem to like video, and if this allows "video" news to be created for niche markets, then more power to them. Meanwhile, I'll keep reading.

Oh, and by the way, the name sounds like the title of some Japanese cartoon series. Perhaps they could make it a bit less tacky...

_________________
Say, can you feel the thunder in the air? Just like the moment ’fore it hits – then it’s everywhere
What is this spell we’re under, do you care? The might to rise above it is now within your sphere
Machinae Supremacy – Sid Icarus

Thanks, Rob, for asking this question!

We are the producers of the 'Virtual SpaceTV 3D' show, and the idea behind this broadcast is to let software create the show completely autonomously, i.e. without human interaction. So it is not a 12-year-old animating the characters; it is a piece of software called 'Virtual Producer' that does all the required jobs: analysing the scripts, synthesising the voices, generating the facial, idle and other animations, and encoding the video. Currently, only the story scripts are still written by humans; everything else is already done automatically.

In a second work package we will even add selected websites from which the stories will automatically be taken.

We are aware that this format will never reach Hollywood-like quality, but our team is constantly improving the 'Virtual Producer' to make it better and better.
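The pipeline described above (script analysis, voice synthesis, animation, encoding) can be sketched roughly as follows. Every function, name and data shape here is invented for illustration; the actual 'Virtual Producer' internals are not public, so treat this as a toy model of the stages, not the real system:

```python
# Toy model of an autonomous show-production pipeline, as described above.
# All names and numbers are hypothetical stand-ins, not the real software.

def analyse_script(script):
    # Split the story into per-line cues; a real analyser would also
    # extract moods, camera moves, media references, and so on.
    return [{"text": line.strip()} for line in script.splitlines() if line.strip()]

def synthesise_voice(cue):
    # Stand-in for a TTS engine: estimate a spoken duration (roughly
    # 15 characters per second) instead of producing actual audio.
    return {"text": cue["text"], "seconds": len(cue["text"]) / 15.0}

def produce_episode(script):
    # Run the whole chain autonomously, with no human interaction:
    # analyse -> synthesise -> (animation and encoding would follow here).
    cues = analyse_script(script)
    audio = [synthesise_voice(c) for c in cues]
    total = sum(a["seconds"] for a in audio)
    return {"cues": len(cues), "runtime_seconds": round(total, 1)}
```

The point is only that each stage consumes the previous stage's output, so once the script exists, no human needs to touch anything downstream.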

As a computer scientist in an entirely different field, I find that pretty neat. Not cutting-edge research, I think, but just at the point where it's becoming practical to do as well. Nice job getting it to work like this!

Question: how is the syncing between the spoken text and the photos/animations done? Do the scripts include references to the media files at appropriate points, so that the system knows what to show with which bit of the story?


We have developed an XML-based language called 'Producer Slang' in which the story scripts are written. This language contains all the necessary stage directions, e.g. start a particular video on a specified surface, change an actor's mood, change camera positions, etc.

For more details you can download a short description of the 'Virtual Producer' at: http://www.binary-space.com/virtualspac ... oducer.pdf
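To make the idea concrete, here is a minimal sketch of how such a stage-direction script might be parsed and turned into an ordered action list, which would also answer the earlier question about syncing narration with media: the media references simply sit between the spoken lines in document order. The tag and attribute names below are invented for illustration; the real 'Producer Slang' schema is described in the PDF linked above:

```python
# Hypothetical sketch of parsing a 'Producer Slang'-style script.
# Tags <say>, <play> and <camera> are made up for this example.
import xml.etree.ElementTree as ET

SCRIPT = """
<scene>
  <camera position="front"/>
  <say actor="anchor" mood="neutral">Welcome to the show.</say>
  <play media="launch.avi" surface="screen1"/>
  <say actor="anchor" mood="excited">The launch was a success!</say>
</scene>
"""

def run_script(xml_text):
    """Walk the stage directions in document order and return the
    sequence of actions a renderer would execute, one after another."""
    actions = []
    for node in ET.fromstring(xml_text):
        if node.tag == "say":
            actions.append(("speak", node.get("actor"), node.get("mood"),
                            node.text.strip()))
        elif node.tag == "play":
            actions.append(("video", node.get("media"), node.get("surface")))
        elif node.tag == "camera":
            actions.append(("camera", node.get("position")))
    return actions
```

Because the media directives are interleaved with the dialogue in the script itself, the renderer knows exactly which clip or photo belongs to which bit of the story without any separate timing track.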

As mentioned in a previous post, it is not our intention to create Hollywood-like animations; we will use this kind of technology to control planetary probes via real-time 3D animations.

WebshowDevelopment wrote:

We have developed an XML-based language called 'Producer Slang' in which the story scripts are written. This language contains all the necessary stage directions, e.g. start a particular video on a specified surface, change an actor's mood, change camera positions, etc.

For more details you can download a short description of the 'Virtual Producer' at: http://www.binary-space.com/virtualspac ... oducer.pdf

Interesting. Why did you invent your own tag format? Now it has two different styles of tags, and it's no longer XML, which makes it impossible to interface with all the XML tooling that's around. Was this easier to implement somehow? And if so, why not do everything that way?

WebshowDevelopment wrote:

As mentioned in a previous post it is not the intention to create Hoolywood-alike animations; we will use this kind of technology to control planet probes via real-time 3D animations.

That doesn't make sense to me at all. If I owned something worth hundreds of millions of dollars, I would want a very precise user interface that lets me define exactly what it should do, not some 3D game. Or is this intended to be part of the reporting component, a sort of visual report?

More generally, how important is eye candy in this market anyway? I was expecting "not at all", but now I'm not sure.

Thanks, by the way; space news is almost always about Major Tom. It's interesting to have a look at Ground Control for a change.


Anyway, I'm looking forward to the next episode, and to seeing the next generation of rover controls appear.
