Blender and MakeHuman to make your 3D mesh, then just pre-render a video scene for every use case... Ought to take no more than 50 to 60 GB of hard drive space to store all the animations... and maybe 9 months working 16-hour days to do all the animation.

Sorry for the sarcasm... but I'm just not all that impressed by this kind of tech. Now, a neural-network AI (minus the Final Fantasy-esque digital sex doll) to interface with LinuxMCE, complete with visual and speech recognition, predictive capabilities, and maybe a personality... that would impress me. But we'll have to wait another 15-20 years for that kind of thing.

I have actually considered doing a similar idea in my own setup. My take, though, is much simpler: it uses looped or very long video files with previously recorded text-to-speech statements. It would be ugly, but just nerdy enough for me to love. The other idea, besides a human persona, was a robotic face or a HAL-style glowing orb on screen while the audio plays.

My roadblocks were the time and experience needed to develop it, the slight load time for xine after it receives a play command, and, most critical, that I couldn't get a "blank" video file and an audio file to play at the same time in the amount of time I put into this.
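For that last roadblock, one workaround is to sidestep xine and just spawn two players as separate processes. A minimal Python sketch, assuming mplayer is installed; the file names are placeholders:

```python
import subprocess

def play_with_voice(video_loop, audio_file):
    """Loop a 'blank' interface video while a voice clip plays alongside it.

    Assumes mplayer is on the PATH; the file names are placeholders.
    """
    # -loop 0 repeats the video indefinitely; run it in the background.
    video = subprocess.Popen(
        ["mplayer", "-loop", "0", "-really-quiet", video_loop])
    # Play the recorded text-to-speech clip and wait for it to finish.
    subprocess.call(["mplayer", "-really-quiet", audio_file])
    # Voice clip done: stop the looped video.
    video.terminate()
    video.wait()

play_with_voice("interface_loop.avi", "here_is_a_list.wav")
```

Whether this could be wired into the orbiter's on-screen display is another question; the sketch only shows that a looped video and a voice clip can run side by side.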

Implementation idea if anyone is interested in putting time into this for themselves:

Event = press a category button on an orbiter
Response = loop the interface video on the local on-screen orbiter, play the audio file "Here is a list of your videos. Which would you like me to retrieve for you?", stop the looped video

Event = select a video file from the datagrid and press play
Response = loop the interface video on the local on-screen orbiter, play the audio file "Now loading [filename], please wait while I prepare it for you", stop the looped video, play the selected video file

Note that this would not function with UI2, as the selections are not separate screens. I am sure that with enough effort it could be incorporated, but for the short term a way to play video and audio at the same time would be a huge step for this.
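Those event/response pairs map naturally onto a pair of handlers. Continuing the hypothetical sketch above (play_with_voice is the helper defined there, the file names are still placeholders, and the orbiter/DCE plumbing that would actually fire these events is not shown):

```python
import subprocess

def on_category_button():
    # "Here is a list of your videos. Which would you like me to retrieve?"
    play_with_voice("interface_loop.avi", "here_is_a_list.wav")

def on_play_selected(filename):
    # The "[filename]" part of the prompt would need a pre-rendered clip
    # per title (or on-the-fly TTS), so a generic prompt is used here.
    play_with_voice("interface_loop.avi", "now_loading.wav")
    subprocess.call(["mplayer", filename])
```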

I started learning Blender some time ago to put together a 3D head for that purpose. It turns out Blender is a damn complex program, so I'm still taking baby steps and am nowhere near modeling a head, but I keep trying.

The plan from the beginning was like this:

1. Learn Blender.
2. Build a 3D head.
3. Use rigging to make the lips move (Blender can record my actual lips with a webcam and sync the model's lips, once I find out how that works...).
4. Sync to already-recorded audio.
5. Render the animations.
6. Start by replacing Sarah in the first step of the AVWizard and see if that makes sense.

If I ever get to that point, we can think about integrating it into the whole wizard and the popup screens like the "you have added a new device..." one.
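For the lip-movement step, shape keys are one common Blender approach: the animation then reduces to keyframing a value between 0 and 1. A minimal bpy sketch, assuming the head mesh is named "Head" and already has a "MouthOpen" shape key (both names are made up here); real lip sync would drive the values from the recorded audio rather than a fixed pattern:

```python
import bpy

# Assumes a mesh object named "Head" with a shape key "MouthOpen";
# both names are placeholders for whatever the model actually uses.
head = bpy.data.objects["Head"]
mouth = head.data.shape_keys.key_blocks["MouthOpen"]

# Crude lip flap: open and close the mouth every 10 frames.
for frame in range(1, 101, 10):
    mouth.value = 1.0 if (frame // 10) % 2 == 0 else 0.0
    mouth.keyframe_insert(data_path="value", frame=frame)
```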

I know this is a very hard task, and maybe I will give up at some point, but at least I'm trying. What are your thoughts... doable?

MakeHuman creates the mesh, etc. for the whole body, but you can export to Blender to do the animation and focus just on the head. It also creates controls for animating the face (and body), but those can be challenging to use (like operating a marionette). The hair typically does not export, so you would probably need to do the hair yourself or find a mesh you can import.
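If the export/import round trip needs to be repeatable, the import side can also be scripted from Blender's Python console. A sketch assuming the MakeHuman model was exported as Collada; the path is a placeholder:

```python
import bpy

# Import a MakeHuman Collada export; the path is a placeholder.
bpy.ops.wm.collada_import(filepath="/tmp/makehuman_head.dae")

# Imported objects are left selected, so list them to check that
# the mesh and any face/body rig came through.
for obj in bpy.context.selected_objects:
    print(obj.name, obj.type)
```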

I hadn't thought about the Sarah wizard, since it's already pulling from pre-rendered video and audio files. Let me know how you're coming along on this. It's not something I'm still actively working on, but it is a cool idea.

The whole model may totally change, and the lip sync will be improved, as well as the hair (I know you can still see the short hair). I'm learning Blender more and more and will keep improving the whole model when I find time.