Hi Folks!
We’ll try to do a biweekly installment of Addons for Empathy (until we run out of addons). This one is a two-parter: our main installment is about working with proxies in Blender, the second is about a bold new experiment in Rig UI.

Proxy Workflow and Transparent Proxies Addons:

The video is about two addons, both making proxy editing in the sequencer more friendly to our project. A quick explanation:

Blender’s Video Sequence Editor (or VSE for short) has a feature called proxies. This basically allows an in-place replacement of strips with 25%, 50%, 75% or 100% versions, in a fast format (.jpg or motion JPEG). This is especially useful when:

Editing large-format files that are too slow to be realtime – either in resolution (2K or 4K) or in type (.EXR!!!)

Editing over the network, especially files of the previous types

Working with complex and multiple effects that could benefit from being cached

So proxies in Blender work a bit like a combination of proxies and caches. I prefer to think of them as the former, since that skips having to recalculate every single time you change some timing – instead, they only need to be recalculated when the sources change.

However, working with proxies in Blender can be painful by default, and this is where Proxy Workflow Addon comes in:

Editing Proxy settings must be done strip by strip: Proxy Workflow lets you set them for all selected strips at once

Default location is in the same folder as the originals, which is bad in the case of network shares; Proxy Workflow automatically sets them to a local directory “TProxy” that contains all the proxies for the edit, and can be moved around like a scratch disk

Sometimes Blender tries looking for the original files even when it is using proxies. If you are trying to use proxies to avoid using the network/internet, this becomes a problem. Proxy workflow allows ‘Offlining’ strips, and then ‘Onlining’ them again when you can reconnect to the network

Blender doesn’t know when the source files are ‘stale’ and need to be re-proxied – for instance if you re-render. Proxy Workflow records a timestamp as it makes each proxy, allowing you to select a bunch of strips and re-proxify only the changed ones.

Proxy Workflow is designed to work with movie and image strips only for now, as I’m interested in true proxies, not caching effects.

A separate addon is called ‘Transparent Proxies’ and does what it says on the tin (and no more): it allows making proxies of image sequences that preserve the alpha channel for alpha-over effects. It does this by cheating: it uses ImageMagick on the command line to make a .tga proxy, and just renames it to .jpg to satisfy Blender. You need to install ImageMagick first for it to work.
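The cheat fits in a few lines of Python. `transparent_proxy_paths` and `make_transparent_proxy` below are illustrative names, not the addon’s actual code, and the second function assumes ImageMagick’s `convert` is on your PATH:

```python
import os
import subprocess

def transparent_proxy_paths(src, proxy_dir, percent=50):
    """Build the ImageMagick command plus the .tga/.jpg target names."""
    base = os.path.splitext(os.path.basename(src))[0]
    tga = os.path.join(proxy_dir, base + ".tga")
    jpg = os.path.join(proxy_dir, base + ".jpg")
    # `convert` picks the output format from the extension, and .tga
    # keeps the alpha channel that a real .jpg would throw away.
    cmd = ["convert", src, "-resize", "%d%%" % percent, tga]
    return cmd, tga, jpg

def make_transparent_proxy(src, proxy_dir, percent=50):
    cmd, tga, jpg = transparent_proxy_paths(src, proxy_dir, percent)
    subprocess.check_call(cmd)  # requires ImageMagick to be installed
    os.rename(tga, jpg)         # the cheat: .tga bytes behind a .jpg name
    return jpg
```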

Bonus: Rig UI Experiment:

Code is at gitorious
This brings us to the bonus round – the Rig Selection UI. I’m continuing my round of experimentation with BGL and modal addons, to make the kind of ‘typical’ rig UI where animators can select or act on a rig by clicking on an image. This UI uses an SVG file to define the hotspots, and a PNG to actually draw the image. It already works, though I’m still going to refine it and add more options / easier rig customizability. The end goal is to be able to do Rig UIs without writing code, simply by drawing them in Inkscape and pressing a few buttons in Blender. Stay tuned!!!
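As a rough idea of how an SVG can define hotspots: each `<rect>`’s `id` could name the bone it selects. That convention is a guess for illustration – the addon’s actual schema may differ:

```python
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def read_hotspots(svg_text):
    """Map rect id -> (x, y, width, height) for every <rect> in the SVG."""
    root = ET.fromstring(svg_text)
    return {
        rect.get("id"): tuple(
            float(rect.get(k)) for k in ("x", "y", "width", "height"))
        for rect in root.iter(SVG_NS + "rect")
    }
```

A modal operator could then hit-test mouse clicks against these rectangles while the PNG is drawn with BGL.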

Hello all, long time no post!
As we’re getting closer and closer to releasing our files, I’m noticing that we have a huge (and I mean huge) trove of Python code that is largely undocumented. Some of it is pretty specific to this project, and other bits are useful in general. Even the specific stuff could be adapted, so it’s worth going over.

To address this we’ve thought of doing an ‘Addons for Empathy’ video series, quickly explaining what some of the addons do, in addition to more traditional docs. The first I’ll do in this way is the Floating Sliders Addon: in short, this pops up small, keyframable OpenGL sliders for any floating-point pose-bone property. The code is on gitorious, and following is a simple video explanation of what it does and how to use it:

As always, the video is licensed CC-BY, while the addon itself is GPL.
You can also download this video as a high-resolution .webm or .mp4 file, or watch it on YouTube.

The screencast itself was edited in Pitivi, with Inkscape titles. Video was captured via the GNOME screencast feature, and audio with Audacity.

Big thanks to Campbell Barton for help getting the min/max of custom properties and explaining some of the finer points of keymaps, and to Dalai Felinto for showing a possible hack to make a popup menu (I ended up using a slightly different way).
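The heart of such a floating slider is mapping a mouse position to a value inside the property’s min/max range. A simplified sketch of that idea – `slider_value` is a hypothetical helper, not the addon’s actual code:

```python
def slider_value(mouse_x, slider_x, slider_width, prop_min, prop_max):
    """Convert a horizontal mouse position over an on-screen slider into
    a property value, clamped to the property's [min, max] range."""
    t = (mouse_x - slider_x) / float(slider_width)
    t = max(0.0, min(1.0, t))  # clamp to the slider's extents
    return prop_min + t * (prop_max - prop_min)
```

In the addon, the result of each mouse-move would be written back to the pose-bone property (and keyframed on request) inside the modal operator’s event loop.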

Blender’s Video Sequence Editor (or VSE for short) is a small non-linear video editor cozily tucked into Blender, with the purpose of quickly editing Blender renders. It is ideal for working with rendered output (makes sense) and I’ve used it on many an animation project with confidence. Tube is being edited with the VSE, as a 12-minute ‘live’ edit that gets updated with new versions of each shot and render. I’ve been trying out the Python API to streamline the process even further. So… what are the advantages of the Video Sequence Editor? Other than being Free Software, and right there, it turns out there are quite a few:

familiar interface for Blender users: follows the same interface conventions for selecting, scrubbing, moving, etc. Makes it very easy to use even for beginning-to-intermediate users.

tracks are super nice: there are a lot of them, and they are *not* restricted: you can put audio, effects, transitions, videos or images on any track. Way to go Blender for not copying the skeuomorphic conventions that make so many video editors a nightmare in usability.

Since Blender splits selection and action, scrubbing vs. selection is never a problem, you scrub with one mouse button, select with the other, and there is never a problem of having to scrub in a tiny target, or selecting when you want to scrub. I’ve never had this ease of use in any other editor.

simple ui, not super cluttered with options

covers most of the basics of what you would need from a video editor: cutting, transitions, simple grading, transformations, sound, some effects, alpha over, blending modes, etc.

has surprisingly advanced features buried in there too: Speed control, Multicam editing, Proxies for offline editing, histograms and waveform views, ‘meta sequences’ which are basically groups of anything (movies, images, transitions, etc.) bundled together in one editable strip on the timeline.

as in the rest of Blender, everything is keyframable.

you can add 3D Scenes as clips (blender calls them strips) making Blender into a ‘live’ title / effects generator for the editor. They can be previewed in openGL, and render out according to the scene settings.

it treats image sequences as first class citizens, a must!!!

Python scriptable!!!! big feature IMO. (uses the same api as the rest of Blender)
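As a taste of that scriptability: the bookkeeping for butt-joining a reel of shots is a one-liner per strip, after which each range would be fed to the sequences API (`scene.sequence_editor.sequences.new_movie(...)`) inside Blender. The helper below is illustrative and runs outside Blender too:

```python
def layout_strips(durations, start=1):
    """Compute butt-joined (frame_start, frame_end) ranges for strips of
    the given durations in frames - the kind of bookkeeping a VSE script
    does before creating each strip via the sequences API."""
    ranges = []
    frame = start
    for d in durations:
        ranges.append((frame, frame + d - 1))
        frame += d  # next strip starts right after this one ends
    return ranges
```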

Disadvantages are also present, I should mention a few:

UI is Blender-centric! So if you are not a Blender user, it does not resemble $FAVORITEVIDEOEDITOR at all. Also, you have to expose it in the UI (only a drop-down away, but most people don’t even realize it is there)

no ‘bin’ of clips, no thumbnail previews on the video files, though waveform previewing is supported.

lacks some UI niceties for really fast editing, though that can be fixed with python operators, and also is getting improvements over time.

could be faster: we lost frame prefetching in the 2.5 transition; however, it is not much slower than some other editors I’ve used.

not a huge amount of codec support: Since Blender is primarily not a video editor, supporting a bajillion codecs is not really a goal. I believe this differs slightly cross platform.

bad codec support unfortunately means not only that some codecs don’t work, but that some of the codecs work imperfectly.

needs more import/export features (EDL is supported, but AFAIK only one way)

some features could use a bit of polish. This is hampered by the fact that this is old code, a bit messy, and not many developers like to work with it.

Needless to say, this is all ‘at the time of writing’. Things may improve, or the whole thing may get thrown into the canal.

So what have I been up to with Blender’s video editor? Quite a bit! Some of it may end up not-so-useful in the end, but experimentation could yield some refinements. The really good thing about using Python is that I can ‘rip up’ complex things and rearrange / redo them. So the experiments don’t result in a huge waste. Let’s have a peek.

We are getting really excited to show all the incredible animation and amazing render tests coming off the farm. And even though we don’t want to let *too* much slip before time, I know Bassam is planning an update with some teaser images and production notes pretty soon.

Today I’m happy to share news of MediaGoblin, a libre software “publishing system” for images, video, audio and more that friends of Bassam’s and mine are building. It’s a single replacement for Flickr, YouTube, SoundCloud, and similar that anyone can run (like WordPress), but federated to keep files under user control. It’s very extensible, with support just added for 3D models now suggesting an alternative to Thingiverse. But I’m especially excited about MediaGoblin because it will establish the core functionality we can use to implement a lot of cool ideas we’ve had during Tube production for a collaborative platform that also fills the huge need for a solid asset management pipeline, a kind of super-Helga with some interesting properties. We’ve been talking to a bunch of developers about putting together a free software project after Tube, in which there’s been a lot of interest, and I have a thought that we could get studios to pool resources instead of each rolling their own and occasionally making a dead-end free software release.

A few weeks ago at the Blender Conference, we were talking with the renderfarm.fi developers about how, together with their distributed rendering, these fairly near-future pipeline/collab possibilities make it seem like a lot of big pieces are falling into place. MediaGoblin is worthy in its primary goals, but of especial interest for providing much of the functionality we’d need, plus perks like federation that we’ve dreamed about. The project lead is Chris Webber, until recently a developer at Creative Commons, and also a Blender user who did the animation in the excellent Patent Absurdity doc; he’s coding alongside Will Kahn-Greene, formerly of the Participatory Culture Foundation. And as part of the Tube Open Movie, Chris helped build pipeline scripts as well as our Reference Desk tool, one of the programs inspiring the new asset branch in Blender.

If you are concerned about having full control over images, videos and audio records that you put online, you have just a few days left to support development of MediaGoblin — an awesome free software project that decentralizes media storage.

If you are a VFX or animation studio, or even a 3D printing company, you have even more reasons to support the project. With initial support for 3D models (STL and OBJ) MediaGoblin has a great chance to grow into a scalable digital asset management solution that is free to use and modify.

Finally, if you are a developer who’s good at Python, MediaGoblin could do with your contributions.

** Donations are tax-deductible in the US and also support the Free Software Foundation, which hosts the campaign.

And thanks for anything you can do to help this awesome project by passing the word!

Made a test from Bassam’s tutorial on using Cycles and Internal together. The result is not terrible, but not perfect. Used another model because Cycles crashes during render (too many objects) and doesn’t work with multiple UV layers like Internal does. Cycles is good to use but it has a lot of limits so far. Not suitable for animation yet – too slow and crashes often. Maybe for statics and interiors. Difficult to work with lights – it seems to only be possible to change intensity. Internal also has problems with SSS: for instance, no shadows in the Shadow pass. Anyway we won’t use Cycles for characters yet, maybe for environments. Now to make some more tests! I’m looking forward to further developments in Cycles.

Note from Bassam: If this is Dimetrii’s ‘not perfect’ result, I think his perfect one would give me a heart attack. Brilliant as usual.

I recently received a digital copy of Blender 2.5 Character Animation Cookbook from Packt Publishing. This book is written by Virgilio Vasconcelos, a Blender animator and rigger who is currently animating shot ‘a1s38’ on this project.
The target of this book, I feel, is strong beginners or intermediate-level artists/learners, who are either new to rigging, animation, or to Blender itself. Advanced users could benefit from it but more sporadically (ooh, I didn’t realize you could do that!), or as a reference, and students who are absolute beginners may get lost in some terms, or not yet know why you would want to do certain things.
Virgilio’s past experience both as a professional animator and as an animation professor is evident in this book. He writes in a clear, concise fashion, and has a knack of excluding super-complex detail while still taking things to a production level in a surprisingly simple seeming step by step way.
The first part of the book focuses on character rigging, and I really appreciate that he starts from the basics – setting good bone orientations, shapes, etc. – rather than leaving these things as an unexplained step for later on. The rigging lessons build on each other, so after some basic lessons they quickly ramp up to a level where students must really be diligent and pay attention to learn. By the end of the section students should be confident rigging cartoony biped characters, and have enough experience that they can start experimenting with ‘invention’, creating new setups for new situations, or their own personalized ones for improving common ones. I really love that Virgilio shows some of the very strong production techniques in Blender, such as using sculpting for creating corrective shapes.

In the second part of the book, the focus is all on animation, starting with a simple ball exercise, and rapidly ramping up into character animation. The first chapter is mainly technical (like the rigging section) in its setup: that is, he starts with workflow, then with things like IK/FK switching, etc. This book introduces workflow and technique first, so the focus at the start is learning animation in Blender, not learning animation in general yet. This chapter is basically an introduction to Blender for animators, and I think Maya or even 2D animators picking up Blender will spend most of their time here.

After the technically-heavy blender intro, the rest of the animation chapters return to the basics a bit, with lessons in timing, spacing, anticipation, squash and stretch, etc… All those basic animation principles we know and love. The book is good at using blender features to enable animators to get what they need done efficiently, using Blender’s path-drawing features to adjust their arcs, or using the Open GL preview to better see their timing. As in the rigging sections, the downloads for the book contain Blend files that make it easy for students to get right in with each chapter working on the exercises with no fuss.

The book ends with an appendix with some useful tips on planning, organisation, and terms.

Some criticisms: even in a good book such as this, I can find some things to crit ;), but they are mainly small things. In the rigging section, Virgilio fails to warn his audience about the (current) fragility of one setup, when talking about the corrective shapes (an otherwise excellent segment). Luckily, a current Summer of Code project fixes this problem, so it’s likely that any such warning will be unneeded in the next release of Blender! Another tiny nitpick is that Virgilio uses the term spacing in two different ways, the first time unconventionally (referring to actual physical locations), the second more like the usual way for animators. I feel that he could have picked a better word for the first. Finally, in the rigging section, I think that a tiny introduction to Python for creating interfaces would be quite good, and would give riggers an alternative to the object/bone-based sliders in the 3D view.

Conclusion: These are really tiny nitpicks. This book really is good, in fact, I’d say it’s the best animation and rigging reference for Blender yet, and even as a general reference for riggers and animators in 3D applications (since most techniques will be similar in different programs). While I read through linearly from beginning to end, the book also has ‘See also’ segments at the end of each section, that allow students focusing on a particular track, to follow a different path of learning in the book, something I thought was a good idea. I would put this on my ‘recommend’ list, as a book for intermediate/strong beginners, as a Blender reference for riggers/animators from other software, or as a book for teachers to use as a textbook.

I am currently working on the character Gilgamesh. I had to make a completely new model and sculpt the face. It was difficult to do the UV unwrapping. But! It’s time to paint textures! Many tests with Projection Painting and layers (diffuse, bump, specular…).

We built Cycles from source using Brecht’s excellent instructions and with help from DingTo on the blendercoders IRC. After turning off desktop effects and the second monitor we were able to use the GPU acceleration option (using CUDA) for really fast rendering. Just for fun, we brought in the Gilgamesh model and sculpt and set up some simple renders using Cycles, mainly relying on emitting cubes as light sources. Working in this renderer is very different from Blender Internal – you set up physically accurate materials using a node tree to combine shaders, quite unlike the diffuse-specular-etc. model that Internal uses. The render never stops; rather, it continues to get better as long as you have patience to wait, but it’s really fast if you use simple shaders and GPU rendering.
For the final render of Tube we will most likely use Blender Internal, as Cycles is unlikely to become production ready before the project is over, but this definitely feels like ‘the way of the future’.
Small tip: to use CUDA, you need a new version of the nvidia driver (270 or greater), which is not in the Ubuntu 10.10 repo. You have to use a PPA or get it directly from nvidia.

I’m a huge fan of Blenderaid, a great way to manage your blender projects. You run a small server that is capable of crunching through your project, finding all objects, dependencies, etc., then point your browser to it and get a graphical overview. You can look at individual files, see the names of objects/materials/etc., rename them, view dependencies, fix broken links, and now check and update SVN status etc. etc., all from the comfort of your browser window. I’m using the Python 3 version, which for me necessitated installing PySVN from source, since the Ubuntu modules are Python 2 only. Other than that, I had a smooth install; I’m looking forward to continuing to use this version and further goodies in the future.

Some cool things you can do with it:

Find errors in your project globally without having to check each file one by one in Blender – and fix them (could benefit from batch tools so you can do multiple at a time)

Create ‘bundles’ of files, e.g. to send to an off-site animator who doesn’t have SVN access, by quickly seeing all the dependencies of a given scene file. This can be done by hand right now, but I’m pretty sure it could be scripted fairly easily.

Make sure your files are up to date, track problems with SVN visually

Rename models/assets, find out where they get used, etc.

Probably a lot more
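The bundle idea from the list above really is a small script: inside Blender, `bpy.utils.blend_paths(absolute=True)` returns every external path a .blend references, and from there it’s just set arithmetic. `bundle_list` is a hypothetical helper sketching that step; it runs outside Blender if you hand it the paths:

```python
import os

def bundle_list(blend_file, dep_paths):
    """Unique, sorted list of files an off-site artist needs: the .blend
    itself plus its (non-empty) external dependencies."""
    files = {os.path.abspath(blend_file)}
    files.update(os.path.abspath(p) for p in dep_paths if p)
    return sorted(files)
```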

Blenderaid could change the way we work with SVN for projects – instead of checking out several gigabytes of production data, each artist need only check out exactly what they need – saving time, local disk space, and bandwidth. We could also use it to keep versions of assets and (optionally) switch some scenes to use newer versions, or continue working with the old.

I’m hoping to have time after Tube to experiment with Blenderaid in conjunction with Helga, or alone, and to have a server-side installation as well as the local one. This could be the key for large-scale projects in Blender – big thanks to Jeroen and Monique for writing it, and I look forward to seeing how it evolves.

Quick note from Jeroen: the Python 2 version saves time by removing the need for additional compiling, and should work without any problem. (I was under the mistaken notion that Blenderaid’s Python version had to match Blender’s.)

I’ve often wanted to have lines for ‘rule of thirds’ in the Blender camera as a composition aid – I’ve got countless blend files with little no-face meshes parented to cameras (that have to be moved or scaled whenever I change the camera view angle). Granted, this problem could be solved with a driver (that might not update – driving on camera angle is not dependable yet), but I got tired of ad-hoc solutions.

I don’t use the Title Safe option that much, or at all, so with the help of a trusty text editor (gedit in my case) I hacked a couple of files and now I have ‘Thirds’ instead of Title Safe for the camera. The internal property is still the same, it just displays differently, so no messing with RNA happened.
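For reference, the overlay itself is trivial to compute – four lines at one-third and two-thirds of the frame. A sketch of the geometry the patched camera drawing produces (`thirds_lines` is an illustrative helper, not the patch’s actual C code):

```python
def thirds_lines(width, height):
    """Endpoints of the four rule-of-thirds lines for a frame of the
    given size: two vertical, two horizontal."""
    x1, x2 = width / 3.0, 2.0 * width / 3.0
    y1, y2 = height / 3.0, 2.0 * height / 3.0
    return [((x1, 0.0), (x1, height)), ((x2, 0.0), (x2, height)),
            ((0.0, y1), (width, y1)), ((0.0, y2), (width, y2))]
```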

If you want the same functionality and are comfortable building Blender / applying patches, you can get it here. Usual disclaimers about baby-eating and such apply.

As always, the conference was awesome- an intense three days of talking, listening, meeting, blending, eating the traditional conference sandwiches, drinking coffee, beer and mojitos, not-enough-sleeping, more blending, etc.
After a sleepless but uneventful flight to Amsterdam I walked into the Blender Institute the day before the conference, only to have Andy press-gang Pablo and me into making the Suzanne Festival and award interstitial animations with him. We had a (very sleepy) blast working till the wee hours, and more the next morning, and I got to go up in the projection booth once again and play the festival off my laptop, thanks to the power of totem/gstreamer and Python (for making the playlist). I apologize for the one or two glitches – a couple of the videos needed to be re-encoded for smooth playback, but we somehow missed that in the studio.

Jeroen Bakker showed me his awesome OpenCL nodes in the compositor on his laptop, running 20 (!!) times faster than the CPU equivalent. When this stuff hits it’s going to make a mini-revolution for Blender. I’m no longer a sceptic about GPU computing, I guess. Wolfgang Draxinger did a fantastic job making the stereoscopic version of Elephants Dream. Great choices, hard work and technical precision – I’m blown away both by the result, which rivals the best stereo work from major studios, and by the amount of work he put into it. He’s planning Big Buck Bunny next, but in the meantime, some snaps of us removing (the unfortunately crumpled) screen after the show:

I met with Josh, Henri, Francesco, Jason, Jonathan, Jean Sebastian, Heather, and recruited Dolf, Tal, and perhaps Luciano, Andy and Pablo for our project. We had a meeting the second day of the conference, which gave me a chance to finally pitch the story and current animatic to the team in person, talk about where we are at in the project and assign some short-term tasks. We also had a presentation on Sunday, mainly about technical issues: rigging, though I did not demo rigamarule- turns out auto-registration of operators had somewhat broken the UI while I wasn’t looking (it’s fixed in current tube SVN). Josh showed off his work on procedural animation, and Henri demoed building scene layouts from library models using our LODing system and the landmark-snapping system created by Pablo Lizardo.

As Fateh has blogged, Tube member Jarred De Beer won the Suzanne Animation award, congrats dude!

The presentation had an unexpected benefit; it introduced the project to new contributors – thanks, Tal!

Sadly I missed some people- Malefico has too many conferences on his plate to make it to Blender conference this year, and I was too swamped to meet up with Stani, Python coder and artist extraordinaire.

Finally, I had the honor of working for a bit on Andy and Eva’s awesome stopmotion animation project- Omega- which has some CG elements. I spent a large part of Monday (the day after the conference) rigging an amazingly designed and detailed character Andy built for the movie.

Big thanks to Ton, Anja, Anna, Nathan and everyone who made the conference possible and enjoyable.

THE TUBE OPEN MOVIE

A 3D animated short film based in free/libre software, Tube is also a new experiment in distributed collaboration. It plays on the ancient Gilgamesh poem, in a variant of the hero's progress that becomes the animation's own frames.