We recently updated Dragonframe at NUA to 4.1.2 and I hadn’t tested our eMotimo motion control rig since the update. I’m scheduled to run an introductory session on motion control and advanced camera techniques with the 3rd years next week, so I thought it judicious to test it out before then.

Leaving the wireless remote off and using the 8-way switch: go to the Settings menu, make sure you’re on page 2 and Dragonframe is selected, then push right on the 8-way hat twice until the following screen appears – this is the live motor feedback mode where Dragonframe positions and commands are processed.

I could not get to this screen; on page 2 I couldn’t go any further into the menu.

At the bottom of the help page it even states:

Known Issues with our firmware or their software:

(5) Must be on feedback screen shown below to function correctly (Settings, right twice on joystick). If you aren’t on this screen, the integration will not work as designed.

ha…

I’m not going to go through how many times I tried to go right twice from the settings page, or from the 3rd page, or the 4th page – none of them worked.

So I tried to hook it up to Dragonframe, just in case it was a bug and it might still work…

but I only saw the greyed-out ‘Not Connected’ notification.

I tried connecting in different ways – plugged in first, plugged in after starting Dragonframe – all of the different ways you can get hardware to see software.

For three days I tried different things…

Then I tried putting the USB into another port, and I got a pop-up saying it failed to install. Looking at the Device Manager, I saw this:

an error on TTL232R-3V3, which is what it had popped up as when I put the USB into the different port.

But this was for Macs? I thought I’d have a bit more of a dig, and on the weird web page there are drivers for Windows 10, so after consulting with our tech team, we deemed it safe to try – the website looks, well, odd.

I love checking out new software, so when the opportunity came up to be involved with some freelance animation involving lip-sync, I decided to check out the latest version of Character Animator, part of the Adobe Suite.

I knew that it automated some aspects of animation, and after a bit of investigation I discovered that it will try to interpret a live feed from a webcam directly into your drawn character (I say try, as the lip sync is a bit hit and miss, but it’s very easy to swap another mouth shape in to help get the right feel to the visual). It’s amazing, and gives a lovely natural feel to movement, eye blinks, head tilts/turns etc.

Then I saw that Adobe Edex was running a Character Animator short course, so I signed up to learn from the pros and get an insight into proper workflow etc.

So far I’m up to week 4 and it’s been really useful, especially to see how you add behaviours to folders or layers of items, which is not as straightforward as I initially thought.

Week 1 – template character

Using a template character we just needed to get the face, eye and lip sync working – which is all built into some of the template puppets. I got Einstein from here – http://headsofcurriculum.com/

but you can also download other puppets from okaysamurai and the Adobe Character Animator page (pic above).

Week 2 – own character

We were given the character ‘Chloe’ (below) in Photoshop with the correct layer order and tasked with altering it slightly into our own character. Because the puppet in Photoshop is dynamically linked to the scene in Character Animator, as soon as you save in Photoshop the scene updates, so you can record your new character and work with it straight away.

Chloe

My Alien

Week 3 – head turn

Growing our knowledge of behaviours/triggers, week 3 adds a head-turn task: you need to duplicate the folder ‘Frontal’ in Photoshop and make four extra folders with corresponding features inside to represent left profile, left quarter, right profile and right quarter, altering the eyes, mouth and head shape – obviously you can’t see his ears when he’s looking left or right. (Although writing this, I realise you can see an outline of the ear, it just doesn’t stick out – oops.)

I also decided to add a background and use a dragger to make him point at the planets for the short video piece that we need to submit each week. Adding the background is easy, just import the image you want to use and drag it onto the layer below your puppet.

Week 4 – with sequence trigger

Once you’ve got your head around the layer/folder structure and the naming conventions, Character Animator starts to make a bit more sense. In week 4 we needed to add a small animation in a new folder in Photoshop that uses the cycle layers behaviour. Newer versions of CA let you cycle either top to bottom or bottom to top, which helps because making new layers in Photoshop tends to stack them above – the wrong way round for the animation to work. Of course, using the dynamic link you can easily re-order those layers and your scene gets updated.

I went totally freehand and drew a small spaceship moving across the sky directly in Photoshop, and even though it’s pretty low quality, it works effectively. It ended up being 11 layers, and it still plays really quickly. I don’t know if there is a way of slowing it down slightly, other than duplicating the layers.

All in all I quite like Character Animator, but it’s a real pain having to set up all the layers, all the mouth movements, and any little thing you want the character to do. Rigging can go really wrong – heads come off, it all goes wobbly – but once you’ve done all of that prep, it makes it super easy to get your character talking for you. So it’s not a quick process; it doesn’t make lip sync any speedier, and that was what I was really testing it for.

To conclude, as with all animation projects it will have its time and place. I’m enjoying the Adobe course, and although it’s still a learning curve with both pros and cons, the natural movement of the talking is great, you can record each movement either separately or together, and the triggers are really fun. I’ve even read that you can use it as an avatar for online meetings – now that’s cool!

Working at NUA as the animation and sound technician, this week’s process test was to go through green screen, from beginning to end.

Through this I would be able to test out the new Dragonframe to see what features had been updated, and perhaps changed, to make sure that I am always up to date.

I also wanted to ensure that my green screen setup was as good as possible for an upcoming project with the first years, and to brush up on using After Effects for the post-production.

So I grabbed one of our walk-cycle armatures, borrowed some doll’s clothes from my children and went into the depths of Animation Studio 1.

Destined for stardom!

The key to green screen is to light the background and foreground almost separately. Obviously, in the reduced space of an animation studio this is a little more difficult as you can’t get a lot of space in between, but starting with the two basic lights – a flo-light (floodlight) and a kick light to pick the model out from the background – is a good place to start.

A flo-light (floodlight) at the top to try and light the background evenly, then two dedolights and a kick from the back to try and distinguish foreground from background.

As you can see, the result has harsh light from the spot, which you need, but adding diffusion will soften the hard shadows – we want as few of those as possible.

The fabulous dedolights let you easily attach diffusion material (or gels) directly onto the barn doors with an easy-to-use tiny clamp.

This lessened the shadows and gave me a result I was fairly happy with, although in an ideal world the green screen would have maybe two flo-lights on it, to be lit more evenly.

Softer shadows with diffusion, but I did have to turn up the dedolight a little to compensate.

Ready to film, I then turned to the new Dragonframe, and to be honest there’s not a lot of difference from version 3. The interface is slightly smarter, and for the students it will mean an easy transition to the latest version – which was a must, as we had new cameras (Canon 1300Ds) waiting to be installed that will only work with Dragonframe 4.

A short jerky walk cycle later – it’s been a while – and I had my character in the middle of the stage, ready to react to a blue polystyrene box that the students have been using, so that my armature (and the action) could stay in the middle.

Disaster struck at this point in the proceedings too…

ouch!

His ankle joint broke, but as with all good English actors, we carried on!

The resulting video is not my finest work – the clamp rig is really too big and heavy for this small armature character, and there’s a terrible jerk where his ankle breaks – but the reaction works well, and I like the character that the little blue box has… In my head it’s a very lively puppy that growled to stop my man in his tracks, then, once beckoned, turns into a slobbering excited mess when he gets a hug and a kiss…

It’s amazing what my imagination adds, now to see if I can add a little post-production magic to help anyone else see it too!

When using Dragonframe, you can export either video or stills, but you must remember to conform your take if you want to discard any re-shot or deleted frames, as when you bring an image sequence into AE it can pick up those dud frames.

Also make sure your frame rate is correct. Again, if you lengthen or hold frames on the X-sheet, you will need to conform your take for those changes to take effect, so that your image sequence reflects your timed animation from Dragonframe.
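Before heading into AE, it’s worth sanity-checking the exported folder for gaps. This is just a little sketch of my own (the folder layout and filenames are assumptions, not anything Dragonframe-specific) that spots missing frame numbers and tells you the duration at your chosen frame rate:

```python
import re
from pathlib import Path

def check_sequence(folder, fps=25):
    """Report missing frame numbers in a numbered TIFF sequence,
    plus the duration of the frames that are present."""
    frames = sorted(
        int(m.group(1))
        for f in Path(folder).glob("*.tif*")
        if (m := re.search(r"(\d+)\.tiff?$", f.name.lower()))
    )
    if not frames:
        return [], 0.0
    # Any numbers absent between the first and last frame are gaps
    missing = sorted(set(range(frames[0], frames[-1] + 1)) - set(frames))
    duration = len(frames) / fps  # seconds at the chosen frame rate
    return missing, duration
```

If it reports gaps, that’s your cue to go back and conform the take before importing.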

Leaving the animation studio behind I headed up to the Media Lab to get started in After Effects.

Once you’ve set up a regular 1080p workspace and composition, bringing in an image sequence is really simple: click on your first image and After Effects will pick up all of the TIFFs in that folder, in sequence, and ‘pre-comp’ them together as a single piece of media. For animation from Dragonframe, that’s exactly what you want.

Then drag this TIFF sequence down onto your pre-set composition timeline and resize it to fit – this is why you should always set up the comp first, not just plonk your content onto the timeline, as the comp will take its size from the media and who knows what size it might end up, which then leads to rendering/processing problems.

I like to use a garbage matte before applying the keylight effect, as it cuts down how much green the effect is trying to process, and with my small setup I knew the corners were going to need taking out. So, although it’s a laborious process I step through all of the frames, altering the mask slightly to allow for model movement. It is lovely when you don’t have to move it for a few frames!

Then I could move on to adding the Keylight 1.2 effect… it does a fab job, and this is where you can really see any shortfall in your green screen technique – and there were some very particular areas in this test! The best tips I would give are clipping the black and white points (in the settings area of the effect) and using the alpha preview to see exactly what is black and white. I had a bit of spill on both the box and the white clothing which I couldn’t seem to sort out at first, leaving parts of my character slightly see-through, but a bit more subtle tweaking of the blacks and whites in the advanced settings got it beautifully crisp.

Now to put a simple background in to see how it was all doing.

Et voila – it’s okay, and it’s nice to see it in a situation away from green, or black. It was a really good exercise to go through before the next first-year project: Dragonframe 4 is still as easy to use, and After Effects has many different and powerful ways of keying.

To add to the ways stated above, you could also: clone stamp in AE to remove the pins, which I did do a bit, but it makes a crazy number of layers; add some 3D lighting to perk up the character; colour correct the background and animation to make them feel more cohesive; track eyes/features onto the characters in 3D space; and use layers more cleverly to give a sense of perspective.

However, what I wanted to do was give my little blue-box puppy a bit of life. I didn’t want perfect – my normal style would be something in Illustrator with beautiful clarity of line – I was after a more Mr Messy feel, and although I don’t use it a lot, I knew that TV Paint would give me a really nice organic, free feel.

So I rendered a low-res copy out of AE and used this as a background layer in TV Paint, then got my Wacom tablet out and let my imagination go a little wild!

I can practically hear the excited slobbering doggy noises, so at some point, I will return to this project and add some sound…

As part of my Eaton Park project I want to include video sections and time-lapse elements too, so to this end I need to record some video – well, there’s a shock!

Up until now I have been really happy with my lovely Nikon D750; it is a wonderful full-frame camera and has taken some beautiful photographs for me.

But I found it lacking when I took some video, and having used the 5D series for work, and knowing peers who use it for film-making, it seemed like the obvious choice. So I took in all of my Nikon gear and traded up (@Wex).

I’ve only had the camera for two weeks, but it doesn’t disappoint in the photography stakes, and I have had some beautiful results on the video side.

I have read and read about the problems with the file sizes, the lack of slow-motion frame rates at 4K, etc., but for me, the way I work, the visual results are king, and the 4K quality is incredible. The extra reach from the crop has only benefitted me, but it’s a hell of a crop, so I can see why people are complaining.

Below are screengrabs so you can quickly see the visual differences when working on what I consider a ‘standard’ timeline in Premiere, 1920×1080, 25fps.

All I did was drop the different files onto the timeline, keeping the sequence settings.

You can see the incredible crop factor (1.74) working here
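That 1.74 figure is easy to play with as plain arithmetic. A quick sketch of my own (the 1.74 crop is from the grabs above; treating the 4K clip as roughly twice the 1920 timeline width is my assumption for the scale maths):

```python
def effective_focal_length(focal_mm, crop_factor=1.74):
    """Full-frame-equivalent focal length once the 4K crop kicks in."""
    return focal_mm * crop_factor

def timeline_scale_percent(clip_width, timeline_width=1920):
    """Scale percentage needed to fit a clip's width onto an HD timeline."""
    return 100 * timeline_width / clip_width
```

So a 24mm lens in 4K mode frames more like a ~42mm, and a 4096px-wide clip needs scaling to roughly 47% to sit full-frame on a 1920×1080 timeline – which is exactly the ‘extra reach’ you can see in the grabs.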

Here is the video. Note that I left the focus on auto to see how quickly it could cope, especially as the rose was moving in the wind; I also purposely moved in and out to push it a little further. When running at 100fps I needed to prompt it to focus, but that’s okay if you’re keeping an eye on it, or if you’re focusing manually. Any change of focus was incredibly noisy though – on the video you can clearly hear the mechanics working. Another difference at 100fps is that no sound is recorded.

And of course it has all been output to a high quality mp4 to add to the mix.

Lastly, for me, the Canon is a little heavier, adjusting to the different buttons isn’t too big a deal, and I am loving the 4K – no problems so far aside from the 4K file sizes being on the large side. I expected all sorts of issues after reading quite a few reviews, but haven’t come across any yet.

You might also ask why drop 4K onto an HD timeline, or why bother filming in 4K if you’re only going to downsample it. These are just tests – I don’t even have a 4K TV to watch it on (I’m not even sure I know anyone who does) – so I’m just playing with the format and testing everything until I get a happy workflow. For some of the work I do it will give me extra visual creativity, plus it’s just beautiful!

I have used both Nikon and Canon in the past – my own cameras being Nikon, work cameras tending to be Canon – and for me it’s a little bit like the Mac vs PC argument, where I choose the software over the operating system (if it has Photoshop on it, I’m happy). With cameras, though, I find it’s the glass that makes it.

All of the above are Nikon images and they are beautiful…

Below are just a couple from my brand-new Canon upgrade… my question is this: would you know which was which if I didn’t tell you?

As mentioned in an earlier post, I am currently taking daily photographs of Eaton Park.

I ride through every day, and one time I just stopped to take a breath and realised what a lovely place it is. I know that sounds silly in a way, but I was quite stressed out at the time and decided that instead of rushing head-down to work, I should actually open my eyes and take a look around – at how lucky I was to get to go through this beautiful park.

So much goes on here, the park run that half of my family do, the community events, the friendly spaces, the visually stunning pond that prompted me to start taking photos for me again, allowing me to pause for a moment at the start of a hectic day.

There and then I decided to undertake a year-long commitment to taking a photo in exactly the same spot every day. The idea of a year-long timelapse project wasn’t daunting, but all sorts of problems popped up.

How can I make sure it’s the same place, the same angle? What about all the different exposures, and the days I miss? What’s my focus, what will change, will it be visually interesting enough…

But I started taking a photo of the pond: it usually has a glorious reflection in it, the pretty little building at the end of the boating pond can be seen in the distance, the angles are pleasing, and the trees lining it will give us seasonal interest.

November 10th 2016, the first official Eaton Park Project image.

I tried to use the columns as markers and keep the building in the distance central. Five months later, my most recent image is below.

I have also very recently added another view to my daily capture. I wanted to add another aspect of the park, and I remembered how spectacular the trees in the centre ring looked when they came into full blossom, so in readiness for that, I’ve added this view.

A surprising thing to come out of this project is that I’m now noticing more and more of the council workers looking after the park, day after day, whatever the weather. I’d never realised how much they put into making it such a lovely place to visit.

I’m starting to think about the extra shots I’ll need and what will bring the final film together, so I am now researching the park and looking at the community events that take place there.

My Kodak Ektra arrived on Friday, and I have been trying to get to know this camera-first, phone-second device as much as possible.

Kodak Ektra top buttons, including strap attachment

Ektra leather case and packing

Ektra – fresh out of the box

Ektra’s on-screen keyboard

Ektra’s proprietary charging lead…

Apart from the proprietary lead (why, why, why), it’s a fairly nice device – it’s lighter than it looks, and much lighter than my Sony Z5.

It has a very thick and chunky form factor, and whilst the divine luxury case adds to the bulk, it really focuses on the camera aspect. But the case is also part of the drawback, because as soon as you put it on, the Ektra becomes unwieldy as a phone…

Taking a phone call with this case becomes a farce – see above. I’m not sure what to do: talk through the leather, speak into the case, undo the other clip and let the case dangle… I really don’t know. My best fix – use Bluetooth headphones!

In Use

In use it’s really quick and snappy, and opening apps is fast. I currently have a Sony Z5, which is no slouch in the data stakes, but this seems slightly faster.

I miss having fingerprint recognition, and a little waterproofing – which is becoming standard these days – would have been reassuring, but it’s not on the Kodak (?)

One big bug with the case though – it covers up the charging port. This is really, really annoying as the case is a snug fit: you really need to force the phone out, and quite a few times I’ve wondered if my phone is just going to fly across the room as I exert so much pressure to extract it from the confines of its cosy case… and the more I have to pull it in and out of the case, the less safe my phone will be in it…

Using the Camera

The big deal on this phone is the camera… With a sharp double click on the beautifully styled ‘K’ button on the top side, the camera app pops open from any situation.

I like the on-screen dial and the amount of control you can have in Manual, but for this first-look review I have barely touched the Manual control, and have been playing with all of the other Auto options.

I like what I see, but I have reservations about the lack of feedback when I press either the on-screen virtual button or the dedicated shutter button – did the picture take, can I move? Only when the preview popped up did I feel secure that the photo had actually saved. On my Z5, as soon as you click, a wheel appears to show it’s saving; when that’s gone, you know it’s saved.

Maybe this is something I will get used to…

Also, why can I only take 21MP images in the 4:3 aspect ratio? If I want to take a 16:9 shot it drops to only 16MP… (obviously cutting into the image).
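Having grumbled about it, the 21MP-to-16MP drop is actually just geometry: a 16:9 crop of a 4:3 frame keeps the full width but trims the height. A quick sketch to check the numbers (my own arithmetic, not anything from Kodak’s spec sheet):

```python
def cropped_megapixels(full_mp, full_aspect=(4, 3), crop_aspect=(16, 9)):
    """Megapixels left after cropping to a wider aspect ratio,
    assuming the crop keeps the full sensor width and trims the height."""
    fw, fh = full_aspect
    cw, ch = crop_aspect
    # Height shrinks from width*(fh/fw) to width*(ch/cw),
    # so pixel count scales by the ratio of the two
    return full_mp * (ch / cw) / (fh / fw)
```

Running it on 21MP gives 15.75MP, which rounds to the 16MP the camera reports – so at least the drop adds up, even if I’d still rather have the full resolution at 16:9.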

Comparison Images from flickr

All images are taken with Auto settings.

Ektra Lake view (below)

Z5 Lake View (below)

Kodak Ektra – Tree (below)

Z5 Comparison tree (below)

Kodak Ektra Homepride man (below)

Z5 Homepride Man (below)

Kodak Ektra – Wreath (below)

Z5 Wreath (below)

Kodak Ektra Lake Landscape taken in 16:9 ratio (below)

Kodak Ektra Lake Landscape 4:3 (below)

Z5 Landscape (below)

In Conclusion

I do like the images that the Ektra is producing, and the pin sharpness just edges it in some of the demo images above, so it is better than my Z5 in that respect. But it’s thicker and bulkier, and the case spoils the phone aspect… hmmm…

I miss the fingerprint recognition, but is it forgivable for better images…

I have had this phone for just three days, but I will be going more in-depth over the coming days and moving to Manual mode, whilst trying to get along with the awkward case and the charging problems it brings…

I have been following the development of this new camera phone from Kodak, and today I got my pre-order notification.

I already have a fantastic camera on my phone (it’s a Sony Z5) but really wanted to check out this innovative and rather cool-looking Kodak offering.

Pre-ordering gives me the free case, and I chose the natty tan version. It should arrive by mid-December, and I will be giving it a thorough workout, as I have started a project cataloguing the seasons in my local park, which I just happen to pass through every morning.

So here I am, in my new position at UEA – Media Learning Technologist – and I’m in charge of the new media provision over in the Music Building, currently only partially finished but with a lot of changes happening over the summer.

This is what the spaces are now, and I’ve got a lot of cataloguing and recycling to get on with before it’s completely remodelled – but it’s a hacker’s heaven with the older tech that we have accumulated…

The Strode room, which will not be changing, although the balcony will be filled in.