Philip Hodgetts’ unique blend of business and production knowledge gives him insight into the current state of the industry, and a remarkably accurate look forward. Here he shares his thinking, and points to articles of interest from other sites, with context as to why they're interesting.

At Lumberjack System, we frequently get pushback that it's "too hard" to log during the shoot. If you're manipulating a camera (reframing, etc.) then sure, you can't log. And if you're holding a boom, ditto. But if you're monitoring audio during recording, or running the interview, then it is totally possible to log during the shoot with no added stress.

I do it all the time. For my family history project I set up cameras, mics and audio recorders, and run the interview and log. Lunch with Philip and Greg is much the same, with the added complication of eating.

The best approach is to work in back-time mode and relax. Back-time eliminates the stress of anticipating when an answer starts because you're logging "in the past."

I typically work with the five-second back-time on by default. This means I can be fully engaged with the subject while asking the question, and continue to be engaged with them as they start their answer. I can glance down after a few seconds (fewer than five!) and tap on the keyword start.

This takes the stress and tension out of having to “get it right on the moment.”

Back-time also allows us to add a new keyword and log it from up to 90 seconds in the past. The keyword range end is always the current time.
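The back-time behavior described above can be sketched in a few lines of code. This is a minimal illustration, not Lumberjack's actual implementation; the function name, the five-second default, and the clamping to 90 seconds are taken from the description above, everything else is assumption.

```python
from dataclasses import dataclass

BACK_TIME_SECONDS = 5.0       # default back-time offset described in the post
MAX_BACK_TIME_SECONDS = 90.0  # new keywords can reach up to 90 s into the past


@dataclass
class KeywordRange:
    keyword: str
    start: float  # seconds since recording started
    end: float


def log_keyword(keyword: str, now: float,
                back_time: float = BACK_TIME_SECONDS) -> KeywordRange:
    """Create a keyword range that starts `back_time` seconds in the past.

    The range end is always the current time; the start is clamped so it
    never precedes the start of the recording or exceeds the 90 s maximum.
    """
    back_time = min(back_time, MAX_BACK_TIME_SECONDS)
    start = max(0.0, now - back_time)
    return KeywordRange(keyword=keyword, start=start, end=now)


# Tap a keyword 12 s into the interview: the range covers the last 5 s.
print(log_keyword("family history", now=12.0))
```

The point of the clamp is the relaxed workflow: you can tap late, or even realize a topic started a minute ago, and the logged range still lands where the answer actually began.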

In a comment on my "career disasters" post (on Facebook) it was pointed out that ageism is playing a big part in why so many highly skilled people have difficulty regaining employment. It's absolutely not fair, but it's also incredibly short sighted.

In not retaining highly experienced people in our industry – letting them “age out” so to speak – we lose their combined knowledge and insight. Yes, new people coming into the industry might be able to learn the day-to-day tasks they need, but with years of experience comes an insight that is hard to describe.

When you’ve been through multiple changes of technology; hundreds of hours of troubleshooting and bug reporting; and years of experience dealing with all kinds of people you bring an insight that the new employee simply will not have.

You will be quicker at finding – and fixing – problems that arise. You’ll be MUCH BETTER at making sure those problems do not arise, simply because you’ve been there and got the T shirt. You’ve learnt from your own and other people’s mistakes and know how to avoid them.

And obsolete knowledge can still be insightful. For example, every technique I used tweaking animations on my old Amiga so they'd fit in memory was useless just five years later, when memory became abundant. But when it came to making low bandwidth animation for the early Internet, I had a bunch of techniques at hand to work with.

The only reason to “let” (i.e. force) people to “age out” is if they have failed to keep current with technology and technique. I like to think those people are rare.

I have zero idea how to solve the short sightedness of employers who won't even grant an interview because of birth year. I've had exactly one employer that wasn't a company I controlled, across my entire career. Even that – Head Technician in a touring theater venue – was without direct supervision. I have gone my entire career without adult supervision, so it's perhaps a wise employer that would steer clear!

That was the first job I applied for against a competitive field. I have not applied for a second job, so I have zero advice to offer.

I am a big believer in the need for failure in innovation. If I’d been more successful at some of my earlier career directions, I certainly wouldn’t have needed to push forward.

I wish that was a rhetorical question and I was about to propose an answer. Sadly I’m not. At best we have an illusion of permanence, but our business lives can change in an instant. Usually without us being involved in the decision!

There are the obvious examples. The other cast and crew on Roseanne had their livelihood jerked out from under them, through no fault of their own.

The production crew on Parts Unknown who face a very uncertain future, as do many at Zero Point Zero Productions.

One acquaintance lost business and home in quick succession and has left LA. Another had a decent, well paying job at a major studio until downsizing eliminated the position. An unfortunate bout of ill health without the cover of employer insurance, and within 2 years he was effectively homeless. Another laid off from another studio job is finding a home for their many talents and abilities.

Combine this with other research that allows us to literally "put words in people's mouths": type the words and have them created in the voice of a person who never said them. Completely synthesized, and indistinguishable from the person saying it.

Add transferred facial movements to created words in that person's voice, and it will take a forensic operation to determine whether the results are "genuine" or created.

This is the first time I've taken a deep look at a TV show and worked out what I think would be the perfect metadata workflow from shoot to edit bay. I chose to look at Pie Town's House Hunters franchise because it is so built on an (obviously winning) formula, and I thought that might make it easier for automation or Artificial Intelligence approaches.

But first a disclaimer. I am in no way associated with Pie Town Productions. I know for certain they are not a Lumberjack System customer and am also pretty sure they – like the rest of Hollywood – build their post on Avid Media Composer (and apparently Media Central as well). This is purely a thought exercise built around a readily available example and our Lumberjack System’s capabilities.

In some way I guess this is another example of Artificial Intelligence (by which we mean Machine Learning) taking work away from skilled technicians, but human recall has been replaced with facial identification at the recent Royal Wedding in the UK, where Amazon's facial recognition technology was used to identify guests arriving at the wedding.

Users of Sky News’ livestream were able to use a “Who’s Who Live” function:

As guests arrived at St. George’s Chapel at Windsor Castle, the function identified royals and other notable guests through on-screen captions, interesting information about each celebrity and how they are connected to Prince Harry and Meghan Markle.

The function was made possible by Amazon Rekognition, a cloud-based technology that uses AI to recognize and analyze faces, as well as objects, scenes and activities in images and video. And Sky News isn’t the first to use it: C-SPAN utilizes Rekognition to tag people speaking on camera.

Rekognition is also being used by law enforcement.

Facial recognition and identification would obviously be useful for logging in reality and documentary production.
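To make the logging connection concrete, here is a small sketch of how a Rekognition-style celebrity-recognition response could be turned into caption or log entries. The response shape follows Amazon Rekognition's `RecognizeCelebrities` API; the confidence threshold, helper name, and sample data are illustrative assumptions, not anything Sky News or C-SPAN has published.

```python
def captions_from_response(response: dict, min_confidence: float = 90.0) -> list:
    """Extract names of confidently recognized faces for captions or log keywords."""
    captions = []
    for face in response.get("CelebrityFaces", []):
        if face.get("MatchConfidence", 0.0) >= min_confidence:
            captions.append(face["Name"])
    return captions


# In production the response would come from the AWS SDK, e.g.:
#   client = boto3.client("rekognition")
#   response = client.recognize_celebrities(Image={"Bytes": frame_bytes})
# Here we use a hand-built sample in the same shape.
sample = {
    "CelebrityFaces": [
        {"Name": "Prince Harry", "MatchConfidence": 99.1},
        {"Name": "Unlikely Match", "MatchConfidence": 42.0},
    ]
}

print(captions_from_response(sample))
```

For logging in reality or documentary production, each returned name could become a keyword range against the frame's timecode, replacing the manual "who is on screen" pass.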

In a new Terence and Philip Show we start with the question “Should Apple be present at Trade Shows like NAB?” and then extend discussion to question whether there is still a role for big trade shows like NAB and IBC.