Archive for the ‘Adobe’ Category

Transforms progressive video (e.g. HDp25 frames/sec) into spatio-temporal interlaced video (e.g. SDi50 fields/sec). It achieves this by estimating the fields that would have been shot (had the original video itself been shot as interlaced) between each frame of the progressive video, via a process of motion estimation.

Most NLEs do not use this “perfectionist” method; at best they simply combine (ghost-blur) successive frames, with no compensation for time/motion.

On an interlaced display, such as an old analog TV or projector, the “NLE-simple” approach may lead to dynamic (changing, e.g. moving) scenes and objects appearing flickery.

The “perfectionist” approach will instead typically avoid such flicker.
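To illustrate the timing involved (a minimal Python sketch, not FieldsKit’s actual algorithm): converting 25 progressive frames/sec to 50 interlaced fields/sec means half the output fields coincide with source frames, while the other half fall between frames and must be motion-estimated.

```python
# Minimal sketch (not FieldsKit's algorithm): field timestamps when turning
# 25 progressive frames/sec into 50 interlaced fields/sec. Odd-index fields
# fall between source frames, so a "perfectionist" reinterlacer must
# motion-estimate them; an "NLE-simple" one just blends adjacent frames.

def field_times(n_frames, fps=25.0, lower_first=True):
    """Return (time_seconds, field_parity, needs_estimation) per output field."""
    fields = []
    for i in range(2 * n_frames):
        t = i / (2 * fps)
        parity = "lower" if (i % 2 == 0) == lower_first else "upper"
        fields.append((t, parity, i % 2 == 1))  # odd fields lie between frames
    return fields

for t, parity, estimated in field_times(2):
    print(f"{t:.3f}s {parity:5s} motion-estimated={estimated}")
```

The `lower_first=True` default corresponds to the Field Order setting of [Lower First] in the configuration below.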

Configuration of FieldsKit ReInterlacer:

Field Order: [Lower First]

Output Type: [= Create motion estimated fields]

This is not the default (oddly). But it is the only proper way to get the expected “perfectionist” reinterlacing to happen!

Windows 7’s “My Documents” library folder is by default mapped to the system drive, e.g. as [C:\Users\<username>\Documents]. However, it is also possible to map it elsewhere, e.g. to another volume. A broadly equivalent option exists in Mac OS. One might use this, for example, to move the Documents library/folder to a thumb/flash drive when using several computers (one at a time), or to put it on a non-system drive, e.g. to free up space on the system drive or to exclude it from system backups (thus saving both space and time), or to put it on something like a server, possibly in “The Cloud”.

I found the following explanation by accident, while attempting to find a way to prevent Adobe Media Encoder (AME) from storing its own “preview files” (sic), which are huge, in a sub-folder of “My Documents”, which itself on typical Windows systems is to be found on the System Drive. It seems that AME has no Preferences setting to store these preview files elsewhere, so a workaround is needed, e.g. to move the “My Documents” library folder itself to another volume.

Click the Start button, and then click your user name.

Right-click the folder that you want to redirect, and then click Properties.

Click the Location tab, and then click Move.

Browse to the location where you want to redirect this folder. You can select another location on this computer, another drive attached to this computer, or another computer on the network. To find a network location, type two backslashes (\\) into the address bar followed by the name of the location where you want to redirect the folder (for example, \\mylaptop), and then press Enter.

Click the folder where you want to store the files, click Select Folder, and then click OK.

In the dialog that appears, click Yes to move all the files to the new location.
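The steps above amount to “move the folder’s contents, then make the old location point at the new one”. Here is a hedged cross-platform sketch of that idea in Python; note that Windows’ Location tab actually updates the shell-folder registry mapping rather than using a link, so this is an illustration of the concept, not the real mechanism.

```python
# Illustration only: mimic folder redirection by moving the folder and
# leaving a symlink at the old path. (Windows' Location tab instead updates
# the shell-folder registry mapping; this sketch is not that mechanism.)
import os
import shutil
import tempfile

def redirect_folder(old_path, new_path):
    os.makedirs(os.path.dirname(new_path), exist_ok=True)
    shutil.move(old_path, new_path)   # "move all the files to the new location"
    os.symlink(new_path, old_path)    # old path still resolves, via the new volume

# demo on a throwaway directory standing in for the Documents folder
base = tempfile.mkdtemp()
docs = os.path.join(base, "Documents")
os.makedirs(docs)
with open(os.path.join(docs, "report.txt"), "w") as f:
    f.write("contents")
redirect_folder(docs, os.path.join(base, "OtherVolume", "Documents"))
```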

To restore a folder to its original location

Click the Start button, and then click your user name.

Right-click the folder that you previously redirected and want to restore to its original location, and then click Properties.

Click the Location tab, click Restore Default, and then click OK.

Click Yes to recreate the original folder, and then click Yes again to move all the files back to the original folder.

Mac OS (Mavericks & previous):

(Ignore the initial links, which are merely about changing names, e.g. when migrating a laptop from one person to another)

John Galt, 25-Oct-2013

The procedure was unchanged in Mavericks from previous OS X versions.

What I did was create a new User in System Preferences, after which I logged out and logged in to that new User.

I performed basic configuration, created some documents, etc.

After that I logged out, logged in under my usual account, and dragged that User’s folder to another volume.

Then, I used Users & Groups “Advanced Options” to point to the new Home folder’s location.

After that, I restarted the Mac using OS X Recovery to reset that user’s Home Folder Permissions and ACLs since permissions problems with the copied Home folder would otherwise result.

After quitting OS X Recovery I was able to log in to the User account established on the USB flash drive, and was able to use it more or less the same way without any surprises. Safari, iTunes, iPhoto all worked, no problems.

The original User account (home folder) remained on the boot volume, so I dragged it to the Trash. I verified that I could still log in to the account on the flash drive, confirming the one created on the boot volume was no longer required.

Attempting to log in to the account with the flash drive disconnected resulted in an expected error (below) and obviously you wouldn’t want to do that while using the account.

Reconnecting the flash drive restored the ability to log in as expected.

Today I received information from Adobe’s CC (Creative Cloud) control-panel that there was an update for Premiere (among other apps). Accompanying information states that it fixes some issues with audio and sub (nested) sequences. The latter is most reassuring. On the other hand I am mid-project(s) and don’t want to impede my current projects.

A good solution would be to do a system backup prior to updating Premiere. Could do that at end of the day, so as not to impede project.

On the other hand, it seems that Adobe supports (kind-of) a way to roll back to a previous version:

H264 supports chapter markers (in some form) in principle, but Adobe Premiere is unable to utilise this (at least as of 2012, and I can’t see a way of doing it in February 2014).

If the H264 is encoded into a QuickTime [.mov] wrapper/file (as opposed to a [.mp4] one), and that [.mov] file is played in a QuickTime player, then those chapter markers will appear in (the bottom-right corner) of that player.

Given a simple 3-minute dramatic scene with footage from BMCC (as DNxHD 185 of HD 1920×1080 at 25fps) and a Windows-7 system:

From Adobe Premiere CC (latest version) I exported AAF. Then in Avid I imported that AAF. Result: a Bin was created, containing what appeared to be (from a brief glance) all relevant Media and Sequence objects (now in Avid’s representation), but the Media objects were offline/unlinked and various “cryptic” popup error messages appeared from Avid.

I had naively assumed that the Media objects would have been AMA-linked to the source footage, which by the way included DNxHD recorded by BlackMagic Cinema Camera. However, not only were they not linked, but Avid’s Relink function failed to recognize them.

I had previously succeeded in exporting AAF from Avid to Adobe.

A forum post says Adobe can read Avid but not vice-versa – confirming my (limited) experience. One can only guess at which company is at fault here, but one poster blames Adobe. Regardless, I wasn’t impressed by Avid’s programmer-level “cryptic” error messages.

I tried Bin:[Select Clip > RightClick] but the [Relink to AMA File(s)] option was greyed-out. So I tried the next-best (RightClick) option, namely [Import]. The Import process took significant time, because (as I later confirmed) it was doing a transcode (to DNxHD 120) rather than a re-wrap. Surprising, given the footage was already DNxHD in the right format and at better quality… And this import didn’t replace the right-clicked clip, it just added the import to the bin as an additional clip.

An existing project, just a 3-minute multi-angle (single camera) dramatic scene, used to export without problems, but following just the addition of some audio clips (as “patches” on additional tracks), export stalled on “Reading XMP”.

Previously in this project, when it still exported ok, there was an audio glitch which only happened when a crossfade transition was applied to the beginning of an isolated audio clip (to make it fade-in). In this case the clip was for a short sound effect. The glitch sounded like a woodpecker. Removing the transition removed the “woodpecker”. The reason I attempted that was that I had encountered transition-triggered audio issues in the past (on other projects, Adobe versions and machines). It seems that Premiere gets confused/over-complicated over audio especially in the context of nested sequences. That is a real pain, because nested sequences are really useful and I structure most of my projects that way.

Adobe Premiere seems to have some vulnerabilities with respect to audio and/or nested sequences, and these vulnerabilities seem to have been around for years. Others have encountered similar or related issues, as listed below:

I took an Avid Media Composer (7.0.2) Sequence built from AMA-linked XDCAM-EX footage and transferred that Sequence via AAF to Adobe Premiere (CC 7.2.1).

It worked, even for my AMA-linked footage (Sony XDCAM EX / BPAV) – though it wasn’t as straightforward as I expected – due to “a known issue with AAF in Premiere Pro CC (7.2.1)”. It did succeed with Premiere CS6 (6.0.5), though even then some clunky wrangling was found necessary. Thereafter I opened an existing Premiere CC project and Imported the CS6 sequence successfully. Again I had to double-check the Sequence (this time in Premiere) matched the footage (clips).

Suppose you have timecoded footage etc. from an intermittent shoot of a long event. Perhaps there were also multiple cameras, but for whatever reason (e.g. huge outdoor site) there is no common audio with which to synchronize them. Wouldn’t it be nice if the NLE (or whatever) could auto-populate a Sequence with clips placed appropriately in (timecode-) time on it?

As noted in an earlier post, Adobe Premiere can’t do this, but Avid and Edius can. I already use Avid, so that will be my auto-arranging tool of choice.

In Avid (Media Composer 7.0.2):

Set Project Settings for media type as per source footage

Unlike Premiere, Avid doesn’t have such Sequence-specific settings.

Import the footage

I found it ok to use AMA – no need to Ingest to MXF etc.

And yes, at the end of all this, it transferred (by AAF) from Avid to Premiere ok.

Menu:[Windows > Workspaces > Source/Record Editing]

To reinstate the Timeline – after it closed when I deleted the bad seq

Bin:

Sort the clips into order by Timecode

Shouldn’t matter in principle but it did appear to in practice…

Select all required clips

Do [Bin > AutoSequence]

A new sequence gets created, with the clips placed in time.

The sequence gets auto-named as per the last clip in the selection.

The sequence’s starting-timecode is auto-set to that of the earliest clip in timecode-time (among the selection)
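The behaviour just described can be sketched in Python (a minimal illustration of what AutoSequence appears to do, not Avid’s code; 25 fps timecode assumed):

```python
# Sketch (not Avid's code) of AutoSequence behaviour: clips are placed on a
# new sequence at offsets given by their start timecodes, and the sequence's
# start timecode is set to the earliest clip's. Assumes 25 fps timecode.

def tc_to_frames(tc, fps=25):
    """Parse 'HH:MM:SS:FF' into an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def auto_sequence(clips, fps=25):
    """clips: [(name, start_timecode), ...] -> (seq_start_frames, placements)."""
    starts = [(name, tc_to_frames(tc, fps)) for name, tc in clips]
    seq_start = min(frames for _name, frames in starts)
    placements = [(name, frames - seq_start) for name, frames in starts]
    return seq_start, placements

clips = [("clip2", "10:00:30:00"), ("clip1", "10:00:00:00"), ("clip3", "10:01:00:00")]
seq_start, placements = auto_sequence(clips)
```

So clip1 lands at frame 0 of the sequence, clip2 follows 750 frames (30 seconds) later, and the sequence’s start timecode is clip1’s.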

Tip:

Timeline Zoom in/out = Ctrl-] / Ctrl-[ respectively.

I will post separately on how to Export from Avid and Import to Premiere via AAF (Advanced Authoring Format).

Adobe Premiere has a speech-to-text translator, as part of its content-analysis capability. At best it is 80% or so correct in its interpretations, though in my experience only 20-30% reliable. To optimize its chances, one must select the (spoken) language appropriate to the media (content) being analyzed. By default, though, only one language, US-English, is available. So how do you get further options?

Summary:

By default, the only language model (sic) installed is that for US-English.

Optionally, one can download (free) Installers for other language modules.

In principle, it is possible to auto-arrange multiple clips on a timeline according to their timecode, e.g. from a camera that was recording time-of-day timecode automatically.

For example, if so-arranged, a timeline might look something like this:

[clip1] [ clip2 ] [clip3] [ clip4 ] [clip5] [clip6]

I haven’t used FCP7 all that much, but I have a faint recollection that it did this somehow – though some other people say not. Regardless, Avid does it, and also Edius reportedly does it, and these could be used as preprocessors in advance of Premiere, just to align the clips in tracks and time.

But (as far as I can tell) Premiere can’t do this, and there are currently no add-ons for it to accomplish this conceptually simple task. Not even PluralEyes 3, which can only sync based on audio – impractical in some situations, e.g. a large-scale industrial site with different sounds in every corner… People do it manually, e.g. by typing timecode into the timeline and adding markers, then placing each clip at its associated marker…

Whaaaat!!!

The nearest one can get, apparently, is to “pre-process” in an NLE that can arrange-by-timecode, such as Avid or Edius, then export an AAF for import to Premiere. Edius also (reportedly) auto assigns each camera to its own track(s).

Edius price:

In the UK, I see for example that DVC have a crossgrade offer for (just under) £240 or (just under) £450 for standard purchase.

If it works as expected, then the crossgrade would be worthwhile (in terms of time saved) even if only ever used as a preprocessor…

As I previously blogged, Premiere CC’s [Undo] does not undo media-replacement in Project pane. This was discussed on an Adobe Premiere forum thread. As part of that discussion, the “can’t please everyone” principle was apparent: one view was in favor of that Undo behaviour, another was against it.

Maybe-ideally, both viewpoints would be satisfied if, say, the History window would have a column of checkboxes for “Locked”, meaning all changes are recorded but [Undo] will skip over those having a check-mark (when they could also be greyed-out). The “Media Replace” action could have a default of “Locked”, so it behaves as at present, for those people who like it that way.
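A minimal sketch of that hypothetical lockable-history idea (nothing like this exists in Premiere; the names here are invented purely for illustration):

```python
# Hypothetical "lockable history" (no such Premiere feature exists):
# Undo removes the most recent unlocked entry, skipping over locked ones.

class History:
    def __init__(self):
        self.entries = []  # each entry: [action_name, locked_flag]

    def record(self, action, locked=False):
        self.entries.append([action, locked])

    def undo(self):
        """Undo (remove) the most recent unlocked action; return its name."""
        for i in range(len(self.entries) - 1, -1, -1):
            action, locked = self.entries[i]
            if not locked:
                del self.entries[i]
                return action
        return None  # nothing undoable remains

h = History()
h.record("Apply Effect: Fast Color Corrector")
h.record("Media Replace", locked=True)  # default-locked, as proposed above
h.record("Ripple Trim")
```

Successive calls to `h.undo()` would pop “Ripple Trim” and then the color correction, leaving the locked “Media Replace” untouched.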

I wish such a change-lock feature existed in any case, e.g. if I have made a string of color corrections etc. to various clips on the timeline and then afterwards realise there is some “obscure show-stopping issue most productively solved by undo-ing”. One could lock the simple color correction effects etc. prior to undoing as far back as necessary to fix the issue (such as some media link or interpretation or sync issue). I realise it is possible to achieve this by work-arounds, e.g. save to a Project copy then Import that copy and copy/paste each effect’s attributes across, or one could save Effect Presets and re-apply these after undoing. But such a workaround would be cumbersome if there were a number of different effect tweaks on a variety of clips, and one would have to remember/note which clips these were (or else go through all clips). And then there are non-effect changes, like “ripple trim” cut-timing tweaks.

It would also be helpful if the History window said more specific things than just “Apply Effect” (like which effect), and if the History window automatically came to the fore when applying an Undo. Those things together would reduce the likelihood of unintended undos of any kind.

I discovered by accident that, although one can do ProjectPane:[aClip >RtClk> Replace Footage…], a subsequent Undo will not un-replace (restore previous) footage. I raised this topic at http://forums.adobe.com/message/5778585, and subsequent discussions resulted in a confirmation that indeed this is Premiere’s normal behavior but that there is a reasonable work-around.

So what was the work-around?

My footage happened to be XDCAM-EX, denying me the possibility of simply doing a further [Replace Footage…]. This is because the browser associated with [Replace Footage…] was only a File-Browser, not a Media Browser. Consequently it would list individual component files of the XDCAM-EX folder-structure, but not the single overall high-level sense of “Clip” represented by that structure.

XDCAM-EX footage needs special treatment because it is file-structure based and spanned, broadly like AVCHD. To get such footage properly into Premiere, it is necessary to use the Media Browser, and not simply to drag in the [.mp4] “essence” files within that file-structure. It is ok to drag from Media Browser to Project pane, because that operation recognizes all relevant information in the file-structure, displaying it as a single clip at the highest level, possibly spanning over more than one [.mp4] file. The Media Browser hides such detail from the user.

My next workaround-attempt did work, but was clunky. This was to re-import the original footage via Media Browser, so it appeared in the Project pane, then select it, then go down to each relevant clip on the timeline and in each case do a [Replace with clip] using [From bin], i.e. the original footage in the Project pane. However, while any metadata (e.g. “Log Notes”) on the original item (prior to replacement) got transferred to the replacement footage, that metadata was not “inherited” by the fresh import of the same original footage, so it had to be copied across manually.

Ugh!

The best work-around was explained (by Jim Simon, in a thread on the Adobe Premiere forum) as follows:

In Project pane, do an offline-and-relink, e.g. via [aClip >RightClick> Make Offline] followed by [aClip >RightClick> Link Media…], which does give the option of using Media Browser.

NB: When I initially tried that, the Locate Media Browser (a fresh instance of Media Browser, in a pop-up window) opened in File mode. However, by clicking that browser’s “eye” button, it was possible to select XDCAM-EX mode (among others). This behavior is unlike that of the main Media Browser, which selects the camera-specific mode automatically.

There’s no magic option: each workstation needs a local storage volume with block-level data access (as opposed to simply file-level access), formatted to a file system that is native (doesn’t require translation) to that workstation’s operating system. Migration and collaboration imply file copying/synchronization, which implies read-access to the “foreign” file-system. Mac OS can read NTFS; Windows can only read HFS+ via third-party add-on utilities. Furthermore, for speed and responsiveness appropriate to video editing, the local storage should ideally be RAID or SSD. In either case, it is possible to split the local storage (e.g. via partitioning) into more than one file-system. At least, that worked on the multiple occasions I have taken that approach, and I have not been aware of any issues.

In greater detail:

Consider the challenge of setting up a shared data storage volume (e.g. RAID array or SSD) for video editing, such that either Windows or Mac computers can connect to it, and a video project started on (and saved to) on one of those operating systems (OS) can be continued on the other (and vice versa).

My current solution is to split the drive into separate volumes, one for each OS. For example I have done this on RAIDs of various kinds and on an internal drive for Mac systems bootable to either Mac OS or (via Boot Camp) to Windows. In the case of RAIDs I was advised against this by my system supplier, but got the impression they were just being defensive, not knowing of any definite issues, and to my knowledge I did not experience any issues.

It is not practical to have just one volume (necessarily, in that case, one file-system format), because:

Mac OS on its own is able to read NTFS but cannot write to it.

This is a show-stopper. Some of the major video editing applications (e.g. NLEs), slightly disturbingly, may use (or for some functionality, even depend on) read/write access to source-files and the folders containing them.

I initially, naively, imagined that video editing systems etc. would only ever read source media files, not write to them, or to the folders containing them. However that proved very naive indeed…

In Apple/Mac’s (erstwhile) Final Cut Pro 7 I regularly used their (moving) image stabilization effect, SmoothCam. Its analysis phase was typically slow and heavy – not something one would wish to repeat. The result was a “sidecar” file with a similar forename to the analyzed source file, but a different extension, placed in the same folder as the source file.

I’m not certain, but I got the feeling that it might also alter the source file (or folder) metadata, such as permissions, or make some kind of interpretation-change to media files in the QuickTime ([.mov]) media format.

Certainly, Adobe (on Windows and Mac) could adulterate both files (by appending XMP data – an Adobe media-metadata dialect in XML) and the folders they occurred in (depending on user-configuration), in terms of sidecar-files.

Sony Vegas also generates sidecar-files, e.g. for audio peaks.

File system translation add-ons can add Windows read/write access to HFS+ (ordinarily it cannot even read it) and add Mac OS write access to NTFS (ordinarily it can only read it), but they are not sufficiently transparent/seamless for the big real-time data access required by demanding video editing endeavours.

File system translation add-ons (to operating systems) exist, such as MacDrive to allow Windows to read/write Mac OS volumes, or Tuxera NTFS, Paragon NTFS or Parallels to enable a Mac to read/write NTFS, but these (reportedly, and in part of my experience) only really work well for standard “Office” type applications, not so well for heavy (big and real-time) data applications such as video editing, where they can impede the data throughput. Doh!

Some people have experienced obscure issues of application functionality, beyond data-movement speed issues.

{Also, I am concerned over the (unknown/imagined/potential) risk that the “alien” operating system and/or its translation utility might alter the file system in some way that upsets its appearance to the “home” operating system.}

FAT is universal but is a riskier option:

FAT is un-journaled, hence risks loss not only of individual files but of whole volume (integrity).

In video editing, corruption could be disastrous to a project, not only in terms of possible data-loss or time wasted and project delays on data recovery, but also in terms of “weird” effects during editing, such as poor responsiveness to commands, whose cause the user may not appreciate, or even an increased risk of unacceptable flaws in the final product.

Such devices (e.g. NAS) only permit file-level access. Consequently, client systems can e.g. create or retrieve folders and files, but cannot e.g. format the device or address it in terms of lower-level data structures.

A likely explanation for the “impediment” of a NAS (to data responsiveness and throughput) is that such devices store in a local format (typically they run Linux) that is invisible to the client, then translate to an appropriate protocol for each operating system accessing it. They normally incorporate a bunch of such protocols. As always, translation => overhead.

Other options, such as SAN and iSCSI, instead of providing file-level access to the client systems, offer the lower level of block access. Thus they appear to the client system as would any local storage device, and can be formatted as appropriate to the client system.

One suggestion I saw was to use a Seagate GoFlex drive, which can be used (read/write) with both Mac and Windows. But the supplier’s FAQ (about that drive) indicates that it depends upon a translator utility for the Mac:

If you would like to be able to “shuttle” data back and forth between a Mac and a PC, a special driver needs to be installed onto the Mac that allows it to access a Windows-formatted drive (i.e. NTFS). Time Machine will not work in this case, nor will Memeo Premium software for Mac. However, if you want your GoFlex solution to also work with TimeMachine, the drive will need to be reformatted to HFS+ journaled.

So I guess there is no “magic storage” option, my main work setup will have to remain based on separate volumes for each OS.

When transferring an editing project from one OS to another, the following actions will be necessary:

Ideally, it should be possible to globally disable all effects, or maybe all those effects (in a list of all effects used anywhere in a project) that a user has marked as being “disableable” (e.g. the cpu-heaviest ones, such as Neat Video, which either reduce responsiveness or else, to avoid this, require rendering).

Solutions:

Put all FX on an adjustment layer, that can itself be enabled/disabled.

In a nested sequence situation, I’m getting short audio repeats from a clip element just prior to a cut.

Solution: for the nested sequence, do menu:[Sequence > Render Audio].

That’s just the audio, not the clip/effects etc. It’s an extremely fast process.

Context:

Premiere Pro CC, latest version at time of writing (7.0.1 (105)), under Windows 7 (64-bit).

Structure: I have a sync-sequence (multicam source sequence) consisting of XDCAM-EX (file structure broadly along the lines of AVCHD) and Z1 (plain m2t files). Derived from / dependent on that is a multicam edit sequence, where I cut between camera angles. Then that sequence is itself nested in a master sequence (showing selected extracts of the performance).

For reference purposes: Multicam edit sequence consists (among other things) of a rock band’s “big finish” followed by some applause. I made a cut in the audio part (only) of the nested sequence clip, to enable the audio for the applause to be normalized independently of the band performance. To smooth the join I added a crossfade transition over the cut. Nicer in principle than using volume envelopes.

When I play the original recording or the multicam sync-sequence or the multicam edit-sequence, all looks and sounds fine.

Problem:

When the Master sequence is played back (in preview or an exported/encoded clip) I hear the big finish, then applause starts, then after 2 seconds the “big finish” is heard once again, but at lower audio level.

This effect happens wherever I have used the same cut/normalize/crossfade technique in the (nested) multicam edit sequence. I have also encountered it in previous projects in Premiere CS6.

If I delete the crossfade then the problem disappears… Doesn’t matter what type of audio crossfade is used.

Solution:

Open (in timeline) the Cut-Sequence (where one cuts between various multicam angles etc)

While previewing a complete draft of the video, that had been Exported from Premiere CC, I noticed a repeat, after 2 seconds, of the “big finish” of one of the band’s songs. The repeat is quieter than the “real” (wanted) one.

Investigations:

The problem occurs when editing, but only at the Master_Sequence level. It does not occur at the Multicam_Sequence level.

In the Multicam_Sequence, near the problem part of the audio, is a Crossfade transition. If I delete that Crossfade (leaving the audio transition as a plain Cut) then the problem (at Master_Sequence level) no longer occurs.

The repeated element of audio is not that within the Crossfade transition, it is instead from a (short) clip (resulting from multicam editing) almost immediately preceding the transition.

This is suggestive of a memory issue, such as cache (RAM or file) or buffer (presumably RAM).

It feels to me like this is a bug in Premiere CC, broadly similar to something I once encountered (in a different project) in Premiere CS6.

I often encounter bugs when I go “off-piste” as compared to most people’s editing procedure, presumably due to programmers/testers not having thought similarly “off-piste”.

The only potentially (?) unusual thing I did in the edit of the Multicam Sequence was, at certain places, to cut just the audio track (via the [C-Razor] tool, having selected only the audio part via [Alt-LeftClick]).

The reason I did that was to separate the end of a song from the following applause etc., which was much quieter, to allow Clip:[Audio Gain > Normalize] to be carried out separately on that applause. Then I added [Crossfade > Constant Power] in order to smooth the join to the applause.

Possibly the 6dB limit might be configurable in Preferences (I just saw a setting suggestive of that, but haven’t tried it).

It is very convenient and less “messy” than fiddling about with Envelopes and Track Width etc.

Experiments:

As stated earlier, if I delete that [Crossfade > Constant Power] (leaving the audio transition to be a plain Cut) then the problem (at Master_Sequence level) no longer occurs.

If I replace the crossfade with [Crossfade > Constant Gain] then it makes no difference (the problem remains).

If I delete the multicam sequence element (audio & video) penultimately preceding the transition, i.e. the element containing the “big finish”, leaving a gap (black silence) then when I play the Master Sequence, the gap faithfully appears as expected but then the “repeat” (of the “big finish”) nevertheless happens.

By “penultimately” I mean not the clip that is the left-hand part of the transition, but the clip before that (which is not therefore any part of the transition).

If instead I delete only the audio part and then drag the previous audio (only) part forwards (in time) to fill the gap, then when I play Master Sequence, the “repeat” now comes from the end of what is now a different “previous clip” (the one that was prior to the one I just deleted).

This tells us the repeat comes from whatever clip is penultimate to the Crossfade audio transition, it does not happen only for one clip in particular.

Problem: All material was shot on the Sony FS100 camera and imported into PP with the Media Browser. In one interview the last part of a clip has corrupted audio. At one point on the timeline the audio stops playing, and it sounds like a scratch on a vinyl record – two words repeating themselves to the end of the clip (see screenshot of timeline). The images are as they should be.

Sounds very similar to my problem.

Solution: (Delete) everything within both the /Media Cache and /Media Cache Files folders…

I imported a few camera cards full of AVCAM / AVCHD footage from my HMC-150 and edited for a few days. Then I clicked on one imported clip and found that the audio was wrong. Glitches, skips, out of sync, weird things happening – all nice sounding, but not in the right places. I checked the original MTS files on my HD using VLC player. Sound was fine, everything was in sync.

Solution:

For each imported clip in .mts format, Premiere adds a file with the same name but with .xmp as the extension, in the same folder. Feeling bold, I quit Premiere, then deleted all the .xmp files for that card – though I didn’t empty my trash yet. I re-opened Premiere and double-clicked the affected clip. It was dead silent, as clips often are when first imported to Premiere. It does some meta-data-ing… and then the sound was all back in proper order, problem solved.

The XMP files had been re-produced in that folder, although this time, apparently, without glitches.
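That fix can be sketched as a small script. Hedged assumptions: it takes the sidecar to be named <clip>.xmp alongside <clip>.MTS, per the description above, and only touches such paired files; quit Premiere first, and prefer moving candidates to a trash folder over outright deletion, as the poster effectively did.

```python
# Sketch of the .xmp-sidecar cleanup described above. Assumes Premiere names
# the sidecar <clip>.xmp alongside <clip>.MTS; only such paired sidecars are
# touched. Quit Premiere before running anything like this.
import os
import tempfile

def find_sidecars(card_folder):
    """Return .xmp files that sit next to a same-named .mts/.MTS clip."""
    sidecars = []
    for root, _dirs, files in os.walk(card_folder):
        lower_names = {f.lower() for f in files}
        for f in files:
            stem, ext = os.path.splitext(f)
            if ext.lower() == ".xmp" and (stem.lower() + ".mts") in lower_names:
                sidecars.append(os.path.join(root, f))
    return sidecars

# demo on a throwaway folder standing in for a camera card
card = tempfile.mkdtemp()
for name in ("00001.MTS", "00001.xmp", "notes.xmp"):
    open(os.path.join(card, name), "w").close()
for path in find_sidecars(card):
    os.remove(path)  # the poster kept them in the trash instead - safer
```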

{The poster of this solution appeared slightly concerned, at least initially, about the addition of [.xmp] (sidecar) files into the file-structure, as indeed I had reported e.g. at http://blog.davidesp.com/archives/901, but (like me) didn’t do anything about it, just bore the fact in mind}

{Doubts:

In my case, the file itself plays fine in Premiere, it’s only when nested that the problem arises, hence I doubt the same solution would fix my problem
}

I highlighted in http://blog.davidesp.com/archives/598 (10 months ago) that Adobe Premiere etc. can adulterate media files, in terms of metadata and/or sidecar-files (depending on user-configurations of these applications). I indicated that, regardless of the reasonableness of at least some of these actions, this could potentially cause problems for other applications.

…if sharing assets with FCPX and Adobe Premiere, Adobe ‘touches’ (resets the modification date) of each file without doing anything else to it, but also sprinkles sidecar files into directories of transcodable files for metadata, thus sending any returning FCPX activity into a tailspin, requiring a re-linking session. It’s oddities like these which haunt the implementation of FCPX in a wider system and make system managers wonder if FCPX is actually worth implementing in its current state.

That was over a year ago, and so the issue may or may not exist for the current version of FCPX.

Whether one application’s actions adhere to standards and another’s don’t, what we as users ultimately care about is workflow, which in this case translates to “does it connect up with my other tools/processes?”. So we have to maintain a “situational awareness” of potential interoperability pitfalls.

Incidentally, I recall that FCPX’s predecessor (in history at least, if not development-line) FCP7 could adulterate source directories with its own sidecar files, produced by its SmoothCam effect. Not knowing anything further for sure, I nevertheless wondered (at that time) what it might be doing “under the hood” of the QuickTime [.mov] wrapper.

When uploading to YouTube (or Vimeo or indeed most online video services), the uploaded video need not be in the format that will ultimately be served to the audience. Instead, it is essentially in an archive role, and based on this archive, the services will (now and/or in the future) encode their own copies at various resolutions. The uploaded “archive” should therefore be of the best quality, and is not constrained to be in a format that plays well on most target devices.

YouTube defines two upload-formats: Standard (for typical enthusiast videos) and Enterprise (for serious matter such as movies or corporate productions). A 5-minute video in Standard format may be about 350 MB while in Enterprise format it may be around 2GB. So for practical purposes, Enterprise format requires an Enterprise internet-connection.
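The arithmetic behind those file sizes can be sketched as follows (bitrates taken from the figures in this post; the helper is illustrative only):

```python
# Back-of-envelope file sizes for a 5-minute upload, comparing the ~8 Mbps
# "Standard"-class and ~50 Mbps "Enterprise"-class bitrates discussed here.
def file_size_mb(video_mbps, minutes, audio_kbps=320):
    """Approximate file size in megabytes for a given mean bitrate."""
    seconds = minutes * 60
    video_bits = video_mbps * 1_000_000 * seconds
    audio_bits = audio_kbps * 1_000 * seconds
    return (video_bits + audio_bits) / 8 / 1_000_000

standard = file_size_mb(8, 5)     # ~312 MB, of the order of the ~350 MB quoted
enterprise = file_size_mb(50, 5)  # ~1887 MB, roughly the ~2 GB quoted
```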

Standard-Level Encoding:

YouTube gave good results when the video was uploaded in H.264 at 8 Mbps mean, 16 Mbps max.

I (currently) believe this is a good practical upload-format to use in most cases.

It has given good results for general scenes (in the experience of others as well as myself).

These are essentially “BluRay-like” / “Gold Standard” formats, from which YouTube’s servers can derive multiple present-day play-formats. Their use should also result in good-quality archive material from which, in future, to derive further (as yet uninvented or not-yet-popular) formats. To “stand the test of time”…

Audio 320Kbps

Video:

Bitrate:

50 Mbps for 1080p (25 fps)

30 Mbps for 720p (25 and 50 fps?)

Level:

4.2

General H.264 advice is to use the lowest Level that permits (includes as an option) your required bitrate.

Level 4.2 additionally has a reasonable number (hence density) of macro-blocks.

Keyframe Distance:

As opposed to a general rule of thumb (elsewhere) of three times the fps.

e.g. 75 frames in the case of 25 fps or 150 frames for 50 fps.

Scary numbers…

Various people report less smooth motion when shorter keyframe distances are used. But maybe that only applies to lower bitrates?

B-Frames:

This is the number of bi-directional (B) frames between I and P frames, e.g. a value of 3 would give: [IBBBPBBBPBBBPBBBP]

The recommended number is 2 for YouTube-Enterprise context (as opposed to 3 in some other contexts).
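The bracketed pattern above can be generated mechanically; a minimal sketch (function name my own):

```python
# Illustration of the display-order frame pattern implied above:
# b B-frames between each pair of reference (I/P) frames.
def gop_pattern(num_p, b):
    """An I-frame followed by num_p groups of (b B-frames then a P-frame)."""
    return "I" + ("B" * b + "P") * num_p

# b=3 with four P-frame groups reproduces the example in the text:
pattern = gop_pattern(4, 3)  # "IBBBPBBBPBBBPBBBP"
```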

Details:

I had shot two videos on my trusty Sony EX3 camera, one at 1080p25 the other at 720p50.

Reason? The first one was a standard live entertainment event, demanding some run&gun, hence I shot it at highest definition. However the other event was a sporting one, and 50 fps provides more potential for handling fast action in various ways (smoother action or slow motion). On this camera, 50fps was only possible in 720p, not 1080p (the camera can also record 1080i50 (fields/second), from which one can generate motion-estimated full-frame 1080p50, but that is extra work, not conducive to productivity, hence best avoided).

On my Adobe CC editing system, I completed the 720p50 video first, then encoded that to 720p25 (Adobe Premiere CC’s YouTube preset, of 5Mbps, mean=max) for checking and eventual upload to YouTube. A day or two later I completed the (longer) 1080p50 video, then similarly encoded that to 720p25 for smaller file and faster upload for the draft/check process.

Then came time to upload the 1080p25 video to YouTube, initially with distribution set to Private. It was late and I forgot to change the encoder setting to 1080. Mistakes can happen, that’s why it was initially made Private and why a test-play or two at various resolutions was in order. When played (from YouTube), not only did this reveal the reduced resolution, unexpectedly there was also some very obvious blocking on fast action, especially when the YouTube video was played at lower resolutions.

…Which of course illustrates exactly what Quality-Checking in the workflow is for…

Naturally the first thing to do was re-encode at 1080 (duh!). Adobe’s YouTube preset for this used a VBR bitrate of 8 Mbps (mean=max). I also increased the maximum bitrate to 16 Mbps. I hadn’t time for experimenting, so I just made a best guess. Result: Success! Following upload of the result to YouTube, test-plays looked far better in all respects at the various play-resolutions.

So I did some further web-research … which led me down a (finite) “rabbit-hole” wherein I discovered the existence of two kinds of upload-format standards: Standard (a few Mbps) and Enterprise (BluRay-ish, tens of Mbps). Aghast at the latter, I did further web-searching, which confirmed it.

The following explains the cause (in this instance) of what appears to be a very general catch-all error-message. It is a copy of my posting to an Adobe forum thread: http://forums.adobe.com/thread/1076893

Yet another potential cause:

Wrong (obsolete) Adobe Application Manager executed, of more than one present on the system. The latest one should be present in Windows 7’s QuickLaunch tray. Run it from there.

Cause of that situation: updating from a state of over a year ago. I recently restored a Windows 7 laptop to a state of over a year ago (from a total system backup), due to partial disk failure and consequent corruption of the operating system, and was in the process of updating everything. However, in principle, could the same thing have happened with a laptop I simply hadn’t touched since then?

In that (historical) state, CS5.5 and CS6 were installed and there was an existing Adobe Application Manager (AAM) Shortcut from Desktop to [C:\Program Files (x86)\Common Files\Adobe\OOBE\PDApp\core\PDapp.exe]. Naively I ran that and (not surprisingly) it triggered a newer version of AAM to download. I let that happen, then (naively) I double-clicked the same shortcut (which I assumed now pointed at the just-now downloaded new AAM). That gave the infamous error:

“the remote server is not responding in a proper manner.” (etc.)

Following a day of re-trying – as the remainder of that error message suggested, I checked the Soft-Firewall settings, where I noticed two instances of AAM (both fully enabled to network). Consequently I went into “Detective Mode”. Maybe there was more than one AAM on the system, or uninstalling and reinstalling AAM would help. But looking in Windows’ “Programs and Features” Control Panel, I could find no instances of AAM. So maybe AAM was not a “Program” in that sense but some kind of background “Service”, the thought of which led me to look at Windows’ QuickLaunch tray.

AAM was indeed present in the QuickLaunch tray so I ran that instead. It initially opened but then failed to progress, because the error popup from the previous (wrong) AAM was still open (buried under some other windows). However, once I closed that popup, the new (QuickLaunch) version of AAM progressed as expected, listing applications to be updated. YAY!

A rare situation perhaps, but with Adobe’s popularity, maybe even “rare” amounts to a big number of users, especially if the occasional VIP/deadlined/embedded user 😉 could be embarrassed/frustrated by this. Regardless of technical definition, it could be perceived by such a person as a “Cloud Glitch”. Thus…

Suggestions to Adobe:

Make the new (QuickLaunch) version of AAM check for the presence of any obsolete ones and (prompt user to?) delete?

Or, if it’s actually the same program [PDapp.exe] but it must only be executed from the QuickLaunch tray, then could it detect that “state of misuse” and give a more helpful error message?

Would AAM benefit from more thorough development attention to its (direct or indirect) processes of error messaging? For example could it do simple diagnostics (broadly like ping) to check network connectivity and rule that in/out (and inform the user). Then maybe higher level protocol-tests (which might reveal that AAM version’s obsolescence or corruption)?

Today my Adobe updater reported that a bunch of new Apps, all with a “CC” suffix, were available. This naming confused me: was this the new version after CS6? Or was it some kind of collaborative bloatware I didn’t need right now? Such confusion arose because I had expected the new version to be called something like CS6.5 or CS7.

Premiere Pro CC is the new version after CS6, Adobe have chosen not to name it numerically e.g. CS7.

OK but then the recurring (every version) question: if I install this new version, will it coexist with the old one or will it wipe it out?

According to a websearch (as below), they should coexist. But I haven’t tried it yet.

To convert from [.flv] to another format, use VLC Media Player’s [Media > Convert/Save] option. Be sure to set the destination as well as the source. VLC can only convert to formats in its own internal container and codec sets, but e.g. can convert to [.mp4] containing H264.

Thereafter one can use e.g. Sony Vegas to generate e.g. [.avi] containing CFHD, e.g. for onward use in applications that don’t recognize mp4-h264. Vegas is more accommodating and flexible than (straight use of) Adobe Media Encoder as regards non-broadcast-standard frame sizes and proportions. Conveniently, Vegas automatically matches the Project to the footage on footage-import.

Prior to that, I tried installing and using Riga, the two-way FLV convertor, but it didn’t work on my Windows 7 (64-bit) machine, opening only a blank window where a GUI was expected, and the downloader and installer were both full of bloatware (NB it was necessary to install in Advanced mode in order to avoid some of that). Pointless…

The ability to rearrange the order of video and audio (etc.) tracks in a Non-Linear Editing (NLE) project.

It’s one of those basic things I assumed all NLEs would allow. But not so. Some have workarounds involving the creation of new Sequences and pasting in contents from original Sequences, in which case why haven’t they simply automated that workaround? Bizarre!

(This is actually an older post, from about a week or so ago, that was left languishing in “Draft” status. Rather than delete it, here it is, out-of-sequence, for posterity)

Nowadays for video editing I mainly use Adobe CS6. However I still have some old projects edited with Sony Vegas (10) which now have new clients. One such project was shot as HDV on a Z1, giving 1440×1080 interlaced, at 50 fields/second, which I call 50i (it doesn’t really make sense to think of it as 25 fps). The required new deliverable from this is a PAL-SD DVD, 720×576 50i. In addition, I want to deliver high-quality HD (not HDV): 1920×1080 progressive.

The PAL-SD frame size of 720×576 has exactly half the width of the HDV source and just over half its height. My naive initial thought was that the simple/cheap way to convert from the HDV source to the SD deliverable would be to merely allow each of the HDV fields to be downscaled to the equivalent SD field. This could be performed in Sony Vegas itself, to produce an SD intermediate file as media asset to Encore to produce a DVD.
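The scaling arithmetic behind this “cheap” field-to-field conversion, as a sketch:

```python
# Each interlaced field holds half the frame's lines, so the "naive" route
# maps each HDV field directly onto an SD field.
def field_size(frame_w, frame_h):
    """Width x height of one interlaced field of the given frame size."""
    return (frame_w, frame_h // 2)

hdv_field = field_size(1440, 1080)  # (1440, 540)
sd_field = field_size(720, 576)     # (720, 288)

# Horizontal scale is exactly 1/2; vertical is slightly more than 1/2,
# matching the "half the width and just over half its height" remark:
w_scale = sd_field[0] / hdv_field[0]  # 0.5
h_scale = sd_field[1] / hdv_field[1]  # ~0.533
```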

Some potential complications (or paranoia) that come to mind in this approach are:

Levels-changes, through processes associated with the intermediate file. For example it might accidentally be written as 16-235 range and read at 0-255 range. In general, uncertainty can arise over the different conventions of different NLEs and also the different settings/options that can be set for some codecs, sometimes independently for write and for read.
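The two level conventions mentioned here relate by a simple linear remap; a sketch of the standard 8-bit formulas (the rounding choice is my own):

```python
# The levels pitfall sketched above: video written as studio-range (16-235)
# but read as full-range (0-255), or vice versa. These are the standard
# linear remappings between the two conventions for 8-bit luma.
def studio_to_full(y):
    """Expand 16-235 studio range to 0-255 full range."""
    return round((y - 16) * 255 / 219)

def full_to_studio(y):
    """Compress 0-255 full range to 16-235 studio range."""
    return round(y * 219 / 255 + 16)

# Misreading studio-range video as full-range leaves black at code 16
# (washed-out blacks); a correct expansion maps 16 -> 0 and 235 -> 255.
```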

HD (Rec 709) to SD (Rec 601) conversion: I think Vegas operates only in terms of RGB levels; the 601/709 issue is only relevant at the codec stage, where codec metadata defines how given data should be encoded/decoded. The codec I intend to use is GoPro-Cineform, with consistent write/encode and read/decode settings. Provided Vegas and Encore respect those, there should be no issue. But there is the worry that either of these applications might impose their own “rules of thumb”, e.g. that small frames (like 720×576) should be interpreted as 601, overriding the codec’s other settings.

Interlace field order. HDV is UFF, whereas SD 50i (PAL) is LFF. Attention is needed to ensure the field order does not get swapped, as this would give an impression of juddery motion.

Just as a test, this was initially read into an Adobe Premiere project, set for PAL-SD-Wide. There, Premiere’s Reference Monitor’s YC Waveform revealed the levels range as 0.3 to 1 volts, which corresponds to NTSC’s 0-100% IRE on the 16-235 scale. No levels-clipping was observed.

So using the 0-255 levels in Vegas was the right thing to do in this instance.
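For reference, the correspondence the YC Waveform was displaying can be sketched as follows (assuming the usual 0.3 V setup level and studio-range 8-bit coding):

```python
# The correspondence observed on Premiere's YC Waveform:
# 0.3 V - 1.0 V composite levels <-> 0-100 IRE <-> 16-235 8-bit codes.
def ire_to_volts(ire):
    """0 IRE = 0.3 V (black), 100 IRE = 1.0 V (white)."""
    return 0.3 + (ire / 100) * 0.7

def ire_to_code(ire):
    """0 IRE = code 16, 100 IRE = code 235 (studio-range 8-bit)."""
    return round(16 + (ire / 100) * 219)
```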

The Configure Cineform Codec panel in Sony Vegas (v.10) was quite simple, offering no distinction between encode and decode, allowing only for various Quality levels and for the Encoded Format to be YUV or RGB. The latter was found to have no effect on the levels seen by Premiere, it only affected the file-size, YUV being half the size of RGB. Very simple – I like that!

In a previous experiment, involving a badly-produced DVD having swapped field-order, I found that WinDVD (unlike WMP or VLC) reproduced the juddering effect I had seen on a proper TV-attached DVD player. So WinDVD is a good test.

Made a physical DVD via Encore.

The physical DVD played correctly on TV (no judder).

An alternative would be to deinterlace the original 50i to produce an intermediate file at 50p, ideally using best-quality motion/pixel based methods to estimate the “missing” lines in each of the original fields. But would the difference from this more sophisticated approach be noticeable?

There also exists an AviSynth script for HD to SD conversion (and maybe HDV to SD also?).

It is called HD2SD, and I report my use of it elsewhere in this blog. I found it not to be useful, producing a blurry result in comparison to that of Sony Vegas’s (bicubic) scaling.

I am becoming less enthusiastic about the “Integrated Suite” philosophy or perhaps actuality of Adobe CS6, in favour of a “Best of Breed” approach, where I cherry-pick the best tool for each kind of job and then design or discover my own workflow for integrating them.

I reached this conclusion from the following experiences:

As regards editing itself:

For general “A & B Roll” editing, I find Premiere is OK, though for improved usability I’d prefer a Tag-based system (as in FCPX) to the traditional Bin-based one (as in Adobe & Avid).

For MultiCam editing, even in Adobe CS6, I find Premiere does the job but I find it clunky, frustrating and limited at times, like it has not yet been fully “baked” (though “getting there”)…

e.g. In the two such projects I have so far worked on, there has been an annoying 2-second delay from pressing the spacebar to actual playing. Maybe some kind of buffering?

I found a setting for “Pre-roll” in the Preferences but altering it made no difference.

e.g. It brings up a separate MultiCam Monitor instead of using the Source Monitor. You have to remember to activate this each time before playing. I find that a nuisance (and time-waster when I forget) especially because I tend to alternate multicam editing as such with tweaking the cut timings until they feel right, and sometimes that can only be done in retrospect.

A workaround given at that link: “Before to stop the playback press the key 0 (zero) of the keyboard and then you can stop the play (with the Space bar) without the cut in the timeline.” Duh!

e.g. Markers are really useful in multicam, but while Premiere’s are steadily improving with each product version, they are way clunkier and more limited than those in Sony Vegas:

e.g. I put a marker at the start of an interesting section (of timeline), select it and define its duration to be non-zero so I can stretch it out to mark a region; then I drag the playhead to find the end of that section of interest and try to drag the marker’s right-hand end up to the playhead, but instead the playhead gets reset to the start of the marker. Duh!

e.g. Markers cannot be promoted from clip (media or nested Sequence) to current Sequence.

e.g. waveform displays (assuming you can get them to appear in the first place) go blank when sliding clips around. Really annoying when trying to synchronise to music etc.

…so I will explore other options for multicam:

In the past (as will be apparent from the above) I have had more joy, as regards Multicam, with Sony Vegas.

I will check out what people think of other NLEs as potential “Best of Breed” for multicam editing. Thus far I have heard (from web-search) good things about FCPX and LightWorks.

For audio enhancement, such as denoising, I find iZotope’s RX2 far superior to the one in Adobe Audition.

For making a DVD:

I find Encore to be handy in some ways but limited and clunky in others.

e.g. can’t replace an asset with one of a different type (e.g. [.avi] and [.mpg]).

The advantage of using an integrated DVD-Maker such as Encore might be limited:

e.g. many people are not using the direct link, but exporting from Premiere/AME, in which case any third-party DVD Builder could be used.

The only significant advantage I am aware of is the ability to define Scene/Chapter points in Premiere and have them recognised/used by Encore.

But maybe some third-party DVD Builder applications can also recognise these? Or can be configured/helped to do so? Worth finding out.

In one Adobe CS6 Encore (a DVD constructor) project, the [Check Project…] feature found no problems, but on attempting to [Build] the project, the following error was reported: “Encore failed to encode”.

A web-search (further below) revealed that this error message could have reflected any of a number of potential problems.

In my specific project’s case, I found that shortening the filename fixed the problem. Possibly the filename length was the issue, but it could have been any of the following (experimentation is needed to confirm which). Possibly Encore dislikes one or more of the following, as regards either filenames or, possibly, the total text representing the volume, folder-chain and file-name.

Long filenames

Possibly the limit is 80 characters.

Specific kinds of character in the filename, such as:

Spaces (it’s safer to use underscores instead).

Unusual (legal but not popularly used) characters, such as “&” (ampersand).
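Those suspicions could be captured in a pre-flight check. Note that the 80-character limit and the character rules are speculative, taken from forum reports, not documented Encore behaviour:

```python
# A pre-flight filename check for Encore assets, encoding the *suspected*
# constraints listed above. MAX_LEN and the character rules are guesses
# from forum reports, not documented limits.
import re

MAX_LEN = 80  # possible limit suggested above; unconfirmed

def encore_filename_warnings(name):
    """Return reasons a filename might upset Encore's encoder."""
    warnings = []
    if len(name) > MAX_LEN:
        warnings.append(f"longer than {MAX_LEN} characters")
    if " " in name:
        warnings.append("contains spaces (use underscores)")
    if re.search(r"&", name):
        warnings.append("contains '&' (legal but unusual)")
    return warnings
```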

It is possible to configure Encore to use Adobe Media Encoder (AME) instead of its own internal one. This doesn’t work for Encore’s [Build] operation but does work for its [asset >RtClk> Transcode Now] operation. The advantages I expect of using AME in this way:

It has been said (as of CS5) that AME is faster, being 64-bit as opposed to 32-bit for the encoder in Encore of CS5.

I suspect/hope that AME might also be more robust than Encore’s internal encoder.

…and also higher quality; indeed one post implied this may be true for CS6.

Consistency is a great thing; having used AME from Premiere etc. I expect any lessons gained will apply here.

AME has some nicer usability-features than Encore, such as a Pause button and the ability to queue a number of jobs.

These features could be handy for encoding multiple assets for a DVD or Blu-Ray Disk (BD).

For me, the learning-points about Adobe are:

Potentially (to be tested) the best workflow for Encore is:

Encode via AME:

Preferably from Premiere.

Or via AME directly

Or, if Encore is so configured (away from its default) then via its [asset >RtClk> Transcode Now] option

(doesn’t happen if you instead use the [Build] option, which always employs Encore’s internal encoder).

At http://forums.adobe.com/message/5297248 one poster recommends: << it is a good idea to use “transcode now” before building to separate the (usually longer) transcode of assets step from building the disk.>>

I’m guessing that the only “cost” of not using Encore’s internal encoder might be the “fit to disk” aspect, and that might be helpful for quick turn-around jobs.

(Though on the other hand, if that encoder is less robust (I don’t know, only suspect), then that factor would constitute a risk to that quick turn-around…)

Encore’s error-reporting (error message) system should be more informative, the current “Encore failed to encode” message is too general.

According to Adobe Community forum posts identified in the Web-Search (further below):

Others make this same point.

One post explains that <<Encore uses Sonic parts for some (most?) of the work… and since Sonic does not communicate well with Encore when there are errors… bad or no error messages are simply a way of life when using Encore>>

Another refers to an underpinning software component by Roxio, namely pxengine, which needed to be updated for Windows 7 (from the previous XP version).

The post states (correctly or otherwise – I don’t know) that the file is [PxHlpa64.sys], located in [C:\windows\System32\drivers] and (as of CS5) the version should be [3.0.90.2].

A further post alleges that the specific subsystem is called Sonic AuthorCore, which is also used by Sonic Scenarist.

It would be simple for Adobe to trap filename-type errors in the front-end part of Encore, prior to sending that data to its (alleged) sub-system that is maintained by Sonic.

In the long term, the preferred fix would of course be for the sub-system developer to update that system to remove the limitations.

Encore currently has some kind of (hidden) limitation on the kind or length of text representing the filename or file-path-and-name, ideally this limitation should be removed or at least the maximum allowed length should be increased.

In Adobe CS6 Encore, suppose you have a timeline containing a clip, then (maybe after having added Scene/Chapter markers there) for some reason you need to replace the clip, e.g. due to a slight re-edit or tweak. All you want to do is substitute a new clip for the existing clip, one-for-one, keeping the markers (that you have only just added) in place (together with their links to DVD menu buttons you may also have just now created).

In Encore, media (“Asset”) replacement is not as straightforward or as flexible as in Premiere…

I discovered (the hard way) that:

You can’t replace an asset by another of different file extension.

e.g. It won’t let you replace an [.avi] file by a [.mpg] file.

If you manually delete an existing clip from a timeline, any chapter markers disappear along with it.

I guess therefore that such markers “belong” to the clip, not the timeline.

This is despite their superficial resemblance to markers appearing in a Premiere timeline, which do belong to the Sequence (of which the timeline is a view).

Consistency would be good to have among these suite products…

Also in Encore, it would help to have the ability to Copy/Paste markers from one asset to another.

What is the best workflow for going from a high-resolution footage, potentially either progressive or interlaced, possibly through an intermediate Master (definitely in progressive format) to a variety of target/deliverable/product formats, from the maximum down to lower resolution and/or interlaced formats such as SD-DVD ?

Here’s one big fundamental: Naively one might have hoped that long-established professional NLEs such as Premiere might provide high-quality optical processing based downscaling from HD to SD, but my less optimistic intuition, about the un-likelihood of that, proved correct. In my post http://blog.davidesp.com/archives/815 I note the BBC Technical standards for SD Programmes state: <<Most non linear editing packages do not produce acceptable down conversion and should not be used without the broadcaster’s permission>>.

Having only ever used Adobe (CS5.5 & CS6) for web-based video production, early experiences in attempting to produce a number of target/deliverable (product) formats proved more difficult and uncertain than I had imagined… For a current project, given historical footage shot in HDV (1440×1080, fat pixels), I wanted to generate various products from various flavors of HD (e.g. 1920x1080i50, 1280x720p50) down to SD-DVD (720×576). So I embarked on a combination of web-research and experimentation.

Ultimately, this is the workflow that worked (and satisfied my demands):

Resolution: The original footage/material could e.g. be HD or HDV resolution. What resolution should the Master be?

One argument, possibly the best one if only making a single format deliverable or if time is no object, might be to retain the original resolution, to avoid any loss of information through scaling.

However I took the view that HDV’s non-standard pixel shape (aspect ratio) was “tempting fate” when it came to reliability, and possibly even quality, in subsequent (downstream in the workflow) stages of scaling down to the various required formats (mostly square-pixel, apart from SD-Wide’s so-called “16:9” pixels, of 1.4568 aspect ratio (or other, depending where you read it)).

So the Master resolution would be [1920×1080].
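The reasoning from 1440×1080 “fat pixels” to a 1920×1080 square-pixel master, as arithmetic:

```python
# Why 1440x1080 HDV maps to a 1920x1080 square-pixel master: HDV's
# anamorphic ("fat") pixels have a 4:3 pixel aspect ratio.
HDV_PAR = 4 / 3  # pixel aspect ratio of 1080-line HDV

def square_pixel_width(stored_width, par):
    """Display width once non-square pixels are accounted for."""
    return round(stored_width * par)

master_width = square_pixel_width(1440, HDV_PAR)  # 1920
```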

Progressive: The original footage/material could e.g. be interlaced or progressive, but the Master (derived from this) must be progressive.

If original footage was interlaced then the master should be derived so as to have one full progressive frame for each interlaced field (hence double the original frame-rate).

The concept of “doubling” the framerate is a moot point, since interlaced footage doesn’t really have a frame rate, only a field rate, because the fields are each shot at different moments in time. However, among the various film/video industry/application conventions, some people refer to 50 fields/second interlaced as 50i (or i50) while others refer to it as 25i (or i25). Context is all-important!
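The competing counting conventions can be tabulated mechanically; a sketch (naming my own):

```python
# The rate bookkeeping behind "50i vs 25i": the same signal counted two
# ways, and what a field-to-frame ("bob"-style) deinterlace produces.
def rates_for_interlaced(fields_per_second):
    return {
        "fields_per_second": fields_per_second,           # the "50i" counting
        "interlaced_frame_pairs": fields_per_second // 2, # the "25i" counting
        "field_to_frame_deinterlaced_fps": fields_per_second,  # one frame per field
    }

pal = rates_for_interlaced(50)
```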

Quality-Deinterlacing: The best way to convert from interlaced fields-to-frames is via motion/pixel/optical -based tools/techniques:

I have observed the quality advantage in practice on numerous projects in the distant past, e.g. when going from HDV or SD (both 50i) to a variety of (lower) corporate web-resolutions.

This kind of computation is extremely slow and heavy, hence (for my current machines at least) more an overnight job than a real-time effect… In fact for processing continuously recorded live events of one or two hours, I have found 8 cores (fully utilised) to take a couple of 24-hour days or so – for [AviSynth-MultiThread + TDeint plugin] running on a [Mac Pro > Boot Camp > Windows 7].
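The throughput implied by those figures, as back-of-envelope arithmetic:

```python
# Rough throughput implied above: ~2 hours of footage taking ~48 hours
# on 8 fully-utilised cores is roughly 24x slower than real time.
def slowdown_factor(footage_hours, processing_hours):
    """Hours of processing per hour of footage."""
    return processing_hours / footage_hours

factor = slowdown_factor(2, 48)  # 24.0 - definitely an overnight job (or two)
```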

But (as stated) this general technique observably results in the best quality, through least loss of information.

There are a number of easily-available software tools with features for achieving this, Adobe and otherwise:

e.g. AviSynth+TDeint (free), After Effects, Boris.

e.g. FieldsKit is a nice convenient deinterlacing plugin for Adobe (Premiere & After Effects), and is very friendly and useful should you want to convert to a standard progressive video (e.g. 25fps), but (at this time) it can only convert from field-pairs to frames, not from fields to frames.

I submitted a Feature Request to FieldsKit’s developers.

Intermediate-File Format: A good format for an Intermediate file or a Master file is the “visually lossless” wavelet-based 10-bit 422 (or more) codec GoPro-Cineform (CFHD) Neo.

Visually lossless codecs (such as CFHD) save considerable amounts of space as compared to uncompressed or mathematically lossless codecs like HuffYUV and Lagarith.

I like Cineform in particular because:

It is application-agnostic.

It is available in both VFW [.avi] and QuickTime [.mov] varieties (which is good because I have found that it can be “tempting fate” to give [.mov] files to certain Windows apps, and indeed not to give it to others). The Windows version of CFHD comes with a [.avi] <-> [.mov] rewrapper (called HDLink).

Another advantage is that CFHD can encode/decode not only the standard broadcast formats (and not only HD) but also specialized “off-piste” formats. I have found that great for corporate work. It’s as if it always had “GoPro spirit”!

CFHD Encoder Settings from within Sony Vegas 10:

These settings worked for me in the context of this “Sony-Vegas-10-Initially-then-Adobe-CS6-centric” workflow:

Technical Production History of a Master for an Actual Project:

This is merely for my own reference purposes, to document some “project forensics” (while I still remember them and/or where they’re documented):

This was a “Shake-Down” experience, not exactly straightforward, due to an unexpected “hiccup” between Sony Vegas 10 and AviSynth-WAVSource. Hiccups are definitely worth documenting too…

Modified date minus creation date is about 3.5 hours, which I guess accounts for the render-time (on a 2-core MacBook Pro of 2009 vintage running Windows 7 under Boot Camp).

The next stage of processing was to be by AviSynth.

However AviSynth had problems reading the audio out of this file (it sounded like crazy buzzes).

To expedite the project, and guessing that Vegas 10 had produced a slightly malformed result (maybe related to the audio setting bug?), and hoping that it was just a container-level “audio framing” issue, I “Mended” it by passing it through VirtualDub, in [Direct Stream Copy] mode, so that it was merely rewrapping the data as opposed to decompressing and recompressing it. The resulting file was:

This was processed into full HD progressive (one frame per field, “double-framerate”) by an AviSynth script, its results being drawn through VirtualDub into a further AVI-CFHD file, constituting the required Master.

I used AvsP to develop the script. It provides various kinds of help and can immediately show the result in its preview-pane.

Multi-threaded:

To make best use of the multiple cores in my machine, I used the AviSynth-MT variant of AviSynth. It’s a (much larger) version of the [avisynth.dll] file. For a system where AviSynth (ordinaire) is already installed, you simply replace the [avisynth.dll] file in the system folder with this one. Of course it’s sensible to keep the old one as a backup (e.g. rename it [avisynth.dll.original]).

Audio Issue:

This particular script, using function [AVISource] to get the video and [WavSource] to get the audio, only gave audio for about the first half of the movie, with silence thereafter.

Initially, as a workaround, I went back to VirtualDub and rendered-out the audio as a separate WAV file, then changed the script to read its [WAVSource] from this.

That worked fine – “good enough for the job” (that I wanted to expedite).

However afterwards I found a cleaner solution: Instead of functions [AVISource] and [WAVSource], use the single function [DirectShowSource]. No audio issues. So use that in future. And maybe avoid Vegas 10?

The script was processed by “pulling” its output video stream through VirtualDub, which saved it as a video file, again AVI-CFHD. Since no filters (video processing) were to be applied in VirtualDub, I used it in [Fast Recompress] mode. In this mode, it leaves the video data in YUV (doesn’t convert it to RGB), making it both fast and information-preserving. Possibly (not tested) I could simply have rendered straight from AvsP: [Tools > Save to AVI]. When I first tried that, I got audio issues, as reported above, hence I switched to rendering via VirtualDub, but in retrospect (having identified a source, perhaps the only source, of those audio issues) that switch might have been unnecessary.

I have read expert postings on Adobe forums stating that as of Adobe CS6, this is the best route.

This appears to be the main kind of workflow the software designers had in mind, hence a CS6 user is well-advised to follow it.

It represents a “well-trodden path” (of attention in CS6’s overall development and testing).

Consequently, (it is only in this mode that) high-quality (and demanding, hence CUDA-based) algorithms get used for any required scaling.

Not knowing the application in detail, hence having to adopt the speculative approach to decision-making, it feels likely that this workflow would have a greater chance of reliability and quality than other, relatively off-piste ones.

Premiere is the best stage at which to add Chapter Markers etc.

Chapter markers etc. get stored as ??XMP?? and are thereby visible to Encore (Adobe’s DVD-Builder).

Better to place such markers in Premiere rather than in Encore, since:

In Encore, Chapter markers act as if they are properties of Assets, not Timelines.

If you delete an asset from a timeline, the chapter markers disappear also.

Encore (CS6) Replace Asset has some foibles.

In Encore, if you were to put an [.avi] file asset on a timeline, then add markers then try to replace that asset with a [.mpg] file, you would be in for a disappointment; if the file extension differs then the markers disappear. If required, then the markers would have to be re-created from scratch. Same again if you subsequently replaced back to a new [.avi] file.

The Foibles of Encore (CS6)’s Replace Asset function, in more detail:

Good news: If the new asset has the same file extension then any existing markers are retained.

This possibly suggests that they are transferred from the old asset to the new one.

Bad news: If the new asset file extension differs from the old one, then:

You get an error (popup): ???

e.g. it refused my attempt to replace an [.avi] file by a [.m2v] file.

Partial-workaround:

You can instead delete the existing asset from the timeline, prior to dragging another asset there..

..BUT as a side-effect that deletes any of the old asset’s markers also…

…and furthermore Encore has no way to copy a set of markers from one asset to another

…which would otherwise have been a nice work-around for the above side-effect.

Premiere Export: Export / Render to Target Format.

You may wish to render to a number of formats, e.g. SD-Wide DVD, Blu-Ray Disk (BD), YouTube upload format, mobile phone or tablet.

The most efficient strategy is to Queue a number of jobs from Premiere onto Adobe Media Encoder (AME).

AME can run some things in parallel (I think).

AME has a [Pause] button, very useful for overnight silence or prior to travel (Windows Sleep/Hibernate).

Click the [Queue] button to send the job to Adobe Media Encoder (AME).

Quality Inspection of Result (intermediate or target file):

Check the quality of the encodes via VirtualDub: e.g. for DVD-compatible video media, the correctness of interlacing; for progressive media, the quality of deinterlacing.

For interlaced downscaled material derived from higher resolution interlaced, the combs should be fine-toothed (one pixel in height). A poor quality result (as expected for straight downscaling by any typical NLE such as Premiere, from HD interlaced to SD interlaced) would instead exhibit combing with thick blurry teeth.

VirtualDub is a great tool for this close-inspection role because its Preview can zoom well beyond 100% and, vitally, it displays the video as-is, with no deinterlacing etc. of its own.

In the past I have searched for, and experimented with, a number of candidate tools for this role, looking for one that is both effective and convenient. VirtualDub was the best I could find.

e.g. zoom to 200% to make the teeth easily visible.

Plain VirtualDub is unable to read MPEG2 video, but a plugin is available to add that ability:

On one occasion, it failed at this stage, with an “Encode Failed” or “Transcode Failed” error (depending where I looked). Solution: shorten the file name.

OK, the name was long-ish, but I didn’t realize Encore would be so intolerant of that. The idea only struck me later (a guess that appeared thanks to years of experience with computing etc.).

Quality Inspection of the DVD

I have found Corel WinDVD to show results representative of a standard TV with a DVD Player.

I have found popular media players such as VLC and Windows Media Player (WMP) to behave differently from this, hence they are not useful for quality-checking. Problems I found included:

False Alarm: Playing went straight to the main video, didn’t stop at the Main Menu (as had been intended). However it worked fine on a standard physical DVD player.

Hidden Problem: In one case I deinterlaced improperly, resulting in “judder” on movements when played on TV (via physical DVD player). However it appeared fine on both VLC and WMP.

Metadata

In the case of WMV files, just use Windows Explorer:[aFile >RtClk> Properties > Details] and edit the main items of metadata directly.

For DVD generated by Adobe Encore, the Disk label (data) is the same as the Project name.

ImgBurn, a popular alternative to Encore as regards actually burning a disk, provides a way of changing this disk-label.

Foreword
I get the main gist of Adobe’s CS6 Cloud concept, which is not as flexible as the Kindle model, but I am nevertheless slightly “cloudy”, or at least hazy, over practical details like how to seamlessly transfer from one machine to another, and I have concerns such as what would happen if my main machine became unavailable, for example due to loss or damage/corruption. Or if I forget to exit or deactivate (whatever) on one machine (e.g. at a work location), would it still be possible to work on another machine (e.g. at home or a remote location)? I am also concerned whether there is any potential for serious hiccups and delays to a project in progress, resulting from any unknown (to me) intricacies of Adobe’s license control system. So I set forth (on the web) to find out:

Failed to Deactivate before Uninstalling: Uninstalled your software without first deactivating it? Reinstall the software (presumably any one application) and then in that application do Menu:[Help > Deactivate].

Forgot to Exit the application on a works machine? Tough (I guess), or maybe ask really nicely (and hope)? A risky situation to be steered clear of…

Suppose one has a short HD video, and wishes to burn it to disk for playing (at full resolution) on an HD television via a Blu-Ray player. How vital is it to burn it onto an actual Blu-Ray disk? Would it suffice instead to burn the Blu-Ray format (folder structure etc.) to a standard DVD?

I suspected that (latter suggestion) might work, but needed confirmation. I found it by accident, at the following.

The YouTube tutorial video “Adobe Encore Essentials 14. Tips and Tricks”, at http://www.youtube.com/watch?v=gVqvsgbxn2E , explains that this is indeed possible, but that when using Adobe Encore (at least), a work-around is necessary:

First export in Blu-Ray format to a local folder, then use a third-party DVD burner such as ImgBurn (that I already have) to burn this to a standard DVD.

I will try that! The proof will be when it has been demonstrated to play ok on a typical Blu-Ray player and HD TV. Though of course that would only constitute proof for that particular viewing system…

My current project, a live rock performance (at an offshore radio party on a ship), involved significant quantities of handheld footage. I’m editing this one in Adobe Premiere. A lot of the handheld footage benefited from stabilization. The easiest stabilization to hand is the Warp Stabilizer effect within Premiere.

It is really handy, and worked for me maybe 75% of the time, other situations gave unrealistic or unusable results. So there is still a role for keeping back-up options such as Gunnar Thalin’s excellent Deshaker (and rendering to intermediates e.g. in GoPro-Cineform).

It is SLOW, a real time-loser, the main delay being its Motion Analysis stage.

This stage is computationally intensive in principle, a fundamental issue for any such device.

However it seems that Premiere CS6 does not employ GPU here, and from Windows’ Task Manager, I infer that it is not even using multithreading (though I don’t know that for a fact).

On the other hand, it does apparently use GPU/CUDA for its subsequent stabilize/deshake stage, and that stage is indeed very quick, facilitating experimentation with settings (e.g. Subspace Warp or Position mode) to obtain the desired effect.

Incidentally, I found the default Subspace Warp mode to be “fragile”, so I use Position instead:

It often makes things in the background flap or wobble in unrealistic manner.

I therefore use Position mode, the simplest mode, as my default, then only advance to “Rotation” (etc.) if there is camera rotation.

It didn’t work well with very noisy footage, e.g. Sony Z1 in Hyper-Gain mode, even when denoising was applied earlier in the effects-chain.

Lastly, it’s a shame there’s no way/settings for:

Defining a mask, rectangular or otherwise, for region(s) to focus on or to avoid. For example to prevent it locking onto a singer’s head instead of the stage.

Telling it to definitely not try to compensate for rolling-shutter. When I know the camera is CCD, I ought to be able to tell the software not to consider rolling-shutter. I never fully trust “Auto”, not in any application or context…

XDCAM-EX Metadata in Adobe (Premiere etc.): it works a bit but in my experience I can’t say it’s easy all the way, and it seems that, in the past (2009-2011) at least, others were also finding issues. For example it doesn’t (as far as I can tell) display the camera settings data such as exposure.

What’s the matter with metadata? Avid seem to have made a better go at joining it up, and I wonder if this has any connection with their policy of leaving it to the camera manufacturers to produce corresponding media-reading (“AMA”) plugins.

As remarked in an earlier blog entry, I was concerned about how best to import/use XDCAM-EX footage in After Effects, especially when that footage could be spanned across more than one [.mp4] file, especially given that their contents can overlap. In Premiere this is not an issue, because its (new) Media Browser feature provides instead a higher-level view, of clips rather than lower-level [.mp4] essence-files.

Sadly, as yet, AE CS6 has no equivalent of the Media Browser.

Best workaround:

In Premiere, use Media Browser to import an XDCAM-EX clip, then copy it and paste that “virtual” clip into AE.

Workflows involving Adobe Prelude:

The web-search record (below) not only provides the foundation for the above statements, it also contains an explanation of the different workflows (e.g. whether or not to sort/trim/rename clips in Prelude). Some workflows are best for short-form (typically involving tens of footage-clips) while other workflows may be more appropriate for long-form (hundreds or thousands of clips).

While editing an Adobe Premiere CS6 project based on XDCAM-EX footage (from an EX3), I thought I’d enhance the footage in After Effects (where more sophisticated enhancement effects than in Premiere are available). Should be easy I thought, taking advantage of the CS6 suite’s Dynamic Link feature.

In Premiere, I selected the relevant clip and did [RightClick > Replace With After Effects Composition]. As expected, this opened After Effects, with the appropriate dynamic link to Premiere…

…BUT…

All I got on the Preview in After Effects, and indeed back in Premiere, was Color Bars. I assumed this indicated some kind of failure in After Effects.

Naively, I concluded that, on my system at least, After Effects CS6 could not read XDCAM-EX. A brief web-search (further below) revealed user experiences and video convertor article-adverts implying that I was not alone with this problem. But an Adobe blog entry suggested that no such problem existed in AE CS6, and some Adobe documentation (PDF) said so explicitly. For the moment then, I was confused…

Then I rebooted and tried again. This time it worked. I succeeded in making AE projects both by directly importing the footage (as mp4 files) in AE and via Dynamic Link from Premiere.

The direct import dialog was slightly weird though: it claimed it was listing “All Acceptable Files” but these included not only [.mp4] files but also e.g. [.smi] files, and when I selected one of these it complained: “…unsupported filetype or extension”. Incidentally, the reason I tried it at all was that XDCAM-EX is a spanned format, where a single recording can be spanned/split/spread over multiple [.mp4] files. Furthermore, there can be an overlap of content from one [.mp4] to the next (in a span), so in principle (I haven’t tried it), simply placing one [.mp4] after another on the timeline would give rise to a (short) repetition at each transition (from one [.mp4] to the next).

But this is already over-long for a single blog-post, so I’ll deal with that issue in a separate post.

Based on Adobe’s workaround-advice regarding broadly similar problems with long (hence spanned) AVCHD footage. My footage is not AVCHD, but the main clip is Sony XDCAM-EX, which has some features (like spanning) in common with AVCHD. Worth a shot!

On a 4-Core i7 PC with GPU, it encoded at about real-time, which in my case was about an hour. CPU was only 25%, i.e. equivalent to a single core.

Replaced the relevant clip in seqA.

To my delight, the clip-markers (in that clip in seqA) were retained/applied in that replacement footage.

However, the sluggish [Play]-start remained, though possibly shortened, from about 6 seconds to 4 seconds.

Further Workaround:

Duplicate seqA

Nest it in a separate multicam sequence (seqE)

Do multicam edits on further segments of the event in that (seqE)

Intend later to nest/sequence usable bits of each multicam edit-sequence in a Master sequence.

Where there’s a will, there’s a workaround…

Still, I expect better of Adobe…

I lost about 3 hours to this (including web-searching, waiting for transcoding, and general experimentation).

Further gripes:

God it’s clunky!

Every time I stop multicam-preview to tweak the multicam cut timings, then return to multicam editing, I have to remember to activate the multicam monitor, not the timeline (where the tweaks are done). Unfortunately my reflex is simply to hit the spacebar. It is a nuisance to have to fight that reflex…

Every time I stop multicam-preview, it leaves a cut at the final position of the playhead. Not useful and simply clutters the timeline, distracting from real cuts.

Zoom [+] only affects the Timeline, not the multicam monitor. As a result, I tend to set the playhead position using the timeline. Doh! must remember to click (activate) back to the multicam monitor once more…

Ranged (duration not zero) markers are great but adjusting their right-hand end can be tricky, since this can change the playhead and/or timeline-display. Things snatch and interact that shouldn’t (I feel).

Sony Vegas is far better in these respects, though not in some others, so I’m sticking with Adobe…

Unexpected Preview-Rendering is happening…!? How come?

In principle, that shouldn’t be happening. I have a state-of-the-art (4-core i7 & GPU) laptop specifically for CS6, no effects applied, just cutting between two cameras, some plain dissolves (between segments of the multicam sequence) – but surely the Mercury Engine should take them in its stride? (Or can’t it cope yet with multicam?)

A Sequence played with good audio, but when I nested it (inside another sequence), all went silent. This turned out to be the latest incarnation of a crazy historical feature of Adobe Premiere. It wasted a good part of an hour of my time experimenting and finally Googling to find the (simple, once you know) way out.

The problem:

A simple straightforward sequence consisting of video recordings from two cameras, each arranged in their own tracks, some audio tracks enabled, others disabled.

MBR Color Corrector, by Matt Roberts, is a plugin Effect for Adobe Premiere and After Effects, to automatically color correct movie clips / footage featuring a Gretag MacBeth / X-Rite ColorChecker chart or card in-shot, e.g. at the beginning or end of a scene or take.

This provides an alternative to manual (hence subjective and probably iterative) color adjustments in conventional Effects in the editing application (Premiere or After Effects).

When applied appropriately, the workflow-result can be improved productivity and quality, with reduced (as opposed to avoided) dependency/demand on Colorist expertise and accurate color monitors etc. It not only handles typical color temperature issues but also, to a useful degree, non-linear luminance and color twisting inherent in certain cameras and lighting conditions.

In addition to color correction, MBR Color Corrector can also be used for color matching, e.g. to match a mood, as previously established in an example prepared by a Colorist, provided that example likewise contains some frames featuring a Gretag Macbeth / X-Rite color chart or card.

The new version (v.2) features:

Mac support.

An improved, more intuitive, user interface.

Keyframes on everything that affects the output.

The free (gratuit) functionality is almost complete (no watermarks etc.) and in my experience has certainly been useful on real projects. The paid version has greater efficiency and functionality, and encourages the developer to keep developing. See the product web-page for more details.

It’s not directly possible to render from Adobe CS6 to Windows Media on Mac OS (doh!).

The best we can do on Mac OS is to render to an intermediate file, such as ProRes or DNxHD or Cineform. These formats are not bundled with Adobe, they are third-party, to be obtained and installed independently of Adobe.

Having rendered to an intermediate file, it is then possible to render from this on to Windows Media via the following:

Having created my own additional presets for encoding formats on one system, I want to copy them to another. As it happens, these “systems” are the Boot Camp Windows and the Mac OS sides of the same MacBook.

So how do I copy them?

As it happens, Adobe Media Encoder has menu options:

Preset > Export

Preset > Import

Nevertheless, looking behind the scenes…

Each Preset is stored as an [.epr] file. So where are the [.epr] files kept?

I was using Adobe Premiere, this time on Mac OS, and wished to render to something like ProRes, or something suitable for an iPad. Aware of Larry Jordan’s post on this (from my earlier post), I nevertheless searched afresh, finding the following Adobe blog post. Very helpful.

In each case (folder of presets), just drill down to the lowest level, select all the [.epr] files and import. Each [.epr] file “knows” its appropriate folder internal to Adobe Media Encoder. And yes, I did first check the presets were not already there. Weird really, that I had to discover these by accident – surely they should have been part of an Update?

IMPORTANT: We do not distribute the ProRes encoders or decoders (codecs). You must get those from Apple. The ProRes encoders are included with various Apple video software, such as Final Cut Pro and Motion.

To install the encoding presets in Adobe Media Encoder CS6, do the following:

In Adobe Media Encoder CS6, choose Preset > Import and navigate to the encoding preset(s) to import. You can choose multiple encoding presets at a time; it is most convenient to select all of the presets in a folder at once.

This video demonstrates the use of the Preset Browser to apply and manage encoding presets.

Whenever I boot up Mac OS, there are recovered Adobe files in the Trash. Even if I did not use Adobe in the previous session! Of course I can [Empty Trash] but why do they keep cropping up there in the first place? Is this symptomatic of some error or malware in my Mac OS system? Last time it happened, these were the files concerned:

The file [com.adobe.dynamiclinkmanagerCS6]

A [.prmdc] file, with prefix as per one of my project names.

Maybe [pr] indicates association with Premiere?

A bunch of files named as [I-Frame Only MPEG~xxxx.epr], where [xxxx] represents a pseudorandom hex value (of more than 4 characters).

I guess these are preview-accelerating renders. But I thought such renders were retained, not temporary.

The best advice I could find on the web was that this kind of thing, while not generally expected, is of no importance, so “just keep emptying the trash”. Concerning and irritating though…

Presumably Adobe is not cleaning-up when I close it, but in that case why and what else is it not doing? Could this be associated with the Kaspersky issue I recorded in my previous blog-post? Like had the Kaspersky-augmented kernel shutdown been methodically waiting for some Adobe clean-up process that never terminated, whereas un-augmented kernel shutdown simply (and silently) forced-killed that process? Just guessing with my overactive imagination, no supporting knowledge/information/evidence.

The fact (I have observed) that recovered Adobe files can appear in Trash even when Adobe has not been used in the current or the previous Mac session (between machine boot-ups) tends to suggest that some independent Adobe clean-up process is always happening in the background, as a result of normal system start-up, regardless of whether Adobe has explicitly been run by the user. Gives me some “gut feeling” that my “imagination” might be on-track…

I had Adobe Production Premium CS6 installed, but when in Adobe Premiere I tried to make a Title, there were no Title Templates present.

A Google search on [adobe premiere cs6 title templates download] produced the answer, as follows, in the form of a downloadable installer. In addition to Premiere Title templates, the add-on also includes Encore templates.

With this release, Prelude now provides new transcoding options that are optimized for editing.

While the ideal option for Mac users is to transcode into ProRes, this isn’t a viable option for Windows users. Since Prelude is cross-platform, Adobe provides two other options: MXF OP1a and P2 Movie. Of the two, I prefer P2 Movie > AVC Intra 100. This Panasonic codec is 10-bit, uses I-frame compression, and creates file sizes somewhat smaller than ProRes 422. For most editors, it should provide excellent quality.

For Mac users wanting the best quality, I recommend creating a custom preset in Adobe Media Encoder using ProRes. For editors needing to support files in a cross-platform environment, I recommend AVC-Intra 100. (The 100 version has a higher bit rate, and generally higher quality than the 50 version.)

Sometimes Adobe Premiere may write to a source media file or proprietary folder-structure. This may be considered a non-problem in most situations, but it is nevertheless worth being aware of.

This is nothing hidden, surreptitious or unheard-of, it’s explained in Adobe’s Help text and documentation. However the potential consequences may not be obvious to a new user. It may arise at various points of what we may regard as the greater process (workflow/manual) of ingesting media, consisting not only of Premiere’s Import of media but also subsequent manual updating of metadata or indeed automatic analysis such as speech recognition. As of CS6 it can also occur as a result of adding Markers in Adobe Prelude.

Premiere likes to add and manage metadata for each media file.

The good side of this is that it value-enhances these files, making them easier to locate, navigate and use, potentially increasing workflow productivity and asset usage.

But there’s also a dark side – not necessarily Adobe’s fault (e.g. their approaches may well adhere to official media specifications) – but it may be that so-adulterated media files may cause difficulties to other applications (e.g. that may not fully take on board such standards).

In my experience, in the past, some (possibly poorly-written, but nevertheless useful) applications have refused to work with metadata-augmented files, again holding up productivity, in this case while the user figures out the issue and works out how to strip this data out, in order to progress.

Technically a non-problem, but potentially consequential to a workflow: backup software will (rightly, from its point of view) see the metadata-change as a file-change (e.g. as a consequent file-size change) and consider that the files have been updated. Left to itself, the backup process (depending on how it works or is configured) will overwrite any previous copy of the files (e.g. the original files). Even if the backup process prompts the user to confirm this, the naive user may be uncertain what to do.

Also, the user has the option at their discretion for Premiere to automatically store additional files (such as cache files and metadata sidecar files) alongside source media files.

In the case of media represented as a straightforward single file (like a .jpg or .mpg file) this does not affect that media.

However some media (e.g. TV-playable DVDs or XDCAM-EX video media) are stored as proprietary folder structures with defined contents, part of these contents being essence files (e.g. .vob files or .mp4 files) while other files alongside them in that structure (e.g. DVD’s .IFO files or XDCAM-EX’s .SMI files) contain metadata or index into them. In this case, the consequence of adding further files into the structure will (in my experience) be acceptable to some applications and media players but not to others, which regard it as “pollution” and may then reject such structures. Certainly in the past I have seen this happen in some software applications and also even some (mostly old) TV DVD players.

This is a case for “situational awareness”: if one is aware of the nature and potential consequences of the adulteration (be it regarded as pollution or enhancement, depending on the workflow situation), one is then in a better position to be able to avoid or fix any associated issues.

Just as I’m starting to get used to Adobe Premiere CS5.5, I notice that its audio effects listing (in menus etc.) does not include my system’s VST collection. Most annoyingly, because of that, my iZotope Ozone effects are excluded from Premiere. This seems unreasonable, given my long track record of employing such plugins in Sony Vegas.

I spent a good hour or two trying to understand and solve this, including much googling. At the end of that, I’m not sure what the problem is exactly, but it does look to me like Premiere is slightly lacking with regard to its ability to interface to VST effects. For a start, one of its assumed registry entries appears inappropriate to Windows 7 64-bit. Having hacked that into shape, Premiere at least noticed the existence of Ozone (and other VST effects on my system) then found itself unable to load it.

The best solution I found was really a work-around. From the Premiere timeline, [aClip >RtClk> Edit in Adobe Audition]. That application has no trouble recognising iZotope plugins. However, before getting too blinkered, try the native Audition effects first, including Noise Reduction, because they are pretty good.

When I start-up any application, I like to understand at least the main side-effects it’s having on my system. In the case of Adobe’s primary video-editing apps, Premiere and After Effects, my experience (on Windows 7) is that they save intermediate preview-renders to the system volume. This causes me the following concerns:

System Volume may serve poorly as a media drive.

Larry Jordan, at least in the recent past, advises against using the system drive for media read/write. On the upside, such drives may have high bandwidth to the system, but on the downside, the system can interrupt their use with highest priority, which may (I guess) pose a risk to smooth playback (though I am aware that buffering may possibly reduce this risk, I haven’t done or seen any such calculations). Cache files are indeed media files that are written and read.

On the other hand, an informed representative of a well-known UK supplier of video editing laptops advised me that in his experience, most users of laptops with only a single internal drive (as system drive) do use that drive in this way (for portability).

System drive can become “clogged up”

System drive can become clogged-up by many or large video files of which the user is only partially aware, their creation having happened implicitly during their use of the NLE etc. Like temporary files, only worse!

Ultimately the system drive can even become full, making the operating system itself sluggish or even less stable (and video playback less smooth).

Backup of a system drive that includes media files will typically require significantly greater archive space and will take significantly greater time (than a clean system).

Migrate-ability is reduced

I like the idea of a video project being a free-floating data-object. That is, it should not be tied to any particular instance of a data storage volume, let alone a particular computer (system). It should be possible for all files relevant to a project to be stored on any volume, migrated to any other volume, plugged into any computer having appropriate installed applications, and everything to work the same way as when the project was on its original volume being edited on the original system. That includes not only the source media files etc. but also the intermediate rendered files.

So what do the Adobe editing applications provide to enable my preferred working arrangement?

Premiere:

[Edit > Preferences > Media]

This defines the location of the folder [Media Cache Files], which contains pseudorandomly-named files. Example Files:

When Premiere Pro imports video and audio in some formats, it processes and caches versions of these items that it can readily access when generating previews. Imported audio files are each conformed to a new .cfa file, and MPEG files are indexed to a new .mpgindex file. The media cache greatly improves performance for previews, because the video and audio items do not need to be reprocessed for each preview.

When you first import a file, you may experience a delay while the media is being processed and cached.
A database retains links to each of the cached media files. This media cache database is shared with Adobe Media Encoder, After Effects, Premiere Pro, Encore, and Soundbooth, so each of these applications can read from and write to the same set of cached media files.

Location: [C:\Users\…\AppData\Roaming\Adobe\Common]

[Browse]

If you change the location of the database from within any of these applications, the location is updated for the other applications, too.

Each application can use its own cache folder, but the same database keeps track of them.

Example Experience:

I clicked the [Browse] button and selected an area on my external media drive (a GRaid Mini) as: [H:\_App_Specific\Adobe].

In response, a prompt came up saying “Move the existing media cache database to the new location, or delete it” (Buttons: [Move] [Delete] [Cancel]).

To remove conformed and indexed files from the cache and to remove their entries from the database, click [Clean]. This command only removes files associated with footage items for which the source file is no longer available.

Important: Before clicking the [Clean] button, make sure that any storage devices that contain your currently used source media are connected to your computer.

The timeline region remained red, indicating that no render-files were associated.

Experiment: Tidy migration of a project to a new location.

Warning: in the case of doing a Copy (which is Windows’ default drag operation between different volumes), take care to ensure the Project (file) is not simply referencing the original preview files at the old location…

Drag both Project and its folders (including render-file folder) to a new location (e.g. on a new disk).

If the name and relative location of the folder are unchanged (as they ought to be, in good practice), then the files will be automatically detected and used, without even a user prompt.

Just be sure though that the project isn’t simply referencing the render-files in their original location, if they are still present there. Premiere is “lazy” in this respect.
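The relative-path behaviour can be illustrated with a sketch (all paths below are hypothetical): Premiere effectively keys on the render folder’s location relative to the project file, so copying the whole tree to a new drive preserves the link.

```python
from pathlib import Path

# Hypothetical original layout: the render-file folder sits
# alongside the project file.
old_project = Path("D:/Projects/MyFilm/MyFilm.prproj")
old_renders = Path("D:/Projects/MyFilm/Adobe Premiere Pro Preview Files")

# What matters is the renders' path relative to the project file...
rel = old_renders.relative_to(old_project.parent)

# ...because after copying the whole folder to a new drive, the same
# relative path still resolves, so the files are found without a prompt.
new_project = Path("E:/Archive/MyFilm/MyFilm.prproj")
print(new_project.parent / rel)
```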

Experiment: The relative location of the Rendered Files folder does matter (relative to the project file).

Tried putting the render files in a non-standard location.

The “Locate/Browse” prompt appeared

I located the file

All at first appeared well, and the corresponding section of the timeline went green

However, the “Composer” window simply displayed “Media Pending”. That never went away.

Experiment:

When migrating, also need to move (or copy):

The Media Cache directories

Actually I’m not so sure about this. I tried exiting Premiere, renaming these directories and opening Premiere. It created and repopulated the same directories in their original location, which in my case was an external drive.

I suggest marking each external drive with the drive letter that the user assigns to it, say Z:\. Then, whenever that drive is plugged in, it will always be seen as Z:\. This way, the NLE can keep up with where the Assets are located, starting with the drive letter.

If you are migrating Projects between computers, you will repeat this exact process in the OS of each computer.

Note: when doing the migration, ALL Assets, Scratch Disks, and the Project file MUST be included on that external drive.

Work Procedure for Migrate-ability:

By associating cache and XMP files with the media (or its essence), Adobe projects are migratable. However, adding such files into the BPAV/CLIPR folder structure is considered by some applications to be an adulteration of that structure, requiring their deletion. Such deletion on an as-needed basis is not too onerous – given that it is easy to do and in any case this situation should rarely arise in practice.

When using different disks, remember to re-define (in Preferences) the location of cache files etc.

One work-around would be to re-set the cache location before opening any individual project.

This might be hard to remember when opening a project from within the NLE; it’s easier to remember when double-clicking a project file in Windows Explorer.

I’m not 100% sure what to do about these…

As noted earlier:

When doing the migration, ALL Assets (Sources), Scratch Disks (Renders), and the Project file MUST be included on that external drive.

Shooting green-screen onto a 4:2:0 chroma-subsampled format, intending of course to use it for chroma-keying. The obvious disadvantage is that the green-ness of the green screen only gets sampled at quarter resolution. Not a show-stopper, given my target deliverable is standard definition, but anyhow, towards perfectionism, is there any way to up-sample to 4:4:4, i.e. full-definition colour?

It does occur to me that something more sophisticated than chroma blur ought to be possible, broadly along the lines of edge-following methods employed in resizing. What’s out there?

The simplest method, which most people seem to use, is chroma blur. That blurs only the chroma, not the luma.

Searching around, Graeme Nattress has analysed the problem and seems to have produced a more mathematical approach. But it’s only available (at the time of writing) for Final Cut (which of course is Mac-only at present).

Some tools that “promise” upsampling, but I wonder by what methods:

GoPro-CineForm intermediate. The codec settings include an option to up-sample to 4:4:4

Adobe Premiere, but only if a Color Corrector effect is employed.

But the crucial thing here, regarding the usefulness of this, is whether it uses any better method than chroma blur.
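For comparison, here is a minimal sketch (Python/NumPy, all function names my own) of the two simplest approaches discussed above: plain replication of the 4:2:0 chroma planes up to full resolution, and the chroma-blur variant that softens the resulting blocky chroma edges. Anything “Nattress-like” or edge-following would be considerably more involved.

```python
import numpy as np

def upsample_420_chroma(y, cb, cr, method="replicate"):
    """Upsample 4:2:0 chroma planes (half resolution in both axes)
    to the full luma resolution, i.e. to 4:4:4.

    y      : (H, W) luma plane
    cb, cr : (H//2, W//2) chroma planes
    method : "replicate" (nearest neighbour) or "blur"
             ("blur" approximates the common chroma-blur trick
             with a box average applied after replication).
    """
    h, w = y.shape

    def enlarge(plane):
        # Nearest-neighbour: repeat each chroma sample as a 2x2 block.
        full = np.repeat(np.repeat(plane, 2, axis=0), 2, axis=1)[:h, :w]
        if method == "blur":
            # Crude 3x3 box blur to soften the blocky chroma edges.
            padded = np.pad(full.astype(float), 1, mode="edge")
            full = sum(
                padded[dy:dy + h, dx:dx + w]
                for dy in range(3) for dx in range(3)
            ) / 9.0
        return full

    return y, enlarge(cb), enlarge(cr)
```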

Some questions:

Does Adobe have anything built-in to do something Nattress-like nowadays?

If I had a 10-bit video recording such as from the PIX 240, would I know what to do with it, in order to make full use of the 10-bit information? This question is important, because it cannot be assumed that this is simply a case of inputting it into any arbitrary nonlinear editing system (NLE) – not all NLEs preserve the extra information – and even for those that do, the workflow and configuration must be set up appropriately. And even having got that right, how can we verify all is working as expected? Can the NLE’s own effects and waveform monitors etc. be trusted to preserve the extra bits?

{The following is a variety of viewpoints from various people. I don’t necessarily agree with any of them but do regard them as useful thought-provokers. }

You “rent” the software, rather than first buying it, then continually paying for upgrades to new versions.

I’m going with the non-cloud version of CS6 also. I don’t like the idea of an expiring software package, in the event that I don’t want to spend another $600 next year.

Alternatively, maybe Google and Microsoft will see this as an opportunity to offer some competition, because what I dislike even more than expiring software is having to keep up with files across ten different web sites. Someone needs to invent a “cloud drive” standard and then everyone needs to build their apps to function with any “cloud drive”. Google is getting close with their new Google Drive and a selection of third party web apps that can use it for storage.

About the Google thing, remember that anything you put on the Google drive is owned by GOOGLE, and they can use it for anything at all that they see fit. Trusting Google with your work is insane.

Google says in its disclosure. “You retain ownership of any intellectual property rights that you hold in that content. In short, what belongs to you stays yours.”

Urban legend, fear mongering. Google (and YouTube and Windows SkyDrive and Amazon Cloud) assumes a LICENSE to your work for the legal protection of being able to move and disperse it throughout their servers. And in the case of YouTube, to change the format.

My own inclination would be to stick with the suite license. I have no faith that Adobe won’t just screw everything up. … Another consideration. Some editors want their edit systems isolated from the internet. Cloud service won’t be so good if that’s what you want.

You don’t need to stay connected all the time. But if you’re not going to be connected maybe you don’t need the cloud service.

The one nice thing about Adobe’s Creative Cloud is that you can install both the Mac and Windows versions for the same membership price. I have a Windows 7 desktop but a Mac OS X Lion laptop, so this would benefit me. Of course, Adobe could have just been nice and allowed my desktop license to work on both platforms like other companies do, but that’s another story.

The turnoff for me … is the FORCED yearly upgrade. It says you can keep the version you lease for one year, then you must upgrade. Patches are installed by you (just like now), but your software license (appears) to expire a year after you initially get it.

…the cloud concept is not beneficial unless you like being beta test guinea pig.

…remember when all software was owned lock-stock and barrel by the hardware companies. (You couldn’t buy a computer, you had to lease it from the manufacturer). You paid an annual maintenance fee and the owner (DEC, IBM, etc) maintained the hardware and software. In that scope, things haven’t changed much. We still pay an annual or biannual “fee” in the form of software upgrades. Personally I prefer the old “Rent the Software” model because if it didn’t work, you didn’t pay, and bugs got fixed really fast.

What is it? Not the “ubiquitous computing” I first imagined. Marginally handy in some ways, possibly more risky in others, e.g. if I forget to exit on one machine (e.g. at work), then will it be accessible on another machine (e.g. at home or a remote location)? And in any case, how sustainable will it be? My recent experience with Adobe CS Review makes me slightly wary…

What I expected was something more like the Kindle model, where I could install apps on as many devices as I wished, albeit with reduced functionality on weaker devices, and to have only one project open at a time, identically visible (apart from synch-delay) on all of those devices (maybe auto-branching where synch failed, with expectation of future manual pruning/re-synching).

Then there’s rendering – I’d expect that not to be counted as “usage”; instead usage should be actual user-interaction. The technical model could be a thin client for the user interface, sending commands to processing engines (wherever, even on another machine, e.g. to run a multi-core / CUDA desktop from an iPad or iPhone) and at the same time “approval requests” to Adobe Central, but with some degree of “benefit of the doubt” time-window so as not to delay responsiveness of the application. They could then even respond to attempted beyond-licence actions with piecemeal license-extension options, e.g. a “provided you pay in the next working day or two for a temporary additional subscription” option (defaulters get their credit score reduced). Why let inflexibility get in the way of capitalism?

Unfortunately, in the words of REM, “that was just a dream”. Instead activation is restricted virtually to the same degree as the non-cloud variety, that is to two computers (main & backup or work & home etc). The only extra freedom is that the two computers need not be the same operating system – e.g. can be mac and windows – a nuisance restriction of the traditional non-cloud model. And rendering counts as usage.

It is possible to deactivate one of these computers and reactivate on another but if this happens “too frequently” then a call to Adobe’s support office is required. It’s slightly more complicated in practice but that’s the essence of it.

Might give it a try though. Like I said, it could be marginally handy, and marginal is better than nothing.

As a relatively new Adobe user, I was vaguely aware of an attractive-sounding Adobe Premiere collaboration feature, I think it was originally called Clip Notes (http://boardreader.com/thread/Clip_notes_alternative_for_CS5_other_tha_1yitjXfs3i.html confirms this), where one could send out reviews to people, who accessed them via Acrobat or as a pdf or something. Having Adobe Production Premium CS5.5, I explored under Premiere’s File menu, discovering Create New Review. I wish I had not, for it wasted several hours of scarce production time… It seems that this feature has been discontinued, as announced at http://www.adobe.com/products/creativesuite/cslive.html and complained about at http://forums.adobe.com/message/4266469. The only reason I discovered this, following three hours of rendering by the Create New Review command and a further hour waiting for the Share Review website to complete (black screen with rotating wait-animation), was googling for acrobat.com login problems.

How come there wasn’t a simple website message to say “Discontinued”? Furthermore, why not an Application Update to remove this feature from the File menu, or change the menu action to state that this feature was discontinued? Just as well I had not based a commercial workflow on this feature. I feel somewhat Apple’d…

My alternative, until I find anything better, will be good-old-fashioned highly compressed renders with burnt-in timecode, shared via Dropbox. I am also aware of Sorenson 360; it looks like it has a great set of features, but its cost is prohibitive for my current purposes.

One item I did manage to salvage from my “wasted time” was the render – that had taken 2.5 hours – that had been generated as part of the CS Review process. It appeared in the folder [C:\Users\David\AppData\Local\Temp] with the pseudo-random, probably-unique filename of [8D4E4C20-0C00-0F8A-A501-B6B7CA2E4883.f4v]. The [f4v] extension indicates it is an Adobe Flash container, most likely containing h264-encoded media. I moved it to my own [Renders] folder for the given project and it played fine in VLC Media Player, which confirmed h264 was the codec and indicated it had resolution 960×540, i.e. half-size in each linear dimension, quarter-size in terms of area; the bitrate was around 1 Mbps.
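As a sanity check on a figure like that, the average bitrate follows directly from container size and duration. A minimal sketch (the example numbers are illustrative, not measurements from the actual f4v file):

```python
def average_bitrate_mbps(size_bytes, duration_seconds):
    """Average bitrate in megabits per second (Mbps)."""
    return size_bytes * 8 / duration_seconds / 1_000_000

# e.g. a hypothetical ~45 MB file covering a 6-minute render:
print(round(average_bitrate_mbps(45_000_000, 360), 2))  # → 1.0
```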

Is it possible to edit a timeline on Premiere, send it to Resolve, as a project/timeline structure rather than as a rendered intermediate file, color-correct in resolve then return (again at project/timeline level) to Premiere (say)? From a brief web-search, it looks like the answer is “yes”.

<<This is a feature that experienced editors have been using for years. Instead of using the ultimate slow method of double clicking a clip in the project panel and opening it in the source monitor, just move the mouse over any clip while pressing a shortkey, and set in/out points on the fly.>>

Organization & Layout:

Suppose you have a few Source sequences, such as one sequence per rush, or maybe per scene.

Maybe also combine all rushes into a single (additional) sequence.

Create a main “Target” sequence.

Open/place a “Source” timeline along the whole width of the top of the screen.

To that timeline, add sequences (tabs) for each of the Rush-Sequence(s)

In the centre, have the program monitor along with a scope

Below that, open/place a second timeline, this time for the Target sequence.

Now you have two separate timelines and can drag etc. individual clips (maybe trimmed) from the Source timeline to the Target timeline.

Just remember it’s not the same as using the Source Monitor.

To Enable Skimming:

Make a keyboard shortcut (e.g. [§]) to the [Move Playhead to Cursor] function

How about an After Effects plugin for automatically grading any footage featuring a Gretag Macbeth color chart in-shot (e.g. at the beginning and/or end of the shot)? Matt Roberts’ new plugin, still “steaming off the press”, works in Premiere as well as After Effects, and has been tested in CS5 and CS5.5. You simply pause on a frame featuring a color chart in-shot, place corner locators to identify that chart, and “Go”. It not only fixes white balance but also adjusts for saturation and compensates for certain kinds of “color twisting” defects such as can occur in cameras. Subsequent “expert tweaks” can then be made if preferred, e.g. a 20% saturation reduction for “film look”. The free version works in 8 bits; the paid (£50) one (in the process of being made available on ShareIt.com) works in 32 bits, is multithreaded, etc. To find out more and to download it: http://www.mattroberts.org/MBR_Color_Corrector/.

Example: Canon 7D Video Footage:

So what’s the point of this plugin? Greater quality, reliability and productivity, as compared to traditional color correctors, as explained below.

Those with an eye for accurate color reproduction from video footage will be familiar with traditional tools such as 3-way color correctors and meters such as waveform monitors and vectorscopes. All proper Non-Linear Editing systems (NLEs) have these. Generally speaking such tools work well, but sometimes in practice the situation can become confused when, for example, a subject’s “white” (assumed) shirt is in fact off-white, or when tinted light mirror-reflects off skin or results from camera filters. Easy to understand in retrospect, but initially it can cause “running round in circles” of iterative adjustment and re-checking. Furthermore, some cameras have peaks, pits, twists and ambiguities (e.g. infra-red) in their colour response that many such correctors cannot correct in a straightforward manner. Not only can time be wasted but it is quite possible to end up with an image that “looks” right to most people but which in fact has done something inexcusable such as altering the very precise color of a corporate logo.

One way to reduce the potential for such confusion is to incorporate a color chart in shot. Various types exist, including Gretag Macbeth (GM) and Chroma Du Monde (CDM). The GM card, while primarily targeted at photography, is also in widespread use for video. That chart consists of a matrix of colored squares, one row of which represents (steps on) a grey-scale. It also includes some near-primary colours and some approximate skin colours of a few types. The simplest use of such a chart would be to use the grey-scale row for white balancing and the other colours for “by eye” grading/tweaking. The more experienced will probably make use of vectorscopes etc., but that can still be a nuisance, if not a cumbersome process.

Enter Matt Roberts’ Automatic Color Corrector. We tried it out on some footage from his own Canon 7D and from my Sony EX3, the latter fitted with a slightly green-tinted infra-red filter, on a snowy day. We even tried it on an image (featuring such a chart, as well as a model with lots of fleshtones) on Canon’s website ?URL? for their C300 camcorder. In all cases, the correction was achieved in seconds. We were particularly confused as to why Canon’s web image was so off-colour, but it certainly was, and the Corrector fixed it.

I want to put someone inside a virtual world (based on their own paintings), ultimately to be rendered out as a Stereographic 3D movie, but with development in progressive stages/generations, initially based on readily available standard tools and techniques, later proceeding to specialist 3D modelling apps etc. How to proceed? Some possibilities that come to mind are:

Initial development using only “Planar 3D” as in After Effects

I believe this is possible to some extent, using native features of AE.

{The planar tracker is (primarily?) ..used as a roto assist to speed up the roto process…
..and also the tracking data can be exported to a wide variety of programs such as Nuke, After Effects, Combustion and many more for corner pinning, stabilizing, and match move that suite.

Subsequent development using “Full 3D” as in Blender etc. Some relevant previous entries in this blog are:

…would like to use similar techniques, especially with the web pages tracking the buildings in city shots.

I have tried to use mocha to track the side of a building, but it’s not proving ideal. Obviously the motion tracker inside AE won’t give me the depth/perspective.

So what’s the best way to do this? … Is Syntheyes/Camera tracker the way forward?

A1: (by ben g unguren on Sep 8, 2011 )

The general rule of thumb is that Mocha works great if your graphics are ATTACHED to an EXISTING surface (like a logo on the side of a building, or changing the words on a sign). If you’re trying to add something in 3D space (like graphics that “hover” around the building, and seem to actually be there), then you need a 3D solution.

Mocha and AE’s internal trackers give you 2D solutions. Mocha’s solutions are a bit more sophisticated, producing corner-point information that mimics 3D, but [this is key] it doesn’t produce a 3D camera.

Syntheyes and similar apps will give you an animated 3D camera as well as target points that simulate the world you’re tracking (target points for the ground, buildings, etc — whatever you’ve managed to track and can get a 3D “solution” for). This is A LOT more information than what Mocha or AE’s internal tracking can get you.

One other point: when the camera is only panning and tilting (not actually changing its own position) then a 2D solution can (sort of) mimic a 3D camera solution. So if all you’re doing is panning and tilting, then you could track that in Mocha, then use that data to animate objects (that are given perspective, for instance). You would be able to achieve a lot of the graphics in the video you linked to using that technique, as they’re using a lot of static cameras.

A2: (by Tudor “Ted” Jelescu on Sep 8, 2011):

I agree with Ben.

In most of the shots from your example Mocha can be used. I suspect that some of those shots were not really video files, but still images cleverly transformed into a 2.5d comp where camera moves can be animated in AE – so no tracker there.

…AE is a “2.5d” application. The worldspace is 3d, but any imagery you have (discounting 3rd party applications such as Invigorator and such) is “flat”.

(As opposed to 3D Modelling apps such as 3dsMax, Blender)

After Effects Render Engine

Render farm: Network rendering with watch folders and render engines.

Previously, it was possible to install render engines on as many machines as one wished, but not so under CS5.5, where a separate serial number must be obtained for each machine. For small guys like me that makes it pretty useless. It seems likely that a more flexible option will exist in future versions.

Audition

Audio editor, derived (many years ago) from CoolEdit.

Bridge

A combination of media file manager, media manager and metadata editor; it also does some kinds of media processing.

Can be run standalone or from within apps e.g. Premiere: [File > Browse in Bridge…]

A collaborative script development tool/service. There is an application for working alone on an offline version and a web-based service where you can sync up with an online version.

Ultra

Vector-keyer (simple-to-use effective chroma-keying) that was once a standalone app by Serious Magic, now available as a plugin within Premiere (but not AE). I get the impression it is regarded (or at least branded) as simple to use but ultimately less sophisticated/capable than KeyLight (???).

Serious Magic used to highlight its capabilities regarding reflections, semi-transparent areas and hair…

Utilities

ExtendScript

An integrated development environment (IDE) for the creation and debugging of JavaScript code for Adobe Bridge, to facilitate workflow-enhancing automation of tasks between elements of Creative Suite.

I came across some old videos I shot on a Nokia N95 and pulled these into Adobe Premiere. However the individual video clips were each listed with a different framerate, hovering vaguely around 29 fps (27.08 up to 29.45). Questions:

What does that even mean?

From web-search, it sounds like it’s an average, and N95 framerates within a given recording can vary wildly

e.g. between 6 and 38 fps.

How do various apps etc. handle such material?

YouTube:

In 2009 at least, it sounds like YouTube went for the minimum fps in any such clip.

Adobe Premiere

Seems to go for the average

I dragged an N95 clip onto the “New Sequence” button and the resulting sequence had the clip’s average framerate.

Presumably just duplicates/drops frames as required to maintain the Sequence’s framerate.

GSpot (video analyzer):

For a clip reported by Adobe Premiere to be 28.81 fps, GSpot reported it to be 29.412 fps.
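The “average” interpretation is easy to pin down: given each frame’s presentation timestamp, the average framerate is simply the number of frame intervals divided by the elapsed time. A sketch of that calculation (the timestamps are invented for illustration):

```python
def average_fps(timestamps):
    """Average frames/sec of a variable-framerate clip, given each
    frame's presentation time in seconds."""
    if len(timestamps) < 2:
        raise ValueError("need at least two frames")
    return (len(timestamps) - 1) / (timestamps[-1] - timestamps[0])

# Hypothetical N95-style clip whose frame intervals wander between
# roughly 1/30 s and 1/6 s:
times = [0.0, 0.033, 0.066, 0.233, 0.266, 0.300]
print(round(average_fps(times), 2))  # → 16.67
```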

I’ve begun by re-creating from-scratch a project which I already have in Sony Vegas.

Media Browser:

A media file > Double-click

Goes to Source Monitor

Source Monitor:

Adjust In/Out points e.g. as per [View Details] window in Sony Vegas

Drag from Source Monitor to Timeline

Use timecode from elements in Sony Vegas project as a starting-point. Drag Dissolve transitions (Effects) from bottom-left hand pane in between clips. Trim transition to preference, either by drag or numbers. Pretty easy.

Timeline:

[Clip > RtClk > Scale to Frame Size]

Then drag scale/pan/crop rectangle to required size/zoom level.

[File > Export > Media] = Ctrl-M

This created [C:\Users\David\Documents\Adobe\Premiere Pro\5.5\Sequence 01.avi]

Renamed the Sequence as [Intro]

Now it saves to [Intro.avi] or the following time to [Intro_1.avi]

Pretty intuitive really. Next I’d like to find the levels and color correction features, then (following the inevitable grain created by that process) some means of running the Neat Video temporal denoiser.

Opened an Adobe Premiere project, set its Sequence settings to XDCAM EX HQ 1080i50

Wondered immediately what that implied. Like OK, the source is XDCAM-EX, which is Mpeg2 encoding inside an MP4 container, but why does the Sequence care how the source is stored? Surely it only needs to know things like the format is 1080i50; then it can store any intermediate files in DNxHD or Cineform or whatever Adobe prefers. I am very confused by this kind of thing, just as I was in FCP. Maybe it’s obvious or maybe “I think too much”.

Adobe has a thing called Import and it can (I discovered) accept MP4 files from XDCAM-EX’s BPAV folder-structures (deep down within the CLIPR subfolder). But I know that is a stupid way to go. The MP4 files are but the “essence” that is “stitched together” (mixed metaphors or what?) by the likes of SMIL and XML files. It’s only at the latter level that smooth continuum happens.

Enter Adobe Premiere’s Media Browser. I “knew” there had to be something like that. I discovered it via http://wwwimages.adobe.com/www.adobe.com/content/dam/Adobe/en/products/premiere/pdfs/cs5-premiere-pro-sonyxdcam-wfg.pdf which itself I discovered via Bing search on [sony xdcam-ex adobe premiere cs5.5 workflow]. OK, to get XDCAM-EX footage into an Adobe Premiere project you do [Window > Media Browser] or else Shift-8, then don’t expect some window popping up or anything, just inspect the [Media Browser] tab at the lower-left of the GUI screen. Drill down to the required recording and double-click. The media appears in a Source Preview window (I wonder but don’t mightily care what Adobe calls it).

OK I do care a bit really, and according to an Adobe video tutorial, it’s called a Source Monitor.

Initially it was too zoomed-in, presumably displaying at 1:1 (pixel). “Zoom to Fit” was but a right-click away…

You can drag from Source Monitor to the Timeline or to other places.

I tried that with some EX3 footage where I pan across the front of the famous Wembley Stadium, UK. In Sony Vegas (my erstwhile “comfortable old shoe”) it snatches and drags. In Adobe Premiere, as in Sony Clip Browser, it pans smoothly. Guess where I’m heading…

Personal Virtual Machine (PVM) (in use) for about seven years with retail boxed version of Windows XP.

VM has been moved from virtualization platform to virtualization platform over the years … the most recent incarnation … inside Hyper-V.

…nothing beats Windows Server 2008 R2. It comes with a top-notch virtualisation platform (Hyper-V), and added RemoteFX support with Service Pack 1. You can still use the desktop operating system for all your HTPC needs, and a single Server 2008 R2 Standard license allows you to run both a host copy and a single virtual instance of Server 2008 R2.

In my case, the host instance does little more than play movies on the projector via VLC. The virtual instance of Server runs my Plex media server, and aggregates my many storage devices into a single share using DFS.

Want a mobile “suitcase” editing system, something more (and more expandable) than a laptop but not too expensive. Primarily to be used for Adobe CS5.5 for media enhancement / editing / compositing etc.

Nearest I found was NextDimension’s range around $7000 I think (but just guesswork – could be way off – would need to get a quote). That would (if true) be around £4500 at current rates. Plus import… NextDimension call such machines “flextops” (Maybe they coined the term? Google searches on it mostly come up with them.)

Apart from the (mil/broadcast-lite but me-heavy) price, it might possibly be undesirably heavy to lug around much. If so (just guessing, not assuming), it would make more sense to go for a modular quick-setup system. So, starting to “think different” in this direction:

Standard tower, capable of taking new CUDA etc. graphics cards as they emerge, but no need for more than say a couple of disks; maybe with an SSD one could even get away with just a single disk? (For system and media – inadvisable for traditional disks of course, but what about for SSDs? I have much to learn about SSDs though).

“Laptop-Lite” to talk to it. With robust shuttered-stereoscopic HD monitor.

Gigabit network to NAS fast storage (SSD and/or RAID ?).

Maybe in that case it would be far more logical/affordable to use an existing laptop as a client working together with a luggable tower server, sufficiently light and robust for frequent dis/re-connection and travel. And remote access of course (no heavy data to be exchanged, assume that’s already sync’d). And some means to easily swap/sync applications and projects (data) between laptop and tower, giving the option to use just the (old) laptop on its own if needed. All such options are handy for the travelling dude (working on train, social visits etc.) who also occasionally has to do heavy processing. Then would just need a protective suitcase for the tower, plus another one for a decent monitor for grading etc.

I certainly won’t be spending anything just yet, but it’s good to have at least some kind of “radar”.

i was self taught for aftereffects, as many were before non-linear editing became so affordable that any school could afford to start up a digital media program (even jr. high/middle schools).

you might look into an aftereffects book by chris & trish meyers called ‘creating motion graphics’ (i actually didn’t have that book when i was learning, but i’ve since heard that it’s one of the best for learning ae).

total training has a good series of aftereffects tutorials

lynda.com is good resource too.

and, of course, there are lots of ae tutorials here at the (creative) cow.

aharon rabinowitz has many geared towards the fundamentals of ae. look into some of his workflow tuts and other earlier ones where he covers some basic essentials.

Web-research about graphic tablets – having seen and heard of their use by many editors, e.g on Avid and Adobe. Bear in mind however that tablet computers like iPad might become (or already have become?) game-changers…

Q: I’ve noticed that most editors I’ve worked with tend to use graphics tablets instead of mice for their input device. I would imagine there are many on this forum who do as well. I’m just curious why this is – are they just more comfortable to use, or more accurate, or what? I’m considering getting one if they’re worth it.

Responses:

It’s just a matter of which tool you’re more comfortable with. If you’re considering installing a tablet, do some searches on this Forum for Wacom, as there have been conflicts using them, and specific driver versions needed to solve the problem.

This link is to a great overview/chooser for a set of variants of the Intuos 4 tablet.

I like the Wireless one. Seems the most useful when not at a desk
(e.g. in bed or on a train).

Available from Amazon UK for under £300

One user recommends putting acetate on its surface before use. This reduces scratches and also reduces nib wearout. Several users report (unexpectedly quick) nib wearout as an issue.

Some users report issues with the wireless (Bluetooth). One responded with advice: make sure, on at least the first 4 charges, that you fully charge the battery, and use it till it runs out of battery (not just the red light); rinse and repeat this process, and you’ll be fine with the wireless.

Some users were concerned that the tablet doesn’t come with a bluetooth receiver (e.g. USB stick?). Not a concern for my MacBook which has it built-in.

The Wacom Tablet changed the way I interact with the Avid application. For me, it’s much faster and more intuitive for my hand to simply move right to the spot on the screen I need and click. No more dragging a mouse along. I feel like I’m moving faster and the carpal tunnel I was developing has gone away.

If you want to actively prevent Premiere Pro from using one or more VST plug-ins, create a text file called Blacklist.txt listing the filename of each plug-in one per line. Put the text file in the same folder as the plug-in files, one blacklist file per folder. The blacklist file is read only when Premiere Pro starts up.

You must restart your computer for the Blacklist.txt to work.
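For example, to block two plug-ins (the filenames here are made up), the Blacklist.txt placed in their folder would contain just the plug-in filenames, one per line:

```
NoisyReverb.dll
CrashingCompressor.dll
```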

….there is a limit to the number of VST effects that can appear in the list in the mixer panel, however all supported VST effects should appear in the list in the effects panel.

If some VST effects are not available in Premiere Pro when you expect them to be, search your hard drive for a file called Plugin Loading.log after configuring your search to find hidden files. The log may tell you why a plug-in is not being loaded.
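Rather than fighting the Windows search UI’s hidden-file settings, the search for that log can be scripted. A rough sketch, with the function name my own invention, is:

```python
# Sketch: walk a folder tree looking for "Plugin Loading.log", since the
# Windows search UI may miss it unless configured to show hidden files.
# Pass in whatever root folder (or drive) you want to scan.
import os

def find_plugin_logs(root, name="Plugin Loading.log"):
    """Return the full path of every file called `name` under `root`."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if name in filenames:
            hits.append(os.path.join(dirpath, name))
    return hits
```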

Premiere Pro supports the Steinberg VST (Virtual Studio Technology) audio plug-in format so that you can add VST audio effects from third-party vendors. Premiere Pro includes VST plug-in effects that are available in both the Audio Mixer and the Effect Controls panel. Track-based VST plug-ins may provide additional controls. Apply VST effects the same way you apply other audio effects to tracks or clips.

In the Effects And Sends panels of the Audio Mixer, VST effects appear in the Effect Selection menus. In the Effects panel, they appear in the Audio Effects bin so you can apply them to individual clips. In most cases, VST effects appear in the Audio Effects bin and track type that corresponds to the number of channels the effect supports. For example, stereo VST effects appear in the Audio Mixer track effect menus for stereo tracks only, and in the Stereo bin in the Audio Effects bin in the Effects panel. After you apply any VST effect, you can open a window with all of its controls. You can leave multiple VST editor windows open as long as you want, such as when automating effects, but Premiere Pro closes all VST editor windows when you close the project.

If you previously installed a VST-compatible application other than Premiere Pro, Premiere Pro finds VST effects in the VST folder that already exists. Inside the Plug-ins folder of the Premiere Pro application folder, there is also a VSTPlugins folder with plug-ins that are used only by Premiere Pro.

Note: When you use a VST effect not provided by Adobe, the specific control layout and results of the plug-in are the responsibility of the plug-in manufacturer. Adobe Premiere Pro only displays the controls and processes the results.

I use a set of VST plugins by Voxengo with 32-bit CS4. I recently upgraded to 64-bit CS5. So, I went and snagged the 64-bit versions of these Voxengo plugins. I put them in [C:\Program Files\Adobe\Adobe Premiere Pro CS5\Plug-ins\en_US\VSTPlugins].

Here’s the info in the Plugin Loading.log file:

Loading C:\Program Files\Adobe\Adobe Premiere Pro CS5\Plug-ins\en_US\VSTPlugins\Elephant.dll
Loading from the registry…
The plugin was successfully loaded from the registry.

Yet, the plugins do not show up in the mixer or in the effects list.

I do not get any error messages. Also, I’m using Vista. Any ideas?

If you’d like to try the plugins yourself, there are free trials here:

I expected the video and audio to “want” to go in the existing video and audio tracks. Instead, while I could drag the video component anywhere (including the existing video tracks), the audio component only went to new tracks (that it automatically created).

Four audio tracks were created, not the two that I was expecting (given it was only a stereo recording).

No audio waveforms displayed (I expect there is a setting somewhere)

Found an [Info] tab in the pane at the lower-left of the app.

It showed that file [929_3798_01.mxf] contained 3 video channels, of which only video channel 1 was populated, and 7 audio channels, of which only the last four (4–7) were populated.

Found [Preferences] under [Edit > Preferences]

Discovered cache location was at [C:\Users\David\AppData\Roaming\Adobe\Common]

There was also an option <<Save Media Cache files next to originals when possible>>.

I fairly frequently use this in another NLE, but with feathered edges. The settings for this transition in Premiere do not appear to include feathering. Nothing obvious came up in Google or Help searches.

One suggestion, from July 2009, was to instead use Gradient Wipe, which has a Softness control, together with a suitable image for the required shape (e.g. circle).

Looks like the same stuff as seen earlier. Lots of promotions, whizz-bang and specialist stuff, when I kind of expected novice introduction stuff, especially since I indicated I was at that level when applying for the download…

That’s what I was afraid of. It’s making the same demands as DaVinci Resolve. I cannot satisfy those demands; all updates are under Apple’s control, and it is normal for laptops (as my MacBook Pro is) to have customized versions of graphics card drivers…

I closed the popup.

Premiere prompted for [New Project] etc.

I clicked on [Help]

The Help panel, once populated (after a minute or so), included a [Getting Started and Tutorials] link.

Having got the suite installed and ready to use, I ran Adobe Premiere. It created an account for me on [CS Live Services]. It complained that my video card drivers were insufficient for CUDA accelerated rendering. Sadly I cannot update these – I must only accept those that Apple provide (via Boot Camp updates). So no CUDA acceleration then I guess…

Nevertheless, how well does it work in other respects, and how usable is it overall?

To add to the confusion, I think the difference is in the default dictionaries and the spellings in the interface, i.e. it presumes you want British-style spelling (as you have), and has nothing to do with licensing.

I have no clue how to change that selection other than to re-install. If you can live with the funny spelling in the menus, you can set the default dictionary to US English in InDesign, and probably other apps.

With nothing open, click the text tool and set the control panel to character mode options. Change the language in the dictionary dropdown near the right end. This is also available from the character panel (which is where you’d change it in Photoshop; it’s in the prefs under Hyphenation in Illustrator, and you may be able to reset the interface language in the Photoshop prefs, too).

I chose [English (International)]

Next it asked for:

Serial Number or else check the Trial button.

I did the latter

Also it asked for Language

I assume this to be the operating language for the app

Again I chose [English (International)]

Next, [Install Options]

Apps:

Flash Pro CS5.5

AIR for Apple iOS Support

After Effects CS5.5

Audition CS5.5

Encore CS5.1

Flash Catalyst CS5.5

Illustrator CS5.1

OnLocation CS5.1

Photoshop CS5.1 (64-bit)

Photoshop CS5.1

Premiere Pro CS5.5

Location:

[C:\Program Files\Adobe]

Next it began installing, calculating as it went the total time for the install. After a few minutes it returned its time estimate of around half an hour. This (an initially conservative estimate?) rapidly dropped to around 20 minutes.

Next it asked for web browsers to be closed

Finally it displayed what looks like a Launcher window for the Production Premium suite, with buttons labelled akin to Periodic Table elements, except that one of them, [Ps] (Photoshop), appeared twice, identically labelled.

On mouse-hover it emerged (from tooltip text) that the second [Ps] was 64-bit, the first then presumably being 32-bit, though its tooltip text did not confirm this.

Possibly unrelated, Kaspersky Anti-Virus reported:

<<Detected a potentially dangerous modification of the application BMDSTREAMINGSERVER.EXE without a digital signature>>

That application was installed yesterday, as part of DaVinci Resolve Lite for Windows.

(I got distracted by domestic events)

The Kaspersky prompt appeared to time out; I don’t know what it assumed/did…


Given my poor experiences on my [MacBook Pro (2009) > Boot Camp > Windows 7] with Boris Blue and with DaVinci Resolve, it is by no means certain that [Adobe CS5.5 Production Premium] will fare any better. But it’s worth a try.

So I downloaded a trial. As part of that I had to first allow [Adobe Download Assistant] to be installed and executed. It prompted for my level of expertise. I answered: <<Novice: I could use all the help I can get>>. In response it gave the following link:

I bought a discount copy of Adobe CS5.5 Production Premium because (after much discussion with others) its feature-set seems to match my typical and foreseeable production requirements better than those of other NLEs, including my current mainstay, Sony Vegas 9 (which I am still trying to wean myself off; when any proper job comes along, I tend to fall back on the familiar and trusted, for low risk, including avoidance of learning-delay).

Being (so far) a one-man-band who is a traditional Windows user, I purchased the Windows version. But, confirming what I had heard, it does seem that most media people I have met use Macs. So should I have purchased the Mac version? Are the versions exactly the same, or do they have different functionality? Is there an option for the license to cover installing the same product on both Windows and Mac OS, provided only one of them is run at a time (e.g. when on the same physical machine)? Ideally at zero or negligible cost, of course. For example, Avid Media Composer does have this flexibility. While the uncertainty remains, I will not open the box (in case it turns out that I need to exchange it).

Here is what I have learnt so far (mainly from web-searching, unverified information):

Differences between the OS-Specific variants:

It appears that for CS5.5 Production Premium (at least), the Windows variant has slightly greater functionality.

However it remains to be seen what will be the case for CS6, when it becomes available.

Some options are:

Volume licensing.

Intended not only for businesses but also for individuals. If the “volume” is two licenses, one can be for each OS.

Crossgrade.

But as far as I can tell it’s intended only for one-off (or infrequent) crossgrades, requiring “destruction of the software” on the old machine each time. Shame it isn’t simply happy with repeatable deactivation/reactivation on each machine/OS.

Avid After Effects EMP is an Avid-supplied plugin for Adobe After Effects, allowing that application to use a DNA-family video output box such as Mojo or Nitris to provide External Monitor Preview (EMP) on a monitor. Helpful in order to make use of that Avid box with After Effects, for both convenience and consistency. Unfortunately it does not work with the more recent DX family, such as the Mojo DX box.