Editing Gameplay Videos without Re-encoding using FFmpeg

Tuesday, December 26th 2017

I recently worked on a Lego Island 2 let’s play with my brother. It was recorded using Dxtory with the x264vfw codec, meaning that the saved recordings are H.264 streams in an AVI container [1]. Our recordings are 1920x1080 at 60fps. Audio commentary was recorded separately in Audacity.

When it came time to edit the videos, I fired up Adobe Premiere [2], but quickly ran into a problem. Rendering the 1080p/60fps videos was taking upwards of an hour. Furthermore, the output videos had significantly lower quality due to the re-encoding. I knew that after YouTube transcoded the videos into its internal format, the final output would look even worse. To fix this, I did some experimenting with FFmpeg, a command-line video processor, and found a workflow for editing our let’s play without re-encoding.

These tips require the command-line, but if you can get past that barrier, you’ll be able to edit your videos without long rendering times and quality downgrades. Also, you can do this for free without purchasing any software!

Trimming Footage

You can use FFmpeg to trim footage off the beginning and end of your video. Below is an example of trimming a 20-second clip starting at the 100-second mark [3].

ffmpeg -ss 100 -i input.avi -t 20 -c copy output.avi

If you run this, you might find that your video isn’t exactly 20 seconds, but a little bit longer. This is because, when stream-copying, FFmpeg can only cut at keyframes, so the clip actually starts at the keyframe nearest the provided timestamp. Unfortunately, there’s no way to get around this without re-encoding.
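If you do need a frame-accurate cut, the trimmed segment has to be re-encoded. A minimal sketch of that trade-off (the x264 quality settings here are illustrative, not from the original post):

```shell
# Frame-accurate 20-second cut: re-encode the video so the cut is exact.
# -crf 18 / -preset slow are example x264 settings; tune them to taste.
# Audio is stream-copied since it doesn't affect cut accuracy here.
ffmpeg -ss 100 -i input.avi -t 20 -c:v libx264 -crf 18 -preset slow -c:a copy exact.avi
```

This defeats the point of the stream-copy workflow, so it's only worth doing for clips where a keyframe-aligned cut is noticeably off.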

Concatenating Footage

Now let’s say you have two separate gameplay recordings that you want to concatenate. FFmpeg lets you do this without having to re-encode. The interface for this is a bit strange: you have to create a text file containing file '{VIDEO_FILE_NAME}' on each line. Here’s a snippet for cutting two videos out of a source file and concatenating them:
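A sketch of that workflow using FFmpeg's concat demuxer (the timestamps and filenames are examples, not the original edit):

```shell
# Describe the clips to join: the concat demuxer reads a plain-text
# list with one file '...' line per clip.
printf "file 'part1.avi'\nfile 'part2.avi'\n" > list.txt

# Cut the two segments out of the raw recording (stream copy, keyframe-aligned).
ffmpeg -ss 100 -i input.avi -t 20 -c copy part1.avi
ffmpeg -ss 300 -i input.avi -t 45 -c copy part2.avi

# -f concat selects the concat demuxer; -c copy joins without re-encoding.
ffmpeg -f concat -i list.txt -c copy output.avi
```

Since both clips come from the same recording, they share a codec and resolution, which is exactly what the concat demuxer requires for stream copy to work.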

Adding in Audio Commentary

This is the snippet that I use to mix audio commentary in with the game audio, assuming that the commentary is already synced with the video. It’s a bit complicated, as it uses FFmpeg’s filtergraph functionality. Note that the audio is encoded with the MP3 codec. I keep my audio in FLAC form up until this point, so this is the first place the audio gets lossily encoded.
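The original snippet isn't reproduced here, but a filtergraph along these lines does the job; the amix filter, filenames, and MP3 bitrate below are my assumptions, not the author's exact command:

```shell
# Mix the game audio (stream 0:a) with the commentary track (stream 1:a)
# using the amix filter, keeping the video untouched via stream copy.
# Filenames and the 320k bitrate are illustrative.
ffmpeg -i gameplay.avi -i commentary.flac \
  -filter_complex "[0:a][1:a]amix=inputs=2:duration=first[aout]" \
  -map 0:v -map "[aout]" \
  -c:v copy -c:a libmp3lame -b:a 320k output.avi
```

duration=first ends the mix when the game audio ends, so a commentary track that runs long won't pad the video with silence.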

To find sync points, I move the cursor up and down in a menu while saying the words “up” and “down.” This gives me a matching point in both the commentary and the gameplay recording to line up [4].

Putting it All Together

You shouldn’t use this process for videos with a lot of edits, as it’s much easier to use an NLE such as Premiere or Vegas [5]. However, if you’re creative about how you split up and sync your videos, I think this approach is worth it. The output videos are the same quality as your raw recordings, and the encoding process is often faster than real-time. Check out our let’s play below!

More Complicated Editing

Here’s an example of a more complicated edit involving multiple overlays and zoom levels that show up at different times. The entire edit, including trimming, audio syncing, and audio merging, is done in one FFmpeg command. Since overlays are drawn onto the video frames, the video must be re-encoded.
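The full command from the post isn't shown here, but a simplified sketch of combining a timed overlay, a trim, and an audio mix in one invocation might look like this (all filenames, timestamps, coordinates, and encoder settings are illustrative):

```shell
# One command: trim, overlay an image from t=10s to t=30s, and mix in
# commentary. The overlay forces a video re-encode, here with example
# x264 settings.
ffmpeg -ss 100 -i gameplay.avi -i commentary.flac -i overlay.png \
  -filter_complex "[0:v][2:v]overlay=20:20:enable='between(t,10,30)'[vout];[0:a][1:a]amix=inputs=2:duration=first[aout]" \
  -map "[vout]" -map "[aout]" -t 60 \
  -c:v libx264 -crf 18 -preset slow -c:a libmp3lame -b:a 320k edited.avi
```

The enable='between(t,10,30)' expression is what makes the overlay appear and disappear at specific times; additional overlays would each get their own overlay filter in the chain.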

[4] Once I forgot to do this and had to sync up by trying to match button press sounds with in-game actions!

[5] The reality is that, right now, there aren’t any good open-source NLEs, so you’ll have to open your wallet for that kind of editing. The most promising one that I’ve looked at is OpenShot, and it just got an update that’s supposed to improve stability.