KWLUG - The Kitchener Waterloo Linux User Group
Video editing using ffmpeg and ImageMagick
http://kwlug.org/node/854
<p>Sometimes you want to edit multiple videos in a similar way, for example recordings from a conference or a day-by-day video log of your family vacation. For that, I've found that the best approach is to create a script that you can run on each of those videos to get consistent output.</p>
<p>I have created some scripts using ffmpeg and ImageMagick for some of my videos, saving me hours in the process.</p>
<p>The main advantages I've found are:</p>
<ul>
<li>Consistency: I don't have to remember what I did to a video; it is all in the script.</li>
<li>Time savings: Execute the script with the right parameters and go for dinner, let the computer work.</li>
<li>Control: I can execute just a portion of the script, play around with the options and be limited only by my imagination.</li>
</ul>
<p>In this post I will show some of the commands I've found most useful. These are very basic commands that can be improved by people more creative than me.</p>
<p>For this example I will use a script I created to edit my Toastmasters videos. You can find the full script attached to this post, along with an explanation of each of its components.</p>
<p>To test the script, just extract the attached sample file and execute the following command:</p>
<p><code>./VideoEdit.sh sample.prm</code></p>
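<p>Inside the script, a parameter file like this can simply be sourced. Here is a minimal sketch of how such a loader might look; the validation loop is my own addition for illustration, not part of the attached script:</p>

```shell
# Sketch: load and validate a VAR="value" parameter file like sample.prm.
load_params () {
    PARAMFILE="$1"
    [ -r "${PARAMFILE}" ] || { echo "Cannot read ${PARAMFILE}" >&2; return 1; }
    # Source the file so its assignments become shell variables
    . "${PARAMFILE}"
    # Fail early if any required parameter is missing
    for var in INPUTFILE LOGO LOGOLENGHT SPEAKER TITLE DATE \
               SCENESTART SCENEDURATION OUTPUTFILE; do
        eval val=\$${var}
        [ -n "${val}" ] || { echo "Missing parameter: ${var}" >&2; return 1; }
    done
}
```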
<p>This script does the following:</p>
<ol>
<li>Creates a few seconds of introduction video with a still image and information about the video (Title, speaker, date, etc)</li>
<li>Creates a few seconds of closing video with a still image and credits and disclaimers</li>
<li>Extracts a portion of the input video for editing as individual PNG frames</li>
<li>Fades out the introduction image into the first 3 seconds of video</li>
<li>Overlays some slides at certain points in the video</li>
<li>Reassembles the video from the modified frames, adds the audio and attaches the introduction and closing</li>
</ol>
<p>It receives as parameters a file that contains the actual editing parameters: </p>
<ul>
<li>INPUTFILE : Path to the file we want to edit</li>
<li>LOGO : Path to a file that will serve as the cover for the introduction</li>
<li>LOGOLENGHT : Length in seconds that the introduction should show</li>
<li>SPEAKER : Name of the speaker</li>
<li>TITLE : Title of the speech</li>
<li>DATE : Date of the speech</li>
<li>SCENESTART : Time where the section of video I want to extract starts (can be as ss or as hh:mm:ss.fff)</li>
<li>SCENEDURATION : Duration of the section of video I want to extract (can be as ss or as hh:mm:ss.fff)</li>
<li>OUTPUTFILE : Short name for the project. It will be used to name temporary files and the output file</li>
</ul>
<p>Here is an example of my parameters file:<br />
<code><br />
INPUTFILE="/media/cdrom0/VIDEO_TS/VTS_01_1.VOB"<br />
LOGO="../TM-titleLogo.png"<br />
LOGOLENGHT="5"<br />
SPEAKER="Raul Suarez"<br />
TITLE="The dreaded empty page"<br />
DATE="July 28, 2011"<br />
SCENESTART="00:06:58"<br />
SCENEDURATION="00:07:20"<br />
OUTPUTFILE="TM2"<br />
</code><br />
<strong>Extract Video:</strong><br />
Extracts a portion of video from the input file, converting the video to frame files (one PNG file per frame) and splitting the audio to an mp3 file<br />
<code><br />
mkdir -p "${OUTPUTFILE}-frames"<br />
ffmpeg -loglevel quiet -threads 4 \<br />
-i ${INPUTFILE} -ss ${SCENESTART} -t ${SCENEDURATION} \<br />
-f image2 -y "${OUTPUTFILE}-frames"/frame%d.png \<br />
-acodec copy -sameq -y "${OUTPUTFILE}.mp3"<br />
</code><br />
<cite><br />
-loglevel : defines how verbose I want the console output<br />
-threads : Allows using multiple cores on a multicore processor. Can speed up some tasks<br />
-i : The name of the input file<br />
-ss : Indicates where the segment of video we want starts<br />
-t : Indicates the duration of the segment of video we want<br />
-f : Format of the output. In this case the format is image2, as we are extracting one frame per file<br />
-y : Overwrites files without asking<br />
"${OUTPUTFILE}-frames"/frame%d.png : This part of the command tells ffmpeg to extract each frame to a file named frame1.png, frame2.png, etc<br />
-acodec copy : extract the audio without re-encoding<br />
-sameq : do not lose quality (may not be necessary here but I left it just in case)<br />
"${OUTPUTFILE}.mp3" : is the name of the audio file we are saving<br />
</cite><br />
I could have done this with three commands, which is more readable but takes more time:<br />
<code><br />
# Extract the video<br />
ffmpeg -i ${INPUTFILE} -ss ${SCENESTART} -t ${SCENEDURATION} \<br />
-acodec copy -vcodec copy -sameq -y "${OUTPUTFILE}-1.mpg"<br />
</code><code><br />
# Split the audio to preserve it<br />
ffmpeg -i "${OUTPUTFILE}-1.mpg" -acodec copy -sameq -y "${OUTPUTFILE}.mp3"<br />
</code><code><br />
# Convert the video to frame files one PNG file per frame<br />
mkdir -p frames<br />
ffmpeg -i "${OUTPUTFILE}-1.mpg" -f image2 -y frames/frame%d.png<br />
</code></p>
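<p>Since -ss and -t accept either plain seconds or hh:mm:ss.fff, a small helper to normalize timestamps to seconds comes in handy for the frame arithmetic used later. This helper is a sketch of my own, not part of the attached script:</p>

```shell
# Sketch: convert a timestamp (ss or hh:mm:ss.fff, the formats -ss and -t
# accept) into plain seconds, useful for computing frame numbers later.
to_seconds () {
    case "$1" in
        *:*)
            # Fold each colon-separated field into a running total of seconds
            echo "$1" | awk -F: '{ s = 0; for (i = 1; i <= NF; i++) s = s * 60 + $i; print s }'
            ;;
        *)
            echo "$1"
            ;;
    esac
}
```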
<p><strong>Create Introduction</strong><br />
Uses the ImageMagick convert command to create the introduction image by merging the logo file with the titles for the video<br />
<code><br />
convert ${LOGO} -gravity Center -font DejaVu-Sans-Book \<br />
-pointsize 20 -fill gray -draw "text 1,21 'Talk of the Town Toastmasters'" \<br />
-fill white -draw "text 0,20 'Talk of the Town Toastmasters'" \<br />
-pointsize 50 -fill gray -draw "text 2,72 '${SPEAKER}'" \<br />
-fill white -draw "text 0,70 '${SPEAKER}'" \<br />
-pointsize 30 -fill gray -draw "text 1,131 '${TITLE}'" \<br />
-fill white -draw "text 0,130 '${TITLE}'" \<br />
-pointsize 20 -fill gray -draw "text 1,171 '${DATE}'" \<br />
-fill white -draw "text 0,170 '${DATE}'" \<br />
"${OUTPUTFILE}-intro.png"<br />
</code><br />
ImageMagick can add multiple lines of text in a single instruction. It has different ways of doing this. For this example I used the "draw" command. To see other methods you can go to <a href="http://www.imagemagick.org/Usage/text/">http://www.imagemagick.org/Usage/text/</a></p>
<p>This command gets the ${LOGO} file and overlays the text on top of it. </p>
<p>As you can see in this example, I have two "draw" commands for each line; I do this to create a "gray shadow" effect on the text.</p>
<p>The resulting image will be saved as "${OUTPUTFILE}-intro.png"</p>
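<p>The paired gray/white "draw" arguments can also be generated by a small helper, which keeps the convert command shorter when there are many lines. This is a sketch of my own, assuming a fixed one-pixel shadow offset (the command above varies the offset slightly per line):</p>

```shell
# Sketch: emit the paired -draw arguments for one line of shadowed text:
# gray text offset by one pixel, then white text drawn on top of it.
shadow_text () {
    size="$1"; y="$2"; text="$3"
    printf -- "-pointsize %s -fill gray -draw \"text 1,%s '%s'\" -fill white -draw \"text 0,%s '%s'\"" \
        "${size}" "$((y + 1))" "${text}" "${y}" "${text}"
}
```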
<p><strong>Create video from an image adding a silent sound track</strong><br />
<code><br />
ffmpeg -loglevel quiet -threads 4 \<br />
-loop_input -i "${OUTPUTFILE}-intro.png" -qscale 1 -r 29.97 -t ${LOGOLENGHT} \<br />
-ar 48000 -t ${LOGOLENGHT} -f s16le -acodec pcm_s16le -i /dev/zero -ab 64K -f mp2 -acodec mp2 \<br />
-map 0.0 -map 1.0 -sameq -f mpegts -y "${OUTPUTFILE}-intro.mpg"<br />
</code><br />
The first section of the command defines the video portion of the file<br />
<cite><br />
-loop_input will loop over the following file to create the output<br />
-r : The output video will be 29.97 frames per second. This is the framerate for NTSC<br />
</cite><br />
The second portion defines the audio portion of the file: The silence. It is important to add a sound track or we will have problems concatenating at the end.<br />
<cite><br />
-ar : audio sampling frequency<br />
-f : format. Note how when the format and codec appear before -i they refer to the input format; when they appear after, they refer to the output format.<br />
-acodec : Audio codec<br />
-i /dev/zero : This is where we take the "silence". Of course, if you want a real audio file you can use it here.<br />
-ab : Audio bitrate<br />
</cite><br />
Finally we assemble the input video and audio into the output file<br />
<cite><br />
-map : These map parameters say: take the first channel of the first input (video) and the first channel of the second input (audio)<br />
(I've explained the rest of the parameters in previous commands)<br />
</cite><br />
This is equivalent to the following three commands<br />
<code><br />
# Create the proper length of silence<br />
ffmpeg -ar 48000 -t ${LOGOLENGHT} -f s16le -acodec pcm_s16le -i /dev/zero -ab 64K -f mp2 -acodec mp2 -y silence.mp2<br />
</code><code><br />
# Create still logo video<br />
ffmpeg -loop_input -i "${OUTPUTFILE}-intro.png" -qscale 1 -r 29.97 -t ${LOGOLENGHT} -y -f mpegts "${OUTPUTFILE}-logo1.mpg"<br />
</code><code><br />
# Assemble still logo and silence<br />
ffmpeg -loglevel error -i "${OUTPUTFILE}-logo1.mpg" -i "silence.mp2" \<br />
-vcodec copy -acodec copy -map 0.0 -map 1.0 -sameq -threads 4 \<br />
-y -f mpegts "${OUTPUTFILE}-intro.mpg"<br />
</code></p>
<p>The exit video is created in a similar way</p>
<p><strong>FadeOut introduction</strong><br />
You may be wondering why we extracted all the frames to PNG files. The main reason is that it allows us to manipulate them however we want using ImageMagick. We can rotate them, mix, change colors, add overlays. Your imagination is the limit.</p>
<p>In this case I am dissolving the intro image progressively into the corresponding input video frames.<br />
<code><br />
mkdir -p "${OUTPUTFILE}-frames2"<br />
for i in {1..90}; do<br />
percent=$(echo "scale=3; ${i}*100/90" | bc )<br />
convert -compose dissolve -gravity center -define compose:args=${percent} \<br />
-composite "${OUTPUTFILE}-intro.png" "${OUTPUTFILE}-frames"/frame${i}.png "${OUTPUTFILE}-frames2"/frame${i}.png<br />
done<br />
</code><br />
<cite><br />
for i in {1..90} : The loop will operate on the first 90 frames (3 seconds at 29.97 frames per second ~ 90)<br />
percent=$(echo "scale=3; ${i}*100/90" | bc ) : calculates the percentage we want to use to dissolve, from 0 to 100% in 90 steps<br />
The convert command dissolves the "intro" image into each of the frames, applying the corresponding dissolve percentage.<br />
Each of the resulting frames is written to another temporary folder.<br />
</cite></p>
<p><strong>Overlay an image</strong><br />
If you want to overlay an image on a section of video, you need to calculate the starting and ending frames so you can loop through those images doing the overlay:</p>
<p>I created a function to do the overlay, so I can overlay different images at different points in the video:<br />
<code><br />
overlayImage () {<br />
# Overlays an image on top of each frame in a range.<br />
FILE="${1}"<br />
GRAVITY=$2<br />
START_FRAME=$(echo "scale=0; ${3}*29.97/1" | bc )<br />
END_FRAME=$(echo "scale=0; ${4}*29.97/1" | bc )<br />
</code><code><br />
for (( i=${START_FRAME} ; i&lt;=${END_FRAME} ; i++ )) ; do<br />
convert -compose dissolve -gravity ${GRAVITY} -define compose:args=90 \<br />
-composite "${OUTPUTFILE}-frames"/frame${i}.png "${FILE}" "${OUTPUTFILE}-frames2"/frame${i}.png<br />
done<br />
}<br />
</code><br />
The overlayImage function receives as parameters</p>
<ul>
<li>The image we want to overlay</li>
<li>The relative positioning (gravity)</li>
<li>The start time on the section of video in seconds</li>
<li>The end time of the section of video in seconds</li>
</ul>
<p><cite>The loop disolves the overlay image into each of the frames for the section of video we want<br />
</cite><br />
<strong>Reassemble the video</strong><br />
<code><br />
# Copy the modified frames on top of the original frames<br />
cp "${OUTPUTFILE}-frames2"/frame*.png "${OUTPUTFILE}-frames"<br />
</code><code><br />
# Reassembles the frames and the audio into an output video<br />
ffmpeg -loglevel quiet -threads 4 \<br />
-r 29.97 -f image2 -i "${OUTPUTFILE}-frames"/frame%d.png \<br />
-i "${OUTPUTFILE}.mp3" -acodec copy \<br />
-map 0.0 -map 1.0 -sameq -f mpegts -y "${OUTPUTFILE}.mpg"<br />
</code><code><br />
# Concatenates the introduction, video and closing<br />
ffmpeg -loglevel quiet -threads 4 \<br />
-i concat:"${OUTPUTFILE}-intro.mpg"\|"${OUTPUTFILE}.mpg"\|"${OUTPUTFILE}-exit.mpg" \<br />
-r 29.97 -sameq -y "${OUTPUTFILE}.mp4"<br />
</code></p>
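<p>Before reassembling, it can be worth sanity-checking that the frame directory holds roughly duration times 29.97 files. A sketch of a small counting helper, my own addition rather than part of the attached script:</p>

```shell
# Sketch: count the extracted frame files in a directory so the total
# can be compared against duration * 29.97 before reassembly.
count_frames () {
    # wc -l may pad its output with spaces on some systems, so strip them
    ls "${1}"/frame*.png 2>/dev/null | wc -l | tr -d ' '
}
```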
<p>And that's it. As you can see, with small modifications to these commands you can edit your video however you want.</p>
<p>If you search the internet you will find plenty of ffmpeg examples, for instance:<br />
<a href="http://www.catswhocode.com/blog/19-ffmpeg-commands-for-all-needs">http://www.catswhocode.com/blog/19-ffmpeg-commands-for-all-needs</a></p>
<p>The ImageMagick documentation is the best source for more examples:<br />
<a href="http://www.imagemagick.org/Usage/">http://www.imagemagick.org/Usage/</a></p>
<p>While you could do this using a GUI video editor such as Kino, Cinelerra or Final Cut, I found that the flexibility and repeatability of scripting simplified my life.</p>
<p>I hope it simplifies yours too.</p>
<p>If you have any comments you can email me at</p>
<p>rarsa --at-- yahoo.com</p>
<table id="attachments" class="sticky-enabled">
<thead><tr><th>Attachment</th><th>Size</th> </tr></thead>
<tbody>
<tr class="odd"><td><a href="http://kwlug.org/sites/kwlug.org/files/sample.tar_.gz">sample.tar_.gz</a></td><td>4.83 MB</td> </tr>
</tbody>
</table>
<p>Tags: ffmpeg, How-To, ImageMagick, Reference, Tutorial, video<br />
Sun, 13 Nov 2011 23:44:48 +0000 by Raul Suarez</p>