
Cinelerra CV Manual

Non-linear video editor for GNU/Linux. Community Version 2.1, Edition 1.55.EN

Heroine Virtual Ltd
Cinelerra CV Team

Copyright © 2003, 2004, 2005, 2006 Adam Williams - Heroine Virtual Ltd. Copyright © 2003, 2004, 2005, 2006, 2007 Cinelerra CV Team. This manual is free; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This document is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110, USA.

1 Introduction

1.1 About Cinelerra

For years, some people have wanted a way to edit their audio and video in one place as fluidly as writing text. Cinelerra tries to be a single location for all your audio and video editing needs. All the recording, editing, and playback are handled here. It can be used as an audio player. It can be used to record audio or video. It can even be used as a photo retoucher.

There are two types of moviegoers: producers who create new content and revisit it for further refinement, and consumers who want to acquire the content and watch it. Unlike many other programs, Cinelerra is not intended for consumers. Cinelerra has many features for uncompressed content, high resolution processing, and compositing. Producers need these features in order to retouch many generations of footage, which makes Cinelerra very complex. Consumers should consider other tools such as Avidemux (http://www.avidemux.org/), Kino (http://kinodv.org) or Kdenlive (http://kdenlive.org/).

Quote from Miro's Wiki (http://www.mitvwiki.org/Cinelerra): "The key difference between Cinelerra and many of the commercial editors is that Cinelerra hides much less from the user, exposing much of the inner workings to direct interaction. This can be harder to use, but does make it tremendously powerful, and for some operations blistering fast."

1.2 The two versions of Cinelerra

There are two branches of Cinelerra. The official Cinelerra source code is developed "upstream" by Heroine Virtual, Ltd (HV). The two projects have separate websites: one can be found at http://www.heroinewarrior.com and the other at http://cvs.cinelerra.org. HV shares its code base with a community version of Cinelerra (Cinelerra-CV), but does not actively participate with the community of developers responsible for Cinelerra-CV. Cinelerra-CV was founded by developers who wanted to extend the functionality and fix bugs inherent in the HV code base. They decided to develop Cinelerra in a community fashion and not create a separate fork of the original HV code. This documentation is focused on Cinelerra-CV (Community Version).

HV likes to work on its own copy of Cinelerra, releasing code on a periodic basis every 6 months or so. When there is a new release, a CV member (j6t) merges HV's code with Cinelerra CV code, taking the enhancements from HV and reformatting the CV code (white spaces, function naming, directory naming) to be more similar to HV's with slight changes to implementations. In this way, after the merge, the Cinelerra CV code is very similar to the official release. Occasionally, the community adds new enhancements to the HV source, and programmers send patches upstream. HV will give feedback on implementations that the members of the CV submit to it, and members will comment on each other's implementations in order to create a more fully functional and stable product. However, not all of the enhancements that the community creates make it upstream.

Cinelerra CV can be seen as the community's attempt to stabilize HV's release. After HV's Cinelerra is released, there are often bugs or unusable new features; the HV release can not be described as "stable". CV coders apply bug fixes (http://bugs.cinelerra.org), enhancements to the SVN and compliance fixes. As mentioned, the latest Cinelerra CV release is a little unstable as users find bugs; time permitting, the CV programmers will address as many of these bugs as possible. In this way, obtaining the SVN just before a merge will generally be more stable than a post-merge CV version. Be aware that existing project description files, or Edit Decision Lists (discussed below), may not be compatible with the newly merged CV version. Cinelerra CV also has a number of features that the official version does not, for example YUV pipe rendering.

Given the above discussion, with any version of Cinelerra the task of finding bugs is relatively easy. Therefore, clearly and concisely documenting these bugs for the community that fixes them is a task that we ask of all users of the software. The community is very responsive; please help them by creating well-formed bug reports.

1.3 About this manual

This manual edition is 1.55.EN, for Cinelerra CV version 2.1. This manual originates from "Secrets of Cinelerra", an excellent primer written by Adam Williams from Heroine Virtual Ltd. In 2003 Alex Ferrer created a Wiki based on that manual and added many screenshots and topic descriptions. At that time, Cinelerra CV still did not have its own manual, and information regarding the Community Version of Cinelerra was scattered across the Internet (mailing-list, IRC, wiki, websites, etc.). In 2006, Nicolas Maufrais combined the original "Secrets of Cinelerra" with the contents of Alex Ferrer's Wiki into a unified document.

This manual is free; you may redistribute it and/or modify it under the terms of the GNU General Public License, as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version.

The sources of the manual are Texinfo files. They are in the same SVN repository as Cinelerra's source code (hvirtual/doc folder). They can be converted to many formats: PDF, HTML (single page), HTML (one page per chapter, in a folder), plain TXT, TXT in DokuWiki syntax (one file per chapter, in a folder), and Docbook. The doc/README en file contains instructions for building the manual. Note: this manual is intended to be duplex-printed; therefore, it is normal in the PDF manual for some even pages to be left blank.

You can participate in editing this manual by making changes in the Cinelerra-CV wiki:

http://cvs.cinelerra.org/docs/wiki/doku.php

If you would like to translate this manual into your language, see the doc/TRANSLATIONS file and contact the Cinelerra CV Community.

Cinelerra-CV documentation maintainers: English: Nicolas Maufrais (coordinator), Raffaella Traniello (apprentice sorcerer). Manual translators: French: Jean-Luc Coulon. Brazilian Portuguese: Flavio Soares (maintainer), Willie Marcel. Spanish: Alberto Ramallo, Gustavo Iñiguez Goya (chapter 17 - Keyframes). Basque: Iñaki Larrañaga Murgoitio "Dooteo". Other contributors: Alexandre Bourget, Andraz Tori, Ben Jorden, Carlos Davila, Cillian de Roiste, Dana Rogers, David McNab, Graham Evans, Gus Gus, gour, Herman Robak, Hermann Vosseler, Jim Scott, Joe Friedrichsen, Joseph L., Kevin Brosius, Marcin Kostur, Martin Ellison, Mike Edwards, Mikko Huhtala, Nathan Kidd, Norval Watson, Paolo Rampino, Pierre Dumuid, Rafael Diniz, Raffaella Traniello, Scott Frase, Sean Pappalardo, Terje Hanssen, Valentina Messeri, Alex Ferrer. Thanks to the GNU project team, and particularly to Karl Berry, maintainer of GNU Texinfo, for the precious help he gave us during the elaboration of this manual.

1.4 Getting help

Help can be found at:

IRC channel: #cinelerra on Freenode
Mailing list: https://init.linpro.no/mailman/skolelinux.no/listinfo/cinelerra
Cinelerra CV website: http://cvs.cinelerra.org
2 Installation

This is the general contents of all Cinelerra packages:

Foreign language translations - These go into '/usr/share/locale'.
Cinelerra executable - This goes into '/usr/bin'.
Cinelerra plugins - These go into '/usr/lib/cinelerra' in 32 bit systems and '/usr/lib64/cinelerra' in 64 bit systems.
soundtest - Utility for determining sound card buffer size.
mplexlo - Multiplexing of MPEG elementary streams without standards conformance but more efficiently.
mpeg3dump, mpeg3toc - Utilities for indexing and reading MPEG files.
mpeg3peek - Utility for displaying the byte offset of a frame in an MPEG file.
mpeg3cat - Utility for reading an MPEG file from a certain standard and outputting it to stdout.

2.1 Hardware requirements

Cinelerra is demanding on all PC subsystems, as reading, decoding and playing video can be quite taxing. Effects and several tracks of audio will compound these problems. Thus, performance and usability of Cinelerra are directly proportional to the video format (SVCD/DV/HDV/HD/etc.) used and the CPU and I/O bus speeds and video and memory bus architecture of your hardware. Given these constraints, here are some suggestions for running Cinelerra:

CPU speed
At least 500 MHz CPU speed; anything less would be useless. Dual-core and SMP processors greatly improve Cinelerra speed. It stands to reason that a less powerful system will be sufficient for users working with audio only or lower resolution video formats, such as DV video. However, that same system may slow down considerably when playing back a higher resolution format.

Memory
Have at least 256 Megabytes of memory. When working with video, a large amount of free memory available can help speed up operations by avoiding unnecessary disk swaps and keeping material readily accessible. For smaller projects you might get away with 1 Gb. To really use Cinelerra for higher resolution video formats and larger projects, greater than 1 Gb memory space is suggested.

Storage
Video editing can be quite I/O intensive. Storage requirements are based on your particular video editing needs. DV, for example, uses about 3.5 Megs per second, or 12 Gigs per hour. If you expect to produce long pieces in uncompressed or larger resolution formats, you should have large (>200 Gb) and fast (<10ms) disk drives. RAID0 (stripe set), RAID1+0 (striped/mirrored) or RAID5 (stripe set with parity) will also speed playback.

Multiple monitors
You can use XFree86's Xinerama features to work on multiple monitor heads. This feature can be a very effective way of increasing productivity.

Video adapters
Since version 2.1, Cinelerra benefits from OpenGL hardware acceleration. Make sure the video card you use supports OpenGL 2.0 in order to benefit from that acceleration. Nvidia series 7 cards (e.g. 7600GS) are known to work well. Unfortunately, ATI's Linux drivers do not support a complete implementation of OpenGL 2.0. If you are going to send a composite signal directly to a TV or video recorder, make sure your video card supports it.

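Whether the installed driver actually advertises OpenGL 2.0 can be checked from a terminal. This is a hedged sketch: it assumes the glxinfo utility is available (it usually ships in a package such as mesa-utils), and simply greps its report for the version line.

```shell
# Query the driver's advertised OpenGL version, if glxinfo is present.
if command -v glxinfo >/dev/null 2>&1; then
    glxinfo | grep -i "opengl version" || echo "no OpenGL version string reported"
else
    echo "glxinfo not installed (try the mesa-utils package)"
fi
```

A version of 2.0 or higher in the reported string indicates the card can use Cinelerra's OpenGL acceleration.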
TV-Out
If your adapter supports a TV-Out option, connecting a TV or S-Video monitor to it is a great way to view your material as it will be seen on a TV screen.

Video grabbers
If you have an analog video camera, or want to grab video from a trusty old VCR, you need some sort of video grabber. Video grabbers are supported through Video4Linux in Cinelerra. Be sure to set the appropriate parameters on the video grabbing system to match your particular camera.

DV cameras
There is a large variety of DV cameras that can be used with Cinelerra. Almost any camera that can connect using firewire will work. http://www.linux1394.org has a complete listing of supported cameras.

Firewire
Firewire is the fastest way to download material into your system. Unless you will be importing your media from a CD, any other pre-captured format, or an analog video grabber, you will need firewire on your system.

2.2 Software requirements

To install Cinelerra you should have a current version of GNU/Linux with the X Window System (e.g. X.org) and some audio management software properly running. You should also have the following libraries installed (partial list):

a52dec dv faac ffmpeg fftw lame libavc1394 libfaad2 libraw1394 mjpegtools OpenEXR theora x264

You will also need the headers for all required libraries. For many distributions, this means that you will need to install the "-dev" or "devel" packages that correspond to your installed library packages. In addition to the libraries listed here, be sure you have the X library headers. Missing headers will usually result in compilation failing with cryptic error messages.

2.3 Compiling Cinelerra CV

2.3.1 Usual compilation process

You can install Cinelerra CV by fetching the source code and compiling it, but most likely your distribution has prebuilt packages. Compiling from source is the method to use if you want the most up-to-date version of Cinelerra CV. The source code of Cinelerra-CV is available from a Subversion (SVN) repository. Subversion is available for download at http://subversion.tigris.org/. Complete documentation of subversion is available at http://svnbook.red-bean.com/nightly/en/index.html.

First you have to fetch Cinelerra CV's sources from the SVN repository (approximately 170Mb, or 60Mb for a read-only checkout).

1. Run the following command:

svn checkout svn://svn.skolelinux.org/cinelerra/trunk/hvirtual

The svn command above will create in your current working directory a directory hvirtual that contains the sources.

2. Go to the hvirtual directory:

cd hvirtual

3. Create the './configure' file by running:

autoreconf -i --force

4. Run the './configure' file:

./configure --with-buildinfo=svn/recompile

This option makes the revision number be shown in the About tab of the Preferences window. You can have a look at all the other options available by running:

./configure --help

Most of the missing dependencies should be listed after running './configure'; otherwise, compilation may fail.

5. Run make. If you wish to log the make output in order to search for errors, this command can be used:

make 2>&1 | tee logfile

6. Install Cinelerra CV:

sudo make install

7. Finally run as root (for first time compilation only):

ldconfig

Notes:

SMP machine: If you compile Cinelerra CV on a multiprocessor machine (SMP), we recommend you add the '-j 3' option to make in order to benefit from the available CPUs.

For x86 CPUs only: You probably want to enable MMX support. To do that, run ./configure with the '--enable-mmx32' option. If you do that, you may have to use the '--without-pic' option too.

For 64 bits: Replace '-prefer-non-pic' with '-fPIC \' in your 'quicktime/ffmpeg/libavcodec/i386/Makefile.am' file.

For Pentium-M: Here are some useful compiler flags:

./configure --prefix=/usr --enable-x86 --enable-mmx32 --enable-freetype2 --with-buildinfo=svn/recompile CFLAGS='-O3 -pipe -fomit-frame-pointer -funroll-all-loops -falign-loops=2 -falign-jumps=2 -falign-functions=2 -ffast-math -march=pentium-m -mfpmath=sse,387 -mmmx -msse'

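The logfile captured by the tee command in step 5 can then be searched for the first failure. A minimal sketch; the sample log content below is fabricated so the example is self-contained, but the same grep works on a real build log:

```shell
# Simulate a build log, then pull out the numbered error lines from it.
printf 'compiling a.c\na.c:12: error: foo undeclared\nlinking\n' > logfile
grep -n -i 'error' logfile
rm logfile
```

The -n flag prefixes each match with its line number in the log, which makes it easy to jump to the failing compilation unit.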
Updating the source code: If you already fetched the sources of an out of date revision, you can update to the latest revision by running:

svn update

Installing old revisions: If you wish to fetch an old revision, run (replace <revision> with the number of the revision you want):

svn checkout -r <revision> svn://svn.skolelinux.org/cinelerra/trunk/hvirtual

Installing several revisions: If you wish to install several revisions of Cinelerra CV on your computer, create a '/usr/local_cinelerra' folder, and use the following options with ./configure (replace 'xxx' by the number of the revision you are compiling):

--prefix=/usr/local_cinelerra/rxxx --exec-prefix=/usr/local_cinelerra/rxxx --program-suffix=_rxxx

You will have to run Cinelerra CV from the directory in which it is installed:

cd /usr/local_cinelerra/rxxx
./cinelerra_rxxx

If you install Cinelerra using this method, the translated '.po' files do not get installed correctly. See Section 3.1 [Environment variables], page 19, if you want to run Cinelerra in another language.

2.3.2 Compiling with debugging symbols

When Cinelerra CV crashes, one can compile it with debugging symbols and run it inside gdb. The information displayed by gdb is far more detailed and will help CV developers find bugs faster. First, fetch the SVN sources as usual. Then run the following commands:

cd hvirtual
nice -19 autoreconf -i --force
mkdir ../hvdbg
cd ../hvdbg
nice -19 ../hvirtual/configure CXXFLAGS='-O0 -g' CFLAGS='-O0 -g' --with-buildinfo=svn/recompile
cd quicktime/ffmpeg
nice -19 make CFLAGS='-O3'
cd ../..
nice -19 make
nice -19 make install

See Section 22.1 [Reporting bugs], page 171, for information about running Cinelerra inside gdb.

Note on tool versions: You need automake version 1.7 to build; automake 1.4 will not work. Autoconf 2.57 is also required to build.

2.4 Running Cinelerra

The simplest way to run Cinelerra is by running:

/usr/bin/cinelerra

Command line options are also available by typing:

cinelerra -h

These options are described in several sections below. For rendering from the command line, see Chapter 20 [Rendering files], page 141, for details.

If you get this error message when running Cinelerra for the first time:

WARNING:/proc/sys/kernel/shmmax is 0x2000000, which is too low

see Section 21.3 [Freeing more shared memory], page 161, for specific instructions.

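To see why that warning matters, convert the reported limit: 0x2000000 bytes is only 32 MiB of shared memory, which is very little for video work. A quick check in the shell (pure arithmetic, nothing system-specific):

```shell
# The kernel limit quoted in the warning, in bytes and in MiB.
printf '%d\n' 0x2000000              # decimal bytes
echo $(( 0x2000000 / 1024 / 1024 ))  # MiB
```

Raising the limit is done through /proc/sys/kernel/shmmax (or sysctl) as described in the section referenced above; for comparison, the Ubuntu packages mentioned later in this chapter set it to 0x7fffffff.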
2.5 Live CDs

You can try and use Cinelerra on a computer without having to install it on your system. This is possible by using Live CDs, that are GNU/Linux distributions which boot from a CD, without installation on a hard drive, and which may also carry audio and video libraries and Cinelerra. Here are some of the Live CDs known to contain Cinelerra:

Knoppix - the "original" Debian-based LiveDistro - http://www.knoppix.org
dyne:bolic - dedicated to video editing - http://www.dynebolic.org
x-evian - based on Debian - uses Window Maker window manager - http://x-evian.org
Musix - Debian based live CD for multimedia creations - http://musix.org.ar
Mediainlinux - Knoppix based - for multimedia production - http://www.mediainlinux.com
ArtistX - based on Debian and Morphix - for multimedia productions - http://artistx.org
pho (garbure) - Knoppix based live CD for audio production, graphic design and video editing - http://garbure.org/pho/
Slo-Tech - Debian based live CD - http://linux.slo-tech.com
Elive - Debian based live CD using Enlightenment window manager - http://www.elivecd.org

2.6 Arch Linux

Cinelerra CV is included in the Arch Linux community repository. To install the cinelerra package, enable the community repository first (see http://wiki.archlinux.org/index.php/AUR_User_Guidelines for more info). Then run the following command from the command line:

pacman -Sy cinelerra-cv

2.7 Ark Linux

Cinelerra CV is included in the Ark Linux package repository. To install the cinelerra package use the Install Software tool in Mission Control or run the following commands from a command line:

apt-get update
apt-get install cinelerra

2.8 Debian

Andraz Tori maintains build rules for Debian Sid. He also makes binary .deb packages for Sid, built from the unofficial SVN releases.

2.8.1 Debian binaries

Debian Sid packages can be found here:

Apt source for i386:
deb http://www.kiberpipa.org/~minmax/cinelerra/builds/sid/ ./

Apt source for Pentium4 (optimized):
deb http://www.kiberpipa.org/~minmax/cinelerra/builds/pentium4/ ./

Apt source for Pentium-M (optimized):
deb http://www.kiberpipa.org/~minmax/cinelerra/builds/pentiumm/ ./

Apt source for AthlonXP (optimized):
deb http://www.kiberpipa.org/~minmax/cinelerra/builds/athlonxp/ ./

Valentina Messeri also built packages. Apt source for Opteron (AMD64) (optimized):

deb http://giss.tv/~vale/debian64/ ./

Christian Marillat makes binary Debian packages, built from the unofficial SVN releases, for stable, testing and unstable. Apt source for amd64, hppa, i386, ia64, powerpc (not optimized):

deb http://www.debian-multimedia.org BRANCH main

Note: BRANCH = stable, testing or unstable.
Note: Install debian-multimedia-keyring to add Marillat's gpg-key in your keyring.
Note: If Cinelerra produces the following error:

cinelerra: relocation error: /usr/lib/libavcodec.so.0-0.8: undefined symbol: faacDecOpen

you can solve the problem by entering the following command as root:

apt-get install --reinstall libfaad2-0=2.0.0-0.4.5

2.8.2 Debian prerequisites

Standard development packages: These are packages which might be considered "standard" development packages, and you'll almost certainly have to install them if you want to compile Cinelerra:

libogg-dev libvorbis-dev libtheora-dev libopenexr-dev libdv-dev libpng-dev libjpeg62-dev libtiff4-dev libfreetype6-dev libfaad-dev libsndfile1-dev uuid-dev

Some packages which may or may not be required:

libavutil-dev libmpeg3-dev libavcodec-dev

Extra Debian packages: These are development packages which are "non-standard", though. The chances are that you wouldn't have them installed by default, so you will probably need them:

libtool
nasm
x11proto-xf86vidmode-dev - needed if you get: error: X11/extensions/xf86vmode.h: No such file or directory
libxxf86vm-dev - needed if you get: /usr/bin/ld: cannot find -lXxf86vm
libxv-dev - needed if you get: error: X11/extensions/Xvlib.h: No such file or directory

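A quick way to anticipate the header errors listed above is to test for the header files themselves before building. This sketch scans a throwaway directory so it can run anywhere; to check a real system, point ROOT at /usr/include (the conventional header location, which is an assumption here):

```shell
# Pre-flight check for X headers the build needs.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/X11/extensions"
touch "$ROOT/X11/extensions/Xvlib.h"   # pretend libxv-dev is installed

for h in X11/extensions/xf86vmode.h X11/extensions/Xvlib.h; do
    if [ -f "$ROOT/$h" ]; then
        echo "found: $h"
    else
        echo "MISSING: $h (install the matching -dev package)"
    fi
done
rm -rf "$ROOT"
```

Each MISSING line maps to one of the -dev packages named in the list above.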
External packages: You need some prerequisites which are not found in Debian's official repositories. You should add in your '/etc/apt/sources.list' the following line, which is Christian Marillat's repository:

deb http://www.debian-multimedia.org/ sid main

You will need to apt-get install the following packages:

libx264-dev libfaac-dev

2.9 Ubuntu

2.9.1 Ubuntu packages repositories

For Ubuntu 8.04 Hardy Heron: for all x86 (full working on 32 and 64 bits), by Paolo Rampino:

deb http://repository.akirad.net akirad-hardy main

To add this repository to your sources list use the following terminal command:

sudo wget http://repository.akirad.net/dists/hardy.list -O /etc/apt/sources.list.d/akirad.list

Installations from this repository need an authentication key. Add it by typing the following command in your terminal:

wget -q http://repository.akirad.net/dists/akirad.key -O- | sudo apt-key add -

Installation notes:
- The Cinelerra package is available in 5 variants: cinelerra (x86 and x86 64 without opengl 2.0 video card), cinelerra-generic (all x86 and x86 64 with opengl 2.0 video card), cinelerra-k7 (amd32 without opengl 2.0 video card), cinelerra-k7gl (amd32 with opengl 2.0 video card) and cinelerra-k8 (amd k8 optimized with opengl 2.0 video card).
- These packages set shmmax to 0x7fffffff and add non-English language support for Cinelerra.
- Cinelerra must be set to work with PulseAudio. Open Cinelerra and go to Settings->Preferences->Playback->Audio Driver. Select ESound and set the following parameters: Server: (leave empty); Port: 7007.
- Please report any package bug to akir4d at gmail dot com.

For Ubuntu 7.10 Gutsy Gibbon: for all x86 (full working on 32 and 64 bits), by Paolo Rampino:

deb http://repository.akirad.net akirad-gutsy main

To add this repository to your sources list use the following terminal command:

sudo wget http://repository.akirad.net/dists/gutsy.list -O /etc/apt/sources.list.d/akirad.list

Installations from this repository need an authentication key. Add it by typing the following command in your terminal:

wget -q http://repository.akirad.net/dists/akirad.key -O- | sudo apt-key add -

The installation notes above for Hardy Heron apply to these packages as well.

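After running the wget commands above, one can confirm the repository line actually landed in the APT sources. A hedged sketch (the file paths are the standard APT locations; the fallback message fires on systems where nothing was added):

```shell
# Look for the akirad entry in the APT sources lists.
grep -hs "akirad" /etc/apt/sources.list /etc/apt/sources.list.d/*.list \
    || echo "akirad repository not configured yet"
```

The -h flag suppresses filename prefixes and -s silences errors for files that do not exist, so the command is safe to run on any system.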
2.9.2 Instructions for Ubuntu packages installation

Choose a repository from the ones above according to your release and CPU type, add the complete APT line of your chosen repository, and install the package. Here are 3 ways of doing that:

With Synaptic Package Manager: Open the Software Sources window. You can do it in two ways:
- Go to System -> Administration -> Software Sources
- Inside Synaptic Package Manager: Go to Settings -> Repositories
Make sure you have universe, multiverse and restricted sources checked in the first tab. Click on the tab Third Party, click on the Add button and enter your chosen repository. Clicking Add Source will display the new repository enabled in the Software Sources window. You should now see Cinelerra in the list of packages available in Synaptic. Follow Synaptic instructions for installation.

With the command line: Edit directly your '/etc/apt/sources.list' file. Make sure you have universe, multiverse and restricted sources enabled by checking you have the following line uncommented:

deb http://archive.ubuntu.com/ubuntu dapper universe multiverse restricted

Install Cinelerra by typing in your terminal:

apt-get update

and then

apt-get install cinelerra

With the GDebi Package Installer: Send Firefox to the web address of the repository (e.g. http://www.kiberpipa.org/~muzzol/cinelerra/edgy-i386/). Click on the .deb link for your chosen Cinelerra package (e.g. cinelerra 2.1.0+svn20070109-0ubuntu1 i386.deb). A dialog window will ask you to confirm your intention to open this file with the GDebi Package Installer. Clicking OK will start the download. If during the process you get errors about not satisfiable dependencies, try installing the problematic library with the same method from the same webpage.

HOWTOs for package installation or compilation from source code:

Installation of the Ubuntu Feisty AMD64 Cinelerra package:
https://help.ubuntu.com/community/CinelerraOnFeistyAMD64

Compilation from source code on Ubuntu 7.10 Gutsy Gibbon:
http://www.kiberpipa.org/cinelerra/Gutsy

Compilation from source code on Ubuntu 7.10 Gutsy Gibbon (for beginners):
http://www.ubuntuforums.org/showthread.php?t=320701&highlight=cinelerra

Compilation from source code on Ubuntu 6.10 Edgy Eft:
http://www.g-raffa.eu/Cinelerra/cin_compilation.html

Compilation from source code on Ubuntu Breezy:
http://placide.home.sapo.pt/cinelerra02.html

Compilation from source code on Ubuntu (for beginners, in Italian):
https://faberlibertatis.org/wiki/Cinelerra_CV_su_Ubuntu

2.10 Gentoo

Installation for Gentoo GNU/Linux is very straightforward. Simply type:

emerge cinelerra-cvs

as root and it should install and run without any problems. Note that you may need to put cinelerra-cvs in your '/etc/portage/package.keywords' file in order to unmask it:

echo "=media-video/cinelerra-cvs ~x86" >> /etc/portage/package.keywords

If you are running on an architecture other than x86, you will need to replace ~x86 with the relevant architecture, e.g. ~amd64. See http://www.gentoo.org/doc/en/handbook/handbook-x86.xml?part=3&chap=3 for details. You may also want to adjust the USE flags. First run:

emerge -av cinelerra-cvs

to see what flags are available, and then add the relevant ones to '/etc/portage/package.use':

echo "media-video/cinelerra-cvs ieee1394" >> /etc/portage/package.use

This would enable support for firewire devices.

2.11 Fedora

Cinelerra is included in the Freshrpms repository at http://freshrpms.net. The easiest way to install packages from Freshrpms is to include the repository in the yum configuration. On Fedora 6, as the user root do:

rpm -ivh http://ftp.freshrpms.net/pub/freshrpms/fedora/linux/6/\
freshrpms-release/freshrpms-release-1.1-1.fc.noarch.rpm

On Fedora 5, do:

rpm -ivh http://ftp.freshrpms.net/pub/freshrpms/fedora/linux/5/\
freshrpms-release/freshrpms-release-1.1-1.fc.noarch.rpm

Then type:

yum -y install cinelerra

to get and install Cinelerra and all the dependencies, including ffmpeg and mjpegtools. If it does not work, check the '/etc/yum.conf' file and make sure that the Freshrpms config gets included from '/etc/yum.repos.d'. Also make sure that the Fedora Extras repository is enabled; this is the case by default on Fedora 5 and 6, and Core and Extras are merged into one on Fedora 7. For Fedora 8 see http://kernelreloaded.blog385.com/index.php/archives/install-cinelerra-on-fe

Notes: The package collection of Freshrpms may overlap and/or conflict with other third-party repositories such as Livna. It may be a good idea to stick with one repository and not mix packages from several different third-party repositories.

The header files of the various libraries are needed for compiling Cinelerra from source. At least the following are required:

OpenEXR-devel SDL-devel a52dec-devel alsa-lib-devel e2fsprogs-devel faac-devel faad2-devel ffmpeg-devel fftw-devel imlib2-devel lame-devel libXv-devel libXxf86vm-devel libavc1394-devel libdv-devel libiec61883-devel libogg-devel libraw1394-devel libsndfile-devel libtheora-devel libvorbis-devel mjpegtools-devel x264-devel xvidcore-devel

The header files are included in separate devel packages that are included in the Fedora and Freshrpms repositories and can be installed with yum. If you want to compile Cinelerra from source on Fedora Core 6, detailed instructions on how to install the necessary dependent files can be found at:

http://crazedmuleproductions.blogspot.com/2007/03/fedora-core-6-cinelerra-dependencies.html

2.12 Mandriva

Cinelerra packages for Mandriva are made by PLF and are ready to install. Read http://plf.zarb.org/packages.php for more information.

2.13 Slackware

Rafael Diniz builds Slackware packages for Cinelerra, available at:

For x86:
http://slack.sarava.org/packages/slackware/slackware-11.0/multimedia/

For slackintosh:
http://slack.sarava.org/packages/slackintosh/slackintosh-11.0/multimedia/

2.14 Suse

RPMs for SuSE 9 are built from SVN sources by Kevin Brosius, and available at:

http://cin.kevb.net/files/RPM/

RPMs for OpenSUSE 10.1 and 10.2, architecture i586 and x86 64, are built from SVN by Leon Freitag at Packman. They are available at http://packman.links2linux.org/package/cinelerra/16413. The RPM package(s) can be installed as root in a terminal using this command:

rpm -Uvh package_name.rpm

The following installation case shows four screenshots for a GUI based Cinelerra SVN installation on OpenSUSE 10.2 i586 using YaST2: first adding packman's YaST2 repository as a YaST2 installation source, and next the package installation with the YaST2 Software Manager.

Start the YaST Control center on OpenSUSE 10.2 and add the root password when requested. Start the YaST2 installation source tool, select the HTTP protocol and add the servername for packman as shown.

Start the YaST2 Software Management. Synchronization with Zenworks may take some time; wait until it is finished. Enter "Cinelerra" in the left search field and next check the checkboxes for the Cinelerra packages in the right window. If an older version of Cinelerra is installed beforehand, visible with a lock symbol, possibly try an update first, or delete it.

Click Accept to start the package installation and afterwards Next to finish.

2.15 MacOSX

FIXME


3 Configuration

Because of its flexibility, Cinelerra cannot be optimized without special configuration for your specific needs. Unfortunately, very few parameters are adjustable at compile time; runtime configuration is the only option for most users because of the multitude of parameters available. In Cinelerra, go to Settings->Preferences to see the options. Below are configuration options as well as the supported API's in GNU/Linux.

3.1 Environment variables

In UNIX derivatives, environment variables are global variables in the shell which all applications can read. They are set with a command like set VARIABLE=value. All the environment variables can be viewed with a command like env. Cinelerra recognizes the following environment variables:

LADSPA PATH
If you want to use LADSPA plugins, this must be defined: a colon separated list of directories to search for LADSPA plugins. These are not native Cinelerra plugins. See Chapter 16 [Ladspa effects], page 127. Plugins of different binary formats need to be in different directories.

GLOBAL PLUGIN DIR
The directory in which Cinelerra should look for native plugins. The default is '/usr/lib/cinelerra', but you may need an alternate directory if you share the same executable directory among many machines via NFS.

LANG and LANGUAGE
Cinelerra can be localized to display menus and messages in many languages. Cinelerra language settings are normally read from your GNU/Linux language settings. Available languages are:

en EN - English
es ES - Espanol
sl SI - Slovenian
fr FR - Francais
eu ES - Euskera
de DE - German
pt BR - Brazilian Portuguese
it IT - Italian

To run Cinelerra in a language different from the one selected on your system, just change the LANG and LANGUAGE environment variables. For example, open a shell and type:

export LANG=es_ES LANGUAGE=es_ES

then run cinelerra from the same shell. It will open set on the Spanish language.

If your distribution has only UTF-8 support (like Ubuntu), first you must create the language charset with this command:

localedef -c -i (language_prefix) -f (your ISO-8859 variant) (language_prefix).(your ISO-8859 variant)

This is an example for the Italian language:

localedef -c -i it_IT -f ISO-8859-15 it_IT.ISO-8859-15

Then you can run cinelerra with this command:

env LANG=$(echo $LANG | sed -e s/UTF-8/(your ISO-8859 variant)/g) cinelerra

This is an example for the Italian language:

env LANG=$(echo $LANG | sed -e s/UTF-8/ISO-8859-15/g) cinelerra

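Putting the environment variables above together, a minimal launch sketch. The plugin directories below are examples only, not required locations — use whatever directories your distribution actually installed:

```shell
# Colon-separated LADSPA search path; plugins of different binary
# formats go in different directories (paths are examples).
export LADSPA_PATH="/usr/lib/ladspa:/usr/local/lib/ladspa"
# Where Cinelerra looks for its native plugins (example path).
export GLOBAL_PLUGIN_DIR="/usr/lib/cinelerra"
# Launch from this same shell so Cinelerra inherits both variables:
# cinelerra
echo "$LADSPA_PATH"
```

Because environment variables are only inherited by child processes, Cinelerra must be started from the same shell in which the variables were exported.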
If you can’t run Cinelerra in your chosen language, the translated .po files may not be installed (in some cases, e.g. if you compiled Cinelerra specifying a ‘--prefix=’ option different from ‘/usr/local’). Try running the following commands before changing the LANG and LANGUAGE environment variables:

cd hvirtual
./configure --prefix=/usr
cd po
sudo make install

3.2 Audio drivers

The audio drivers are used for both recording and playback. Their functionality is described below:

3.2.1 Sound driver attributes

Device path Usually a file in the ‘/dev/’ directory which controls the device.

Bits The number of bits of precision Cinelerra should set the device for. This sometimes has a figurative meaning. Some sound drivers need to be set to 32 bits to perform 24 bit playback and will not play anything when set to 24 bits. Some sound drivers need to be set to 24 bits for 24 bit playback.

Device The chosen device.

Port The IEEE1394 standard specifies something known as the port. This is probably the firewire card number.

Channel The IEEE1394 standard specifies something known as the channel. For DV cameras it is always 63.

Stop playback locks up This ALSA only checkbox is needed if stopping playback causes the software to lock up.

3.2.2 OSS

This was the first GNU/Linux sound driver. It had an open source implementation and a commercial implementation with more sound cards supported. It was the standard sound driver up to GNU/Linux 2.4. It still is the only sound driver which an i386 binary can use when running on an x86_64 system.

3.2.3 OSS Envy24

The commercial version of OSS had a variant for 24 bit 96 KHz soundcards. This variant required significant changes to the way the sound drivers were used, hence the need for the new driver.

3.2.4 Alsa

ALSA is the most common sound driver in GNU/Linux 2.6. It supports most soundcards now. It takes advantage of low latency features in GNU/Linux 2.6 to get better performance than OSS had in 2.4, but roughly the same performance that OSS had in 2.0. Unfortunately ALSA is constantly changing: a program which works with it one day may not the next day. New wrappers are being developed on top of ALSA. We plan to support them at regular intervals, though not at every new release of a new wrapper. ALSA is no longer portable between i386 and x86_64. If an i386 binary tries to play back on an x86_64 kernel, it will crash. For this scenario, use OSS.
3.2.5 Esound

ESOUND was a sound server that sat on top of OSS. It was written for a window manager called Enlightenment. It supports a limited number of bits and has high latency compared to more modern drivers, but it does have the ability to multiplex multiple audio sources. It is unknown whether it still works.

3.2.6 Raw 1394

This was the first interface between GNU/Linux software and firewire camcorders. It is the least reliable way to play audio to a camcorder and consists of a library on top of the kernel commands.

3.2.7 DV 1394

This is the second rewrite of DV camcorder support in GNU/Linux. This is the most reliable way to play audio to a camcorder and consists of direct kernel commands.

3.2.8 IEC 61883

The third rewrite of DV camcorder support in GNU/Linux. This is a library on top of RAW 1394, which is a library on top of the kernel commands. It is less reliable than DV 1394 but more reliable than RAW 1394. The next rewrite ought to fix that. Visit http://www.linux1394.org for more information and the latest drivers.

3.3 Video drivers

The video drivers are used for video playback in the compositor and the viewer.

3.3.1 Video driver attributes

Display The interface is intended for dual monitor displays. Depending on the value of Display, the Compositor window will appear on a different monitor from the rest of the windows.

Device path Usually a file in the ‘/dev/’ directory which controls the device.

Port The IEEE1394 standard specifies something known as the port. This is probably the firewire card number.

Channel The IEEE1394 standard specifies something known as the channel. For DV cameras it is always 63.

Output channel You may need a specific connector to send video out to devices with multiple outputs.

Swap fields Make the even lines odd and the odd lines even when sending to the device. On an NTSC or 1080i monitor the fields may need to be swapped to prevent jittery motion.

3.3.2 X11

This was the first method of graphical display on any UNIX system. It just writes the RGB triplet for each pixel directly to the window. It is the slowest playback method. It is still useful as a fallback when graphics hardware cannot handle very large frames.
3.3.3 X11-XV

This was an enhancement to X11 in 1999. It converts YUV to RGB in hardware with scaling. It is the preferred playback method but cannot handle large frame sizes. The maximum video size for XV is usually 1920x1080.

3.3.4 X11-OpenGL

The most powerful video playback method is OpenGL. With this driver, most effects are done in hardware. OpenGL allows video sizes up to the maximum texture size, which is usually larger than what XV supports, depending on the graphics driver.

To enable OpenGL you will need a binary built with OpenGL 2.0 support; the configure option to enable it is ‘--enable-opengl’. This requires compiling Cinelerra on a system with the OpenGL 2.0 headers. You also need a video card which supports OpenGL 2.0 and a video driver supporting OpenGL 2.0, such as Nvidia's binary driver. Recent Nvidia video cards should work. To know if your video driver supports OpenGL 2.0, type the following command:

glxinfo | grep "OpenGL version"

Video driver supporting hardware OpenGL 2.0 rendering:

OpenGL version string: 2.0.2 NVIDIA 87.74

Video driver not supporting hardware OpenGL 2.0 rendering:

OpenGL version string: 1.4 (2.0.2 NVIDIA 87.74)

OpenGL relies on PBuffers and shaders to do video rendering. PBuffers are known to be fickle. If the graphics card does not have enough memory or does not have the right visuals, PBuffers will not work. If OpenGL does not work, try seeking several frames or restarting Cinelerra.

Limitations: OpenGL does not affect rendering; it just accelerates playback. The core Cinelerra operations like camera and projector are OpenGL, but not all of the effects support OpenGL acceleration. The following effects support OpenGL: Brightness, Chromakey, Chromakeyhsv, Color balance, Deinterlace, Diffkey, Dissolve, Flip, Frames to fields, Freeze frame, Gamma, Gradient, Histogram, Hue saturation, Interpolate Pixels, Invert video, Linear blur, Overlay, Perspective, Radial blur, RGB601, Rotate, Scale, Threshold, Zoomblur. To get the most acceleration, OpenGL-enabled effects must be placed after software-only effects; all rendering before the last software-only effect is done in software.

X11-OpenGL processes everything in 8 bit color models, although the difference between YUV and RGB is retained. Project and track sizes need to be multiples of four for OpenGL to work. The scaling equation set in the preferences window is ignored by OpenGL; OpenGL always uses linear scaling. OpenGL does not work with frames whose size is larger than 4096x4096, and Cinelerra may crash if the input frames are very large. Here is what is written in the console when working with large frames:

BC_Texture::create_texture frame size <frame_width>x<frame_height> bigger than maximum texture 4096x4096.

3.3.5 Buz

This is a method for playing motion JPEG-A files directly to a composite analog signal. It uses a popular hack of the Video4Linux 1 driver from 2000 to decompress JPEG in hardware. Even though analog output is largely obsolete, newer drivers have replaced BUZ.
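The glxinfo check described above can also be scripted. A sketch — the hard-coded line is the sample output shown in this section; in real use it would come from `glxinfo | grep "OpenGL version"`:

```shell
# Sample glxinfo output (substitute the real command's output here).
version_line='OpenGL version string: 2.0.2 NVIDIA 87.74'
major=${version_line#OpenGL version string: }   # strip the fixed prefix
major=${major%%.*}                              # keep only the major version
if [ "$major" -ge 2 ]; then
    result="hardware OpenGL 2.0 rendering available"
else
    result="no hardware OpenGL 2.0 rendering"
fi
echo "$result"
```

Note that a driver reporting "1.4 (2.0.2 NVIDIA ...)" would yield a major version of 1, correctly indicating that hardware OpenGL 2.0 rendering is unavailable even though 2.0 appears in parentheses.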
3.3.6 Raw 1394 video playback

This was the first interface between GNU/Linux software and firewire camcorders. It is the least reliable way to play video to a camcorder and it consists of a library on top of the kernel commands.

3.3.7 DV 1394 video playback

The second rewrite of DV camcorder support in GNU/Linux. This was the most reliable way to play video to a camcorder and consists of direct kernel commands.

3.3.8 IEC 61883 video playback

The third rewrite of DV camcorder support in GNU/Linux. This is a library on top of RAW 1394 and is less reliable than DV 1394 but more reliable than RAW 1394. The next rewrite ought to fix that. Visit http://www.linux1394.org for more information and the latest drivers.

3.4 Playback

3.4.1 Audio out

These determine what happens when you play sound from the timeline.

Playback buffer size For playing audio, small fragments of sound are read from disk and processed sequentially in a virtual console. A larger value here causes more latency when you change mixing parameters but yields more reliable playback. Some sound drivers do not allow changing of the console fragment, so latency is unchanged no matter what the value. Previously, a good way of ensuring high quality playback was to read bigger fragments from the disk and break them into smaller fragments for the soundcard. That changed when the virtual console moved from the push model to the pull model. Since different stages of the rendering pipeline can change the rate of the incoming data, it would be difficult to disconnect the size of the console fragments from the size of the fragments read from disk.

Audio offset The ability to tell the exact playback position on GNU/Linux sound drivers is poor, if it is provided at all. Since this information is required for proper video synchronization, it has to be accurate. The audio offset allows users to adjust the position returned by the sound driver in order to reflect reality. The audio offset does not affect the audio playback or rendering at all; it merely changes the synchronization of video playback.

The easiest way to set the audio offset is to create a timeline with one video track and one audio track. Expand the audio track and center the audio pan. The frame rate should be larger than 24 fps and the sampling rate should be greater than 32000. The frame size should be small enough for your computer to render it at the full framerate. Highlight a region of the timeline starting at 10 seconds and ending at 20 seconds. Drop a gradient effect on the video track and configure it to be clearly visible. Drop a synthesizer effect on the audio and configure it to be clearly audible. Play the timeline from 0 and watch to see if the gradient effect starts exactly when the audio starts. If it does not, expand the audio track and adjust the nudge. If the audio starts ahead of the video, decrease the nudge value. If the audio starts after the video, increase the nudge value. Once the tracks play back synchronized, copy the nudge value to the audio offset value in preferences.

Note: if you change sound drivers or you change the value of Use software for positioning information, you will need to change the audio offset because different sound drivers are unequally inaccurate.
View follows playback This causes the timeline window to scroll when the playback cursor moves. This can bog down the X Server or cause the timeline window to lock up for long periods of time while drawing the assets.

Use software for positioning information Most soundcards and sound drivers do not give reliable information on the number of samples the card has played. You need this information for synchronization when playing back video. This option causes the sound driver to be ignored and a software timer to be used for synchronization.

Audio playback in realtime Back in the days when 150 MHz was the maximum speed for a personal computer, this setting allowed uninterrupted playback during periods of heavy load. It forces the audio playback to the highest priority in the kernel. Today, it is most useful for achieving very low latency between console tweaks and soundcard output. You must be root to get real-time priority.

Audio driver There are many sound drivers for GNU/Linux. This allows selecting one sound driver and setting parameters specific to it. The sound drivers and their parameters are described in the sound driver section. See Section 3.2 [Audio drivers], page 20.

3.4.2 Video out

These determine how video gets from the timeline to your eyes.

Play every frame This causes every frame of video to be displayed even if it means that the playback of the video track(s) will fall behind. This option should always be enabled unless you use uncompressed codecs. As of 1/2007, most compressed codecs do not support frame dropping anymore.

Framerate achieved The number of frames per second being displayed during playback. This is updated during playback only.

Decode frames asynchronously If you have lots of memory and more than one CPU, this option can improve playback performance by decoding video on one CPU as fast as possible while dedicating the other CPU to playing back video. It assumes all playback operations are forward and no frames are dropped, so operations involving reverse playback or frame dropping are negatively impacted. Since this option requires enormous amounts of memory, Cinelerra may crash if the input frames are very large.

Scaling equation This algorithm is used when video playback involves any kind of scaling or translation in the virtual console. This does not affect 1:1 playback.

Nearest neighbor enlarge and reduce Low quality output with fast playback. Produces jagged edges and uneven motion.

Bicubic enlarge and bilinear reduce High quality output with slow playback. Bicubic interpolation is used for enlarging, which produces very sharp images and reduces noise. A bilinear interpolation is used for reduction, which blurs slightly but does not show stair step artifacts. Bilinearly reduced images can be sharpened with a sharpen effect with less noise side effects than a normal sized image.
Bilinear enlarge and bilinear reduce When slight enlargement is needed, a bilinear enlargement looks better than a bicubic enlargement.

Preload buffer for Quicktime The Quicktime/AVI decoder can handle DVD sources better when this is set to around 10000000. This reduces the amount of seeking required. When reading high bitrate sources from a hard drive, however, this tends to impair performance by slowing down decoding. For normal use this should be 0.

DVD subtitle to display DVD IFO files usually contain subtitle tracks. These must be decoded with the MPEG decoder. Select Enable subtitles to enable subtitle decoding. There are usually multiple subtitle tracks indexed by number and starting from 0. Enter the index number of the subtitle track to be decoded in the "DVD Subtitle to display" text box or use the tumbler to increase the index value. To find the number of subtitle tracks, go to the asset corresponding to the MPEG file in the Resources window, right click, and click on Info. The number of subtitle tracks is shown at the bottom.

Interpolate CR2 images Enables interpolation of CR2 images. Interpolation is required since the raw image in a CR2 file is a Bayer pattern. The interpolation uses dcraw's built-in interpolation and is very slow. This operation can be disabled and the Interpolate Pixels effect used instead for faster previewing.

White balance CR2 images This enables white balancing for CR2 images if interpolation is also enabled. This is because proper white balancing needs a blending of all 3 primary colors. White balance uses the camera's matrix, which is contained in the CR2 file. Disabling white balancing is useful for operations involving dark frame subtraction, since the dark frame and the long exposure need to have the same color matrix.

If you disable Interpolate CR2 Images and use the Interpolate Pixels effect, be aware the Interpolate Pixels effect always does both interpolation and white balancing using the camera's matrix, regardless of the settings in Preferences. Dark frame subtraction needs to be performed before Interpolate Pixels.

Video driver Normally video on the timeline goes to the compositor window during both continuous playback and when the insertion point is repositioned. Instead of sending video to the Compositor window, the video driver can be set to send video to another output device during continuous playback. However, this does not affect where video is routed when the insertion point is repositioned. The video drivers and their parameters are described in the video drivers section. See Section 3.3 [Video drivers], page 21.

3.5 Recording

The parameters here expedite the File->Record... function by allowing the user to pre-configure the file format. The file format is applied to all recordings. Also set here is the hardware for recording, since the hardware determines the supported file format (in most cases).

3.5.1 File format

This determines the output file format for recordings. It depends heavily on the type of driver used. The menu selections are the same as the rendering interface. See Chapter 20 [Rendering files], page 141. The Record audio tracks toggle must be enabled to record audio. The Record video tracks toggle must be enabled to record video. The wrench button left of each toggle opens a configuration dialog in order to set the compression scheme (codec) for each audio and video output stream.
The audio and video are wrapped in a container format defined by the File Format menu. Different wrappers may record audio only, video only, or both. Some video drivers can only record to a certain container: DV, for example, can only record to Quicktime with DV as the video compression scheme. If you change the file format to an unsupported format, it may not work with the video driver. If the video driver is changed, the file format may be updated to give the supported output.

3.5.2 Audio in

These determine what happens when you record audio.

Record driver This is used for recording audio in the Record window. It may be configured the same as the Record Driver for video if the audio and video are wrapped in the same container. Note that the drivers available are the same as those available in Preferences->Playback. Available parameters vary depending on the driver. See Section 3.2 [Audio drivers], page 20.

Samples to write to disk at a time First, audio is read in small fragments from the device. Then, many small fragments are combined into a large fragment before writing to disk. The disk writing is done in a separate thread from the device reading. The value here determines how large the combination of fragments is for each disk write.

Sample rate for recording Regardless of what the project settings are, the value set here will be the sample rate used for recording. The sample rate should be set to the highest value the audio device supports.

Use software for positioning information Video uses audio for synchronization, but most soundcards do not give accurate position information. Selecting this option makes Cinelerra calculate an estimation of audio position in software instead of hardware for synchronization.

3.5.3 Video in

These determine what happens when you record video.

Record driver This is used for recording video in the Record window. It may be configured the same as the Record Driver for audio if the audio and video are wrapped in the same stream. Note that the drivers are the same as those available in Preferences->Playback. Available parameters vary depending on the driver. See Section 3.3 [Video drivers], page 21.

Frames to record to disk at a time Frames are recorded in a pipeline. First, frames are buffered in the device. Then, they are read into a larger buffer for writing to disk. The disk writing process is done in a different thread; for certain codecs the disk writing uses multiple processors. The value set here determines how many frames are written to disk at a time.

Frames to buffer in device This is the number of frames to store in the device before reading, and determines how much latency there can be in the system before frames are dropped.

Sync drives automatically For high bitrate recording, the disk drives you use may be fast enough to store the data, but your operating system may wait several minutes and stall as it writes several minutes of data at a time. This forces the operating system to flush its buffers every second instead of every few minutes, to produce slightly better real-time behavior.
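To see why the Sync drives automatically option matters, a back-of-envelope calculation — the numbers are only an example (uncompressed 720x576 video at 2 bytes per pixel, 25 fps), not a measurement of any particular device:

```shell
# Example figures for an uncompressed PAL-sized stream.
width=720; height=576; bytes_per_pixel=2; fps=25
bytes_per_sec=$((width * height * bytes_per_pixel * fps))
echo "$((bytes_per_sec / 1000000)) MB/s sustained"
echo "$((bytes_per_sec * 60 / 1000000)) MB can pile up per minute without flushing"
```

At roughly 20 MB/s, letting the operating system accumulate even one minute of dirty buffers means over a gigabyte must be flushed at once, which is exactly the multi-minute stall this option avoids.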
Size of captured frame This is the size of the recorded frames in pixels. It is independent of the project frame size, because most video devices only record a fixed frame size. If the frame size given here is not supported by the device, Cinelerra may crash.

Frame rate for recording The frame rate recorded can be different from the project settings. This sets the recorded frame rate.

3.6 Performance

You will spend most of your time configuring this section. The main focus of the performance section is rendering parameters not available in the rendering dialog.

Cache items To speed up rendering, several assets are kept open simultaneously. This determines how many are kept open. A number too large may exhaust your memory pretty fast and result in a crash. A number too small may result in slow playback as assets need to be reopened more frequently.

Seconds to preroll renders Some effects need a certain amount of time to settle in. Checking this option sets a number of seconds to render without writing to disk before the selected region is rendered. When using a renderfarm, you will sometimes need to preroll to get seamless transitions between the jobs; every job in a renderfarm is prerolled by this value. This does not affect background rendering, which uses a different preroll value.

Force single processor use Cinelerra tries to use all processors on the system by default, but sometimes you will only want to use one processor, like in a renderfarm client. This forces only one processor to be used. The operating system usually uses the second processor for disk access anyway, so this option is really a 1.25 processor mode. The value of this parameter is used in renderfarm clients.

3.6.1 Background rendering

Background rendering was originally conceived to allow HDTV effects to be displayed in realtime. It causes temporary output to be rendered constantly while the timeline is being modified. The temporary output is displayed during playback whenever possible. This is useful for transitions and previewing effects that are too slow to display in real time. If a renderfarm is enabled, the renderfarm is used for background rendering. This gives you the potential for real-time effects if enough network bandwidth and CPU nodes exist.

Background rendering is enabled in the Performance tab of the Preferences window. It has one interactive function: Settings menu -> Set background render. This sets the point where background rendering starts up to the position of the insertion point. If any video exists, a red bar appears in the time ruler showing what has been background rendered. It is often useful to insert an effect or a transition and then select Settings menu -> Set background render right before the effect to preview it in real time and full framerates.

Frames per background rendering job This only works if a renderfarm is being used; otherwise, background rendering creates a single job for the entire timeline. The number of frames specified here is scaled to the relative CPU speed of rendering nodes and used in a single renderfarm job. The optimum number is 10 - 30, since network bandwidth is used to initialize each job.

Frames to preroll background This is the number of frames to render ahead of each background rendering job. Background rendering is degraded when preroll is used since the jobs are small. When using background rendering, this number is ideally 0. Some effects may require 3 frames of preroll.
Output for background rendering Background rendering generates a sequence of image files in a certain directory. This parameter determines the filename prefix of the image files. It should be on a fast disk, accessible to every node in the renderfarm by the same path. Since hundreds of thousands of image files are usually created, ls commands will not work in the background rendering directory, and the browse button for this option normally will not work either. The configuration button for this option works.

File format The file format for background rendering has to be a sequence of images. The format of the image sequence determines the quality and speed of playback. JPEG is good most of the time.

3.6.2 Renderfarm

To use the renderfarm, set these options; ignore them for a standalone system.

Use render farm for rendering When selected, all the File->Render operations use the renderfarm.

Nodes Displays all the nodes on the renderfarm and shows which ones are active. Nodes are added by entering the host name of the node, verifying the value of port, and selecting Add node. Once nodes are created, select the ON column to activate and deactivate nodes. Nodes may be edited by highlighting a row and hitting Apply changes.

Hostname Edit the hostname of an existing node or enter the hostname of a new node here.

Port Edit the port number of an existing node or enter the port number of a new node here.

Apply changes When editing an existing node, select this to commit the changes to hostname and port. The changes will not be committed if you do not click this button.

Add node Create a new node with the hostname and port settings.

Delete node Deletes whatever node is highlighted in the nodes list.

Sort nodes Sorts the nodes list based on the hostname.

Reset rates This sets the framerate for all the nodes to 0. Frame rates are used to scale job sizes based on the CPU speed of the node. Frame rates are calculated only when the renderfarm is enabled.

If you have hundreds of nodes, experienced users may be better off editing the ‘~/.bcast/.Cinelerra_rc’ file rather than using the configuration dialog. Remember that ‘.Cinelerra_rc’ is overwritten whenever a copy of Cinelerra exits.

Total jobs to create Determines the number of jobs to dispatch to the renderfarm. You can determine the total jobs to create by multiplying the number of nodes, including the master node, by some number. Multiply them by 1 to have one job dispatched for every node. Multiply them by 3 to have 3 jobs dispatched for every node. The more jobs you create, the more finely balanced the renderfarm becomes. If you have 10 slave nodes and one master node, specify 33 to have a well balanced renderfarm.
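The Total jobs rule of thumb above can be written out directly, using the manual's own example of 10 slave nodes plus one master with 3 jobs per node:

```shell
# Total jobs = (slave nodes + master) * jobs per node.
slave_nodes=10
jobs_per_node=3
total_jobs=$(( (slave_nodes + 1) * jobs_per_node ))
echo "$total_jobs"   # prints 33
```

A higher jobs-per-node multiplier balances the farm more finely at the cost of more per-job network initialization overhead.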
3.7 Interface

These parameters affect purely how the user interface works.

Index files go here Back in the days when 4 MB/sec was extremely fast for a hard drive, index files were introduced to speed up drawing the audio tracks. This option determines where index files are placed on the hard drive.

Size of index file This determines the size of an index file. Larger index sizes allow smaller files to be drawn faster, while slowing down the drawing of large files. Smaller index sizes allow large files to be drawn faster, while slowing down small files.

Number of index files to keep To keep the index directory from becoming unruly, old index files are deleted. This determines the maximum number of index files to keep in the directory.

Delete existing indexes When you change the index size or you want to clean out excess index files, this deletes all the index files.

Use thumbnails The Resource Window displays thumbnails of assets by default. Drawing asset thumbnails can take a while; this option disables thumbnail drawing.

Time format Various representations of time are given. Select the most convenient one. The time representation can also be changed by CTRL clicking on the timebar.

Min dB for meter Some sound sources have a lower noise threshold than others. Everything below the noise threshold is meaningless. This option sets the meters to clip below a certain level. Consumer soundcards usually bottom out at -65 dB. Professional soundcards bottom out at -90 dB. See Chapter 11 [Sound level meters window], page 81.

Max dB for meter This sets the maximum sound level represented by the sound meters. No matter what this value is, no soundcard can play sound over 0 dB. This value is presented merely to show how far over the limit a sound wave is. See Chapter 11 [Sound level meters window], page 81.

Theme Cinelerra supports variable themes. Select one here and restart Cinelerra to see it.

Dragging edit boundaries does what Cinelerra not only allows you to perform editing by dragging edit boundaries, but also defines three separate operations that occur when you drag an edit boundary. Here you can select the behavior of each mouse button. The usage of each editing mode is described in the editing chapter. See Section 7.10 [Trimming], page 58.

3.8 About window

This section gives you information about the copyright, the time of the current build, the lack of a warranty, and the versions of some of the libraries. Be sure to agree to the terms of the lack of the warranty.
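The Min dB meter figures in the Interface section above can be related to linear amplitude with the standard formula amplitude = 10^(dB/20). A sketch for the -65 dB consumer soundcard floor:

```shell
# Linear amplitude of the -65 dB noise floor relative to full scale (0 dB).
awk 'BEGIN { printf "%.6f\n", 10 ^ (-65 / 20) }'
```

That is, -65 dB is about 0.00056 of full scale, which is why everything below that floor on a consumer card is meaningless noise.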
4 Project attributes

4.1 Set format window

When you play media files in Cinelerra, the media files have a certain number of tracks, a certain frame size, a certain sample size, and so on and so forth. No matter what attributes the media file has, it is played back according to the project attributes. So, if an audio file's sample rate is different than the project attributes, it is resampled. In like fashion, if a video file's frame size is different than the project attributes, the video is composited on a black frame, either cropped or bordered with black.

The project attributes are adjusted in Settings->Set Format and to a lesser extent in File->New. When you adjust project settings in File->New, a new, empty timeline is created. Every timeline created from this point on uses the same settings. When you adjust settings in Settings->Format, media on the timeline is left unchanged.

Set Format window

In addition to the traditional settings for sample rate, frame rate, frame size, Cinelerra uses some unusual settings like channel positions, color model, and aspect ratio.

4.2 Presets

Select an option from this menu to have all the project settings set to one of the known standards.

4.3 Audio attributes

Samplerate Sets the samplerate of the audio. The project samplerate does not have to be the same as the media sample rate that you load. Media is resampled to match the project sample rate.

Channels Sets the number of audio channels for the new project. The number of audio channels does not have to be the same as the number of tracks.

Tracks Sets the number of audio tracks for the new project. Tracks can be added or deleted later, but options are provided here for convenience.
the New Project dialog creates video tracks whose size match the video output.
The channel position widget The channels are numbered. Canvas size Sets the size of the video output. The project framerate does not have to be the same as an individual media file frame rate that you load. Tracks can be added or deleted later. When rendered.4 Video attributes
Tracks Sets the number of video tracks the new project is assigned.1. Channel positions is the only setting that does not affect the output necessarily. Different channels can be positioned very close together to make them have the same output. The audio channel positions correspond to where in the panning widgets each of the audio outputs is located. If the aspect ratio differs from the results of the formula above. but options are provided here for convenience. Aspect ratio Sets the aspect ratio. page 45. This ensures pixels are always square. See Section 6. It has nothing to do with the actual arrangement of speakers.3 [Panning audio tracks]. Initially. the pan controls on the timeline can distinguish between them. each track also has its own frame size. The speakers can be in any orientation.
.32
Chapter 4: Project attributes
Channels positions The currently enabled audio channels and their positions in the audio panning boxes in the track patchbay are displayed in the channel position widget. Framerate Sets the framerate of the video. Click on a speaker icon and drag to change the audio channel location. The aspect ratio is applied to the video output. page 52. the output from channel 1 is rendered to the first output track in the file or the first soundcard channel of the soundcard. See Section 7.4 [The track popup menu]. The closer the panning position is to one of the audio outputs. The video track sizes can be changed later without changing the video output. the more signal that speaker gets. It is merely a convenience. so that when more than two channels are used. In addition. Auto aspect ratio If this option is checked. Later channels are rendered to output tracks numbered consecutively. your output will be in non-square pixels. A different speaker arrangement is stored for every number of audio channels since normally you do not want the same speaker arrangement for different numbers of channels. Media is reframed to match the project framerate. The aspect ratio can be different than the ratio that results from the formula: h / v (the number of horizontal pixels divided into the number of vertical pixels). the New Project dialog always recalculates the Aspect ratio setting based upon the given Canvas size.
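As a quick worked example of the h / v formula: for a 720x576 PAL canvas with a 4/3 aspect ratio, the two ratios differ, so the output uses non-square pixels. This is a sketch in shell integer arithmetic; the numbers are illustrative:

```shell
#!/bin/sh
width=720
height=576

# Work in integers by scaling each ratio by 1000.
frame_ratio=$((width * 1000 / height))   # h / v: 720/576 = 1.25 -> 1250
aspect_ratio=$((4 * 1000 / 3))           # 4/3 = 1.333... -> 1333

if [ "$frame_ratio" -eq "$aspect_ratio" ]; then
    echo "square pixels"
else
    echo "non-square pixels"
fi
```

Since 1250 does not equal 1333, the pixels are non-square; checking Auto aspect ratio would instead force the aspect ratio to 720/576 = 1.25 and make the pixels square.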
Color model: The project will be stored in the color model for video intermediates selected in the dropdown. The color model is very important for video playback because video has the disadvantage of being very slow. Although it is not noticeable, audio intermediates contain much more information than the audio on disk and the audio which is played; audio always uses the highest bandwidth intermediate because it is fast. Video intermediates must use the least amount of data for the required quality because video is slow, but video intermediates still use a higher bandwidth color model than the video which is stored and the video which is played. This allows more processing to be done with less destruction of the original data.

The video is stored on disk in one colormodel, usually a YUV derivative. When played back, Cinelerra decompresses it from the file format directly into the format of the output device. If effects are processed, Cinelerra decompresses the video into an intermediate colormodel first and then converts it to the format of the output device. The selection of an intermediate colormodel determines how fast and accurate the effects are. Cinelerra colormodels are described using a certain packing order of components and a certain number of bits for each component. The packing order is printed on the left and the bit allocation is printed on the right.

RGB-888: This allocates 8 bits for the R, G, and B channels and no alpha. This is normally used for uncompressed media with low dynamic range.

RGBA-8888: This allocates an alpha channel to the 8 bit RGB colormodel. It is used for overlaying multiple tracks.

YUV-888: This allocates 8 bits for Y, U, and V. This is used for low dynamic range operations in which the media is compressed in the YUV color space. Most compressed media is in YUV, and this colormodel allows video to be processed fast with the least color degradation.

YUVA-8888: This allocates an alpha channel to the 8 bit YUV colormodel for transparency.

RGB-Float: This allocates a 32 bit float for the R, G, and B channels and no alpha. This is used for high dynamic range processing with no transparency.

RGBA-Float: This adds a 32 bit float for alpha to RGB-Float. This is used for high dynamic range processing with transparency.

In order to do effects which involve alpha channels, a colormodel with an alpha channel must be selected. These are RGBA-8888, YUVA-8888, and RGBA-Float. The 4 channel colormodels are slower than 3 channel colormodels, with the slowest being RGBA-Float. Some effects, like fade, work around the need for alpha channels, while other effects, like chromakey, require an alpha channel to do anything, so it is a good idea to try the effect without alpha channels to see if it works before settling on an alpha channel and slowing things down.

When using compressed footage, YUV colormodels are usually faster than RGB colormodels. They also destroy fewer colors than RGB colormodels. If footage stored as JPEG or MPEG is processed many times in RGB, the colors will fade, whereas they will not fade if processed in YUV.

Years of working with high dynamic range footage have shown floating point RGB to be the best format for high dynamic range; 16 bit integers were used in the past and were too lossy and slow for the amount of improvement. RGB float does not destroy information when used with YUV source footage and also supports brightness above 100%. Be aware that some effects, like Histogram, still clip above 100% when in floating point.
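The speed differences between colormodels follow directly from the amount of data per frame. A small sketch, using an illustrative 720x576 frame and the bit allocations listed above:

```shell
#!/bin/sh
width=720
height=576
pixels=$((width * height))

rgb888=$((pixels * 3))          # RGB-888: 3 bytes per pixel
rgba8888=$((pixels * 4))        # RGBA-8888: 4 bytes per pixel
rgb_float=$((pixels * 3 * 4))   # RGB-Float: three 32 bit floats per pixel
rgba_float=$((pixels * 4 * 4))  # RGBA-Float: four 32 bit floats per pixel

echo "RGB-888:    ${rgb888} bytes/frame"
echo "RGBA-Float: ${rgba_float} bytes/frame"
```

An RGBA-Float frame carries more than five times the data of an RGB-888 frame, which is one reason to try an effect without an alpha channel before committing to a 4 channel float intermediate.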

5 Loading and saving files

5.1 Supported file formats

Here are most of the supported file formats that can be loaded and rendered to, with notes regarding their compression. You may be able to load other formats not described here. The format of the file affects what Cinelerra does with it. Edit decision lists stored in XML save the project settings. Formats which contain media but no edit decisions just add data to the tracks. If your project sample rate is 48 kHz and you load a sound file with 96 kHz, you will still be playing it at 48 kHz. If you load an EDL file at 96 kHz and the current project sample rate is 48 kHz, you will change it to 96 kHz. Some file formats are very slow to display on the timeline. These usually have video which is highly compressed. Drawing highly compressed video thumbnails on the timeline (picons) can be very slow. Disable picon drawing for these files with the draw media toggle in the patchbay to speed up operations. Supported file formats that Cinelerra can import and export are currently:

5.1.1 Quicktime

Quicktime is not the standard for UNIX, but we use it because it is well documented. All of the Quicktime movies on the internet are compressed. Cinelerra supports some compressed Quicktime movies. If Cinelerra crashes when loading a Quicktime movie, it is most likely because the format was not supported. Quicktime is a container for two streams, a video stream and an audio stream. These streams are compressed using separate encoding schemes. The preferred encoding for Quicktime output is MPEG-4 Video and MPEG-4 Audio. This format is compatible with the commercial players for Windows, has good compression quality and good output quality. For better compression, use H.264 video. Unfortunately H.264 decoding is so slow it can not play very large frame sizes. Cinelerra supports two non-standard codecs: Dual MPEG-4 video and Dual H.264 video. These will not play in anything but Cinelerra and XMovie. They are designed for movies where the frames have been divided into two fields, each field displayed sequentially. The dual codecs interleave two video streams to improve efficiency without requiring major changes to the player.

5.1.2 MPEG-4 audio

This is the same as Quicktime with MPEG-4 Audio as the audio codec.

5.1.3 Still images

5.1.3.1 Loading still images

You can load still images on video tracks just like you do for any video file. Supported file formats are mainly: PNG, TIF, TGA, JPG, EXR and raw digital camera images. When loaded on the timeline, by default the image takes up one frame in length. To view the image, zoom in time (DOWN ARROW) on the timeline so you can see the single frame. To extend the length of the image, drag its boundaries just as you would do with regular video media. You can drag the boundaries of a still image as much as you want: images in Cinelerra have the ability to be dragged to an infinite length. Cinelerra also lets you define the initial duration of the loaded images. The parameter is set in the Images section of the Settings->Preferences->Recording window.

Rendering a video to a single image causes the final image file to be overwritten for every timeline position. No table of contents is created. The rendered file is a single still image of the last frame of the video. See Section 5.1.3.4 [Images sequence], page 37.

5.1.3.2 Still images size

Imported images always stay at their original size. When the size of your image is different from the size of your project, you might want to keep the image at its original size, load it on a dedicated track, and adjust the display of it with the camera zoom. See Section 8.2.1 [The camera and projector], page 62. Alternatively, you may need to scale your pictures before importing them in Cinelerra.

Unless your original material comes from a digital source (like a digital photo camera), the first thing you have to do before you can use it is to somehow capture the assets into a usable digital medium. For old photos, paper maps, drawings or diagrams, you might have to scan them into a file format like PNG, TIF, TGA or JPG by using a digital scanner. If your assets come from a digital source like a digital camera or a screen capture, be sure to capture the material using the best resolution possible. This will allow you to get the best quality output from your Cinelerra project. You might want to use Gimp to post-process the images, clean damaged areas or color correct the assets.

For resizing your picture to fit the project size you can use ImageMagick (http://www.imagemagick.org/script/index.php). Example:

convert inputfile.jpg -resize 720x576 outputfile.jpg

For your imported images to be displayed correctly, you also have to take into account the aspect ratio of your video. The PAL image aspect ratio is 4/3, but 720x576 is 5/4. Since the two ratios differ, you have to rescale the horizontal size of the pictures you want to import:

new horizontal size = (5/4) / (4/3) x original horizontal size

For PAL videos, you therefore have to multiply the horizontal size by a factor of 0.9375. Here is a small shell script which, when run from a folder containing jpg images, resizes those images and puts the new images in a 'resized' folder. Note: make sure you have installed ImageMagick, which provides the 'identify' and 'convert' commands used in the script.

#!/bin/sh
mkdir resized
for element in `ls . | grep -i '\.jpe*g$'`; do
  size=`identify ${element}`
  width=`echo ${size} | sed 's+.*JPEG ++' | sed 's+x.*++'`
  height=`echo ${size} | sed 's+.*JPEG [0-9]*x++' | sed 's+ DirectClass.*++'`
  let new_width=${width}*9375/10000
  convert -resize "${new_width}x${height}!" -quality 100 ${element} resized/${element}
done

5.1.3.3 Open EXR images

You may not know about Open EXR. This format stores floating point RGB images. It also supports a small amount of compression. Projects which render to EXR should be in a floating point color model to take advantage of the benefits of EXR. See Chapter 4 [Project attributes], page 31.
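The integer arithmetic in the resize script can be checked by hand. For a hypothetical 788 pixel wide image, multiplying by 9375/10000 (that is, 0.9375) gives:

```shell
#!/bin/sh
width=788
# Same computation as the script's let expression, in POSIX arithmetic.
new_width=$((width * 9375 / 10000))
echo "${new_width}"   # 738 (788 * 0.9375 = 738.75, truncated to an integer)
```

The fractional part is simply truncated, which is harmless at these pixel sizes.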

Several compression options are available for EXR:

PIZ: Lossless wavelet compression. This is the best compression.

ZIP: Lossless gzip algorithm.

RLE: Lossless run length encoding. This is the fastest, but worst compression.

PXR24: Lossy compression where the floating point numbers are converted to 24 bits and compressed with gzip.

Select Use Alpha if the project colormodel has an alpha channel and you want to retain it in the file. Otherwise the primary colors are multiplied by the alpha channel.

5.1.3.4 Images sequence

An images sequence is a series of ordered still pictures (e.g. the frames of an animated scene). They can be loaded as multiple files. An image sequence can also be represented in Cinelerra by an image list file, called also Table Of Contents (TOC) file. A TOC is not a media file, but it behaves like a video clip. It is a text file with a specific format containing absolute paths to every frame in the sequence, plus additional information like image resolution, file format and sequence framerate. Once created, the table of contents can be loaded as a single asset instead of the individual images. Images lists can be edited manually.

Cinelerra creates TOC files by rendering to "Images sequence". Cinelerra can create TOCs with the following formats: JPEG, PNG, TIFF, TGA, EXR. When rendering a video to an images sequence, Cinelerra creates a different image file for every timeline position and generates a TOC for this images sequence. This is useful to split video into frames as single stills. When rendering a series of stills to an images sequence, Cinelerra generates a TOC for the images sequence but also creates a different image file for every still: the source files are copied and renamed. This is useful only when you want to create a list and change the format of your source files. For creating a TOC file without creating new image files you can use external list generators like IMG2LIST 0.5 (by Claudio "malefico" Andaur) or Seven Gnomes (by Peter Semiletov). See http://cvs.cinelerra.org/user-tips.php.

5.1.4 Raw digital camera images

RAW digital camera images are a special kind of image file that Cinelerra only imports. Raw images from Canon cameras are the only ones tested. Because raw images take a long time to interpolate, these must be processed in a floating point color space, and they are usually viewed first in a proxy file and then touched up. They need to have the Gamma effect applied to correct gamma. To get better performance, first apply the Gamma effect to a track of raw images and set it to automatic with 0.6 gamma. Then render the timeline to a Quicktime JPEG file. Append the Quicktime JPEG file in a new track and disable playback of the old track. Now the gamma corrected copy of each raw image can be previewed relatively fast in the same timeline position as the original image.

5.1.5 AVI

Because AVI (Audio-Video Interleave) is so fragmented, with varied audio and video codecs, you may not be able to play all AVI formatted files.
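For reference, an image list (TOC) file of the kind described above is plain text. The sketch below only illustrates the general shape of a JPEG list; the exact header lines are an assumption and vary between versions, so inspect a TOC generated by your own build before writing one by hand:

```
JPEGLIST
# Frame rate:
25.000000
# Width:
720
# Height:
576
# List of image files follows
/absolute/path/to/img_0001.jpg
/absolute/path/to/img_0002.jpg
/absolute/path/to/img_0003.jpg
```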
5.3.g.

5.1.6 MPEG files containing video

MPEG files containing video can be loaded directly into Cinelerra. If the file is supported, a table of contents is built. If the file is unsupported, it usually crashes or shows very short tracks. Also, this method of loading MPEG files is not good enough if you intend to use the files in a renderfarm. To use MPEG files in a renderfarm, you must create the TOC with mpeg3toc in order to generate a table of contents for the file, and then load the table of contents. mpeg3toc needs the absolute path of the MPEG file. If you do not use an absolute path, it assumes that the MPEG file is in the same directory that Cinelerra is run from. For renderfarms the filesystem prefix should be / and the movie directory mounted under the same directory on each node.

MPEG streams are structured into multiple tracks. Each track can be video or audio. Each audio track can have 1-6 channels. Cinelerra converts each channel of audio into a track.

Notes on MPEG video encoding: MPEG video encoding is done separately from MPEG audio encoding. In MPEG video there are two colormodels. The YUV 4:2:0 colormodel is encoded by a highly optimized version of mpeg2enc with presets for standard consumer electronics. In the process of optimizing mpeg2enc, they got rid of YUV 4:2:2 encoding. The YUV 4:2:2 colormodel is encoded by a less optimized version of mpeg2enc. YUV 4:2:2 encoding was kept around because the NTSC version of DV video loses too much quality when transferred to YUV 4:2:0; this DV video must be transferred to YUV 4:2:2. When encoding YUV 4:2:0, the bitrate parameter changes meaning depending on whether the bitrate or quantization is fixed. If the bitrate is fixed, it is the target bitrate. If the quantization is fixed, it is the maximum bitrate allowed. This is a quirk of the mpeg2enc version.

5.1.7 DVD movies

DVDs are split into a number of programs, each identified by a unique 'IFO' file. If you want to load a DVD, find the corresponding 'IFO' file for the program of interest. Load the IFO file directly and a table of contents will be built. Alternatively, for renderfarm usage, a table of contents can be created separately. Run:

mpeg3toc -v /cdrom/video_ts/vts_01_0.ifo dvd.toc

or something similar. Then load 'dvd.toc'.

5.1.8 MPEG 1 audio

MPEG 1 audio files have .mp2 and .mp3 extensions. If the files are encoded using a fixed bitrate, they can be loaded directly into Cinelerra. Otherwise a table of contents (TOC) needs to be created and loaded as a resource in place of the audio file. If you know your audio stream has a variable bitrate, or if you see that Cinelerra can not seek and play back your file properly, you need to run mpeg3toc in order to generate the table of contents. Here is an example command:

mpeg3toc -v /path/to/myfile.mp3 myfile.toc

'myfile.toc' is the Table of Contents that can be loaded as a resource. The path should be absolute unless you plan to always keep your .toc file in the same directory as the media file.

5.1.9 Ogg Theora/Vorbis

The OGG format is an antiquated but supposedly unpatented way of compressing audio and video. The quality is not as good as H.264 or MPEG-4 Audio. In reality, anyone with enough money and desire can find a patent violation, so the justification for OGG is questionable.
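TOC creation for a whole directory of MPEG audio files can be scripted along the same lines. This sketch only prints the mpeg3toc commands it would run, so they can be inspected first; the directory and file names are hypothetical, and dropping the echo would execute the commands:

```shell
#!/bin/sh
dir="/data/audio"                 # hypothetical absolute media path
for f in song1.mp3 song2.mp3; do  # stand-ins for the real files in $dir
    cmd="mpeg3toc -v ${dir}/${f} ${dir}/${f%.mp3}.toc"
    echo "${cmd}"
done
```

Keeping the paths absolute, as advised above, means the generated .toc files keep working when loaded on renderfarm nodes that mount the media under the same directory.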
Chapter 5: Loading and saving files
5. this method of loading MPEG files is not good enough if you intend to use the files in a renderfarm.toc ‘myfile.

5.1.10 Edit decision list

Edit decision lists are generated by Cinelerra for storing projects. EDL files end in .xml. When loaded, they change the attributes of the current project. Because edit decision lists consist of text, they can be edited in a text editor.

5.1.11 WAV

FIXME

5.1.12 PCM

FIXME

5.1.13 AIFF

FIXME

5.1.14 AC3 audio

FIXME

5.2 Loading files

All data that you work with in Cinelerra is acquired either by recording from a device or by loading from disk. This section describes loading. The loading and playing of files is just as you would expect: go to File->Load Files, select a file for loading, and click OK. Depending on the setting of the Insertion Strategy list box, your file will be either loaded in the Resources (Media) window or directly in the Program window. In the latter case, click the forward play button and it should start playing, regardless of whether a progress bar has appeared. If the file has audio, Cinelerra may build an index file in order to speed up drawing. You can edit and play the file while the index file is being built.

The Load window

5.2.1 Insertion strategy

Usually, three things happen when you load a file:

1. the existing project is cleared from the screen
2. the project's attributes are changed to match the file's attributes
3. the new file's tracks are created in the timeline

However, Cinelerra lets you change what happens when you load a file. In the Load dialog window go to the Insertion strategy box and select one of the options of the drop down menu. Each of these options loads the file a different way. If the file is a still image, the project's attributes are not changed and the first frame of the track becomes the image.

Replace current project: All tracks in the current project are deleted and a set of new tracks are created to match the source file. Project attributes are only changed when loading XML. If multiple files are selected for loading, Cinelerra adds a set of new tracks for each file. New resources are created in the Resources Window.

Replace current project and concatenate tracks: Same as replace current project, except that if multiple files are selected, Cinelerra will concatenate the tracks of each file, inserting different source files in the same set of tracks, one after another, in alphanumeric order, starting at 0. New resources are created in the Resources Window.

Append in new tracks: The current project is not deleted and new tracks are created for the source, one set of tracks for each file. New resources are created in the Resources Window.

Concatenate to existing tracks: The current project is not deleted and new files are concatenated to the existing armed tracks, starting at the end of the tracks, one after another, in alphanumeric order. If the current project has more tracks than the source, the source file will be inserted in the first set of armed tracks. If no tracks are armed, no files will be inserted. New resources are created in the Resources Window.

Paste at insertion point: The file is pasted into the timeline at the insertion point, on the first set of armed tracks. If multiple files are selected for loading, they will be inserted on the same set of tracks, one after the other. New resources are created in the Resources Window.

Create new resources only: The timeline is unchanged and new resources are created in the Resources Window only.

The insertion strategy is a recurring option in many of Cinelerra's functions. In each place the options do the same thing. Using these options, you can almost do all your editing by loading files.

5.2.2 Loading multiple files

In the Load dialog go to the list of files. Select a file. Go to another file and select it while holding down CTRL. This selects one additional file. Go to another file and select it while holding down SHIFT. This selects every intervening file. This behavior is available in most list boxes. Use this method together with the Concatenate to existing tracks insertion strategy to create an images slideshow or a song playlist.

5.2.3 Loading files from the command prompt

Another way to load files is to pass the filenames as arguments on the command line:

cinelerra myvideo.mov myothervideo.mov

This starts the program with all the arguments loaded, creating new tracks for every file; the files are loaded with Replace current project rules.
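Since concatenation happens in alphanumeric order, a slideshow built from the command line keeps its intended order only if the file names sort correctly. Zero padding the numbers avoids surprises; the file names here are hypothetical, and LC_ALL=C pins down the byte-wise sort order:

```shell
#!/bin/sh
# Without zero padding, "10" sorts before "2":
printf '%s\n' img2.png img10.png img1.png | LC_ALL=C sort
# img1.png
# img10.png
# img2.png

# With zero padding the order is the intended one:
printf '%s\n' img02.png img10.png img01.png | LC_ALL=C sort
# img01.png
# img02.png
# img10.png
```

So name slideshow frames img001.png, img002.png, ... before loading them as command line arguments.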
5.2.4 Filtering files by extension

If there are too many files in your media directory, it can be difficult to find the file you want. For this purpose, the Load window allows you to filter which files are displayed in the list box by extension name. Click the dropdown box (right below the file name text box) and select the file extension of your media (i.e. avi, mov, mp3, mpg, etc.). The file list now shows only files with the selected extension.

5.2.5 Loading other formats

If you can not load a particular kind of video clip and do not have the original source file, you will have to convert it to a format supported by Cinelerra. You can identify the codecs and container of any video by running the following command:

mplayer -identify <your_video_file.xyz>

Converting your file to mpeg2 is a good solution, since Cinelerra loads that format without any problem. However, the mpeg2 format requires the video to have a specific image size and framerate:

PAL is 720x576 at 25 fps
NTSC is 720x480 at 29.97 (=30000/1001) fps

If your input file has those properties, you should use ffmpeg to do the conversion. For input files which do not have those properties, you should use mencoder to convert to MPEG-4.

Converting with ffmpeg:

ffmpeg -sameq -i original_video.xyz converted_video.mpeg

The '-sameq' option maintains the original quality.

Converting with mencoder:

mencoder original_video.xyz -ovc lavc -lavcopts vcodec=mpeg4:\
vhq:vbitrate=6000 -oac mp3lame -lameopts br=256:vol=1 \
-ffourcc DIVX -o converted_video.avi

5.3 Loading the backup

There is one special XML file on disk at all times. After every editing operation Cinelerra saves the current project to a backup in '$HOME/.bcast/backup.xml'. In the event of a crash, the first thing you should do after restarting Cinelerra is select File->Load backup in order to load the backup. This will start Cinelerra at the point in your editing operations directly before the program crashed. It is important after a crash to restart Cinelerra without performing any editing operations, as you would overwrite the backup. Note that the backup.xml file is always a single file, so when you are working with two instances of Cinelerra open at the same time, the last operation made in whichever instance will overwrite the backup.

5.4 Saving project files

Cinelerra saves projects as XML files. Go to File->Save. Select a file to overwrite or enter a new file. Cinelerra automatically concatenates '.xml' to the filename if no '.xml' extension is given. The saved file consists of text. It contains all the project settings and locations of every edit. Instead of media, the file contains pointers to the original media files on disk: when Cinelerra saves a file, it saves an edit decision list (EDL) of the current project but does not save any media.

For each media file, the XML file stores either an absolute path or just the relative path. If the media is in the same directory as the XML file, a relative path is saved. If it is in a different directory, an absolute path is saved. You have to be careful when moving files around: you risk breaking the media linkages. You can keep the media and the XML file in the same directory forever and freely move the whole directory, since relative paths are saved. Alternatively, you can save the XML file in a different directory than the media, but then you can not move the media ever again, since absolute paths are saved. In this last case you can still freely move your XML file around.

If you saved your XML file in the same directory of your media but you would like to move your project to another location, you can change the paths from relative to absolute by going to File->Save as... and entering the new location. Directly type the path in the first field of the dialog or click on the magnifier on the right to browse your files. Similarly, if you saved your project outside your media directory but you would like to move your media to another location, you can change the paths from absolute to relative by going to File->Save as... and saving your XML file in the same directory of the media. This keeps the media paths relative.

You can also replace the path of every asset whose source file you moved from within the program, by entering the new location in the Asset info window. To open this window, right click on the asset in the Resources window and choose Info... in the popup menu. Operating from the GUI is convenient only when a very small number of changes is needed. Since an XML file is a text file, you can always repair broken media linkage by editing the XML file in a text editor: for every media file you moved, search for the old path and replace it with the new one. Don't forget to make a backup copy of your XML file before doing any editing!

You can not play XML files in a dedicated movie player: XML files are specific to Cinelerra only. Render your projects to a final format for more persistent storage of the output. The XML file also requires you to maintain copies of all the source assets on hard disk, which can take up space and cost a lot of electricity to spin. Real-time effects in an XML file have to be re-synthesized every time you play it back. XML files are useful in saving the current state of Cinelerra before retiring from an editing session. If you want to create an audio playlist and burn it on a CD-ROM, save the XML file in the same directory as the audio files and burn the entire directory.
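The search-and-replace fix described above can also be scripted instead of done by hand in an editor. A sketch using sed on a miniature stand-in project file; the asset layout and paths are hypothetical, and a real Cinelerra EDL contains much more:

```shell
#!/bin/sh
# A miniature stand-in for a saved project file.
cat > project.xml <<'EOF'
<ASSET SRC="/old/media/path/clip1.mov"></ASSET>
<ASSET SRC="/old/media/path/clip2.mov"></ASSET>
EOF

# Keep a backup copy, as the manual advises, then rewrite the old path.
# Using '+' as the sed delimiter avoids escaping every slash in the paths.
cp project.xml project.xml.bak
sed 's+/old/media/path+/new/media/path+g' project.xml > fixed.xml

grep -c '/new/media/path' fixed.xml   # 2
```

Check the result in Cinelerra before discarding the backup copy.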
5.5 Merging projects

To merge several separate projects into one big one:

1. Open Cinelerra
2. Load project A
3. Open a second copy of Cinelerra
4. Load project B
5. Cut and paste from A to B

6 Program window

This window contains the timeline and the entry point for all menu driven operations. The timeline displays the project as it is structured in memory: a vertical stack of tracks with a horizontal representation of time, extending across time just as if you placed real photographic film stock end-to-end on a table. This defines the output of rendering operations and what is saved when you save files.

The timeline

Under the Window menu, you will find options that affect the main windows. Default positions repositions all the windows to a 4 screen editing configuration. On dual headed displays, the Default positions operation fills only one monitor with windows.

6.1 Navigating the program window

The program window contains many features for navigation: tracks are stacked vertically and extend across time horizontally. The horizontal scroll bar allows you to scan across time. The vertical scroll bar allows you to scan across tracks. You can adjust the horizontal and vertical magnification of the tracks and the magnification of the audio "waveform" display using the zoom panel bar controls.

6.1.1 Video and audio tracks

A video track

Video tracks represent the duration of your videos and clips. The individual images you see on the track are samples of what is located at that particular instant on the timeline.

An audio track

Audio tracks represent your sound media as an audio waveform. Following the film analogy, it would be as if you "viewed" magnetic tape horizontally on your table.

To the left of the timeline is the patchbay: every track on the timeline has a set of attributes there, used to control the behavior of the track. The most important attribute is arm track.

but the scrollbars will not let you do it. It determines the height of each track.3 The zoom panel
Below the timeline.
The amplitude only affects audio. The sample zoom value is not an absolute reference for the unit of time since it refers to the duration visible on the timeline and thus changes also as you modify the length of the program window horizontally. The horizontal scroll bar allows you to scan across time. Use PAGE UP and PAGE DOWN to scroll up and down the tracks. The tumblers changes curve amplitude. The curve zoom affects the curves in all the tracks of the same type.44
Chapter 6: Program window
6. The vertical scroll bar allows you to scan across tracks.
6. ALT-UP and ALT-DOWN cause the curve amplitude to change. For horizontal scrolling you can use also the mouse wheel with the CTRL key. In addition to the graphical tools.
Changing the sample zoom causes the unit of time displayed in the timeline to change size. you will find the zoom panel.1. See Section 7. keyboard navigation is faster than navigation with a mouse. mouse over the tumblers and use the wheel to zoom in and out. amplitude (audio waveform scale).1 [The patchbay]. As a general rule. track zoom (height of tracks in the timeline). It determines how large the waveform appears. It determines the value range for curves. First select the automation type (audio fade. page 49. If you change the track zoom.
CTRL-UP and CTRL-DOWN cause the amplitude zoom to change. the amplitude zoom compensates so that the audio waveforms look proportional. For vertical scrolling you can use also the mouse wheel. It allows you to view your media all the way from individual frames to the entire length of your project.
The track zoom affects all tracks. If your mouse has a wheel and it works in X11. video fade. In I-beam mode. zoom.0 for audio fade and 0.
. Use the UP and DOWN arrows to change the sample zoom by a power of two. You will often need to scroll beyond the end of the timeline. The higher the setting. you may also use the keyboard to navigate.Y) then use the left tumblers for the minimum value and the right tumblers for the maximum value or manually enter the values in the text box. hold down SHIFT while pressing HOME or END in order to select the region of the timeline between the insertion point and the key pressed. but the only way to curve offset is to use the fit curves button .0 to 100. Instead.1. use the RIGHT arrow to scroll past the end of timeline. In addition to the scrollbars. The program window contains many features for navigation and displays the timeline as it is structured in memory.0 to 6. the more frames you can see per screen. The zoom panel contains values for sample zoom (duration visible on the timeline). these zooms are the main tools for positioning the timeline. Use the HOME and END keys to instantly go to the beginning or end of the timeline. CTRL-PGUP and CTRL-PGDOWN cause the track zoom to change.2 Track navigation
Track Navigation involves both selecting a specific audio or video track and moving to a certain time in the track. X.0 for video fade. Normally you will use -40. and curve zoom(automation range).
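The power-of-two stepping of the sample zoom can be illustrated with a small sketch. This is an assumed model of the behavior, not Cinelerra's actual code; the clamp limits are hypothetical.

```python
# Assumed model of the sample-zoom control: UP and DOWN step the visible
# duration by powers of two, clamped to hypothetical limits.

MIN_ZOOM = 1          # hypothetical finest zoom (individual frames)
MAX_ZOOM = 1 << 20    # hypothetical coarsest zoom (entire project)

def step_sample_zoom(zoom: int, key: str) -> int:
    """Return the new sample zoom after an UP or DOWN arrow press."""
    if key == "UP":
        return min(zoom * 2, MAX_ZOOM)
    if key == "DOWN":
        return max(zoom // 2, MIN_ZOOM)
    return zoom

print(step_sample_zoom(64, "UP"))    # 128
print(step_sample_zoom(64, "DOWN"))  # 32
```

Stepping by powers of two keeps successive zoom levels visually distinct while covering a huge range in few keystrokes.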

6.4 The track popup menu

Each track has a popup menu. To activate the track popup menu, RIGHT-click on the track. The popup menu affects the track whether the track is armed on the patchbay or not.

The track menu contains a number of options:

Attach Effect - opens a dialog box of effects applicable to the type of track (audio/video)
Move up - moves the selected track one step up in the stack
Move down - moves the selected track one step down in the stack
Delete track - removes the track from the timeline
Add Track - adds a track of the same media type (audio/video) as the one selected
Resize Track - resizes the track
Match Output Size - resizes the track to match the current output size

6.5 The insertion point

The insertion point is the flashing hairline mark that vertically spans the timeline in the program window.

The insertion point on the main window, represented as a vertical hairline at the 00:00.500 point

Analogous to the cursor in your word processor, the insertion point marks the place on the timeline where the next activity will begin. It is the point where a paste operation takes place. When rendering, it defines the beginning of the region of the timeline to be rendered. It is also the starting point of all playback operations.

Normally, the insertion point is moved by clicking inside the main timebar. Any region of the timebar not obscured by labels and in/out points is a hotspot for repositioning the insertion point.

The main timebar

In cut and paste editing mode only, the insertion point can also be moved by clicking in the timeline itself.

When moving the insertion point, the position is either aligned to frames or aligned to samples. When editing video, you will want to align to frames. When editing audio, you will want to align to samples. Select your preference by using Settings->Align cursor on frames.

6.6 Editing modes

Editing modes are two different methods of operation that affect the insertion point and the editing on the timeline. They are:

- drag and drop mode
- cut and paste mode
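Aligning a position to frames or to samples, as described for the insertion point, amounts to snapping a time value to the nearest boundary of the chosen unit. A minimal sketch of that idea (an illustration of the concept, not Cinelerra's implementation):

```python
# Snap a timeline position either to the nearest video frame or to the
# nearest audio sample, depending on the "Align cursor on frames" setting.

def align_position(seconds: float, frame_rate: float, sample_rate: int,
                   align_to_frames: bool) -> float:
    """Return the time snapped to a frame or sample boundary."""
    unit = frame_rate if align_to_frames else sample_rate
    return round(seconds * unit) / unit

# 1.013 s at 25 fps snaps to frame 25, i.e. exactly 1.0 s
print(align_position(1.013, 25.0, 48000, True))   # 1.0
# the same position aligned to 48 kHz samples barely moves
print(align_position(1.013, 25.0, 48000, False))  # ~1.013
```

The example shows why frame alignment matters for video: sample boundaries are about two thousand times finer than frame boundaries at these rates, so sample-aligned edits can land in the middle of a frame.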

The editing mode is determined by selecting the arrow or the i-beam in the buttonbar.

The editing mode buttons

If the arrow is highlighted, it enables drag and drop mode. In drag and drop mode, clicking in the timeline does not reposition the insertion point. Double-clicking in the timeline selects the entire edit the mouse pointer is over. Dragging in the timeline repositions the edit the mouse pointer is over. This is useful for reordering audio playlists, sorting movie scenes, or moving effects around. To cut and paste in drag and drop mode you need to set in/out points to define an affected region. See Section 7.8 [Drag and drop editing], page 54.

If the i-beam is highlighted, it enables cut and paste mode. In cut and paste mode, clicking in the timeline repositions the insertion point. Double-clicking in the timeline selects the entire edit the cursor is over. Dragging in the timeline highlights a region. SHIFT-clicking in the timeline extends the highlighted region. The highlighted region becomes the region affected by cut and paste operations and the playback range during the next playback operation. See Section 7.9 [Cut and paste editing], page 57.

In both cut and paste mode and drag and drop mode, the start and end points of a highlighted region are either aligned to frames or aligned to samples. When editing video, you will want to align to frames; when editing audio, to samples. Select your preference by using Settings->Align cursor on frames.

Note: Cinelerra CV revisions 943 and 944 (SVN checkouts from 19 to 21 October 2006) had no editing mode buttons. "Copy and paste" and "Drag and drop" editing modes were merged into one, the SHIFT key being the differentiation between them. This is the case of the Gentoo ebuild media-video/cinelerra-cvs-20061020.

6.7 The in/out points

In both editing modes, you can set one in point and one out point. The in/out points define the affected region. In drag and drop mode, they are the only way to define an affected region. In cut and paste mode, the highlighted area overrides the in/out points: if a highlighted area and in/out points are both set, the highlighted area is affected by editing operations and the in/out points are ignored. If no region is highlighted, the in/out points are used.

Tracks with highlighted area, shown inside the green outline

To set in/out points, go to the timebar and position the insertion point somewhere. Select the in point button. Move the insertion point to a position after the in point and click the out point button.

Timebar with in/out points set
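The precedence between a highlighted area and the in/out points, as described above for cut and paste mode, can be condensed into a few lines. This is a sketch of the described semantics, not code from Cinelerra:

```python
# Affected-region selection in cut and paste mode, per the rules above:
# a highlighted area wins; otherwise the in/out points are used.

def affected_region(highlight=None, in_point=None, out_point=None):
    """Return (start, end) in seconds, or None if no region is defined."""
    if highlight is not None:
        return highlight
    if in_point is not None and out_point is not None:
        return (in_point, out_point)
    return None

print(affected_region(highlight=(2.0, 5.0), in_point=1.0, out_point=8.0))  # (2.0, 5.0)
print(affected_region(in_point=1.0, out_point=8.0))                        # (1.0, 8.0)
```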

If you set the insertion point somewhere else while in/out points already exist, clicking the in/out buttons repositions the existing points. Instead of using the button bar, you can use the [ and ] keys to toggle in/out points. Tip: To quickly get rid of in/out points, just double click the [ and ] buttons: the first click sets a new point, or repositions an old one, at the insertion point; the second click deletes it. Obviously this trick does not work if the in point or the out point is already set at the insertion point: in that case, clicking the in point button deletes the in point, and clicking the out point button deletes the out point.

SHIFT-clicking on an in/out point highlights the region between the insertion point and that in/out point. If a region is already highlighted, it extends the highlighted region up to that in/out point. If you click on the in/out buttons while a region is highlighted, the insertion point is ignored and the in/out points are set at the beginning and at the end of the highlighted area. To avoid confusion, it is better to use either highlighting or in/out points, but not both simultaneously.

Normally, in/out points do not affect the playback region. The in/out points determine the playback region only if you hold down CTRL while issuing a playback command.

6.8 Using labels in the program window

The insertion point and the in/out points allow you to define an affected region, but they do not let you jump to exact points on the timeline very easily. For this purpose there are labels. Labels are an easy way to set exact locations on the timeline that you want to jump to. When you position the insertion point somewhere and click the label button, a new label appears on the timeline. Hitting the L key has the same effect as the label button.

Timebar with a label on it

No matter what the zoom settings are, clicking on a label highlights it and positions the insertion point exactly where you set the label. Hitting the label button again when a label is selected deletes it.

Labels can reposition the insertion point when they are selected, but they can also be traversed with the label traversal buttons. When a label is out of view, the label traversal buttons reposition the timeline so the label is visible. With label traversal you can quickly seek back and forth on the timeline, without caring about where the labels are or whether any are set. There are keyboard shortcuts for label traversal, too: CTRL-LEFT repositions the insertion point on the previous label, and CTRL-RIGHT repositions the insertion point on the next label.

With labels you can also select regions. SHIFT-clicking on a label highlights the region between that label and the insertion point. If a region is already highlighted, it extends the highlighted region up to that label. SHIFT-CTRL-LEFT highlights the region between the insertion point and the previous label. SHIFT-CTRL-RIGHT highlights the region between the insertion point and the next label. Double-clicking on the timebar between two labels highlights the region between the labels.

If you hit the label button when a region is highlighted, labels are created at each end of the highlighted region; if one end already has a label, the existing label is deleted. Manually hitting the label button or the L key over and over again to delete a series of labels can get tedious. To delete a set of labels, first highlight a region, then use the Edit->Clear labels function. If in/out points exist, the labels between the in/out points are cleared and the highlighted region is ignored.

The Label tab of the resources window lists the timestamp of every label. You can edit the label list and add a title for every item using the popup menu. To open the Label info dialog, right click on the label icon in the Resources window or directly on the label symbol on the timebar.

In Cut and Paste editing mode only, by enabling Edit labels in the settings menu, labels will be cut, copied or pasted along with the selected region of the first armed track. Similarly, if a selected area of a resource is spliced from the viewer to the timeline in a position before labels, these labels will be pushed to the right on the timebar for the length of the selected area. To prevent labels from moving on the timebar, just disable the Edit labels option or enable the Lock labels from moving button on the program toolbar. In Drag and Drop editing mode, labels will always be locked to the timebar, even with the Edit labels option enabled.
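Label traversal amounts to looking up the previous or next entry in a time-sorted list. A hypothetical model of CTRL-LEFT / CTRL-RIGHT using binary search (the data model is assumed; Cinelerra's internals may differ):

```python
import bisect

# Labels kept sorted by time; traversal jumps to the neighbor of the
# insertion point, mirroring CTRL-LEFT and CTRL-RIGHT.

def previous_label(labels, position):
    """Rightmost label strictly before position, or None."""
    i = bisect.bisect_left(labels, position)
    return labels[i - 1] if i > 0 else None

def next_label(labels, position):
    """Leftmost label strictly after position, or None."""
    i = bisect.bisect_right(labels, position)
    return labels[i] if i < len(labels) else None

labels = [10.0, 25.0, 60.0]
print(previous_label(labels, 30.0))  # 25.0
print(next_label(labels, 30.0))      # 60.0
print(next_label(labels, 60.0))      # None
```

Using strict comparisons means repeatedly pressing CTRL-RIGHT walks through every label instead of sticking on the current one.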

7 Editing

Editing comprises both the time domain and the track domain. Since the timeline consists of a stack of tracks, you need to worry about how to create and sort tracks in addition to what time certain media appears on a track.

In the time domain, Cinelerra offers many ways to approach the editing process. The three main methods are two screen editing, drag and drop editing, and cut and paste editing. There are several concepts Cinelerra uses when editing which apply to all the methods. The timeline is where all editing decisions are represented. This is a stack of tracks in the center of the main window. It can be scrolled up, down, left and right with the scrollbars on the right and bottom of it. It can also be scrolled up and down with a mouse wheel, and left and right with a mouse wheel and the CTRL key.

The active region is the range of time which is affected by editing commands on the timeline. It is determined first by the presence of in/out points in the timeline. If those do not exist, the highlighted region is used. If no highlighted region exists, the insertion point is used as the start of the active region. Some commands treat all the space to the right of the insertion point as active, while others treat the active length as 0 if no end point for the active region is defined.

Finally, editing decisions never affect source material. Editing only affects pointers to source material, so if you want to have a media file at the end of your editing session which represents the editing decisions, you need to render it. This is non destructive editing, and it became popular with audio because it was much faster than copying all the media affected by an edit. See Chapter 20 [Rendering files], page 189. See Section 5.4 [Saving project files], page 41. See Section 24.1 [Editing Media shortcuts], page 141, for information about the editing controls keyboard shortcuts.

7.1 The patchbay

On the left of the timeline is a region affectionately known as the patchbay. The patchbay enables features specific to each track. All tracks have an expander for viewing more options on the patchbay and for viewing the effects represented on the track. Click on the expander to expand or collapse the patchbay and the track. If it is pointing sideways, the track is collapsed. If it is pointing down, the track is expanded. Existing effects appear below the media for the track. All tracks have a text area for naming the track.

All tracks have the following row of toggles for several features. Click on a toggle to enable or disable the feature. If the toggle is colored, the feature is enabled. If the toggle is the background color of most of the windows, it is disabled.

Track attributes

Several mouse operations speed up the configuration of several tracks at a time. Click on an attribute and drag the cursor across adjacent tracks to copy the same attribute to those tracks. Hold down SHIFT while clicking a track's attribute to enable the attribute in the current track and toggle the attribute in all the other tracks; click until all the tracks except the selected one are disabled, then drag the cursor over an adjacent track to enable the attribute in that track.

The attributes affect the output of the track:

Play track - Determines whether the track is rendered or not. If it is off, the track is not rendered. For example, if you turn it off in all the video tracks, the rendered media file will have a blank video track. However, if the track is chained to any other tracks by a shared track effect, the other tracks perform all the effects in this shared track, regardless of the play status of the shared track; in this particular case the play status affects the media output but not fade and effects. See Section 14.1 [Realtime effect types], page 87.

Arm track - Determines whether the track is armed or not. Only the armed tracks are affected by editing operations. Make sure you have enough armed destination tracks when you paste or splice material, or some tracks in the material will get left out. In addition to restricting editing operations, the armed tracks in combination with the active region determine where material is inserted when loading files: if the files are loaded with one of the insertion strategies which do not delete the existing project, the armed tracks will be used as destination tracks. Press TAB while the cursor is anywhere over a track to toggle the track arming status. Press SHIFT-TAB while the cursor is over a track to toggle the arming status of every other track.

Gang fader - Causes the fader to track the movement of whatever other fader you are adjusting, by dragging either the fader or the curve on the track. It does not affect editing made with menu controls. This is normally used to adjust audio levels on all the tracks simultaneously. Gang also causes the Nudge parameters to synchronize across all the ganged tracks. A fader is only ganged if arm track is also on.

Draw media - Determines if picons or waveforms are drawn on the asset in the track. By default, some file formats load with this off while other file formats load with it on; this depends on whether the file format takes a long time to draw on the timeline. Merely set it to on if you want to see picons for any file format.

Mute track - Causes the output to be thrown away once the track is completely rendered. This happens whether or not play track is on. For example, if you mute all the video tracks, the rendered media file will have only audio tracks. If a track is part of a shared track effect, the output of the track with the shared track effect is overlaid on the final output even though it is routed back to another track (the shared track). Mute track is used to keep the track with the shared track effect from overlapping the output of the source track (the shared track) where the shared track effect is not present. See Section 14.1 [Realtime effect types], page 87. Mute track is a keyframable attribute, but unlike curves, Mute track keyframing is a toggle and has only two values: on or off. Mute track is represented on the timeline with a blue line. Go to View -> Mute to make it show.

Fader - All tracks have a fader, but the units of each fader depend on whether it is audio or video. Fade values are represented on the timeline with a white curve that is keyframable. Click and drag the fader to fade the track in and out. If it is ganged to other tracks of the same media type, with the arm option enabled, the other faders should follow. Hold down SHIFT and drag a fader to center it on the original source value (0 for audio, 100 for video).

Audio fade values are in dB. They represent relative levels, where 0 is the unaltered original sound level, -40 is silence, and -80 is the minimum value set by default. You can move faders and keyframes down to -80, but the parameter's curve won't go below -40. For your convenience you can set a different fade range with the curve zoom. See Section 6.3 [The zoom panel], page 44. Audio faders' main purpose is to "fade out" sound, lowering the sound level smoothly to silence, or to "fade in" to make sounds appear gradually instead of suddenly.

Video fade values are the percentage of opacity of the image in normal overlay mode, or the percentage of the layer that is mixed into the render pipeline in the other overlay modes. See the Overlay modes section, page 72.

These are views of the patchbays when expanded; you may have to expand a track to see the options described below:

Pan and nudge for an audio track

Overlay mode and nudge for a video track

7.2 Nudging tracks

Each track has a nudge textbox in its patchbay. The nudge value is the amount the track is shifted left or right during playback. The track is not displayed shifted on the timeline, but it is shifted when it is played back. This is useful for synchronizing audio with video, creating fake stereo, or compensating for an effect which shifts time, all without tampering with any edits. Positive numbers make the track play sooner. Negative numbers make the track play later. Merely enter the amount of time to shift to instantly shift the track.

The nudge units are either seconds or the native units for the track (frames or samples). Select the units by right clicking on the nudge textbox and using the context sensitive menu. Use the mouse wheel over the nudge textbox to increment and decrement it. Nudge settings are ganged with the Gang faders toggle and the Arm track toggle.
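The dB scale of the audio fader maps to a linear amplitude multiplier by the usual 20·log10 relation. A minimal sketch, where treating everything at or below -40 dB as full silence is our assumption based on the description above, not Cinelerra's actual code:

```python
# Convert an audio fade value in dB to a linear gain, with -40 dB
# treated as silence as the manual describes (assumed cutoff behavior).

def fade_to_gain(db: float) -> float:
    """Return the linear amplitude multiplier for a fade value in dB."""
    if db <= -40.0:
        return 0.0          # described as silence
    return 10.0 ** (db / 20.0)

print(fade_to_gain(0.0))    # 1.0  (unaltered original sound level)
print(fade_to_gain(-6.0))   # ~0.5 (half amplitude)
print(fade_to_gain(-40.0))  # 0.0
```

This also explains the default -40.0 to 6.0 curve range: -40 dB is effectively inaudible, while +6 dB roughly doubles the amplitude.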

7.3 Panning audio tracks

Audio tracks have a panning box in their patchbays; a patchbay may have to be expanded to see it. The panning box is shown here:

Pan and nudge for an audio track

Position the pointer in the panning box and click/drag to reposition the audio output among the speaker arrangement. The loudness of each speaker is printed on the relative icon during the dragging operation. The panning box uses a special algorithm to try to allow audio to be focused through one speaker, or branched between the nearest speakers, when more than 2 speakers are used.

7.4 Automatic audio track panning

Several convenience functions are provided for automatically setting the panning to several common standards. They are listed in the Audio menu. These functions only affect armed audio tracks. They are:

Audio->Map 1:1 - Maps every track to its own channel and wraps around when all the channels are allocated. It is most useful for making 2 tracks with 2 channels map to stereo and for making 6 tracks with 6 channels map to a 6 channel soundcard.

Audio->Map 5.1:2 - Maps 6 tracks to 2 channels. This is most useful for down-mixing 5.1 surround sound to stereo. The project should have 2 channels when using this function. Go to Settings->Format to set the output channels to 2. See Section 4.3 [Audio attributes], page 31.

7.5 Standard audio mappings

Although Cinelerra lets you map any audio track to any speaker, there are standard mappings you should use to ensure the media can be played back elsewhere. Also, most audio encoders require the audio tracks to be mapped to standard speaker numbers or they will not work.

In the channel position widget (see Section 4.3 [Audio attributes], page 31), the channels are numbered to correspond to the output tracks they are rendered to. For stereo, the source of channel 1 needs to be the left track and the source of channel 2 needs to be the right track. For 5.1 surround sound, the sources of the 6 channels need to be in the order of center, front left, front right, back left, back right, and low frequency effects. If the right tracks are not mapped to the right speakers, most audio encoders will not encode the right information, if they encode anything at all. The low frequency effects track specifically can not store high frequencies in most cases.
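A 5.1-to-stereo fold-down like Audio->Map 5.1:2 is conventionally done by mixing the center and surround channels into left and right at reduced gain. The -3 dB coefficients below are the common ITU-style choice and are an illustrative assumption, not the exact gains Cinelerra uses:

```python
import math

# Illustrative 5.1 -> stereo downmix: center and surrounds folded in at
# -3 dB (1/sqrt(2)); the LFE channel is often dropped in stereo downmixes.

def downmix_51_to_stereo(c, fl, fr, bl, br, lfe):
    """Mix one set of 5.1 samples (center, front L/R, back L/R, LFE) to L/R."""
    a = 1.0 / math.sqrt(2.0)   # -3 dB
    left = fl + a * c + a * bl
    right = fr + a * c + a * br
    return left, right

l, r = downmix_51_to_stereo(c=0.5, fl=0.2, fr=0.1, bl=0.0, br=0.4, lfe=0.3)
print(round(l, 3), round(r, 3))  # 0.554 0.736
```

The -3 dB center gain keeps dialogue at the same perceived level when it is split between the two stereo speakers.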

7.6 Manipulating tracks

Tracks in Cinelerra either contain audio or video. There is no special designation for tracks other than the type of media they contain. When you create a new project, it contains three default tracks: one video track and two audio tracks. You can still add and delete tracks from the menus.

The Tracks menu contains a number of options for dealing with multiple tracks simultaneously. Operations in the Tracks menu affect only tracks which are armed. Each track itself has a popup menu which affects one track, whether it is armed or not. See Section 6.4 [The track popup menu], page 45.

Move tracks up and Move tracks down shift all the armed tracks up or down the stack, keeping the same order they have on the stack. This matters for video, which has a natural compositing order: new video tracks are overlaid on top of old tracks.

Delete tracks deletes the armed tracks. Delete last track deletes the last track, whether it is armed or not. Holding down the D key quickly deletes all the tracks.

The Audio and Video menus each contain an option to add a track of their specific type. In the case of audio, the new track is put on the bottom of the timeline and the output channel of the audio track is incremented by one. In the case of video, the new track is put on the top of the timeline.

Concatenate tracks is more complicated. It copies all the assets of every disarmed but playable track and concatenates them by pasting those assets at the end of the first set of armed tracks. If there are two armed tracks followed by two disarmed tracks, the concatenate operation copies the assets of the two disarmed tracks and pastes them after the assets of the two armed tracks. If there are three disarmed tracks instead, the assets of two tracks are pasted after the assets of the armed tracks, and the assets of the third track are pasted at the end of the first armed track. The destination track wraps around until all the disarmed tracks are concatenated. Disarmed tracks that are not playable are not concatenated.

7.7 Two screen editing

This is the fastest way to construct a program out of movie files. The idea consists of viewing a movie file in one window and viewing the program in another window. Subsections of the movie file are defined in the viewer window and transferred to the end of the program in the program window.

The way to begin a two screen editing session is to load some resources. In File->Load files, load some movies with the insertion mode Create new resources only. You want the timeline to stay unchanged while new resources are brought in. Go to the Resource Window and select the Media folder. The newly loaded resources should appear. Double click on a resource or drag it from the media side of the window over the Viewer window.

There should be enough armed tracks on the timeline to put the subsections of source material that you want (usually one video track and two audio tracks). If there are not, create new tracks or arm more tracks.

In the viewer window, define a clip out of your movie file: set the starting point with the in point button, seek to the ending point of the clip you want to use, and set it with the out point button. The two points should now appear on the timebar and define a clip.

There are several things you can do with the clip now:

Splice - Inserts the selected area in the timeline after the insertion point. If an in point or an out point exists on the timeline, the clip is inserted after the in point or after the out point. If both in and out points are set on the timeline, the clip is inserted after the in point. If there are edits after your chosen splice location on the timeline, they will be pushed to the right. After the splice has taken effect, the insertion point moves to the end of the edit, ready to be used as the next splice location. This way you can continuously build up the program by splicing.

Overwrite - Overwrites the region of the timeline after the insertion point with the clip. If an in point or an out point exists on the timeline, the clip is overwritten after the in point or after the out point. If both in and out points are set on the timeline, it behaves the same: the clip is overwritten after the in point. If a region is highlighted or both in and out points exist, they limit the region of the overwriting, and the clip may therefore be shortened: if the destination region is shorter than the clip defined in the viewer, the portion of the clip longer than the destination region won't be inserted. On the timeline the following edits won't move. Beware: if the destination region is longer than the clip defined in the viewer, the destination region will shrink, and on the timeline the following edits will move to the left.

This is so clever that it is worth the following detailed description. TIP: To overwrite exactly on a precise region of the timeline:
- Arm only the tracks to change.
- Define the destination region on the timeline with [ and ].
- Define the clip you want to use in the viewer with [ and ].
- Overwrite from the Viewer to the timeline.

Create a clip - Generates a new clip for the resource window containing the affected region, but does not change the timeline. Every clip has a title and a description. These are optional.

Copy - See Section 7.9 [Cut and paste editing], page 57.

Two screen editing can be done purely by keyboard shortcuts. When you move the mouse pointer over any button, a tooltip should appear, showing what key is bound to that button. In the Viewer window, the number pad keys control the transport, and the [ ] and V keys perform in/out points and splicing.
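The wraparound behavior of Concatenate tracks, described in Section 7.6 above, can be sketched in a few lines. The data model (each track as a simple list of asset names) is an assumption for illustration only:

```python
# Sketch of Concatenate tracks: assets of each disarmed-but-playable
# track are appended to the armed tracks, wrapping around the
# destinations until every source track is consumed.

def concatenate_tracks(armed, disarmed_playable):
    """Append each source track's assets to an armed track, wrapping around."""
    if not armed:
        return armed
    for i, source in enumerate(disarmed_playable):
        dest = armed[i % len(armed)]     # wrap around the destination tracks
        dest.extend(source)
    return armed

# Two armed tracks, three disarmed tracks: the third wraps back to track 1,
# matching the example in the text.
result = concatenate_tracks([["a1"], ["a2"]], [["d1"], ["d2"], ["d3"]])
print(result)  # [['a1', 'd1', 'd3'], ['a2', 'd2']]
```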
7. the insertion point moves to the end of the edit ready to be used as the next splice location. If the destination region is shorter than the clip defined in the viewer. On the timeline the following edits will move to the left. using only the mouse. If after watching it. add transition or insert/delete material.. the portion of the clip longer than the destination region won’t be inserted.Define the destination region on the timeline with [ and ]. 4. the destination region will shrink. Open the Media folder in the resource window. set effects. This loads the files into the Resource Window.8 Drag and drop editing
Drag and drop editing is a quick and simple way of working in Cinelerra. The basic idea is to create a bunch of clips, then drag them in order into the timeline, building a prototype film that you can watch in the compositor. If, after watching it, you wish to re-arrange your clips, set effects, add transitions, or insert and delete material, just drag and drop them on the timeline.

1. Load some files using File->Load files.
2. Set the insertion mode to Create new resources only. This loads the files into the Resource Window.
3. Create some video and audio tracks on the timeline using the Video and Audio menus.
4. Open the Media folder in the resource window.
5. Make sure the necessary tracks are armed and drag a media file from the resource window to the timeline.

Cinelerra fills out the audio and video tracks below the dragging cursor with data from the file. To drag and drop a file on the Program window, you need to create on the timeline the same set of tracks as your media file has; this affects what tracks you should create initially and which track to drag the media onto. If the media has video, drag it onto a video track. If the media has audio only, you will need one audio track on the timeline for every audio track in the media, and the media should be dragged over the first audio track. If the media is a still image, drag it onto a video track. A common camcorder file has a set of one video track and two audio tracks: in this case you will need one video track and two audio tracks on the timeline, and the media should be dragged over the first video track.

When you drag your chosen media from the media folder to the timeline, your mouse pointer will drag a thumbnail and, once over the timeline, the outline of a white rectangle as big as the edit you are going to have. Drag the media to the desired position of an empty track of the timeline and drop it. If there are other edits on that track, when you move the white outline over an edit you will see a bow tie symbol >< appearing at edit boundaries. If you drop the media there, the new edit will start from the edit boundary indicated by the center of the bow tie ><.

Since the mouse pointer is in the middle of the white outline, when this rectangle is bigger than the visible part of the timeline it is quite cumbersome to precisely insert it (this will likely happen for long media). Lengthening the duration visible in the timeline, by changing the sample zoom in the zoom panel, will reduce the size of the white rectangle, making a precise insertion possible.

You can also drag multiple files from the resource window. When dropped in the timeline they are concatenated. The way of selecting multiple files to drag changes depending on whether the resources are displayed as text or as icons. To change the display mode, right click inside the media list and select either Display icons or Display text. When displaying text in the resource window, CTRL-clicking on media files selects additional files one at a time, and SHIFT-clicking extends the number of highlighted selections. When displaying icons in the resource window, SHIFT-clicking or CTRL-clicking selects media files one at a time, and drawing a box around the files selects contiguous files.

In addition to dragging media files, if you create clips and open the clip folder you can drag clips on the timeline.

In the timeline there is further dragging functionality. To enable the dragging functionality of the timeline, select the arrow toggle on the control bar. Dragging edits around the timeline allows you to sort music playlists, sort movie scenes, and give better NAB demos, but not much else. When you drag and drop edits within the timeline:

- If you drop an edit when bow ties >< are shown, that edit will be cut and pasted starting at the edit boundary indicated by the centre of the bow tie ><. Following edits on the same track will move.
- If you drop an edit when there are no bow ties >< shown, the original edit will be muted and pasted where you dropped it. A silence will appear in place of the original edit. No edits will move.

Cinelerra recognises as a group the edits of different armed tracks that have aligned beginnings, regardless of whether they have the same source or aligned ends. In other words, you can drag and drop a group of edits: Cinelerra will drag any edits which start on the same position as the edit the mouse pointer is currently over. If more than one track is armed, only the following edits of the tracks affected by the drag and drop operation will move to the right.
Original track with three scenes. Arm a track with various scenes. Following edits on the same track will move. No edits will move. sort movie scenes. Go to scene #3. If more than one track is armed.
When you drop scene #3
scene #2 shifts to the right
This is how the finished sequence looks. that edit will be cut and pasted starting at the edit boundary indicated by the centre of the bow tie ><.

Delete the selected area and hold it on the clipboard for future pasting.To perform overwriting within the timeline paste on a selected region (highlighted or between in/out points). The start of this one edit is the start of the first edit and the end of this one edit is the end of the second edit. Select a region of the timeline by click dragging on it and select the cut button to cut it. This either results in the edit expanding or shrinking. Following edits will move. In the case of Cinelerra.Delete everything but the selected region Select All a . if a selected area of a resource is spliced from the Viewer to the timeline in a position before labels. Cut x . you can copy edits in the same track. overwrite from the Viewer. Overwrite .9 Cut and paste editing
This is the traditional method of editing in audio editors.
7. If in/out points are defined. Move the insertion point to another point in the timeline and select the paste button. Most editing operations are listed in the Edit Menu. They will be always locked to the timebar. the insertion point and highlighted region are overridden by the in/out points for clipboard operations. even with the Edit labels option enabled.Paste the material held in the clipboard Clear Del . Copy c .Chapter 7: Editing
57
following edits of the tracks affected by the drag and drop operation will move to the right. Paste v . If the insertion point is over an edit boundary and the edits on each side of the edit boundary are the same resource. The selected region will be overwritten.
. Thus. the edits are combined into one edit comprised by the resource. with in/out points you can perform cut and paste in drag and drop mode as well as cut and paste mode. Some of them have a button on the program control toolbar and a keyboard shortcut. disarm the tracks affected by the drag and drop operation. copy from different tracks in the same instance. With in/out points you can perform Cut and Paste operations in Drag and Drop mode as well as in Cut and Paste mode.Paste blank audio/video for the length of the selected area. Load some files onto the timeline.Select the whole timeline
Other editing operations: Copy&Mute cm . these labels will be pushed to the right for the length of the selected area. with the Edit labels option enabled.Clear the selected area.7 [Two screen editing]. Trim Selection . Mute Region m .Copy the selected area and hold it on the clipboard for future pasting. If the clip pasted from the clipboard is longer than the selected region. This will cause loss of synchronization. Paste Silence Shift+Space . the selected region will be shrunk. page 53. In Drag and Drop editing mode you can’t drag and drop labels. Still. highlight the just dropped edit and paste silence over it (Edit -> Paste Silence). If the clip pasted from the clipboard is shorter than the selected region. start a second instance of Cinelerra and copy from one instance to the other or load a media file into the Viewer and copy from there. Following edits will move.Overwrite blank audio/video on the selected area. Go to the Edit Menu to view the list and the keyboard shortcuts. the selected region will be overwritten with the first part of the clip and the remaining part of the clip will be written after the overwriting. To restore it. Alternatively. Following edits don’t move. Assuming no in/out points are defined on the timeline this performs a cut and paste operation. To perform cut and paste editing select the i-beam toggle.Mute the selected area and hold it on the clipboard for future pasting. Following edits will be pushed to the right. See Section 7.
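The difference between operations that shift following edits and those that leave the timeline length unchanged can be sketched with a toy model of a track as a list of edits (the names and data structures here are illustrative only, not Cinelerra's internal types):

```python
# Toy model: a track is a list of (name, length_in_frames) edits.
# Cut removes a region and the following edits close the gap;
# Mute Region overwrites the region with silence, so nothing shifts.

def cut(track, index):
    """Remove the edit at index; following edits move to close the gap."""
    clipboard = track[index]
    remaining = track[:index] + track[index + 1:]
    return remaining, clipboard

def mute_region(track, index):
    """Replace the edit at index with silence of equal length;
    the total track length is unchanged."""
    name, length = track[index]
    return track[:index] + [("silence", length)] + track[index + 1:]

track = [("scene1", 100), ("scene2", 50), ("scene3", 75)]

after_cut, clip = cut(track, 1)
print(after_cut)              # [('scene1', 100), ('scene3', 75)]

print(mute_region(track, 1))  # [('scene1', 100), ('silence', 50), ('scene3', 75)]
```

Note how muting preserves the positions of all later edits, which is why the manual recommends pasting silence to restore synchronization after an accidental shift.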

Concatenate - Go to Tracks -> Concatenate tracks. This operation copies all the assets of every disarmed but playable track and concatenates them by pasting those assets at the end of the first set of armed tracks, keeping the same order they have on the stack. They are pasted one after the other. See Section 7.6 [Manipulating tracks], page 52.
Split - Cinelerra can't split an edit in two. To insert a transition in the middle of an edit, delete a single frame. To insert a clip in the middle of an edit, splice from the Viewer. See Section 7.7 [Two screen editing], page 53.

In Cut and Paste editing mode you can edit labels as well. By enabling Edit labels in the Settings Menu, or by disabling the Lock labels from moving button on the Program Control Tool Bar, labels will be cut, copied or pasted along with the selected regions of the armed tracks.

When editing audio, it is customary to cut from one part of a waveform into the same part of another waveform. The start and stop points of the cut are identical in each waveform and might be offset slightly, while the wave data is different. It would be very hard to highlight one waveform to cut it and highlight the second waveform to paste it without changing the relative start and stop positions. One option for simplifying this is to open a second copy of Cinelerra, cutting and pasting to transport media between the two copies. This way two highlighted regions can exist simultaneously. Another option is to set in/out points for the source region of the source waveform and set labels for the destination region of the destination waveform. Perform a cut, clear the in/out points, select the region between the labels, and perform a paste.

7.10 Trimming

With some edits on the timeline it is possible to do trimming. By trimming you shrink or grow the edit boundaries by dragging them. In drag and drop mode or cut and paste mode, move the cursor over an edit boundary until it changes shape. The cursor will be either an expand left or an expand right. If the cursor is an expand left, the dragging operation affects the beginning of the edit. If the cursor is an expand right, the dragging operation affects the end of the edit.

3 possible behaviors are bound to mouse buttons in the interface preferences (See Section 3.7 [Interface], page 29). When you click on an edit boundary to start dragging, the mouse button number determines which dragging behavior is going to be followed. The effect of each drag operation depends not only on the behavior button, but on whether the beginning or end of the edit is being dragged.

In a Drag all following edits operation, all the edits thereafter shift: the beginning of the edit either cuts data from the edit if you move it forward, or pastes new data from before the edit if you move it backward. The end of the edit pastes data into the edit if you move it forward, or cuts data from the end of the edit if you move it backward.

In a Drag only one edit operation, the behavior is the same when you drag the beginning or end of an edit. The only difference is that none of the other edits in the track shift. Instead, anything adjacent to the current edit expands or shrinks to fill gaps left by the drag operation.

In a Drag source only operation, nothing is cut or pasted. The edit remains in the same spot in the timeline but the source shifts. If you move the beginning or end of the edit forward, the source reference in the edit shifts forward. If you move the beginning or end of the edit backward, the source reference shifts backward.

When you release the mouse button, the trimming operation is performed. This either results in the edit expanding or shrinking. For all file formats besides still images, the extent of the trimming operation is clamped to the source file length: attempting to drag the start of the edit beyond the start of the source clamps it to the source start. Finally, if you drag the end of the edit past the start of the edit, the edit is deleted.

In all trimming operations, all edits which start on the same position as the cursor when the drag operation begins are affected. Unarm tracks to prevent edits from being affected.

Most effects in Cinelerra can be figured out just by using them and tweaking. Here are brief descriptions of effects which you might not utilize fully by mere experimentation.
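The trimming behaviors above can be sketched with a toy edit model (the field names are hypothetical, not Cinelerra's internals): an edit references a source file by an offset and plays for some length.

```python
# Toy edit: a source offset plus a length on the timeline.
# "Drag source only" keeps the edit's position and length on the
# timeline but slides which part of the source plays inside it.

def trim_start(edit, delta):
    """Drag the edit's beginning by delta frames (positive = forward).
    Moving the start forward cuts material, so the length shrinks and
    the source reference advances by the same amount."""
    return {
        "source_start": edit["source_start"] + delta,
        "length": edit["length"] - delta,
    }

def drag_source_only(edit, delta):
    """Slide the source reference without changing timeline placement."""
    return {
        "source_start": edit["source_start"] + delta,
        "length": edit["length"],
    }

edit = {"source_start": 30, "length": 90}
print(trim_start(edit, 10))        # {'source_start': 40, 'length': 80}
print(drag_source_only(edit, 10))  # {'source_start': 40, 'length': 90}
```

A fuller model would also clamp `source_start` to the source file length, as the manual describes for non-still-image formats.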

Chapter 8: Compositor window

8 Compositor window

This window displays the output of the timeline. It is the interface for most compositing operations, or operations that affect the appearance of the timeline output. Operations done in the Compositor affect the timeline but do not affect clips.

8.1 Compositor controls

The video output has several navigation functions. The video output size is either locked to the window size or unlocked with scrollbars for navigation. The video output can be zoomed in and out and panned. Navigating the video output this way does not affect the rendered output; it just changes the point of view in the compositor window. If it is unlocked from the window size, middle clicking and dragging anywhere in the video pans the point of view. Rotating the wheel on a wheel mouse zooms in and out, and hitting the + and - keys zooms in and out of the video output.

Right clicking anywhere in the video output brings up a menu with all the zoom levels and some other options. In this particular case the zoom levels resize the entire window and not just the video. The Hide controls option hides everything except the video. The reset camera and reset projector options center the camera and projector (See Section 8.2.1 [The camera and projector], page 62).

Underneath the video output are copies of many of the functions available in the main window. In addition there is a zoom menu and a tally light. The zoom menu jumps to all the possible zoom settings and, through the Auto option, locks the video to the window size. The zoom menu does not affect the window size. The tally light turns red when rendering is happening. This is useful for knowing if the output is current.

On the left of the video output is a toolbar specific to the compositor window. Here are the functions in the toolbar:

8.1.1 Protect video

This disables changes to the compositor output from clicks in it. It is an extra layer on top of the track arming toggle to prevent unwanted changes.

8.1.2 Magnifying glass

This tool zooms in and out of the compositor output without resizing the window. If the video output is currently locked to the size of the window, clicking in it with the magnifying glass unlocks it and creates scrollbars for navigation. Left clicking in the video zooms in; Ctrl clicking in the video zooms out.

8.1.3 Masks tool

This tool brings up the mask editing tool (See Section 8.2.2 [Masks], page 67). Enable the tool window to see options for this tool.

8.1.4 Camera

This tool brings up the camera editing tool (See Section 8.2.1 [The camera and projector], page 62). Enable the tool window to see options for this tool.

8.1.5 Projector

This tool brings up the projector editing tool (See Section 8.2.1 [The camera and projector], page 62). Enable the tool window to see options for this tool.

8.1.6 Crop tool

This tool brings up the cropping tool (See Section 8.2.3 [Cropping], page 71). The tool window must be enabled to use this tool.

8.1.7 Eyedropper

This brings up the eyedropper. The eyedropper detects whatever color is under it and stores it in a temporary area. Click anywhere in the video output to select the color at that point. Enabling the tool info shows the currently selected color. The eyedropper not only lets you see areas which are clipped, but its value can be applied to many effects. Different effects handle the eyedropper differently.

8.1.8 Tool info

This tool button works only in conjunction with the other controls on the compositor. Based on what compositing control is active, the toggle button will activate or deactivate the appropriate control dialog box. Controls with dialog boxes are: Edit mask, Camera automation, Projector automation, Crop control.

8.1.9 Safe regions tool

This tool draws the safe regions in the video output (See Section 8.2.4 [Safe regions], page 71). This does not affect the rendered output.

8.2 Compositing

A large amount of Cinelerra's binary size is directed towards compositing. When you remove the letterboxing from a widescreen show, when you change the resolution of a show, when you make a split screen, and when you fade in and out, among other things, you are compositing: these are all compositing operations in Cinelerra. Cinelerra detects when it is in a compositing operation and plays back through the compositing engine only then. Otherwise, it uses the fastest decoder available in the hardware.

Compositing operations are done on the timeline and in the Compositor window. Shortcuts exist in the Resource window for changing some compositing attributes. Once some video files are on the timeline, the compositor window is a good place to try compositing.

8.2.1 The camera and projector

Inside Cinelerra's compositing pipeline, the most important functions are the camera button and the projector button. These control the operation of the camera and projector.

8.2.1.1 The temporary

Cinelerra's compositing routines use a "temporary": a frame of video in memory where all graphics processing is performed. In the compositor window, the camera determines where in the source video the "temporary" is copied from. The projector determines where in the output the "temporary" is copied to.
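The source -> temporary -> output pipeline can be sketched in one dimension (a single row of pixels; the numbers and function names are illustrative, not Cinelerra code): the camera copies a region of the source into the temporary, and the projector copies the temporary into the output.

```python
def camera_copy(source, cam_offset, track_size):
    """Copy track_size samples from the source starting at cam_offset;
    areas outside the source are filled with blanks (0)."""
    temp = []
    for i in range(track_size):
        j = cam_offset + i
        temp.append(source[j] if 0 <= j < len(source) else 0)
    return temp

def projector_copy(temp, proj_offset, output_size):
    """Place the temporary into the output at proj_offset; anything
    falling outside the output is cropped."""
    out = [0] * output_size
    for i, v in enumerate(temp):
        j = proj_offset + i
        if 0 <= j < output_size:
            out[j] = v
    return out

source = [1, 2, 3, 4, 5, 6, 7, 8]
temp = camera_copy(source, 2, 4)      # [3, 4, 5, 6]
print(projector_copy(temp, 1, 6))     # [0, 3, 4, 5, 6, 0]
```

This also illustrates the later point about sizes: a temporary smaller than the output is bordered by blanks, and a temporary larger than the output is cropped.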

The process is pretty much as if we scanned in a roll of film one frame at a time, then (using Gimp, for example) digitally altered the scanned image with various filters, thus creating a new "modified" version of the original. Once the image has been transformed by the filters (color correction, for example), we then project the finished image back into a new roll of film.

Each track has a different "temporary" which is defined by the track size. By resizing the tracks you can create split screens, pans, and zooms.

Visual representation of the compositing pipeline

When editing the camera and projector in the compositing window, the easiest way to select one track for editing is to SHIFT-click on the record icon of the track. This solos the track. If multiple video tracks exist, the first track with record enabled is the track affected. Even if the track is completely transparent, it is still the affected track.

8.2.1.2 Compositing projector controls

When the projector button is enabled in the compositor window, you are in projector editing mode. The purpose of the projector is to place the contents of the "temporary" into the project's output. The intent of the projector is to composite several sources from the various tracks into one final output track. The projector alignment frame is identical to the camera's viewport, except that it guides where on the output canvas to put the contents of each temporary.

A guide box appears in the video window. Dragging anywhere in the video window causes the guide box to move, hopefully along with the video. SHIFT-dragging anywhere in the video window causes the guide box to shrink and grow along with the video. Once you have positioned the video with the projector, you are ready to master the camera.

8.2.1.3 Compositing camera controls

Select the camera button to enable camera editing mode. Dragging the camera box in the compositor window does not move the box but instead moves the location of the video inside the box. In this mode, the guide box shows where the camera position is in relation to past and future camera positions, but not where it is in relation to the source video.

The viewport

The viewport is a window on the camera that frames the area of source video to be scanned. It is represented as a red frame with diagonal cross bars.

Viewport sizes: The size of the viewport is defined by the size of the current track. A larger viewport (800x200) captures an area larger than the source video and fills the empty spaces with blanks. A smaller viewport (640x400) captures a smaller area.

Once we have our viewport defined, we still need to place the camera right above the area of source video we are interested in. To control the location of the camera:
1. Open the compositor window with a track selected.
2. Select the camera button to enable camera editing mode.
3. Drag over the display window.

When we drag over the viewport in the compositor window (although initially counterintuitive), the viewport does not move; instead, the area of video that sits under the camera's location does.

In the compositor window, the viewport is always shown centered; what moves is the video under it. For example, when you drag the camera down, the viewport in effect is moving downwards on the video, showing its path towards the bottom of the video, but from our perspective on the compositor screen we see the video moving up. When you drag the camera right, the video seems to move left, and so on. Note: The guide box shows where the camera position is in relation to past and future camera positions, not where it is in relation to the source video.

8.2.1.4 Popup menu of options

In the compositing window, there is a popup menu of options for the camera and projector. Right click over the video portion of the compositing window to bring up the menu.

Reset Camera causes the camera to return to the center position.
Reset Projector causes the projector to return to the center.

8.2.1.5 The camera and projector tool window

The camera and projector have shortcut operations that do not appear in the popup menu and are not represented in video overlays. These are accessed in the Tool window. Most operations in the Compositor window have a tool window, which is enabled by activating the question mark.

In the case of the camera and projector, the tool window shows x, y, and z coordinates. By either tumbling or entering text directly, the camera and projector can be precisely positioned. 9 justification types are also defined for easy access (Left, Center Horizontal, Right; Top, Center Vertical, Bottom). A popular justification operation is upper left projection after image reduction. This is used when reducing the size of video with aspect ratio adjustment.
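The "complicated calculation" that left/top justification avoids is just centering arithmetic. A sketch with hypothetical numbers (say a 720x480 temporary holding video reduced to 360x240; these figures are for illustration only):

```python
def centered_offsets(temp_w, temp_h, reduced_w, reduced_h):
    """Projector offsets needed to line up a reduced image sitting in
    the center of the temporary with the center of the output."""
    return ((temp_w - reduced_w) // 2, (temp_h - reduced_h) // 2)

# A reduced image centered in the temporary needs the projector offset
# by half the size difference on each axis:
print(centered_offsets(720, 480, 360, 240))  # (180, 120)

# If the reduced video is instead placed at the temporary's top left,
# left/top justification lets the projector stay at offset 0,0.
```

This is why the manual recommends leaving out x and out y at 0 and using the justification buttons instead of computing offsets by hand.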

The translation effect allows simultaneous aspect ratio conversion and reduction, but is easier to use if the reduced video is put in the upper left of the temporary instead of in the center. The translation effect is dropped onto the video track. The input dimensions of the translation effect are set to the original size and the output dimensions are set to the reduced size. The track size is set to the original size of the video and the camera is centered. The output size is set to the reduced size of the video. Without any effects, this produces just the cropped center portion of the video in the output. To put the reduced video in the center subsection that the projector shows would require offsetting out x and out y by a complicated calculation. Instead, we leave out x and out y at 0 and use the projector's tool window: merely by selecting left justify and top justify, the projector displays the reduced image from the top left corner of the temporary in the center of the output.

8.2.2 Masks

Masks select a region of the video for either displaying or hiding. Masks are also used in conjunction with another effect to isolate the effect to a certain region of the frame. For example, color correction may be needed in one subsection of a frame but not another: a mask can be applied to just a subsection of the color corrected track while the vanilla track shows through. In noise reduction, a copy of one video track may be delayed slightly and unmasked in locations where the one copy has interference but the other copy does not. Removal of boom microphones, airplanes, and housewives are other mask uses.

There are 8 possible masks per track. Each mask is defined separately, although they each perform the same operation, whether it is addition or subtraction. The order of the compositing pipeline affects what can be done with masks. Mainly, masks are performed on the temporary after effects and before the projector, so our compositing pipeline graph now has a masking stage. This means multiple tracks can be bounced to a masked track and projected with the same mask.

Compositing pipeline with masks

To define a mask, go into the Compositor window and enable the mask toggle.
Now go over the video and click-drag. Creating each point of the mask expands a rubber band curve. Click-drag again in another part of the image to create each new point of the mask. While it is not the conventional Bezier curve behavior, this masking interface performs in realtime what the effect of the mask is going to be. Mask editing in Cinelerra is identical to how The Gimp edits masks, except in this case the effect of the mask is always on.

IMPORTANT: You have to select automatic keyframes (See Section 18.3 [Automatic keyframes], page 134) if you wish to move a mask over time. If you do not select automatic keyframes, the mask position will be the same even if you edit at different places on the timeline.

Once points are defined, they can be moved by CTRL-dragging in the vicinity of the corner: CTRL-drag allows you to move existing points to new locations, thus altering the shape of the mask. This, however, does not smooth out the curve. The in-out points of the Bezier curve are accessed by SHIFT-dragging in the vicinity of the corner; then SHIFT-dragging near the in or out point causes the point to move. SHIFT-drag activates bezier handles to create curves between mask points. Finally, once you have a mask, the mask can be translated in one piece by ALT-dragging the mask.

Selecting the question mark when the mask toggle is highlighted brings up the mask options.
CTRL-ALT-drag translates an entire mask to a new location on the screen. The masks have many more parameters which could not be represented with video overlays; these are represented in the tool window for masks.
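Conceptually, a mask is a closed curve rasterized into the alpha channel, and the feather parameter described below softens its edge. A minimal one-dimensional sketch (a crude box-blur stand-in for real feathering, not Cinelerra's renderer):

```python
def feather(alpha, radius):
    """Soften a hard-edged row of alpha values by averaging over a
    window of up to 2*radius+1 neighbouring samples."""
    out = []
    for i in range(len(alpha)):
        window = alpha[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

hard = [0, 0, 0, 1, 1, 1, 0, 0, 0]   # hard mask edge
soft = feather(hard, 1)
print(soft)  # the edges now ramp gradually instead of jumping 0 -> 1
```

A larger radius produces softer edges but, as the manual notes for the real feather parameter, takes longer to render.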

Mask options window

The mode of the mask determines if the mask removes data or makes data visible. If the mode is additive, the mask causes video to appear and everything outside the mask to disappear. If the mode is subtractive, the mask causes video to disappear.

Mask mode

The value of the mask determines how extreme the addition or subtraction is. In the additive mode, higher values make the region in the mask brighter, while the region outside the mask is always hidden. In the subtractive mode, higher values subtract more alpha. Every mask in a single track uses the same value and mode. When multiple masks are used, their effects are ORed together.

Mask value

The feather parameter determines how many pixels to feather the mask. This creates softer edges but takes longer to render. The edges of a mask are hard by default, but this rarely is desired. Note: The OpenGL mask renderer is of low quality and only suitable as a preview for initial work. For fine-tuning of masks (with large feather values), OpenGL should be switched off and the software renderer used.

Feather parameter

The mask number determines which one of the 8 possible masks we are editing. Each track has 8 possible masks. When you click-drag in the compositor window, you are only editing one of the masks. Change the value of mask number to cause another mask to be edited. The previous mask is still active, but only the curve overlay for the currently selected mask is visible.

Finally, there are parameters which affect one point on the current mask instead of the whole mask. These are Delete, x, and y. The active point is defined as the last point dragged in the compositor window. Any point can be activated merely by CTRL-clicking near it without moving the pointer. Once a point is activated, Delete deletes it, and x, y allow repositioning by numeric entry.

8.2.3 Cropping

Cropping reduces the visible picture area of the whole project. It changes the values of the output dimensions (width and height in pixels) and the X Y values of the projector in a single operation. Since it changes project settings, it affects all the tracks for their entire duration and it is not keyframable. Track size will remain unchanged.

Enable the crop toggle and the tool window in the compositor window to display the Crop control dialog box.

Crop control dialog box

Click-drag anywhere in the video to define the crop area: this draws a rectangle over the video. Click-drag anywhere in the video to start a new rectangle. Click-drag over any corner of the rectangle to reposition the corner. ALT-click in the cropping rectangle to translate the rectangle to any position without resizing it. The crop control dialog also allows text entry of the top left coordinates (X1,Y1) and bottom right coordinates (X2,Y2) that define the crop rectangle.

Crop area defined

When the rectangle is positioned, hit the Do it button in the crop control dialog to execute the cropping operation: the portion of the image outside the rectangle will be cut off and the projector will make the output fit the canvas. The Set Format window will show the new project Width and Height values, and the projector tool window will show the new X Y values. To undo the cropping, enter the original project dimensions in the Set Format window and click on Reset projector in the popup menu of the compositor.

8.2.4 Safe regions

On consumer displays the borders of the image are cut off, and within the cut-off point is a region which is not always square like it is in the compositor window. The borders are intended for scratch room and vertical blanking data. You can show where these borders are by enabling the safe regions toggle. Keep titles inside the inner rectangle and keep action inside the outer rectangle.

8.2.5 Overlay modes

Every video track has an overlay mode. The overlay mode is a pull-down menu on the left under the fader, accessible by expanding the track. Select the expand track toggle to view all the options for a video track if you can not see the overlay mode. When collapsed, it displays an icon representing the current overlay mode. Select other modes by clicking the overlay button and selecting an item from the popup menu. The overlay mode of video tracks is Normal by default. Overlay modes are processed inside the projector stage of compositing. The different modes are summarized below.

Normal - This mode uses a traditional Porter-Duff equation to blend tracks with alpha. When no alpha exists in the project color model, the new track always replaces the output.
Addition - In this mode, whatever is in the output is added to the current track. The result is blended based on the current track's alpha onto the output. It usually results in overloaded levels.
Subtraction - In this mode, the current track is subtracted from the output, and the result is alpha blended onto the output.
Multiply - This is the most useful operation. The current track is multiplied by the output and the result blended onto the output. Usually a black and white image with no alpha channel, or a white title on a black image, is used as the current track. With the multiply operation, only the output portions under the white area show.
Divide - This mode divides the current track by the output and the result is blended into the output.
Replace - This mode does no blending and overwrites the output with the current track.

8.2.6 Track and output sizes

The size of the temporary and the size of the output in our compositing pipeline are independent and variable. This fits into everything covered so far. The camera's viewport is the temporary size. Effects are processed in the temporary and are affected by the temporary size. Projectors are rendered to the output and are affected by the output size. If the temporary is smaller than the output, the temporary is bordered by blank regions in the output. If the temporary is bigger than the output, the temporary is cropped.

8.2.6.1 Track size

The temporary size is defined as the track size. Each track has a different size. Right click on a track to bring up the track's menu. Select Resize Track to resize the track to any arbitrary size. Alternatively you can select Match output size to make the track the same size as the output.
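The overlay modes above are, in essence, per-pixel blend equations applied at the projector stage. A simplified sketch on normalized pixel values in the 0.0-1.0 range (the function name and clamping details are assumptions for illustration; Cinelerra's actual color-model handling differs):

```python
def blend(mode, track, output, alpha=1.0):
    """Blend one current-track pixel onto an accumulated output pixel."""
    if mode == "replace":
        return track                        # no blending at all
    if mode == "normal":
        result = track                      # Porter-Duff style "over"
    elif mode == "addition":
        result = min(1.0, output + track)   # easily overloads levels
    elif mode == "subtraction":
        result = max(0.0, output - track)
    elif mode == "multiply":
        result = output * track             # white (1.0) lets output through
    elif mode == "divide":
        result = min(1.0, output / track) if track else 1.0
    else:
        raise ValueError(mode)
    return alpha * result + (1.0 - alpha) * output  # blend by track alpha

print(blend("multiply", 1.0, 0.8))  # white area of the track shows the output
print(blend("multiply", 0.0, 0.8))  # black area hides it
```

This makes the white-title-on-black behavior of Multiply concrete: wherever the track is 1.0 the output passes through, and wherever it is 0.0 the output is blacked out.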

The resize track window

For example, the next image shows how a video track and a project output of equal sizes look when displayed on the compositor.

Project output size and video track with equal dimensions (720x480)

If you resize a track, its appearance on the compositor changes accordingly. Reducing the track (to 640 x 400) and leaving the project's output size untouched makes the track show on the compositor smaller and framed by a blank area.

New track (640x400), smaller than the project's output (720x480)

Enlarging the track (to 800 x 560) and leaving the project's output size untouched makes the track show on the compositor larger and cropped to the output's dimension.

New track (800x560), cropped to the project's output size (720x480)

Using this relationship between the track and the project's output size, you can effectively reduce or magnify the size of a particular track with regards to the final output, and therefore create visual "effects" like split screens, pans, and zooms on the compositor.

8.2.6.2 Output size

The output size is set either in New when creating a new project, or in Settings->Format. In the Resource window there is another way to change the output size: right click on a video asset and select Match project size to conform the output to the asset. When new tracks are created, the track size always conforms to the output size specified by these methods. When rendering, the project's output size is the final video track size where the temporary pipeline is rendered into.

If the output size is larger than the temporary, the image transferred from the temporary will fit inside the output track, and any space left on the output is left blank.

Output size (shown in green) larger than the temporary

If the output size is smaller than the temporary, some of the temporary video will be cropped out.

Chapter 9: Viewer window

9 Viewer window

The viewer window is a place to load and preview your source media and clips. In here you will scrub around source media and clips, selecting regions to paste into the project. You can quickly browse through an asset using the slider control, focus on an area of work with the preview region, or use editing controls to cut & paste segments into the project or create a clip for later use. Operations done in the viewer affect a temporary EDL or a clip, but not the timeline.

The viewer window

To open the viewer window, go to Window->Show Viewer.

The display is the area on the viewer where you actually see media playing. Before you can play any media, you first must load it on the viewer. To load media into the viewer:
1. Open the resources manager window and select the asset manager or the clip manager folder.
2. Drag a file from the asset manager or the clip manager to the viewer.

You can also load media onto the viewer by right clicking on a file in the asset manager and selecting View from the popup menu, or by double clicking on the icon. Once your media loads you will see it appear on the display. To play, rewind or forward through it, use the slider control or the transport controls.

When displaying media, the viewer uses the project's defined output size format settings, not the original asset's format. You can change the project's output to match the asset's format using the Match project size menu option in the asset manager. You can change the media display size by right clicking on the screen to activate the display zoom menu. Select zoom levels of 50%, 100% or 200% of the original media size.

10 Resources window

Effects, transitions, clips, and assets are accessed here. Most of the resources are inserted into the project by dragging them out of the resource window. Management of resource allocation is also performed here.

[Figure: The resources window]

10.1 Navigating the resources

The resource window is divided into two areas. One area lists folders and another area lists folder contents. Going into the folder list and clicking on a folder updates the contents area with the contents of that folder.

The folders and contents can be displayed as icons or text. Right clicking in the folder or contents area brings up a menu containing formatting options. Select Display text to display a text listing. Select Sort items to sort the contents of the folder alphabetically.

To get details about a media file, go to the asset manager folder and right click on the label or icon of the file you are interested in. An asset menu will appear; click on Info. The asset info window displays detailed information about the selected media file.

[Figure: The asset info window]

11 Sound level meters window

An additional window, the levels window, can be brought up from the Window menu. The levels window displays the output audio levels after all mixing is done.

[Figure: The sound level meters window]

Sound level meters appear in many locations. They can be toggled in the viewer and compositor windows with the level toggle. They appear in the patchbay when a track is expanded (See Section 7.1 [The patchbay], page 49.) They appear in the recording monitor when audio is being recorded.

The sound levels in the levels window, compositor, and viewer correspond to the final output levels before they are clipped to the soundcard range. In the record monitor they are the input values from the sound card. In the patchbay they are the sound levels for each track after all effects are processed and before down-mixing for the output.

Most of the time, audio levels have numerical markings in dB, but in the patchbay there is not enough room. The sound level is also color coded as an extra means of determining the sound level. Even without numerical markings, the sound level color can distinguish between several ranges and overload. Look at the color codings in a meter with numerical markings to see what colors correspond to what sound level. Then, for meters in the patchbay in expanded audio tracks, use the color codings to see if a track is overloading.

Be aware that sound levels in Cinelerra can go above 0 dB. This allows you to see not only whether a track is overloading but how much information is being lost by the overloading. Overloading by less than 3 dB is usually acceptable. While overloading is treated as positive numbers inside Cinelerra, the signal is clipped to 0 dB when sent to a sound card or file.

The visible range of the sound level meters is configurable in settings->preferences->interface (See Section 3.7 [Interface], page 29.)
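The relationship between amplitude, the dB readings on the meters, and output clipping can be sketched as follows. This is an illustrative model, not Cinelerra code; the function names are invented for the example.

```python
import math

# Illustrative sketch (not Cinelerra code): converting a linear sample
# amplitude to a dB meter reading, and clipping at 0 dB on output.
def level_db(amplitude):
    """Meter reading in dB for an amplitude relative to full scale (1.0 = 0 dB)."""
    if amplitude <= 0:
        return float("-inf")
    return 20.0 * math.log10(amplitude)

def clip_to_soundcard(amplitude):
    """Levels above 0 dB are kept internally but clipped when sent out."""
    return max(-1.0, min(1.0, amplitude))

print(round(level_db(1.0), 1))   # 0.0 dB: full scale
print(round(level_db(0.5), 1))   # -6.0 dB
print(round(level_db(1.4), 1))   # 2.9 dB: overloaded, but under 3 dB
print(clip_to_soundcard(1.4))    # 1.0: clipped at the soundcard boundary
```

This shows why an overload of less than 3 dB (an amplitude up to about 1.4 times full scale) loses relatively little information when it is finally clipped.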

12 Transport controls

Transport controls are just as useful in navigation as they are in playing back footage, hence they are described here in the navigation section. Each of the Viewer, Compositor, and Program windows has a transport panel.

[Figure: The transport panel]

The transport panel is controlled by the keyboard as well as the graphical interface. For each of the operations it performs, the starting position is the position of the insertion point in the Program window and of the slider in the Compositor window. The ending position is either the end or start of the timeline, or the end or start of the selected region if there is one. The orientation of the end or start depends on the direction of playback: if playing forward, the end position is the end of the selected region; if playing backward, it is the start of the selected region. If no in/out points are specified, the behavior falls back to using the insertion point and track boundaries as the starting and ending points.

The insertion point moves to track playback. When playback stops, the insertion point stays where playback stopped. Thus, by playing back you change the position of the insertion point.

The transport behavior changes if you hold down CTRL when issuing any of the transport commands. This causes the starting point to be the in point if playing forward and the out point if playing backward. If playing forward, the out point becomes the ending point; if playing backward, the in point becomes the ending point.

The keyboard interface is usually the fastest and has more speeds. The transport keys are arranged in a sideways T on the number pad:

    4      Frame back
    5      Reverse Slow
    6      Reverse
    +      Reverse Fast
    1      Frame forward
    2      Forward Slow
    3      Play
    Enter  Fast Forward
    0      Stop

Hitting any of these keys twice pauses playback.

When using the frame advance functions the behavior may seem odd. If you frame advance forward and then frame advance backward, the displayed frame does not change. This is because the playback position is not the frame but the time between two frames, and the rendered frame is the one that the playback position crosses. When you increment the time between two frames by one and then decrement it by one, you cross the same frame both times, and so the same frame is displayed.

It is possible to use a hardware JogShuttle.[1]

[1] Refer to David Arendt's message posted on the Cinelerra CV mailing-list on 2003-11-11 for more information.
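The frame advance behavior can be modeled with a toy sketch (hypothetical, not Cinelerra source): the playback position is a boundary between frames, and advancing renders the frame that boundary crossing passes over.

```python
# Toy model (not Cinelerra source) of why frame advance forward then backward
# shows the same frame: the playback position sits *between* frames, and the
# rendered frame is the one the position crosses.
def advance(position, direction):
    """Move one frame boundary; return (new_position, rendered_frame)."""
    if direction > 0:
        return position + 1, position      # crossing boundary N shows frame N
    else:
        return position - 1, position - 1  # crossing back shows frame N again

pos = 5
pos, shown_fwd = advance(pos, +1)   # position 5 -> 6, frame 5 rendered
pos, shown_back = advance(pos, -1)  # position 6 -> 5, frame 5 rendered again
print(shown_fwd, shown_back)        # 5 5
```

Both advances cross the same frame, so the display does not change.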

13 Timebar

The navigation features of the Viewer and Compositor behave very similarly. Each has a timebar and slider below the video output. The timebar and slider are critical for navigation.

The timebar represents the entire time covered by the program. Instead of displaying just a region of the program, the timebar always displays the entire program. When you define labels and in/out points, the timebar displays them. Finally, the timebar defines a region known as the preview region.

The preview region is the region of the timeline which the slider affects. The slider only covers the time covered by the preview region. By using a preview region inside the entire program and using the slider inside the preview region, you can quickly and precisely seek in the compositor and viewer.

When you replace the current project with a file, the preview region automatically resizes to cover the entire file. When you append data or change the size of the current project, the preview region stays the same size and shrinks relative to the program. Therefore, you need to resize the preview region.

Load a file and then slide around it using the compositor slider. Move the pointer over the compositor's timebar until it turns into a left resize pointer, then click and drag right. The preview region should have changed and the slider resized proportionally. Go to the right of the timebar until a right resize pointer appears, and drag left so the preview region shrinks. Go to the center of the preview region in the timebar and drag it around to convince yourself it can be moved.

[Figure: Preview region in compositor]

If you go to the slider and slide it around with the preview region shrunk, you will see the slider only affects the preview region. The timebar and slider in the viewer window work exactly the same.

Labels and in/out points are fully supported in the viewer and compositor. The only difference between the viewer and compositor is that the compositor reflects the state of the program while the viewer reflects the state of a clip but not the program. When you hit the label button in the compositor, the label appears both in the compositor timebar and the program timebar. When you select a label or in/out point in the compositor, the insertion point in the program window jumps to that position: the insertion point in the main window follows the compositor. To scroll your video and thus move the insertion point into the visible part of the timeline, use the manual Go to button of the compositor.

Like the program window, the compositor has a zoom capability. First, the pull-down menu on the bottom of the compositor window has a number of zoom options. When set to Auto, the video is zoomed to match the compositor window size as closely as possible. When set to any other percentage, the video is zoomed a power of 2 and scrollbars can be used to scroll around the output. When the video is zoomed bigger than the window size, not only do scrollbars scan around it, but middle mouse button dragging in the video output scans around it. This is exactly what The Gimp does.

Furthermore, the zoom toggle causes the Compositor window to enter zoom mode. In zoom mode, clicking in the video output zooms in while ctrl-clicking in the video output zooms out. If you have a wheel mouse, rotating the wheel zooms in or out too.

Zooming in or out with the zoom tool does not change the rendered output, mind you. It is merely for scrutinizing video or fitting it in the desktop. Playing video in the compositor when zoomed to any size other than 100%, the original size, requires Cinelerra to do extra processing steps. This could affect performance on slower systems.
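The precision gain from shrinking the preview region can be sketched numerically. This is an illustrative model, not Cinelerra code; the function name is invented for the example.

```python
# Illustrative sketch (not Cinelerra source): the slider covers only the
# preview region, which is itself a window into the whole program.
def slider_to_time(slider, preview_start, preview_end):
    """Map a slider fraction (0.0 - 1.0) to a timeline position in seconds."""
    return preview_start + slider * (preview_end - preview_start)

# Previewing 100 s..200 s of a long program, mid-slider lands at 150 s:
print(slider_to_time(0.5, 100.0, 200.0))  # 150.0
# Shrinking the preview region makes the same slider travel far more precise:
print(slider_to_time(0.5, 100.0, 110.0))  # 105.0
```

With a 10 second preview region, one pixel of slider movement covers a tenth of the time it would over a 100 second region, which is the point of resizing the preview region before fine seeking.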

14 Realtime effects

These are layered under the track they apply to. They process the track when the track is played back, with no permanent storage of the output except when the project is rendered.

All the realtime effects are listed in the resource window, divided into two groups: audio effects and video effects. Audio effects should be dragged from the resource window onto audio tracks. Video effects should be dragged onto video tracks. If there is data on the destination track, the effect is applied to the entire track; if a region of the track is selected, the effect is pasted into the region, regardless of whether there is data. If there is no data on the track, the effect is deleted.

When dragging more than one effect onto a track, you will see the effects layering from top to bottom, on the bottom of the track. When the track is played back, effects are processed from top to bottom. The output of the top effect becomes the input of the bottom effect, and so on and so forth.

Some of the effects do not process data but synthesize data. In the case of a synthesis effect, you will want to select a region of the track so the dragging operation pastes it without deleting anything.

In addition to dragging from the resource window, effects may be applied to a track by a popup menu. Right click on a track and select Attach effect from the popup. The attach effect dialog gives you more control than pure dragging and dropping. For one thing, the attach effect dialog lets you attach two more types of effects: shared effects and shared tracks. Select a plugin from the Plugins column and hit Attach under the plugins column to attach it. The effect is the same as if the effect was dragged from the resource window.

When an effect exists under a track, it often needs to be configured. Go to the effect and right click on it to bring up the effect popup. In the effect popup is a show option, which causes the GUI for the effect to appear under the cursor. Most effects have GUIs but some do not; if the effect does not have a GUI, nothing pops up when the show option is selected. When you tweak parameters in the effect GUI, the parameters normally affect the entire duration of the effect.

14.1 Realtime effect types

The two other effect types supported by the Attach Effect dialog are recycled effects. In order to use a recycled effect, three requirements must be met:

- There must be other effects in the timeline.
- The other effects must be of the same type as the track you are attaching an effect to. If the track is an audio track, the effects must be audio effects. If the track is a video track, the effects must be video effects.
- The insertion point or selected region must start inside the other effects.

In the case of a shared track, there merely must be another track on the timeline of the same type as the track you are applying an effect to. If you right clicked on a video track to attach an effect, there will not be anything in the shared tracks column if no other video track exists. If you right clicked on an audio track, there will not be anything in the shared track column if no other audio track exists. If shared effects or shared tracks are available, they appear in the shared effects and shared tracks columns. The attach button under each column causes anything highlighted in that column to be attached under the current track.

Shared effects and shared tracks allow very unique things to be done. In the case of a shared effect, the shared effect is treated like a copy of the original effect, except that in the shared effect the GUI can not be brought up. All configuration of the shared effect is determined by the GUI of the original effect, and only the GUI of the original effect can be brought up. When a shared effect is played back, it is processed just like a normal effect except that the configuration is copied from the original effect. Some effects detect when they are being shared, like the reverb effects and the compressor. These effects determine what tracks are sharing them and either mix the tracks together or use one track to stage some value. The reverb mixes tracks together to simulate ambience; the compressor uses one of the sharing tracks as the trigger.

In the case of a shared track, the shared track itself is used as a realtime effect. When an original track has a shared track as one of its effects, the fade and any effects in the shared track are applied to the original track. Once the shared track has processed the data, the original track performs any effects which come below the shared track and then composites it on the output. This is more commonly known as bouncing tracks, but Cinelerra achieves the same operation by attaching shared tracks.

In addition, once the shared track has processed the output of the original track like a realtime effect, the shared track mixes itself into the output with its settings for pan, mode, and projector. Thus, two tracks end up mixing the same data on the output. Most of the time you do not want the shared track to mix the same data as the original track on the output; you want it to stop right before the mixing stage and give the data back to the original track. Do this by enabling the mute toggle next to each track you do not want to mix on the output.

Suppose you were making a video and you did want the shared track to composite the original track's data on the output a second time. In the case of video, the video from the shared track would always appear under the video from the original track, regardless of whether it was on top of the original track. This is because shared tracks are composited in order of their attachment: since the shared output is part of the original track, it has to be composited before the original track is composited.

14.2 Editing realtime effects

Many operations exist for manipulating effects once they are in the timeline. Because mixing effects and media is such complex business, the methods used in editing effects are not as concise as cutting and pasting. Some of the editing happens by dragging in/out points, some of it happens through popup menus, and some of it happens by dragging effects.

Normally when you edit tracks, the effects follow the editing decisions. If you cut from a track, the effect shrinks. If you drag edit in/out points, the effect changes length. This behavior can be disabled by selecting Settings->edit effects in the project window. This decouples effects from editing operations, but what if you just want to edit the effects?

Move the timeline cursor over the effect borders until it changes to a resize left or resize right icon. In this state, if you drag the end of the effect, it performs an edit just like dragging the end of a track does. The three editing behaviors of track trimming apply to effect trimming, and they are bound to the mouse buttons that you set in interface preferences (See Section 3.7 [Interface], page 29.)

When you perform a trim edit on an effect, the effect boundary is moved by dragging on it. Unlike track editing, the effect has no source length: you can extend the end of an effect as much as desired without being limited. Also unlike track editing, the starting position of the drag operation does not bind the edit decision to media. The media the effect is bound to does not follow effect edits. If you drag the end of an effect which is lined up to effects on other tracks, the effects on the other tracks will be edited while the media stays the same.

What happens if you trim the end of an effect in, leaving a lot of unaffected time near the end of the track? When you drag an effect in from the Resource Window, you can insert the effect in the portion of the row unoccupied by the trimming operation. Realtime effects are organized into rows under the track, and each row can have multiple effects.

In some cases you will want a trimming operation to change only one row of effects. This can be achieved by first positioning the insertion point on the start or end of the effect, then pressing SHIFT while beginning the trimming operation. This causes the operation to change only one row of effects.

In addition to trimming, you can move effects up or down. Every track can have a stack of effects under it. By moving an effect up or down you change the order in which effects are processed in the stack. Go to an effect and right click to bring up the effect menu; the Move up and Move down options move the effect up or down. When you are moving effects up or down, be aware that if they are shared as shared effects, any references will be pointing to a different effect after the move operation.

Finally, there is dragging of effects. You must select the arrow to enter drag and drop mode before dragging effects. The effects snap to media boundaries, effect boundaries, and tracks. Dragging effects works just like dragging edits. Be aware that if you drag a reference to a shared effect, the reference will usually point to the wrong effect afterwards.

Right click on an effect to bring up a menu for the effect. Select attach... to change the effect or to change the reference if it is a shared effect.

14.3 Realtime audio effects

14.3.1 Compressor

Contrary to computer science experience, the audio compressor does not reduce the amount of data required to store the audio. The audio compressor reduces the dynamic range of the audio. In Cinelerra the compressor actually performs the function of an expander and compressor.

The compressor works by calculating the maximum sound level within a certain time period of the current position. The gain at the current position is adjusted so the maximum sound level in the time range is the user-specified value. The maximum sound level is taken as the input sound level.

The compressor has a graph which correlates every input sound level to an output level. For every input sound level there is an output sound level specified by the user. The horizontal direction is the input sound level in dB; the vertical direction is the output sound level in dB. The user specifies output sound levels by creating points on the graph. Click in the graph to create a point. If 2 points exist, drag one point across another point to delete it. The most recent point selected has its values displayed in textboxes for more precise adjustment.

To make the compressor reduce the dynamic range of the audio, make all the output values greater than the input values except 0 dB. To make the compressor expand the dynamic range of the audio, make all the output values except 0 dB less than the input values. The algorithm currently limits all sound levels above 0 dB to 0 dB, so to get an overloaded effect, put a gain effect before the compressor to reduce all the levels, and follow it with another gain effect to amplify all the levels back over 0 dB.

Reaction secs: This determines where, in relation to the current position, the maximum sound level is taken, and how fast the gain is adjusted to reach that peak. It is notated in seconds. If it is negative, the compressor reads ahead of the current position to get the future peak. The gain is ramped to that peak over one reaction time. This allows it to hit the desired output level exactly when the input peak occurs at the current position. If the reaction time is positive, the compressor scans only the current position for the gain and ramps gain over one reaction time to hit the desired output level. It hits the output level exactly one reaction time after detecting the input peak.

Decay secs: If the peak is higher than the current level, the compressor ramps the gain up to the peak value. Then, if a future peak is less than the current peak, it ramps the gain down. The time taken to ramp the gain down can be greater than the time taken to ramp the gain up; this ramping-down time is the decay seconds.

Trigger type: The compressor is a multichannel effect; several tracks can share one compressor. How the signal from many tracks is interpreted is determined by the trigger type. The Trigger type uses the value supplied in the Trigger textbox as the number of the track to use as input for the compressor. This allows a track which is not even heard to determine the loudness of the other tracks. The Maximum trigger takes the loudest track and uses it as the input for the compressor. The Total trigger type adds the signals from all the tracks and uses the total as the input for the compressor. This is the most natural sounding compression and is ideal when multiple tracks are averaged into single speakers.

Trigger: The compressor is a multichannel effect; several tracks can share one compressor. Normally only one track is scanned for the input peak. This track is specified by the Trigger. By sharing several tracks and playing with the trigger value, you can make a sine wave on one track follow the amplitude of a drum on another track, for example.

Smooth only: For visualizing what the compressor is doing to the sound level, this option causes it to replace the sound wave with just the current peak value. It makes it very easy to see how Reaction secs affects the detected peak values.

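The compressor's input-to-output graph can be sketched as a piecewise-linear curve. This is a hypothetical illustration of the graph semantics described above, not the actual Cinelerra implementation; the function name and curve are invented for the example.

```python
# Hypothetical sketch (not the Cinelerra implementation): the compressor's
# graph maps an input level in dB to an output level in dB by linear
# interpolation between user points; levels above 0 dB are limited to 0 dB.
def output_db(graph, in_db):
    """graph: list of (input_dB, output_dB) points, typically ending at (0, 0)."""
    pts = sorted(graph)
    if in_db <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if in_db <= x1:
            return y0 + (in_db - x0) * (y1 - y0) / (x1 - x0)
    return min(pts[-1][1], 0.0)  # everything above 0 dB is limited to 0 dB

# Compression: output greater than input everywhere except 0 dB.
curve = [(-40.0, -20.0), (0.0, 0.0)]
print(output_db(curve, -20.0))  # -10.0: quiet parts boosted, range reduced
print(output_db(curve, 0.0))    # 0.0
```

With every output above its input except at 0 dB, quiet passages are raised more than loud ones, which is exactly the dynamic-range reduction the section describes; swapping the inequality gives expansion.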
14.3.2 Delay audio

Just tell how many seconds you want to delay the audio track.

14.3.3 Denoise

FIXME

14.3.4 DenoiseFFT

FIXME

14.3.12 Live Audio

This effect reads audio directly from the soundcard input. It replaces any audio on the track, so it is normally applied to an empty track. To use Live Audio, highlight a horizontal region of an audio track or define in and out points, then drop the Live Audio effect into it. Create extra tracks and attach shared copies of the first Live Audio effect to the other tracks to have extra channels recorded.

Live Audio uses the sound driver selected in Settings->Preferences->Playback->Audio Out for recording, but unlike recording it uses the playback buffer size as the recording buffer size and the project sample rate as the sampling rate. These settings are critical since some sound drivers can not record in the same sized buffer they play back in. Live audio has been most reliable when ALSA is the recording driver and the playback fragment size is 2048. Drop other effects after Live Audio to process soundcard input in realtime.

Now the bad news. With live audio there is no read-ahead, so effects which need read-ahead, like the compressor, will either introduce a delay or cause playback to under-run. Another problem is that the recording clock on the soundcard is sometimes slightly slower than the playback clock; the recording eventually falls behind and playback sounds choppy. Finally, live audio does not work in reverse.

14.3.13 Loop audio

FIXME

14.3.14 Overlay

FIXME

14.3.15 Pitch shift

Chapter 14: Realtime effects

93

Like the time stretching methods, there are three pitch shifting methods: Pitch shift, Resample, and Asset info dialog. Pitch shift is a realtime effect which can be dragged and dropped onto recordable audio tracks. Pitch shift uses a fast Fourier transform to try to change the pitch without changing the duration, but this introduces windowing artifacts. Because the windowing artifacts are less obtrusive in audio which is obviously pitch shifted, Pitch shift is mainly useful for extreme pitch changes. For mild pitch changes, use Resample from the Audio->Render Effect interface. Resample can change the pitch within 5% without a noticeable change in duration. Another way to change pitch slightly is to go to the Resources window, highlight the media folder, right click on an audio file, click on Info. Adjust the sample rate in the Info dialog to adjust the pitch. This method also requires left clicking on the right boundary of the audio tracks and dragging left or right to correspond to the length changes.
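Why resampling shifts pitch can be shown with a toy resampler. This is an illustrative sketch, not Cinelerra's resampler; the naive linear interpolation and the function name are assumptions of the example.

```python
# Illustrative sketch (not Cinelerra code): resampling audio by a small
# factor raises or lowers the pitch while slightly changing the duration,
# which is why mild pitch changes can be done with Resample.
def resample(samples, factor):
    """Naive linear-interpolation resampler; factor > 1 raises the pitch."""
    out_len = int(len(samples) / factor)
    out = []
    for i in range(out_len):
        pos = i * factor
        j = int(pos)
        frac = pos - j
        nxt = samples[min(j + 1, len(samples) - 1)]
        out.append(samples[j] * (1.0 - frac) + nxt * frac)
    return out

tone = [float(i % 4) for i in range(1000)]   # dummy waveform
faster = resample(tone, 1.05)                # ~5% higher pitch
print(len(tone), len(faster))                # 1000 952: ~5% shorter
```

The 5% duration change is the side effect the manual mentions: it is small enough to be absorbed by dragging the track boundary, but it is there.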

14.3.16 Reverse audio

Apply reverse audio to an audio track and play it backwards. The sound plays forward. Be aware when reversing audio that the waveform on the timeline does not reflect the actual reversed output.

14.3.17 SoundLevel

FIXME

14.3.18 Spectrogram

FIXME

14.3.19 Synthesizer

FIXME

14.3.20 Time stretch

FIXME

94

Chapter 14: Realtime effects

14.4 Realtime video effects
14.4.1 1080 to 480

Most TV broadcasts are received with a 1920x1080 resolution but originate from a 720x480 source at the studio. It is a waste of space to compress the entire 1920x1080 image if the only resolvable details are 720x480. Unfortunately, resizing 1920x1080 video to 720x480 is not as simple as shrinking it.

At the TV station, the original 720x480 footage was first converted to fields of 720x240. Each field was then scaled up to 1920x540. The two 1920x540 fields were finally combined with interlacing to form the 1920x1080 image. This technique allows a consumer TV to display the re-sampled image without extra circuitry to handle 720x480 interlacing in a 1920x1080 image.

If you merely deinterlace the 1920x1080 images, you end up with a resolution of 720x240. The 1080 to 480 effect properly extracts the two 1920x540 fields from the image, resizes them separately, and combines them again to restore a 1920x480 interlaced image. The scale effect must then be applied to reduce the horizontal size to 960 or 720, depending on the original aspect ratio.

The tracks to which 1080 to 480 is applied need to be at 1920x1080 resolution. The project settings in settings->format should be at least 720x480 resolution. The effect does not know whether the first row in the 1920x1080 image belongs to the first row of the 720x480 original; you have to specify what the first row is in the effect configuration.

The output of this effect is a small image in the middle of the original 1920x1080 frame. Use the projector to center the output image in the playback. Finally, once you have 720x480 interlaced video, you can apply either Frames to fields or Inverse telecine to further recover the original progressive frames.
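The field handling at the heart of this effect can be sketched with a toy model. This is not the effect's source code; the row-list representation and function names are invented for the example, and the per-field scaling step is only indicated by a comment.

```python
# Toy sketch (not the effect's source) of the field handling described above:
# an interlaced frame is split into two half-height fields, each field is
# scaled separately, and the results are re-interlaced.
def split_fields(frame):
    """frame is a list of rows; even rows form one field, odd rows the other."""
    return frame[0::2], frame[1::2]

def interlace(field0, field1):
    out = []
    for a, b in zip(field0, field1):
        out.extend([a, b])
    return out

frame_1080 = [[y] * 4 for y in range(1080)]   # dummy 1080-row frame
f0, f1 = split_fields(frame_1080)             # two 540-row fields
print(len(f0), len(f1))                       # 540 540
# (Each 540-row field would be scaled down to 240 rows here, then
# re-interlaced to give the 480-row result.)
restored = interlace(f0, f1)
print(len(restored))                          # 1080
```

Deinterlacing naively instead of splitting fields is what throws away half the vertical information, which is why the effect exists.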

14.4.2 Aging TV

This effect is the one to use if you want to achieve an "old movie" or TV show look. It will put moving lines up and down the movie as well as putting "snow" on the video. Use it along with Brightness/Contrast and Color Balance to make your movie look like a really old black and white movie.

14.4.3 Blur

This effect blurs a video track. The parameters are:

Horizontal and vertical: These parameters select which direction the blurring affects. It can be both directions.

Chapter 14: Realtime effects

95

Radius: Use this slider to define the amount of blur to apply.

Blur alpha, red, green, blue: Specifies which color channels are blurred.

14.4.4 Brightness/contrast

If you want to brighten a dark shot, or add light, this is the tool to use. Do not overuse the effect or you risk degrading your video quality. Use the effect along with Keyframing to brighten a long shot that is dark at the beginning but bright at the end. Generally you will want to change the brightness and contrast by about the same amount (e.g. darkness 28, contrast 26) so that your original colors are kept intact.
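A simple per-pixel model shows how brightness and contrast interact. This is a generic illustration, not the plugin's source; the exact mapping of the sliders to an offset and a scale factor is an assumption of the example.

```python
# Illustrative sketch (not the plugin's source): brightness adds an offset,
# contrast scales values away from the midpoint.  The slider-to-factor
# mapping here is hypothetical.
def bright_contrast(value, brightness, contrast):
    """value in 0-255; brightness is an offset; contrast is a scale factor."""
    v = (value - 128.0) * contrast + 128.0 + brightness
    return max(0, min(255, int(round(v))))

# Raising both together keeps midtones roughly where the colors expect them:
print(bright_contrast(100, 28, 1.26))  # 121
```

Pushing either slider much further than the other shifts or crushes the tonal range, which is the quality degradation the section warns about.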

14.4.5 Burning TV

The video burning effect makes your video "burn" where there are small light colored patches of video, on the edge of a white T-shirt for example. It can be a great asset to a music video and just a great outlet to help free your imagination in your video.

14.4.6 Chroma key

This effect erases pixels which match the selected color. They are replaced with black if there is no alpha channel, and with transparency if there is an alpha channel. The selection of color model is important to determine the behavior. Chroma key uses either the lightness or the hue to determine what is erased. Use value singles out only the lightness to determine transparency.

Select a center color to erase using the Color button. Alternatively, a color can be picked directly from the output frame by first using the color picker in the compositor window and then selecting the Use color picker button. This sets the chroma key color to the current color picker color. Be aware that the output of the chroma key is fed back to the compositor, so selecting a color again from the compositor will use the output of the chroma key effect. The chroma key should be disabled when selecting colors with the color picker.

If the lightness or hue is within a certain threshold, it is erased. Increasing the threshold increases the range of colors to be erased. It is not a simple on/off switch, however. As the color approaches the edge of the threshold, it gradually gets erased if the slope is high, or is rapidly erased if the slope is low. The slope as defined here is the number of extra values flanking the threshold required to go from opaque to transparent. Normally the threshold is very low when using a high slope; the two parameters tend to be exclusive because slope fills in extra threshold.
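The threshold/slope rule can be sketched as a simple ramp. This is a hypothetical illustration of the behavior described above, not the plugin's source; the distance metric and function name are invented for the example.

```python
# Hypothetical sketch (not the plugin's source) of the threshold/slope rule:
# within the threshold a pixel is fully erased; across the slope region the
# opacity ramps back up from transparent to opaque.
def chroma_alpha(distance, threshold, slope):
    """distance: how far the pixel's hue/lightness is from the key color."""
    if distance <= threshold:
        return 0.0                         # inside threshold: erased
    if distance >= threshold + slope:
        return 1.0                         # past the slope region: opaque
    return (distance - threshold) / slope  # gradual ramp across the slope

print(chroma_alpha(0.05, 0.1, 0.2))  # 0.0  erased
print(chroma_alpha(0.2, 0.1, 0.2))   # 0.5  half transparent
print(chroma_alpha(0.4, 0.1, 0.2))   # 1.0  opaque
```

A larger slope widens the ramp, so edges fade out gradually; a near-zero slope turns the key into a hard on/off switch, which is why a low threshold is usually paired with a high slope.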

The slope tries to soften the edges of the chroma key, but it does not work well for compressed sources. A popular softening technique is to use a maximum slope and chain a blur effect below the chroma key effect to blur just the alpha.

Chapter 14: Realtime effects

14.4.7 Chroma key (HSV)

FIXME

14.4.8 Color balance

Video Color Balance is a great effect to use along with Brightness/contrast and Hue/Saturation to try to compensate for possible errors in filming (low lighting, etc.). It can only do so much without greatly lowering the quality of the video. With it you can change the colors being sent to output: CMY (Cyan, Magenta, Yellow) or RGB (Red, Green, Blue). It is just like the color balance effect in a picture editing program such as GIMP.

14.4.9 Decimate

This effect drops the frames from a track which are most similar, in order to reduce the frame rate. This is usually applied to a DVD to convert the 29.97 fps video to the 23.97 fps film rate, but this decimate effect can take any input rate and convert it to any lower output rate. The input rate is set in the decimate user interface. The output rate of decimate is the project frame rate. To convert 29.97 fps progressive video to 23.97 fps film, apply a decimate effect to the track, set the decimate input rate to 29.97, and set the project rate to 23.97.

Be aware that every effect layered before decimate processes video at the decimate input rate, and every effect layered after decimate processes video at the project frame rate. Computationally intensive effects should come below decimate.

14.4.10 Deinterlace

The deinterlace effect has evolved over the years into deinterlacing and a whole lot more. In fact, two of the deinterlacing methods, Inverse Telecine and Frames to Fields, are separate effects. The deinterlace effect offers several variations of line replication to eliminate comb artifacts in interlaced video. It also has some line swapping tools to fix improperly captured video or to make the result of a reverse effect display fields in the right order.
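As a rough illustration of how a decimate-style effect converts a higher input rate to a lower output rate by dropping the most similar frames, here is a sketch with numbers standing in for frames and a made-up difference metric (this is not Cinelerra's implementation):

```python
def decimate(frames, in_rate, out_rate):
    """Repeatedly drop the frame most similar to its predecessor until
    the sequence length matches the lower output rate (illustrative)."""
    frames = list(frames)
    target = round(len(frames) * out_rate / in_rate)
    while len(frames) > target:
        # find the frame whose difference from the previous frame is smallest
        diffs = [abs(frames[i] - frames[i - 1]) for i in range(1, len(frames))]
        drop = diffs.index(min(diffs)) + 1
        del frames[drop]
    return frames

# ten "frames" at 29.97 fps decimated to 23.97 fps: the duplicated
# frames are the most similar to their neighbours, so they go first
print(decimate([0, 0, 1, 2, 3, 3, 4, 5, 6, 6], 29.97, 23.97))
```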

14.4.11 Delay video

FIXME

14.4.12 Denoise video

FIXME

14.4.13 Denoise video2

FIXME

14.4.14 Difference key

The difference key creates transparency in areas which are similar between 2 frames. The Difference key effect must be applied to 2 tracks. One track contains the action in front of a constant background, and the other track contains the background with nothing in front of it. Apply the difference key to the track with the action, and apply a shared copy of it to the track with the background. The track with the background should be muted and underneath the track with the action, and the colormodel should have an alpha channel.

Pixels which are different between the background and action tracks are treated as opaque. Pixels which are similar are treated as transparent. Change threshold in the difference key window to make more pixels which are not the same color transparent. Change slope to change the rate at which the transparency tapers off as pixels get more different. The slope as defined here is the number of extra values flanking the threshold required to go from opaque to transparent. A high slope is more useful with a low threshold, because slope fills in extra threshold.

Use value causes the intensity of pixels to be compared instead of the color. Applying a blur to the top track with just the alpha channel blurred can soften the transparency border.

Note: currently this effect is known to crash when using YUV color models.
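The core per-pixel decision of a difference key can be sketched as below. This toy model uses grayscale values and omits the slope ramp for brevity; it is an illustration of the idea, not the effect's code:

```python
def difference_matte(action, background, threshold):
    """Binary matte from two frames given as lists of grayscale pixels.

    Pixels similar to the background become transparent (0); pixels
    that differ by more than the threshold stay opaque (1)."""
    return [0 if abs(a - b) <= threshold else 1
            for a, b in zip(action, background)]

background = [10, 10, 10, 10]      # backdrop with nothing in front of it
action     = [10, 200, 205, 10]    # the same backdrop with a subject in front
print(difference_matte(action, background, threshold=5))
```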
14.4.15 DotTV

Another effect by Kentaro (effectTV).

14.4.16 Downsample

Downsampling is the process of reducing the size of an image by throwing out data, reducing the sampling rate. The parameters are:

Horizontal
Horizontal offset

Vertical
Vertical offset

Channels

14.4.17 Fields to frames

This effect reads frames at twice the project framerate, combining 2 input frames into a single interlaced output frame. Effects preceding fields to frames process frames at twice the project frame rate. Each input frame is called a field.

Fields to frames needs to know what field corresponds to what lines in the output frame. The easiest way to figure it out is to try both options in the window. If the input fields are the result of a line doubling process like frames to fields, the wrong setting results in blurrier output. If the input fields are the result of a standards conversion process like 1080 to 480, the wrong setting will not make any difference.

The debobber which converts 720x480 interlaced into 1920x1080 interlaced or 1280x720 progressive seems to degrade the vertical resolution to the point that it cannot be recovered.

14.4.18 Flip

This effect permits flipping a video track (or a portion of it) from left to right, right to left, up to down, or down to up. The dialog window is simple, since only the vertical and horizontal parameters are needed.
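The weaving of two fields into one interlaced frame, as a fields-to-frames style effect performs, can be sketched like this. Fields and frames are simplified to lists of scanlines, and the option name is illustrative, not the plugin's actual code:

```python
def fields_to_frame(field_a, field_b, first_field="top"):
    """Weave 2 fields (lists of scanlines) into one interlaced frame.

    The field order option decides which field supplies the even
    output lines (illustrative sketch only)."""
    top, bottom = (field_a, field_b) if first_field == "top" else (field_b, field_a)
    frame = []
    for top_line, bottom_line in zip(top, bottom):
        frame.append(top_line)      # even output line
        frame.append(bottom_line)   # odd output line
    return frame

# two 2-line fields weave into one 4-line interlaced frame
print(fields_to_frame(["t0", "t1"], ["b0", "b1"]))
```

Picking the wrong field order simply swaps which field lands on the even lines, which is why the manual suggests trying both options.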

14.4.19 Frames to fields

This plugin applies the reverse operation of the "Fields to Frames" plugin: it extracts the two interlaced fields stored in alternating lines of interlaced source footage and outputs them as separate full frames. The alternating lines missing on each output frame are interpolated. This plugin is only useful if its output is pulled with doubled framerate with respect to the source footage. (The naming of this pair of plugins is obviously misleading with respect to the common usage of the terms "field" and "frame": normally, "fields" denotes the interlaced half images and "frame" denotes the full image.)

One typical usage scenario is to do masking, scaling and translating on interlaced footage without the need to destroy the additional temporal information contained in such source material. This is helpful if your intended target format is interlaced. If, on the other hand, you just want to target a progressive display (e.g. you create video for display on a computer monitor solely), then it is much more convenient to de-interlace the source material prior to any further processing.

Processing interlaced footage without deinterlacing:

1. Create a new project with doubled frame rate, i.e. make it 50fps if your source footage is 25i.
2. Insert your source footage onto a video track in the timeline. Cinelerra will playback each frame of your footage twice.
3. Apply the "Frames to Fields" effect. Be sure to choose the correct field order, typical values being "bottom field first" for DV and "top field first" for HDV.
4. Then apply any further effects afterwards, including translations, scaling, slow motion, precise frame-wise masking or use of the motion tracker plugin.
5. Render your project to an intermediate clip. Be sure to choose a rather lossless video codec, e.g. Motion-JPEG-A or even uncompressed yuv if you have plenty of storage.
6. Insert the intermediate clip into your original project. Make sure the doubled framerate has been detected correctly by Cinelerra (by looking in the clip's media info in the media resources folder).
7. Apply the "Fields to frames" effect to the intermediate clip. This will combine two adjacent fields into one interlaced frame with the original frame rate.
8. Do the final render on your original project.

14.4.20 Freeze frame

In its simplest form, highlight a region of the track to freeze, drop the freeze frame effect on the highlighted region, and the lowest numbered frame in the affected area will play throughout the entire region.

Freezeframe has an enabled option which can be keyframed. Regions of a freeze frame effect which are enabled repeat the lowest numbered frame since the last keyframe. This has unique possibilities.
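The frames-to-fields operation — pulling the two fields out of an interlaced frame and interpolating the missing lines — can be sketched as follows. The neighbour-averaging interpolation here is a simple stand-in for the plugin's actual filter:

```python
def frame_to_fields(frame):
    """Split an interlaced frame (a list of numeric scanlines) into two
    full frames, one per field. Each extracted field is expanded back
    to full height by interpolating the missing lines (illustrative)."""
    def expand(lines):
        out = []
        for i, line in enumerate(lines):
            out.append(line)
            nxt = lines[i + 1] if i + 1 < len(lines) else line
            out.append((line + nxt) / 2)   # interpolated in-between line
        return out
    even_field = frame[0::2]   # field 1: even lines
    odd_field = frame[1::2]    # field 2: odd lines
    return expand(even_field), expand(odd_field)

first, second = frame_to_fields([0, 10, 20, 30])
print(first)
print(second)
```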

If a freeze frame effect has a keyframe in the middle of it set to enabled, the frame in the middle is repeated in the entire effect. If a freeze frame effect has several keyframes, each set to enabled, every time a keyframe is encountered the frame under it becomes the frozen one. If a freeze frame effect alternates between enabled and disabled, each time an enabled keyframe is encountered the frame under it is replicated until the next disabled keyframe. The disabled regions play through.

14.4.21 Gamma

Raw camera images store colors in a logarithmic scale. The blacks in these images are nearly 0 and the whites are supposed to be infinity. The graphics card and most video codecs store colors in a linear scale, but Cinelerra keeps raw camera images in their original logarithmic scale when it renders them. This is necessary because the raw image parser can not always decode the proper gamma values for the images. It also does its processing in 16 bit integers, which takes away a lot of information.

The gamma effect converts the logarithmic colors to linear colors through a gamma value and a maximum value. The gamma value determines how steep the output curve is, and the maximum value is where 1.0 in the output corresponds to maximum brightness in the input. The gamma effect has 2 more parameters to simplify gamma correction. The automatic option causes it to calculate max from the histogram of the image; use this when making a preview of a long list of images, since it changes for every image. The use color picker option uses the value currently in the color picker to set the max value. Note that every time you pick a color from the compositor window, you need to hit use color picker to apply the new value.

14.4.22 Gradient

The gradient effect overlays a smooth color gradient on top of every video frame. It is useful for all sorts of background fills, for partially filtering, or for adding moving highlights. The Gradient effect can generate linear or circular color fills. For linear fills you can choose the angle; for circular fills, the center of the created gradient pattern. Moreover, you can control the slope of the color transition by selecting a transition function (linear, logarithmic, squared) and by changing the "start" and "stop" radius. Note that both colors used in this color transition can contain an arbitrary alpha value (transparency). All parameters can be keyed and will be interpolated between keyframes.

Note the following well known problems: When using limited color models in your project, the Gradient fill can create color bands or steps. When using a project format with anamorphic storage, any circular fill will be stretched out horizontally when displaying the final output. A common example is the HDV 1080i format, which is stored as 1440x1080 pixels but displayed as 1920x1080 (16:9 aspect ratio). As Cinelerra does its calculations on a 1440x1080 pixel bitmap and won't do any internal corrections for this, a circular fill can show up elliptical.
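The gamma effect's logarithmic-to-linear mapping can be pictured with a conventional power-law transfer. The exact curve Cinelerra applies may differ, so treat this as an illustration of the gamma and maximum parameters only:

```python
def gamma_correct(value, gamma=2.2, maximum=1.0):
    """Map a sample into the 0..1 output range through a gamma curve.

    `maximum` is the input level that corresponds to 1.0 in the output.
    The power-law form used here is a conventional assumption for
    illustration, not necessarily Cinelerra's exact curve."""
    v = max(0.0, min(value / maximum, 1.0))   # normalize and clamp
    return v ** (1.0 / gamma)

# raising gamma lifts the mid-tones while full brightness stays at 1.0
print(gamma_correct(0.25, gamma=2.0))
print(gamma_correct(1.0, gamma=2.0))
```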

14.4.23 Histogram

This shows the number of occurrences of each color on a histogram plot. It is always performed in floating point RGB regardless of the project color-space. The histogram has two sets of transfer parameters: the input transfer and the output transfer. 4 histograms are possible in the histogram viewer. The red, green, blue histograms show the input histograms for red, green, blue and multiply them by an input transfer to get the output red, green, blue. Then the output red, green, blue is scaled by an output transfer. The scaled red, green, blue is converted into a value and plotted on the value histogram. The value histogram thus changes depending on the settings for red, green, blue. The value transfers are applied uniformly to R, G, B after their color transfers are applied. Select which transfer to view by selecting one of the channels on the top of the histogram.

The input transfer is defined by a graph overlaid on the histogram. The horizontal direction corresponds to every possible input color. The vertical direction corresponds to the output color for every input color. Video entering the histogram is first plotted on the histogram plot, then it is translated so output values now equal the output values for each input value on the input graph.

The input graph is edited by adding and removing any number of points. Click and drag anywhere in the input graph to create a point and move it. Click on an existing point to make it active and move it. The active point is always indicated by being filled in. The active point's input and output color are given in text boxes on top of the window. The input and output color of the point can be changed through these text boxes. Points can be deleted by first selecting a point and then dragging it to the other side of an adjacent point. They can also be deleted by selecting them and hitting delete.

After the input transfer, the image is processed by the output transfer. The output transfer is simply a minimum and maximum to scale the input colors to. Input values of 100% are scaled down to the output's maximum. Input values of 0% are scaled up to the output minimum. Input values below 0 are always clamped to 0, and input values above 100% are always clamped to 100%. Click and drag on the output gradient's triangles to change it. It also has textboxes to enter values into.

Enable the automatic toggle to have the histogram calculate an automatic input transfer. Automatic input transfer is calculated for the R, G, B channels but not the value. It does this by scaling the middle 99% of the pixels to take 100% of the histogram width. The number of pixels permitted to pass through is set by the Threshold textbox. A threshold of 0.99 scales the input so 99% of the pixels pass through. Smaller thresholds permit fewer pixels to pass through and make the output look more contrasty.

The remaining options are Plot histogram and Split output.

14.4.24 HolographicTV

By Kentarou (effectTV).
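The histogram effect's two-stage pipeline — an input transfer defined by curve points, followed by an output transfer that rescales between a minimum and a maximum — can be sketched like this. The piecewise-linear interpolation through curve points is an assumption for illustration:

```python
def histogram_transfer(pixels, points, out_min=0.0, out_max=1.0):
    """Illustrative sketch of the histogram effect's transfers.

    points: (input, output) pairs defining the input transfer curve.
    out_min/out_max: the output transfer's minimum and maximum."""
    def input_transfer(v):
        # piecewise-linear interpolation through the curve points
        pts = sorted(points)
        if v <= pts[0][0]:
            return pts[0][1]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if v <= x1:
                return y0 + (y1 - y0) * (v - x0) / (x1 - x0)
        return pts[-1][1]

    def output_transfer(v):
        v = max(0.0, min(v, 1.0))                 # clamp to 0..100%
        return out_min + v * (out_max - out_min)  # rescale into min..max

    return [output_transfer(input_transfer(p)) for p in pixels]

# a curve that darkens the mid-tones, then output squeezed into 0.1..0.9
print(histogram_transfer([0.0, 0.5, 1.0],
                         [(0.0, 0.0), (0.5, 0.25), (1.0, 1.0)],
                         out_min=0.1, out_max=0.9))
```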

14.4.25 Hue saturation

With this effect you can change hue, saturation and value. The parameters are modified using 3 simple sliders. The hue control shifts the colors circularly in the color plane, normally resulting in "false" colors. The saturation control can be used to reduce color footage to black and white. The value control makes any given colors brighter or more subdued.

14.4.26 Interpolate video

The interpolate effect tries to create the illusion of a higher frame rate from source footage of very low framerates by averaging frames over time. It averages two input frames for each output frame. The input frames are at different times, resulting in a dissolve for all output frames between the input frames.

There are two ways of specifying the input frames. You can specify an input frame rate which is lower than the project frame rate; this causes input frames to be taken at even intervals. You can also specify keyframe locations as the positions of the input frames. In this mode the output frame rate is used as the input frame rate, and you just create keyframes wherever you want to specify an input frame.

14.4.27 Interpolate pixels

Note: this effect works only for float color models. FIXME

14.4.28 Inverse telecine

This is the most effective deinterlacing tool when the footage is a video transfer of a film. Here the film was converted from 24 fps to 60 fps. Then the 60 fps was down-sampled to 30 fps by extracting odd and even lines and interlacing the lines. The IVTC effect is primarily a way to convert interlaced video to progressive video. It undoes three patterns of interlacing:

A AB BC CD D
AB CD CD DE EF
Automatic
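The fixed pulldown patterns that inverse telecine undoes can be illustrated with labeled fields. This sketch assumes one known 2:3-style pattern with no offset; Cinelerra's effect additionally handles pattern offsets, field order, and an automatic mode:

```python
def inverse_telecine(fields):
    """Undo a fixed 2:3 pulldown pattern (illustrative sketch only).

    `fields` is a flat list of field labels laid out as the interlaced
    frames AA, AB, BC, CD, DD per group of 5 frames (10 fields); we
    recover the 4 original film frames of each group."""
    frames = []
    for g in range(0, len(fields), 10):
        group = fields[g:g + 10]
        if len(group) < 10:
            break
        # fields 0, 3, 5, 7 carry A, B, C, D in the assumed pattern
        frames.extend([group[0], group[3], group[5], group[7]])
    return frames

# one pulldown group: 5 interlaced frames (10 fields) -> 4 film frames
fields = ["A", "A", "A", "B", "B", "C", "C", "D", "D", "D"]
print(inverse_telecine(fields))
```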

The first two options are fixed patterns and are affected by the pattern offset and odd field first parameters. The last option creates several combinations of lines for each frame and picks the most progressive combination. It is a brute force algorithm. This technique does not rely on a pattern like the other techniques and is less destructive, but the timing is going to be jittery because of the lack of a frame rate reduction. In order to smooth out the timing, you need to follow inverse telecine with a decimate effect.

14.4.29 Invert video

Invert video is a method of reversing the colors of a video track. The parameters select which channels to invert: Red, Green, Blue, Alpha.

14.4.30 Linear blur

Blur has three styles: Linear, Radial, and Zoom. The parameters refer to:

Length
Distance between the original image and the final blur step.

Angle
Angle of motion, for linear blur.

Steps
Number of blur steps.

Channels
Which channel(s) to blur (Red, Green, Blue, Alpha).

14.4.31 Live video

This effect reads video directly from the capture card input. It replaces any video on the track, so it is normally applied to an empty track. The configuration for the capture card is taken from the recording preferences. Go to Settings->Preferences->Recording to set up the capture card.

For live video, the selection for File Format and Video needs to be set to a format the timeline can use. The file format must be Quicktime for Linux, and video recording must be enabled for it. Click on the wrench to set the video compression. The video compression depends on the recording driver. For the Video4Linux2 recording driver, the compression must be Motion JPEG A. For the IEC 61883 driver, the compression must be DV. This gets the driver to generate output in a colormodel that the timeline can use.

Go to the Video In section where it says Record driver. It must be set to either Video4Linux2 or IEC 61883. Other video drivers have not been tested with Live Video and probably will not work.

Go to File->Record to bring up the recording interface and the Video In window. Any channels the capture card supports need to be configured in the Video In interface, since the same channels are used by the Live Video effect. Some cards provide color and channel settings; Live Video takes its color settings from the values set in the Video In window.

With the video recording configured, highlight a horizontal region of a video track or define in and out points. Then drop the Live Video effect into it. Drop other effects after Live Video to process the live video in realtime. For best results, you should use OpenGL and a video card which supports the GL shading language. Go to Settings->Preferences->Playback->Video Out to enable the OpenGL driver. Only one Live Video effect can exist at any time on the timeline, and it can not be shared by more than one track.

14.4.32 Loop video

Sections of video can be looped by dropping a loop effect on them. Contrary to the settings->loop playback option, the loop effects can be rendered, where the settings->loop playback option can not be. The loop effects are also convenient for short regions.

The loop effects have one option: the number of frames or samples to loop. This specifies the length of the region to loop, starting from either the beginning of the effect or the latest keyframe. The region is replicated for the entire effect.

Every time a keyframe is set in a loop effect, the keyframe becomes the beginning of the region to loop. Setting several keyframes in succession causes several regions to loop. Setting a single keyframe causes the region after the keyframe to be looped throughout the effect, no matter where the keyframe is. The end of an effect can be looped from the beginning by setting the keyframe near the end.

14.4.33 Motion

The motion tracker is almost a complete application in itself. The motion tracker tracks two types of motion: translation and rotation. It can track both simultaneously or only one. It can do 1/4 pixel tracking or single pixel tracking. It can stabilize motion or cause one track to follow the motion of another track.

The motion tracker works by using one region of the frame as the region to track. It compares this region between 2 frames to calculate the motion. This region can be defined anywhere on the screen. Once the motion between 2 frames has been calculated, a number of things can be done with that motion vector. It can be thrown away or accumulated with all the motion vectors leading up to the current position. It can be saved for later reuse, recalled from a previous calculation, or discarded. It can be scaled by a user value and clamped to a maximum range.

Although the motion tracker is applied as a realtime effect, it usually must be rendered to see useful results. The effect takes a long time to precisely detect motion, so to save time the motion result can be saved for later reuse.
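The loop effect's behavior — a keyframe marks the start of a region that then repeats for the rest of the effect — can be sketched with simple frame indexing (an illustration, not the effect's code):

```python
def looped_frame(frames, keyframe, loop_length, position):
    """Return the frame shown at `position` when the `loop_length`
    frames starting at `keyframe` repeat for the rest of the effect."""
    if position < keyframe:
        return frames[position]                # before the keyframe: play through
    offset = (position - keyframe) % loop_length
    return frames[keyframe + offset]           # inside the repeating region

clip = list("abcdefgh")
# loop the 3 frames starting at frame 2 ("c", "d", "e")
print([looped_frame(clip, 2, 3, p) for p in range(8)])
```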

The motion tracker has a notion of 2 tracks, the master layer and the target layer. The master layer is where the comparison between 2 frames takes place. The target layer is where motion is applied, either to track or to compensate for the motion in the master layer. The intricacies of motion tracking are enough to sustain entire companies and build careers around. The motion tracker in Cinelerra is not as sophisticated as some world class motion trackers, but it is enough to sweeten some camcorder footage. Here is a brief description of the motion tracking parameters:

Track translation
Enables translation operations. The motion tracker tracks X and Y motion in the master layer and adjusts X and Y motion in the target layer.

Translation block size
For the translation operations, a block is compared to a number of neighboring blocks to find the one with the least difference. The size of the block to search for is given by this parameter.

Translation search radius
The size of the area to scan for the translation block.

Translation search steps
Ideally the search operation would compare the translation block with every other pixel in the translation search radius. To speed this operation up, a subset of the total positions is searched. Then the search area is narrowed and rescanned by the same number of search steps until the motion is known to 1/4 pixel accuracy.

Block X, Y
These coordinates determine the center of the translation block, based on percentages of the width and height of the image. The center of the block should be part of the image which is visible at all times.

Track rotation
Enables rotation operations. The motion tracker tracks rotation in the master layer and adjusts rotation in the target layer.

Rotation block size
For rotation operations a single block is compared to equally sized blocks, each rotated by a different amount. This is the size of the rotation block.

Rotation search radius
This is the maximum angle of rotation from the starting frame the rotation scanner can detect. The rotation scan is from this angle counterclockwise to this angle clockwise. Thus the rotation search radius is half the total range scanned.

Rotation search steps
Ideally every possible angle would be tested to get the rotation. To speed up the rotation search, the rotation search radius is divided into a finite number of angles and only those angles are compared to the starting frame. Then the search radius is narrowed and an equal number of angles is compared in the smaller radius until the highest possible accuracy is achieved. Normally you need one search step for every degree scanned. Since the rotation scanner scans the rotation search radius in two directions, you need two steps for every degree in the search radius to search the complete range.

Maximum absolute offset
The amount of motion detected by the motion tracker is unlimited if this is 100. If it is under 100, the amount of motion is limited by that percentage of the image size.

Settling speed
The motion detected between every frame can be accumulated to form an absolute motion vector. If the settling speed is 100, the absolute vector is added to the next frame. If the settling speed is less than 100, the absolute vector is downscaled by the settling amount before being added to the next frame.

Calculation
This determines whether to calculate the motion at all and whether to save it to disk. If it is Don't Calculate, the motion calculation is skipped. If it is Recalculate, the motion calculation is performed every time each frame is rendered. If it is Save, the motion calculation is always performed but a copy is also saved. If it is Load, the motion calculation is loaded from a previous save calculation. If there is no previous save calculation on disk, a new motion calculation is performed.

Track single frame
When this option is used, the motion between a single starting frame and the frame currently under the insertion point is calculated. The starting frame is specified in the Frame number blank. The motion calculated this way is taken as the absolute motion vector. The absolute motion vector for each frame replaces the absolute motion vector for the previous frame. Settling speed has no effect on it, since it does not contain any previous motion vectors. Playback can start anywhere on the timeline, since there is no dependence on previous results.

Track previous frame
Causes only the motion between the previous frame and the current frame to be calculated. This is added to an absolute motion vector to get the new motion from the start of the sequence to the current position. After every frame processed this way, the block position is shifted to always cover the same region of the image. Playback must be started from the start of the motion effect in order to accumulate all the necessary motion vectors.

Previous frame same block
This is useful for stabilizing jerky camcorder footage. In this mode the motion between the previous frame and the current frame is calculated, like Track Previous Frame does. Instead of adjusting the block position to reflect the new location of the image, the block position is unchanged between each frame. Thus a new region is compared for each frame.

Master layer
This determines the track which supplies the starting frame and ending frame for the motion calculation. If it is Bottom, the bottom track of all the tracks sharing this effect is the master layer. The top track of all the tracks is the target layer.

Action
Once the motion vector is known, this determines whether to move the target layer opposing the motion vector or following the motion vector. If it is Do Nothing, the target layer is untouched. If it is Track, the target layer is moved by the same amount as the master layer. This is useful for matching titles to objects in the frame. If it is Stabilize, the target layer is moved opposite to the motion vector. This is useful for stabilizing an object in the frame. The motion operations can be accurate to single pixels or subpixels by changing the action setting.

Draw vectors
When translation is enabled, 2 boxes are drawn on the frame. One box represents the translation block. Another box outside the translation block represents the extent of the translation search radius. In the center of these boxes is an arrow showing the translation between the 2 master frames. When rotation is enabled, a single box the size of the rotation block is drawn, rotated by the amount of rotation detected.
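The translation search is essentially block matching. The minimal sketch below compares one block against neighboring positions on a 1-D "frame" and keeps the offset with the least sum of absolute differences (SAD); the real tracker works in 2-D, subsamples the search positions, and refines to 1/4 pixel accuracy:

```python
def best_offset(prev, cur, block, radius):
    """Find the shift of a block between two frames (illustrative sketch).

    prev, cur: frames as 1-D lists of pixel values
    block:     (start index, size) of the block in the previous frame
    radius:    how far to search in each direction"""
    x, size = block
    target = prev[x:x + size]
    best, best_sad = 0, float("inf")
    for dx in range(-radius, radius + 1):
        if x + dx < 0 or x + dx + size > len(cur):
            continue                              # candidate falls off the frame
        candidate = cur[x + dx:x + dx + size]
        sad = sum(abs(a - b) for a, b in zip(target, candidate))
        if sad < best_sad:                        # keep the least different position
            best, best_sad = dx, sad
    return best

prev = [0, 0, 9, 9, 9, 0, 0, 0]
cur  = [0, 0, 0, 0, 9, 9, 9, 0]   # the bright block moved right
print(best_offset(prev, cur, block=(2, 3), radius=3))
```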

14.4.33.1 Secrets of motion tracking

Since it is a very slow effect, there is a method to applying the motion tracker to get the most out of it. First disable playback for the track to do motion tracking on. Then drop the effect on a region of video with some motion to track. Then rewind the insertion point to the start of the region. Set Action -> Do Nothing. Set Calculation -> Don't calculate. Enable Draw vectors. Then enable playback of the track to see the motion tracking areas.

Then set the search radius, block size, and block coordinates for translation and rotation. Enable whichever of the translation motion or rotation motion vectors you want to track. By watching the compositor window and adjusting the Block x,y settings, center the block on the part of the image you want to track. Then rewind the insertion point to the start of the region.

Once this is configured, set the calculation to Save coords and do test runs through the sequence to see if the motion tracker works and to save the motion vectors. Once this is done, disable playback for the track, disable Draw vectors, set the motion action to perform on the target layer, and change the calculation to Load coords. Finally, enable playback for the track.

When using a single starting frame to calculate the motion of a sequence, the starting frame should be a single frame with the least motion to any of the other frames. This is rarely frame 0. Usually it is a frame near the middle of the sequence. This way the search radius need only reach halfway to the full extent of the motion in the sequence.

If the motion tracker is used on a render farm, Save coords and previous frame mode will not work. The results of the save coords operation are saved to the hard drives on the render nodes, not the master node. Future rendering operations on these nodes will process different frames and read the wrong coordinates from the node filesystems. The fact that render nodes only visualize a portion of the timeline also prevents previous frame from working, since it depends on calculating an absolute motion vector starting on frame 0.

14.4.33.2 2 pass motion tracking

The method described above is 2 pass motion tracking. One pass is used just to calculate the motion vectors. A second pass is used to apply the motion vectors to the footage. This is faster than a single pass because errors in the motion vector calculation can be discovered quickly. This also allows the motion tracking to use a less demanding colormodel like RGB888 in the scanning step and a more demanding colormodel like RGB Float in the action step; the scanning step takes much longer than the action step. This suffers the disadvantage of not being practical for extremely long sequences where some error is acceptable and the picture quality is lousy to begin with.

The slower method is to calculate the motion vectors and apply them simultaneously. This method can use one track as the motion vector calculation track and another track as the target track for motion vector actions. This is useful for long sequences where some error is acceptable.

14.4.33.3 Using blur to improve motion tracking

With extremely noisy or interlaced footage, applying a blur effect before the motion tracking can improve accuracy. Either save the motion vectors in a tracking pass and disable the blur for the action pass, or apply the blur just to the master layer.

14.4.33.4 Using histogram to improve motion tracking

A histogram is almost always applied before motion tracking to clamp out noise in the darker pixels. Either save the motion vectors in a tracking pass and disable the histogram for the action pass, or apply the histogram just to the master layer.

14.4.33.5 Motion tracking in action

First, add a motion effect to the track. Drag it from the resource window and drop it directly over the video in Cinelerra's main window. Then right-click on the motion effect marker in the timeline and select show to see the motion tracker dialog. You should see something similar to this:

Start by looking at your Compositor. You will see some new boxes overlaid on the video. These are important to control the motion tracker. Here is a quick shot of what it will look like when working:

The middle small box is the target of the tracker. The middle larger box is the search range for the tracker; it should contain the full range of motion for the tracking target. The left pointing vector indicates the motion tracker attempting to find the target. In this example, we are trying to track the hanging handle. We have failed in this video frame, because the handle is far right of the center of frame. The image above shows the motion tracker losing track of the object because of a search window that is too small.

Move to the beginning of your video clip. Make sure the motion tracker dialog is open. Look at the Compositor. Start adjusting these four knobs:

Make sure you check Track Translation. Uncheck Track Rotation.

Start with knob two - Translation block size - and spin it to get an idea of what is changing, but quickly: notice that both boxes resize. Adjust it to the size of the target (the object you want to track). Look at the small inside box. Do not worry if it does not cover the object yet. More on this later.

Go on to knobs three and four - Block X and Block Y. Use these to put the target designator over the target itself. We will talk about this more later.

Finally, use the top knob - Translation search radius. Expand it to include the full range of travel you expect for the target. Make the first video frame look similar to this:

This image shows a lot of detail. Notice that the small frame is centered over the handle and sized to just include it. Notice that the outer frame is larger than the back and forth movement of the handle in the entire video clip. If you look back at my original action shot, the search radius was too small and the target moved outside the range.

Finally, here are the other settings needed to see the effect:

Draw vectors
Uncheck this to prevent rendering of the target boxes and motion vectors in your rendered video. If checked, the vectors and boxes are rendered in the output video.

Action
Select the Stabilize option to have the rendered video follow the motion of the target. Select a Track option to run motion tracking without adjusting the video.

Calculation
Track Single Frame. For this example it is set with a Frame number of 0 (the first frame).

Master Layer
If the effect is shared between two tracks, it specifies which of those tracks will be the one where motion is tracked (Master Layer) and which layer will be affected by the resulting translation vectors (Target Layer). If there is no second track sharing the motion tracker, then Master=Target.

You can test this by playing the timeline and viewing the results (if your machine is fast enough for realtime) or by rendering and viewing the stabilized handle in the output.

14. we will explain how to stabilize a video. Such a need can arise when the video was taken from a vehicle for example.4.33.4. and render the part of the video where the motion effect is applied. Then. Each frame gets an separate file in /tmp directory which contains its vector.
. Enlarge the block and select almost half the size of the video. The more your footage is jerky.34 Motion blur
FIXME
14. and import it into your project. Recalculate Perform motion tracking and update video per Action setting. Reduce the "Maximum absolute offset" value to limit the stabilization amplitude. the more you have to zoom in to discard the black borders. Intensity of colors can be chosen as option. using the in and out points. Save and Load Saves/Loads the translation/rotation vectors (absolute or relative) to/from files. Its goal is not to "follow" an object. than having a very large black border on one side of the picture during big shakes.35 Oil painting
This effect makes video tracks appears as a painting. That option is recommended for stabilizing jerky camcorder footage. It can be controlled by Radius slider. You will notice the video is stabilized but there are black borders which appear on sides of the frame.Chapter 14: Realtime effects
111
Calculation Don’t Calculate select this option to turn off adjustment of video. Make sure the "Draw vectors" option is selected. Select the "Previous frame same block" option. If the result is good. The block stays exactly at the same place during all the effect length. in order to remove those black borders. render your video to a ‘. Select the "Stabilize subpixel" option: it will give a finer stabilization.6 Tracking stabilization in action
14.4. deselect the "Draw vectors" option. That is why the result is better with HDV footage than with DV footage. You have to zoom in and define projector keyframes to move the projector around the screen.dv’ file. Then apply the motion effect on that part of the video. You probably prefer to get a non-perfect stabilization on some places on the video.
In this section. Increasing that value will not give a better result. Set the "Translation search steps" value to 128. but will considerably increase the rendering time. The block and vectors were not drawn anymore on the video. First select on the timeline the part of the footage you want to stabilize.


Chapter 14: Realtime effects

14.4.36 Overlay video

This effect can combine several tracks by using the so-called Overlayer. This is a basic internal device normally used by Cinelerra to create the (dissolve) transitions and for compositing the final output of every track onto the output bitmap. The Overlayer has the ability to combine one or several image layers on top of a "bottom layer". It can do this combining of images in several different (and switchable) output modes: Normal, Additive, Subtractive, Multiply (Filter), Divide, Max and Replace. For a detailed explanation of the several overlay modes, See Section 8.2 [Compositing], page 62.

Now, the overlay plugin enables the use of this Overlayer device in the middle of any plugin stack, opening endless filtering and processing possibilities. It is only useful as a shared plugin (i.e. a multitrack plugin). So, to use the overlay plugin:

1. Add the effect to Track A.
2. Choose "attach effect" from the context menu of another track (Track B).
3. Choose "Track A:Overlay" as a shared plugin.
4. Manipulate the plugin parameters in Track A.

In the Overlay Plugin's parameter window you can choose the overlay order, i.e. which track plays the role of the "bottom layer" and which plays the role of the "top layer". For some overlay modes, this can make quite a difference, e.g. the top layer is subtracted from the bottom layer in "Subtractive" mode. Further on, you can choose on which of the tracks to overlay the combined output. (Hint: in most cases, you will want to mute the other track and only retain this combined output.)

14.4.37 Perspective

The perspective effect allows you to change the perspective of an object, and is perfect for making objects appear as if they are fading into the distance.

14.4.38 Polar

The Polar effect bends and warps your video in weird ways. Mathematically, it converts your video from either polar coordinates to rectangular coordinates, or the reverse.

14.4.39 RGB-601


For analog video or MPEG (including DVD) output, the maximum range for R, G, B is [16, 235] (8-bit). For YUV, the maximum range for intensity (Y) is [16, 235] (8-bit). This range corresponds to gray levels from 6% to 92%. When rendering, values outside of these ranges will be clipped to these limits.

To render to MPEG, add the RGB-601 effect to all video tracks where material uses the full intensity scale (0-100%), and enable RGB -> 601 compression. Consider adding the Videoscope effect after RGB-601 to see how RGB-601 affects your dynamic range. See Section 14.4.57 [Videoscope], page 120. (To preview how your rendered MPEG would look without RGB-to-601 compression, instead enable 601 -> RGB expansion; you will observe a noticeable contrast increase.) Although RGB-601 will reduce contrast in your video tracks, the contrast will be restored during MPEG playback.
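The 6% and 92% gray levels quoted above follow directly from the [16, 235] limits. A quick check (plain arithmetic, no Cinelerra involved):

```shell
# Express the ITU-R BT.601 8-bit limits [16, 235] as percentages of the
# full range [0, 255]; these round to the 6% and 92% quoted above.
awk 'BEGIN { printf "%.0f%% %.0f%%\n", 16/255*100, 235/255*100 }'
```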

14.4.40 Radial blur

It creates a whirlpool blur that simulates a swirling camera. You can vary the location, type, and quality of the blur.

14.4.41 ReframeRT

ReframeRT changes the number of frames in a sequence of video directly from the timeline. It has two modes, selected by the two toggles in the GUI.

Stretch mode multiplies the current frame number of its output by the scale factor to arrive at the frame to read from its input. If its current output frame is #55 and the scale factor is 2, frame #110 is read from its input. The stretch mode has the effect of changing the length of output video by the inverse of the scale factor. If the scale factor is greater than 1, the output will end before the end of the sequence on the timeline. If it is less than 1, the output will end after the end of the sequence on the timeline. The ReframeRT effect must be lengthened to the necessary length to accommodate the scale factor. Change the length of the effect by clicking on the endpoint of the effect and dragging.

Although stretch mode changes the number of the frame read from its input, it does not change the frame rate of the input. Effects before ReframeRT assume the same frame rate as ReframeRT.

ReframeRT in stretch mode can be used to create a fast play effect: select Stretch mode and enter a value greater than 1 to get accelerated playback. For a slow motion effect, use a ReframeRT effect in stretch mode with a value less than 1.

Example: you have a clip that you want to put in slow motion. The clip starts at 33.792 seconds and ends at 39.765. The clip is 5.973 seconds long. You want to play it at 4/10ths normal speed. You divide the clip length by the playback speed (5.973/.4) to get a final clip length of 14.9325 seconds. You create an in point at the start of your clip: 33.792 seconds. You put an out point 14.9325 seconds later, at 48.7245 seconds (33.792 + 14.9325). You attach a ReframeRT effect, set it to .4 and stretch. You change the out point at 48.7245 to an in point. You start your next clip after the slow motion effect at the 48.7245 out point.
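The timing arithmetic from the slow-motion example can be verified with a one-liner (the numbers are the ones used in the example above):

```shell
# A 5.973 s clip played at 0.4x speed stretches to 14.9325 s, so the
# out point moves from 39.765 s to 33.792 + 14.9325 = 48.7245 s.
awk 'BEGIN {
  t0 = 33.792; t1 = 39.765; speed = 0.4
  len = t1 - t0              # original clip length
  stretched = len / speed    # length after the ReframeRT stretch
  printf "length=%.3f stretched=%.4f out=%.4f\n", len, stretched, t0 + stretched
}'
```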


You can also change the frame rate of the clip if you right-click on it in the media viewer and go to Info. If you do not hit the drop-down first, you can type in a desired frame rate as well. Cinelerra will pick the right frames out for the project frame rate, effectively doing the time-lapsing as well.

Downsample mode does not change the length of the output sequence. It multiplies the frame rate of the output by the scale factor to arrive at a frame rate to read the input. This has the effect of replicating the input frames so that they only change at the scaled frame rate when sent to the output. If the scale factor is 0.5 and the output frame rate is 30 fps, only 15 frames will be shown per second and the input will be read at 15 fps. Downsample is only useful for scale factors below 1, hence the name downsample.

Downsample mode changes the frame rate of the input as well as the number of the frame to read, so effects before ReframeRT see the frame rate * the scale factor as their frame rate. If the scale factor is 2 and the output frame rate is 30, the input frame rate will be 60 and the input frame number will be doubled. This will not normally do anything, but some input effects may behave differently at the higher frame rate.
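The frame mappings of the two modes can be summarized numerically (the figures are the ones from the text above; this is just arithmetic, not Cinelerra's code):

```shell
# Stretch: input_frame = output_frame * scale (length changes by 1/scale).
# Downsample: input_rate = output_rate * scale (length is unchanged).
awk 'BEGIN {
  printf "stretch: output frame #55, scale 2 -> input frame #%d\n", 55 * 2
  printf "downsample: 30 fps output, scale 0.5 -> input read at %g fps\n", 30 * 0.5
}'
```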

14.4.42 Reroute

FIXME

Reroute enables you to selectively transfer the Alpha channel or the Components (RGB or YUV), or both, from a source track to a target track, partially overwriting the target's contents. It works as a shared plugin. The typical usage scenario is to build up a possibly animated Mask in one track and then to transfer the Alpha channel to another content track.

14.4.43 Reverse video

Media can be reversed on the timeline in realtime. This is not to be confused with using the reverse playback on the transport. The reverse effects reverse the region covered by the effect regardless of the transport direction. The region to be reversed is first determined by what part of the track the effect is under and second by the locations of keyframes in the effect. The reverse effects have an enabled option which allows you to set keyframes. This allows many possibilities. Every enabled keyframe is treated as the start of a new reversed region and the end of a previous reversed region. Several enabled keyframes in succession yield several regions reversed independently of each other. An enabled keyframe followed by a disabled keyframe yields one reversed region followed by a forward region.

14.4.44 Rotate

The Rotate filter can rotate the video in 90 degree increments, reverse and flip the video.


14.4.45 SVG via Inkscape

FIXME

14.4.46 Scale

FIXME

14.4.47 Selective temporal averaging

This plugin is designed to smooth out non-moving areas of a video clip. The smoothing is performed by averaging the color component for each pixel across a number of frames. The smoothed value is used if both the standard deviation and the difference between the current component value and the average component value are below a threshold.

The average and standard deviation are calculated for each of the components of the video. The type of components averaged is determined by the color model of the entire project. The average and standard deviation of the frames can be examined by selecting the specific radio button in the plugin options window.

The region over which the frames are averaged is determined by either a fixed offset or a restart marker system. In a restart marker system, certain keyframes are marked as the beginning of sections. Then for each section, the frames surrounding the current frame are used as the frames to average over, except when approaching the beginning and end of a section, whereby the averaging is performed over the N beginning or ending frames respectively.

Common usage: first you have to select the number of frames you wish to average.

1. Enter a reasonable number of frames to average (e.g. 10).
2. Select the Selective Temporal Averaging method and enter 1 and 10 for all the Av. Thres. and S.D. Thres. respectively. This basically causes all pixels to use the average value.
3. Turn on the mask for the first component. This should make the whole frame have a solid color of that specific component.
4. Slowly reduce the S.D. Thres. value. As you do so, you will notice that the regions vastly different from the average will have a flipped mask state. Continue to reduce the threshold until you reach the point at which non-moving regions of the video have a flipped mask state. This value is known as the noise floor and is the level of natural noise generated by the CCD in the camera.
5. Repeat the same procedure for the Av. Thres.
6. Turn off the mask.
7. Repeat this for all channels.
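The selection rule can be sketched numerically. The sample values and thresholds below are made up for illustration; the plugin applies the same kind of test per pixel and per component:

```shell
# Average five frames' values for one pixel, then accept the average only
# if the standard deviation and |current - average| are under thresholds.
printf '%s\n' 100 102 98 101 99 | awk -v avthr=4 -v sdthr=3 '
  { v[NR] = $1; sum += $1; sumsq += $1 * $1 }
  END {
    avg = sum / NR
    sd = sqrt(sumsq / NR - avg * avg)
    cur = v[NR]                       # treat the last frame as current
    diff = cur > avg ? cur - avg : avg - cur
    use = (sd < sdthr && diff < avthr) ? 1 : 0
    printf "avg=%.1f sd=%.2f use_average=%d\n", avg, sd, use
  }'
```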


14.4.48 Sharpen

FIXME

14.4.49 ShiftInterlace

FIXME

14.4.50 Swap channels

FIXME

14.4.51 Threshold

Threshold converts the image to pure luminance, and replaces pixels with one of three colors based on the luminance. Pixels with luminance values in the low range are replaced with black, pixels in the middle range are replaced with white, and pixels in the high range are replaced with black. Color and alpha for each range are configurable and interpolate according to keyframes. The threshold window shows a histogram of luminance values for the current frame. Click dragging inside the histogram creates a range to convert to white. SHIFT-clicking extends one border of this range. Values for the threshold range can also be specified in the text boxes. This effect is basically a primitive luminance key. A second track above the track with the threshold effect can be multiplied, resulting in only the parts of the second track within the threshold being displayed.
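The three-range mapping can be sketched with hypothetical luminance values and an assumed range of [0.3, 0.6]; the real effect additionally lets you configure the color and alpha used for each range:

```shell
# Threshold's basic mapping: luminance inside [low, high] becomes white,
# luminance outside it becomes black (numbers here are illustrative).
printf '%s\n' 0.10 0.45 0.80 | awk -v low=0.3 -v high=0.6 '
  { if ($1 >= low && $1 <= high) print "white"; else print "black" }'
```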

14.4.52 Time average

Time average is one effect which has many uses besides creating nifty trail patterns of moving objects. Its main use is reducing noise in still images. Merely point a video camera at a stationary subject for 30 frames, capture the frames, and average them using Time average and you will have a super high quality print. In floating point colormodels, Time average can increase the dynamic range of lousy cameras. In combination with motion tracking it allows entire sequences to be combined to form panoramas.

Inside the Time average effect is an accumulation buffer and a divisor. A number of frames are accumulated in the accumulation buffer and divided by the divisor to get the average.

Frames to average: This determines the number of frames to be accumulated in the accumulation buffer. For extremely large integrations it is easier to edit the EDL in a text editor and put in the number of frames.

Accumulate: This outputs the accumulation buffer without dividing it.

Average: This causes the accumulation buffer to be divided before being output. This results in the average of all the frames.

Inclusive Or: This causes the accumulation buffer to be replaced by any pixels which are not transparent.

Reprocess frame again: If an effect before the Time average is adjusted, the Time average normally does not reread the accumulation buffer to get the change. This option forces it to reread the accumulation buffer when other effects change.

Disable subtraction: In order to represent the accumulation of only the specified number of frames, the Time average retains all the previous frames in memory and subtracts them out as it plays forward. It would run out of memory if it had to accumulate thousands of frames. By disabling subtraction the previous frames are not stored in memory and only the average function is affected by the frame count.

Because the Time average can consume enormous amounts of memory, it is best applied by first disabling playback for the track, dropping the Time average in it, configuring Time average for the desired number of frames, and re-enabling playback for the track.

14.4.53 TimeFront

This is a warping framework plugin based on this article:

http://www.vision.huji.ac.il/videowarping/HUJI-CSE-LTR-2005-10_etf-tr.pdf

14.4.54 Title

While it is possible to add text to movies by importing still images from The Gimp and compositing them, the Titler allows you to add text from within Cinelerra.

The titler has standard options for font, size, and style, plus options you will only find in moving pictures. The best font is a generic, normal font like Arial in a large size. Text options can only be applied to all the text as a whole: if you want your title text formatted with a mixture of fonts, sizes, styles, colours, alignments etc., you need to use multiple title effects.

Justify: The Justify operation justifies the text relative to the entire frame. Once justified, the X and Y offset is applied. This allows text to be justified while at the same time letting you push it within the title-safe region.

Motion: The motion type scrolls the text in any of the four directions. When using this, the text may disappear. Make sure the speed is set to a reasonably high value (try 150) and move the insertion point along the timeline until the text is far enough along the animation to reappear. The text scrolls on and scrolls off. Setting loop causes the text to scroll completely off and repeat; without loop the text scrolls off and never reappears.

Speed: The speed of the animation is determined by speed, in pixels per second. Set it higher to speed up the animation.

Drop shadow: Draws a black copy of the text to the bottom right of the original text. This is useful when drawing text over changing video to keep the border always visible.

Fade in/Fade out: These are a second type of animation. If the fade seconds are 0, no fading is done.

Color: Picks the color to draw the text in. Usually white is the only practical color.

Stamp timecode: Replaces the text with the current position on the timeline in seconds and frames.

The titler input is limited to 1023 characters. Titles longer than 1023 characters will be accepted by the software, but they will likely cause lock-ups. See bug 155 (http://bugs.cinelerra.org/show_bug.cgi?id=155) to know more.

To include graphical elements like logos, you may want to import your title as a PNG image (alpha channel transparency is possible), move it with camera and projector, or add effects. For moving your title, use the compositor projector. To create special effects for your title you can place it on a dedicated track and insert other realtime video effects just under the title effect and/or use camera and projector. For improving playback performance of titles with effects, you can reduce the size of the dedicated track: right-click on the track and select Resize track, then enter the smallest resolution that still keeps the title visible.

Thanks to keyframing you can animate your title and make it change position, size, color, transparency, texture, or shape over time. If you enable the automatic keyframes toggle, a new keyframe is created each time you edit the text. Check View -> Plugin autos to make them visible on the timeline. The title effect supports keyframes only for Justify and Text, with no interpolation.

To add subtitles to your movie you can set a single title effect and then define keyframes. To correct an existing subtitle, the automatic keyframes toggle must be off; in the text input box you will see the subtitle displayed under the insertion point. To adjust the timing of subtitles, simply drag the keyframes. Note: for adding subtitles on a separate stream, you need an external subtitle editor. See Section 21.13 [Adding subtitles], page 166, for more information.

14.4.54.1 Adding fonts to the titler

The X Window system originally did not have a suitable font renderer for video. It also is restricted to the current bit depth. It does not have a convenient way to know which fonts work with the suitable font renderer in the desired bit depth. The easiest way we have found to support fonts in the titler is to have a directory for them at '/usr/lib/cinelerra/fonts'. The titler supports mainly TTF, true type fonts. It supports others, but TTF are the most reliable. To add true type fonts, copy the '.TTF' files to the '/usr/lib/cinelerra/fonts' directory. In that directory run ttmkfdir && mv fonts.scale fonts.dir and restart Cinelerra. The new fonts should appear. The usage of ttmkfdir changes frequently, so this technique might not work.

14.4.54.2 The title-safe region

If the video is displayed on a consumer TV, the outer border is going to be cropped by 5% on each side. Moreover, text which is too close to the edge looks sloppy. To keep the text in the title-safe region, make sure the title-safe tool is active in the compositor window. The text should not cross the inner rectangle.

14.4.55 Translate

This effect allows displacing, cropping, and/or scaling the source video horizontally and/or vertically. The In and Out parameters operate similar to the camera and projector functions in the Compositor: In X/Y specifies how many pixels from the left/top of the source you want to start (camera), while Out X/Y defines where on the screen you want the output to start (projector). In W/H defines how many pixels of the source you want to include in each direction, while Out W/H defines how many pixels on the screen you want that source to take up. Identical values for both In and Out that are less than the source dimension will simply crop the source. Different values will stretch (or compress if Out > In) the source in that direction (and crop if In is less than the source dimension). This effect supports keyframes, so these parameters can change smoothly over time. You can use this effect for many things, such as having a cropped inset clip move across the screen, or having it change size or stretch while doing so. Be forewarned though, that for interlaced footage horizontal displacements are likely to destroy the field order, resulting in all sorts of flickering and jumping movements.

14.4.56 Unsharp

This effect applies a traditional darkroom technique, the so-called unsharp mask, to every video frame. Its parameters are:

Amount: Moving the slider to the right makes dark areas get darker and light areas get lighter. With different parameter values, this can be used to soften or to sharpen the image.

Radius: This slider controls how much blurring is used in the edge-finding stage. The practical effect of this is to specify how large a region is darkened or lightened.

Threshold: This slider controls how big a difference between a pixel in the blurred copy and the original copy is needed before any darkening or lightening will be applied.

14.4.57 Videoscope

The Videoscope summarizes intensity and color on a calibrated display. The Videoscope can be used in conjunction with other Cinelerra plugins such as YUV, HUE, Brightness, Color Balance or Histogram to accurately correct video for contrast, clarity, conformance (to normalize various videos shot under different light settings), or for cinematic purposes. The human eye is not specialized to match precise levels of light and color, but the Videoscope is. Some thought is being given to having a video scope for recording. Unfortunately, this would require a lot of variations of the video scope for all the different video drivers. The Videoscope contains two displays: the waveform scope and the vectorscope.

14.4.57.1 The waveform scope

The Waveform Scope appears on the left side of the Videoscope window. It displays image intensity (luminance) versus image X position. The display is calibrated vertically from 0% intensity (black) at the bottom up to 100% intensity at the top. Each column of pixels in the image corresponds to one column of pixels in the Waveform Scope. In more complex images, multiple levels in the same column are represented with multiple pixels on the scope.

The color bar test image is plotted in the waveform display as a stair-step set of lines. In this example, the waveform display and the test image are aligned to show that each stair step corresponds with one color bar. The waveform display shows the white bar at the 75% level because the colors in the test image are 75% values. The white bar has the highest luminance because it contains all color components.

The Waveform Scope helps correct image light levels for contrast range or for conforming light levels on various scenes originally shot with different light settings. Adjusting light levels (adjusting luminance):

1. Insert the Brightness/Contrast, YUV, or another video adjustment effect on your track.
2. Insert the Videoscope effect on the track below. Make sure that it is placed below so it can see the adjustment effect's results. If it is not, right-click and move it down.
3. Show both the effect and the Videoscope.
4. Adjust the effect while observing the waveform to match the desired light level.

If you are looking for maximum contrast range, adjust the Brightness/Contrast levels to align the darkest point on the scope with the 0% level and the brightest portion with 100%. Anything above 100% is oversaturated. Limits which may be highlighted with checkbox controls:

HDTV or sRGB (ITU-R BT.709): The maximum pixel range for HDTV or sRGB is [0, 255]. This range corresponds with levels 0% and 100%.

MPEG or Analog video (ITU-R BT.601): For analog video or MPEG (including DVD), the maximum range for RGB is [16, 235] (8-bit). For YUV, the maximum range for intensity (Y) is [16, 235] (8-bit). This range corresponds to gray levels from 6% to 92%. See Section 14.4.39 [RGB-601], page 112.

NTSC Television broadcast: If you are producing a video for NTSC television broadcast, keep the intensity between 7.5% and 100%. The minimum black value which can be broadcast is IRE 7.5% (indicated by the "7.5" level), and values below this level are no darker.

14.4.57.2 The vectorscope

The Vectorscope displays color and color saturation. Each pixel in the source image is drawn as a point on the color wheel. The distance from the center is the color saturation: gray values are close to the center, and high saturation values are near the perimeter. The Vectorscope is used with other plugins to correct color, adjust image tint, and apply other effects for cinematic purposes, image correction, or to conform images to look the same.

In this example, the Vectorscope shows many pixels in the yellow region and few in the white region. To remove the yellow tint, the Color Balance effect is used to first shift the vectorscope plot towards magenta (Mg), and then towards blue (B) until the region previously near the center surrounds the center. The top image is white balanced; in the bottom image, yellow highlights have become white highlights (arrows). Note that the corresponding features in the waveform also appear whiter (arrows).

The Vectorscope can also be used to verify that the video output will display properly on various monitors. Any points along the inner radius will be displayed as pure white, and any points above the 100% radius will probably not be correctly displayed on the screen.

14.4.58 Wave

The wave effect adds waves on the image. You can adjust the following parameters: amplitude, phase, and wavelength.

15 Rendered effects

Another type of effect is performed on a section of the track and the result stored somewhere before it is played back. The result is usually pasted into the track to replace the original data. The rendered effects are not listed in the resource window but instead are accessed through the Audio->Render effect and Video->Render effect menu options. Each of these menu options brings up a dialog for the rendered effect. Rendered effects apply to only one type of track, either audio or video. If no tracks of the type exist, an error pops up.

A region of the timeline to apply the effect to must be defined before selecting Render effect. If no in/out points and no highlighted region exist, the entire region after the insertion point is treated as the affected region. Otherwise, the region between the in/out points or the highlighted region is the affected region. Secondly, the tracks to apply the rendered effect to need to be armed. All other tracks are ignored.

The rendered effect processes certain track attributes when it reads its input data but not others. Transitions in the affected track are applied. Nudge is not and effects are not. This allows the new data to be pasted into the existing position without changing the nudge value.

In the render effect dialog is a list of all the realtime and all the rendered effects. The difference here is that the realtime effects are rendered to disk and not applied under the track. Highlight an effect in the list to designate it as the one being performed. Define a file to render the effect to in the Select a file to render to box. The magnifying glass allows file selection from a list. Select a file format which can handle the track type. The wrench allows configuration specific to the file format.

There is also an option for creating a new file at each label. If you have a CD rip on the timeline which you want to divide into different files, the labels would become dividing points between the files if this option were selected. When the timeline is divided by labels, the effect is re-initialized at every label. Normalize operations take the peak in the current file and not in the entire timeline. Finally, there is an insertion strategy just like in the render dialog. It should be noted that even though the effect applies only to audio or video, the insertion strategy applies to all tracks just like a clipboard operation.

When you click OK in the effect dialog, it calls the GUI of the effect. If the effect is also a realtime effect, a second GUI appears to prompt for acceptance or rejection of the current settings. After accepting the settings, the effect is processed.

15.1 Rendered audio effects

15.1.1 Resample

This multiplies the number of each output sample by a scale factor to arrive at the number of the input sample. If the scale factor is 2, every 2 input samples will be reduced to 1 output sample and the output file will have half as many samples as the input sequence. If it is 0.5, every 0.5 input samples will be stretched to 1 output sample and the output file will have twice as many samples as the input sequence. The output file's sample rate is set to the project sample rate, but its length is changed to reflect the scaled number of samples. It also filters the resampled audio to remove aliasing.

15.2 Rendered video effects

15.2.1 Reframe

This does exactly the same thing as ReframeRT in Stretch mode. It multiplies the output frame number by the scale factor to arrive at the input frame number and changes the length of the sequence. The new length is 1/scale factor as big as the original sequence. Unlike ReframeRT, this must run from the Video menu and render its output. It produces a file of scaled length and equal frame rate as the project. Be aware Reframe does not write the scaled frame rate as the frame rate of the rendered file.

To create a slow-motion version of fast moving video:

1. Select the video clip you wish to re-frame and put it on a video track.
2. Select the area you wish to reframe.
3. From the Video menu, select the Render Effect option.
4. From the effect list, select Reframe.
5. Enter the output format and insertion strategy for the new clip to be created.
6. Press OK.
7. At the popup menu, enter the scale factor: 2 to run twice as fast, and .5 to run at half speed.
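The length rule shared by Resample and Reframe (output length = input length divided by the scale factor) can be checked with a quick sketch; 48000 samples stands in for one second of 48 kHz audio:

```shell
# Scale factor 2 halves the length; scale factor 0.5 doubles it.
awk 'BEGIN {
  n = 48000
  printf "scale 2   -> %d samples\n", n / 2
  printf "scale 0.5 -> %d samples\n", n / 0.5
}'
```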

16 Ladspa effects

LADSPA effects are supported in realtime and rendered mode for audio. The LADSPA plugins you get from the internet vary in quality. Most can not be tweaked in realtime very easily and work better when rendered. Although Cinelerra implements the LADSPA interface as accurately as possible, multiple tracks of realtime, simultaneous processing go beyond the usage of the majority of LADSPA users: some plugins crash and some can only be applied to one track due to a lack of re-entrancy.

If you use Debian, you can get a lot of plugins using apt:

apt-cache search ladspa
apt-get install jack-rack cmt blop swh-plugins

Ladspa audio effects in the audio folder

LADSPA effects are enabled merely by setting the LADSPA_PATH environment variable to the location of your LADSPA plugins:

export LADSPA_PATH=/usr/lib/ladspa

or by putting them in the ‘/usr/lib/cinelerra’ directory. LADSPA effects appear in the audio folder as the hammer and screwdriver, to signify that they are Plugins for GNU/Linux Audio Developers.
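For example, a shell session might point Cinelerra at the system plugin directory and list what will be picked up (the path is a typical Debian location and may differ on your system):

```shell
#!/bin/sh
# Point Cinelerra at the LADSPA plugins (directory is distribution-dependent).
export LADSPA_PATH=/usr/lib/ladspa
# List the plugin shared objects that would be found, if the directory exists.
ls "$LADSPA_PATH"/*.so 2>/dev/null | head
```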


17 Transitions

17.1 Using transitions

When one edit ends and another edit begins, the default behavior is to have the first edit's output immediately become the output of the second edit when played back. Transitions are a way for the first edit's output to become the second edit's output with different variations.

Cinelerra supports audio and video transitions, all of which are listed in the resource window.

Video transitions in the resources window

Transitions may only apply to the matching track type: transitions under video transitions can only apply to video tracks, and transitions under audio transitions can only apply to audio tracks.

Load two video files. Alternatively, load a single video file and cut away a section from the center so that you make two edits out of a single file. Make sure the edit boundary between the two edits is visible on the timeline. Go to the Resource window and click on the Video transitions folder. Drag a transition from the transition list onto the second video edit on the timeline. A box highlights over where the transition will appear. Releasing it over the second edit applies the transition between the first and second edit.

Dragging a dissolve transition to the timeline

You can now scrub over the transition with the transport controls and watch the output in the Compositor window. The exact point in time when the transition takes effect is the beginning of the second edit, and the transition lasts a set amount of time into the second edit. For example, if you set a duration of 1 second for a dissolve transition, it will not start at the last 0.5 second of the first edit and continue 0.5 second into the second edit. Instead, it will start exactly at the beginning of the second edit and last for 1 second into that second edit. On the timeline a brown bar over the transition symbol visually represents the position and the duration of the transition.

In fact, transitions make two edits overlap for a certain amount of time. Cinelerra does not move edits during transitions. Some consumer single track applications literally move the second edit backward to make it partially overlay the first edit, but this behavior is not possible on multitrack editors, where the synchrony among tracks is vital. Instead, Cinelerra uses spare frames from the source file to lengthen the first edit enough to make it overlap the second edit for the duration of the transition. The most important consequence of this behavior is that the first asset needs to have enough spare data after the end boundary to fill the transition into the second edit. The spare data duration should be equal to or greater than the length of the transition effect set in the Length parameter of the transition popup menu. If the last frame shown on the timeline is the last frame of the source file, Cinelerra will lengthen the first edit using the last frame only, with the unpleasant result of having the first edit freeze into the transition.

Once the transition is in place, it can be edited similarly to an effect. Move the pointer over the transition and right click to bring up the transition menu. The show option brings up specific parameters for the transition in question, if any. The length option adjusts the length of the transition in seconds. Once these two parameters are set, they are applied to future transitions until they are changed again. Finally, the detach option removes the transition from the timeline.

Dragging and dropping transitions from the Resource window to the Program window can be really slow and tiring. Fortunately, once you drag a transition from the Resource window, the U and u keys will paste the same transition: the U key pastes the last video transition and the u key pastes the last audio transition on all the recordable tracks. If the insertion point or the in point is over an edit, the beginning of the edit is covered by the transition.
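The spare-data requirement can be checked with simple arithmetic (the numbers below are assumed, for illustration only):

```shell
#!/bin/sh
# Hypothetical check: the first asset needs spare media after the edit
# boundary at least as long as the transition.
spare_seconds=3       # media left in the first asset after its out point
transition_seconds=1  # Length parameter of the transition
if [ "$spare_seconds" -ge "$transition_seconds" ]; then
  echo "enough spare data: no freeze during the transition"
else
  echo "first edit will freeze into the transition"
fi
```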

17.2 Dissolve video transition

This is a soft dissolve transition between two video segments, which we call the in and out segments. The in segment turns increasingly transparent while the out segment materializes into its place. The length of time for the full effect to take place can be controlled by the "Transition Length" control.

Available controls: by right-clicking on the transition icon in the timeline, a menu will pop up with the following controls:

Show: Pop up the transition specific menu (not available on this transition)
On: Toggle on/off the transition effect
Transition length: Set the span in seconds for the transition to complete
Detach: Remove the transition from the timeline

It should be noted that when playing transitions from the timeline to a hardware accelerated video device, the hardware acceleration will usually be turned off momentarily during the transition and turned back on after the transition in order to render it. Using an un-accelerated video device for the entire timeline normally removes the disturbance.


18 Keyframing

The term "keyframe" is borrowed from the world of animation, where it refers to an essential (key) drawing in a sequence. Typically this would be a starting or an ending point of a smooth transition in a series of frames. The keyframes would be drawn by the more senior artists, and their assistants would draw the "inbetweens". The word keyframe has since been used to suggest similar concepts in other fields. In non-linear digital video editing and video compositing software, a keyframe represents a certain value set by the user at a certain point in the timeline, essentially acting as a control point for the user to set parameters, e.g. of effects, camera, projector, or other parameters for a track. A keyframe is used to manipulate changes made to the signal over time. In Cinelerra the term keyframe can be misleading: it doesn't refer to a frame, but to a point between two frames.

For example, you could use keyframes to fade in a clip by setting the transparency to 100% at the first keyframe and adding another keyframe 5 seconds later in the timeline with a transparency of 0%. Cinelerra interpolates the intermediate values, making the change happen smoothly and gradually over time.

In Cinelerra there are keyframes for almost every compositing parameter and effect parameter. Parameters can be graphically represented in many forms: curves, toggles, modes, and so on. Their value is stored in a keyframe; the keyframe it is stored in by default is known as the default keyframe. The default keyframe applies to the entire duration only if no other keyframes are present, and it is not drawn on the timeline. Parameters stay by default the same for the entire duration of the project; the only way change occurs over time is if additional keyframes are created. Setting static parameters with the default keyframe is only useful if you don't want to change anything over time. Defining keyframes in addition to the default one is a very convenient technique for creating dynamic changes: normally you need to move the camera around or change mask positions, e.g. if a mask needs to follow an object. Such a keyframe is represented on the timeline as a little square on a curve (e.g. fade) or as a symbol (e.g. mask).

To display the graphical representation of parameters and their keyframes, use the View menu. A faster way to toggle multiple parameter types is to bring up Window -> Show Overlays. This window allows toggling of every parameter in the view menu. When parameters are selected, they are drawn on the timeline over the tracks they apply to. How to handle the different types of keyframes is described here.

18.1 Curve keyframes

Many parameters are stored in Bezier curves. Go to view->fade or view->zoom to show curves on the timeline for those parameters. In either arrow editing mode or i-beam editing mode, move the cursor over the curves in the timeline until it changes shape. Then, merely by clicking and dragging on the curve, you can create a keyframe at that position. After the keyframe is created, click-drag on it again to reposition it. When you click-drag a second keyframe on the curve, it creates a smooth ramp. CTRL-dragging on a keyframe changes the value of either the input control or the output control. This affects the sharpness of the curve. While the input control and the output control can be moved horizontally as well as vertically, the horizontal movement is purely for legibility and is not used in the curve value.
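The interpolation between two keyframes can be sketched numerically (an illustration of linear interpolation, not Cinelerra's actual curve code; the keyframe values are assumed): with a value of 100 at 0 seconds and 0 at 5 seconds, the value at 2 seconds lies on the line between them.

```shell
#!/bin/sh
# Linear interpolation between two hypothetical keyframes:
# value 100 at t=0s and value 0 at t=5s.
t=2
value=$(awk -v t="$t" 'BEGIN {
  t0 = 0; v0 = 100   # first keyframe
  t1 = 5; v1 = 0     # second keyframe
  print v0 + (v1 - v0) * (t - t0) / (t1 - t0)
}')
echo "interpolated value at ${t}s: $value"
```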

You may remember that The Gimp and the compositing masks all use SHIFT to select control points, so why does the timeline use CTRL? Here we depart from The Gimp because SHIFT is already used for zoom. When you SHIFT-drag on a timeline curve, the keyframe snaps to the value of either the next or previous keyframe, depending on which exists. This lets you set a constant curve value without having to copy the next or previous keyframe.

18.1.1 Navigating curve keyframes

There is not much room on the timeline for a wide range of curve values. You need to zoom the curves in and out vertically to have any variability. This is done by 2 tools: the automation fit button and the automation zoom menu.

The automation fit button scales and offsets the vertical range so the selected curve area appears in the timeline. If a region of the timeline is highlighted by the cursor, only that region is scaled. In/out points do not affect the zoomed region. ALT-f also performs automation fitting.

The automation zoom menu manually changes the vertical scaling of the curves in multiples of 2. Click on its tumbler to change the zoom. ALT-UP and ALT-DOWN change the automation zoom from the keyboard.

18.2 Toggle keyframes

Mute is the only toggle keyframe. Mute keyframes determine where the track is processed but not rendered to the output. Unlike curves, the toggle keyframe has only two values: on or off. Click-drag on these curves to create a keyframe. CTRL and SHIFT do nothing on toggle keyframes.

18.3 Automatic keyframes

You may have noticed that when a few fade curves are set up, moving the insertion point around the curves causes the faders to reflect the curve value under the insertion point. This is not just to look cool. The faders themselves can set keyframes in automatic keyframe mode. Automatic keyframe mode is usually more useful than dragging curves.

Enable automatic keyframe mode by enabling the automatic keyframe toggle. In automatic keyframe mode, every time you tweak a key-framable parameter it creates a keyframe on the timeline. The location where the automatic keyframe is generated is under the insertion point. If the timeline is playing back during a tweak, several automatic keyframes will be generated as you change the parameter. Since automatic keyframes affect many parameters, it is best enabled just before you need a keyframe and disabled immediately thereafter. It is useful to go into the View menu and make the desired parameter visible before performing a change.

When automatic keyframe mode is disabled, a similarly strange thing happens: adjusting a parameter adjusts the keyframe immediately preceding the insertion point. If two fade keyframes exist and the insertion point is between them, changing the fader changes the first keyframe.

There are many parameters which can only be keyframed in automatic keyframe mode. These are parameters for which curves would take up too much space on the track or which can not be represented easily by a curve. Camera and projector translation can only be keyframed in automatic keyframe mode, while camera and projector zoom can be keyframed with curves. Effects are only key-framable in automatic mode because of the number of parameters in each individual effect.

It is here that we conclude the discussion of compositing, since compositing is highly dependent on the ability to change over time.

18.4 Compositor keyframes

Camera and projector translation is represented by two parameters: x and y. Therefore it is cumbersome to adjust with curves. Cinelerra solves this problem by relying on automatic keyframes. With a video track loaded, move the insertion point to the beginning of the track and enable automatic keyframe mode. Move the projector slightly in the compositor window to create a keyframe. Then go forward several seconds. Move the projector a long distance to create another keyframe and emphasize motion. This creates a second projector box in the compositor, with a line joining the two boxes. The joining line is the motion path. If you create more keyframes, more boxes are created. In order to set the second keyframe you will need to scrub after the second keyframe. Once all the desired keyframes are created, disable automatic keyframe mode.

Now when scrubbing around with the compositor window's slider, the video projection moves over time. Click-drag when automatic keyframes are off to adjust the preceding keyframe. If you are halfway between two keyframes, the first projector box is adjusted while the second one stays the same. However, the video does not appear to move in step with the first keyframe. This is because halfway between two keyframes the projector translation is interpolated.

At any point between two keyframes, the motion path is red for all time before the insertion point and green for all time after the insertion point. It is debatable if this is a very useful feature, but it makes you feel good to know what keyframe is going to be affected by the next projector tweak.

By default the motion path is a straight line, but it can be curved with control points. CTRL-dragging anywhere in the video adjusts the nearest control point; CTRL-drag to set either the in or out control point of the preceding keyframe. A control point can be out of view entirely yet still controllable. After the in or out control points are extrapolated from the keyframe, the motion path becomes curved. The situation becomes more intuitive if you bend the motion path between two keyframes and scrub between the two keyframes.

When editing the camera translation, the behavior of the camera boxes is slightly different. Camera automation is normally used for still photo panning. The current camera box does not move during a drag; instead, every camera box except the current keyframe appears to move. This is because the camera display shows every other camera position relative to the current one. The division between red and green, the current position between the keyframes, is always centered while the camera boxes move.

18.5 Editing keyframes

Keyframes can be shifted around and moved between tracks on the timeline using cut and paste operations similar to editing media. The media editing commands are mapped to the keyframe editing commands by using the SHIFT key instead of just the keyboard shortcut. Only the keyframes selected in the view menu are affected by keyframe editing operations. IMPORTANT: when copying and pasting keyframes, make sure there is no IN or OUT point defined on the timeline.

The most popular keyframe editing operation is replication of some curve from one track to the other, to make a stereo pair. The first step is to solo the source track's record patch by SHIFT-clicking on it. Then either set in/out points or highlight the desired region of keyframes. Go to keyframes->copy keyframes to copy them to the clipboard. Solo the destination track's record patch by SHIFT-clicking on it and go to keyframes->paste keyframes to paste the clipboard.

Finally, there is a convenient way to delete keyframes besides selecting a region and calling keyframes->clear keyframes: merely click-drag a keyframe before its preceding keyframe or after its following keyframe on the track. Note that keyframes->clear keyframes is the only way you can simultaneously delete keyframes on ganged tracks.

This leads to the most complicated part of keyframe editing: the default keyframe. Remember that when no keyframes are set at all, there is still a default keyframe which stores a global parameter for the entire duration. The default keyframe is not drawn because it always exists. What if the default keyframe is a good value which you want to transpose between other non-default keyframes? The keyframes->copy default keyframe and keyframes->paste default keyframe functions allow conversion of the default keyframe to a non-default keyframe.

Keyframes->copy default keyframe copies the default keyframe to the clipboard, no matter what region of the timeline is selected. The keyframes->paste keyframes function may then be used to paste the clipboard as a non-default keyframe. If you have copied a non-default keyframe, it can be stored as the default keyframe by calling keyframes->paste default keyframe. After using paste default keyframe to convert a non-default keyframe into a default keyframe, you will not see the value of the default keyframe reflected until all the non-default keyframes are removed.

19 Capturing media

19.1 Capturing using Cinelerra

Ideally, all media would be stored on hard drives, CD-ROM, flash, or DVD, and loading it into Cinelerra would be a matter of loading a file. In reality, very few sources of media can be accessed like a filesystem. Instead, they rely on tape transport mechanisms and dumb I/O mechanisms to transfer the data to computers. These media types are imported into Cinelerra through the Record dialog.

The first step in recording is to configure the input device. In Settings->preferences are a number of recording parameters described in configuration. See Section 3.1.5 [Recording], page 25. These parameters apply to recording no matter what the project settings are, because the recording parameters are usually the maximum capability of the recording hardware, while project settings come and go.

Go to File->record to record a dumb I/O source. This prompts for an output format much like rendering does. Once that is done, the record window and the record monitor pop up.

19.1.1 Cinelerra recording functions

The record window has discrete sections. While many parameters change depending on whether the file has audio or video, the discrete sections are always the same. The batch list displays all the defined batches. The edit batch area lets you change parameters in the current batch. The transport controls start and stop recording in different ways. The output format area describes the format of the output file and the current position within it. The confirmation area lets you determine how the output files are imported into the timeline and quit.

The filename specified in the record dialog is the name of the first batch, to simplify interactive recording. Once recorded, choose an insertion method from the Insertion Strategy menu and hit close. For now you can ignore the batch concept entirely and record merely by hitting the record button.

19.1.2 Batch recording

Recording window areas

Now we come to the concept of batches. Recording in Cinelerra is organized around batches. A batch essentially defines a distinct output file for the recording. Batches try to make the dumb I/O look more like a filesystem. Batches are traditionally used to divide tape into different programs and save the different programs as different files instead of recording straight through an entire tape. Because of the high cost of developing frame-accurate deck control mechanisms, the only use of batches now is recording different programs during different times of day. This is still useful for recording TV shows or time lapse movies, as anyone who can not afford proper appliances knows.

The record window supports a list of batches and two recording modes: interactive and batch recording. Interactive recording happens when the record button is pressed. Batch recording happens when the start button is pressed. Interactive recording starts immediately and uses the current batch to determine everything except start time. By default, the current batch is configured to behave like tape.

The record button opens the current output file if it is not opened and writes captured data to it. Whether in interactive or batch mode, the first time you hit record, the file is opened. If the file exists at this point it is erased. This is a very important attribute, since there is no confirmation dialog if the file exists. Use the stop button to stop the recording. Recording can be resumed with the record button without erasing the file at this point; every time you resume recording in the same batch, the output goes to the same file. In the case of a video file, there is a single frame record button which records a single frame. When enough media is recorded, the file is closed. If you change out of the current batch after recording, the file is closed. Next time you change into the batch, the file will be erased.

First, you will want to create some batches. Each batch has certain parameters and methods of adjustment:

Path
It is the file the batch is going to be recorded to. The filename specified in the record dialog is the name of the first batch, but the filename may be changed in the record window for any batch in the edit batch area.

On
Determines whether the batch is included in batch recording operations. Click the list row under On to enable or disable batches.

Start time
It is the 24 hour time of day the batch will start recording if in batch mode. It only has meaning if the Mode of the batch is Timed. The start time may become a time of tape and reel number if deck control is implemented, but for now it is a time of day.

Duration
This is the length of the batch. Once the recording length reaches duration, the recording stops.

News
Shows whether the file exists or not. News says File exists if it exists and OK if it does not. After recording, the news should say Open, indicating the file is already opened and will not be erased on the next record button press.
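As a sketch of the Timed-mode arithmetic (the times below are assumed, not Cinelerra code): a batch with a 24-hour start time of 22:30 and a duration of 90 minutes stops recording at 00:00.

```shell
#!/bin/sh
# Hypothetical Timed batch: starts at 22:30, duration 90 minutes.
start_minutes=$((22 * 60 + 30))
duration_minutes=90
stop_minutes=$(((start_minutes + duration_minutes) % 1440))  # wrap past midnight
printf 'stops at %02d:%02d\n' $((stop_minutes / 60)) $((stop_minutes % 60))
```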

Source
This has meaning only when the capturing hardware has multiple sources. Usually the source is a tuner channel or input. If the device supports multiple inputs, the input menu selects these. In batch and interactive recording modes, when the current batch finishes and the next batch begins recording, the source is changed to what the next batch is set to. This way multiple TV stations can be recorded at different times.

The record window has a notion of the current batch. The current batch is not the same as the batch which is highlighted in the batch list; the highlighted batch is merely displayed in the edit batch section for editing. The current batch text is colored red in the batch list. By coloring the current batch red, any batch can be edited by highlighting it, without changing the batch to be recorded. All recording operations take place in the current batch. If there are multiple batches, highlight the desired batch and hit activate to make it the current batch.

If the record button is pressed, the current batch is recorded immediately in interactive mode. If the start button is pressed, the current batch flashes to indicate it is waiting for the start time in batch mode. When the current batch finishes recording, the next batch is activated and performed, and all future recording is done in batch mode. When the first batch finishes, the next batch flashes until its start time is reached. Interrupt either the batch or the interactive operation by hitting the stop button. Finally, there is the rewind button; the rewind button causes the current batch to close its file. The next recording operation in the current batch deletes the file.

19.1.3 Editing tuner information

Sometimes in the recording process and the configuration process, you will need to define and select tuner channels to either record or play back to. In the case of the Video4Linux and Buz recording drivers, tuner channels define the source. When the Buz driver is also used for playback, tuner channels define the destination.

Defining tuner channels is accomplished by pushing the channel button. This brings up the channel editing window. In this window you add, edit, and sort channels. The add operation brings up a channel editing box. The title of the channel appears in the channel list. The source of the channel is the entry in the physical tuner's frequency table corresponding to the title. The norm and frequency table together define which frequency table is selected for defining sources. Fine tuning in the channel edit dialog adjusts the physical frequency slightly, if the driver supports it. To sort channels, highlight the channel in the list and push move up or move down to move it.

Once channels are defined, the source item in the record window can be used to select channels for recording. The same channel selecting ability also exists in the record monitor window. Be aware that channel selections in the record monitor window and the record window are stored in the current batch. Also, for certain video drivers, you can adjust the picture quality. For some drivers an option to swap fields may be visible. These drivers do not get the field order right every time without human intervention; toggle this to get the odd and even lines to record in the right order.

19.2 Capturing using dvgrab

dvgrab is a great and simple to use command line tool to capture videos from a DV camcorder. When invoked, it will automatically put your camera in play mode and start storing the videos on your hard disk. To install dvgrab, use your distribution's preferred installation mechanism (apt, rpm, deb, etc.) or refer to the dvgrab webpage.

Capturing videos in four easy steps:
1. Create a directory where you want the captured videos to be stored
2. cd to that directory
3. Type: dvgrab --buffers 500 and RETURN
4. Press CTRL-C to stop capturing video

Video files will be labeled sequentially, as: ‘001.avi’, ‘002.avi’ and so on. The ‘--autosplit’ option is very useful: it splits scenes according to the timecode. However, that only works when grabbing from a DV camcorder; it will not work when grabbing from an analog/digital converter such as a Canopus ADVC110. Read the dvgrab manual to get more information about dvgrab features.
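Put together, a capture session might look like the script below (the directory name is arbitrary, dvgrab must be installed, and the options are the ones described above):

```shell
#!/bin/sh
# Capture DV footage into a dedicated directory.
capdir="$HOME/captures"
mkdir -p "$capdir" && cd "$capdir"
# --autosplit starts a new file per scene; --buffers smooths disk writes.
# Press CTRL-C to stop capturing.
if command -v dvgrab >/dev/null; then
  dvgrab --autosplit --buffers 500
else
  echo "dvgrab is not installed" >&2
fi
```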

You can then delete all the source assets. and stores it in a pure movie file. If Render audio tracks or Render video tracks is selected and the file format does not support it. play the rendered file in a movie player. Select the magnifying to bring up a file selection dialog. The rendering functions define the region based on a set of rules. page 85.1 Single file rendering
The fastest way to get media to disk is to use the single file rendering function. The navigation section describes methods of defining regions. If the file format can not store audio or video the compression parameters will be blank. This determines the filename to write the rendered glass file to and the encoding parameters. You need to define this region on the timeline.
The render window In the render dialog select a format from the File Format menu. The format of the file determines whether you can render audio or video or both.2 Separate files rendering
The Create new file at each label option causes a new file to be created when every label in the timeline is encountered. or bring it back into Cinelerra for more editing. It is very difficult to retouch any editing decisions in the pure movie file. the affected region is rendered. All rendering operations are based on a region of the timeline to be rendered. When a region is highlighted or in/out points are set. Select the Render audio tracks toggle to generate audio tracks and Render video tracks to generate video tracks.Chapter 20: Rendering files
141
20 Rendering files
Rendering takes a section of the timeline. Go to File->render or press SHIFT-R to bring up the render dialog.
20. This is useful for dividing long audio recordings into individual
. trying to render will pop up an error. Select the wrench next to each toggle to set compression parameters. the entire track is rendered. however. When no region is highlighted. effects and compositing.
20. so keep the original assets and XML file around several days after you render it. Merely by positioning the insertion point at the beginning of a track and unsetting all in/out points. performs all the editing. See Chapter 13 [Timebar]. everything after the insertion point is rendered.

Cinelerra automatically concatenates a number to the end of the given filename for every output file. you specify one or more Cinelerra project XML files (EDL) to render and the unique output files for each. If you render only audio and have some video tracks armed. It should be noted that even if you only have audio or only have video rendered. The insertion modes are the same as with loading files.
When Create new file at each label is selected, a new filename is created for every output file. If the filename given in the render dialog has a 2 digit number in it, the 2 digit number is overwritten with a different incremental number for every output file. If no 2 digit number is given, a number is appended to the filename. In the filename ‘/hmov/track01.wav’ the ‘01’ would be overwritten for every output file. The filename ‘/hmov/track.wav’, however, would become ‘/hmov/track.wav001’ and so on and so forth. Filename regeneration is only used when either renderfarm mode is active or creating new files for every label is active. When using the renderfarm, Create new file at each label causes one renderfarm job to be created at every label instead of using the internal load balancing algorithm to space jobs.

20.3 Insertion strategy of rendered files

Finally, the render dialog lets you select an insertion mode. In this case, if you select insert nothing, the file will be written out to disk without changing the current project. For other insertion strategies, be sure to prepare the timeline to have the output inserted at the right position before the rendering operation is finished. See Chapter 7 [Editing], page 49. Editing describes how to cause output to be inserted at the right position.

Even if only audio or only video was rendered, a paste insertion strategy will behave like a normal paste operation, erasing any selected region of the timeline and pasting just the data that was rendered. If you render only audio while video tracks are armed, the video tracks will get truncated while the audio output is pasted into the audio tracks.

20.4 Batch rendering

Batch Rendering is one of Cinelerra’s great but lesser-known strengths. If you want to render many projects to media files without having to repeatedly attend to the Render dialog, batch rendering is the function to use. It allows you to eliminate manual repetitive keystrokes and mouse clicks, and to automate the rendering of audio-video files. It even allows Cinelerra to be completely driven by external programs, with no need for the user to manually interact with the Cinelerra user interface. This allows a huge amount of media to be processed without any user intervention, and greatly increases the value of an expensive computer.

Each Cinelerra project XML file, combined with the settings for rendering an output file, is called a batch. The interface for batch rendering is a bit more complex than for single file rendering: the batch renderer requires a separate Cinelerra project file for every batch to be rendered, and Cinelerra then loads each project file and renders it automatically.

The first thing to do when preparing a batch render is to create one or more Cinelerra projects (EDL) to be rendered and save them as normal Cinelerra project (‘myproject.xml’) files. To create a Cinelerra project file which can be used in batch render, set up a Cinelerra project and define the region to be rendered either by highlighting it, setting in/out points around it, or positioning the insertion point before it. Then save the project in the normal way to a ‘myproject.xml’ file (EDL). Define as many projects as needed this way.
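The two digit numbering rule described earlier in this section can be sketched in shell. This is only an illustration of the rule, not Cinelerra's own code; the filename is the example from the text:

```shell
# Illustrative sketch of the render filename rule: if the name already
# contains a two digit number, Cinelerra overwrites it per output file;
# otherwise a counter is appended (track.wav -> track.wav001).
name="/hmov/track.wav"
case "$name" in
    *[0-9][0-9]*)
        echo "two digit number present, it will be overwritten per file" ;;
    *)
        i=1
        printf '%s%03d\n' "$name" "$i" ;;   # prints /hmov/track.wav001
esac
```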
Chapter 20: Rendering files
The batch renderer takes the active region from each EDL file for rendering. You can use the same Cinelerra project file in several batches if you are rendering to different output files, for example when creating the same output video in different file formats. With all the Cinelerra project files (EDL) prepared with active regions, go to File->Batch Render. This brings up the batch rendering dialog.

A list of batches must be defined before starting a batch rendering operation. The table of batches appears on the bottom of the batch render dialog and is called batches to render. Above it are the configuration parameters for a single batch; a batch is simply the pairing of a Cinelerra project file with a choice of output file and render settings.

Set the output path, file format, Audio, Video, and Create new file at each label parameters as if you were rendering a single file. These parameters apply to only one batch. In addition to the standard rendering parameters, you must select the Cinelerra project file (‘myproject.xml’) to be used in the batch. Do this by setting the EDL path. Use the magnifier to bring up a drop down menu with your files, or write the path to your regular Cinelerra project file (‘myproject.xml’) manually. EDL path has nothing to do with EDL files as created by File->Export EDL.

If the batches to render list is empty or nothing is highlighted, click New to create a new batch. The new batch will contain all the parameters you just set. Repeatedly press the New button to create more batches with the same parameters. Highlight any batch and edit its configuration on the top of the batch render window; the highlighted batch is always synchronized to the information displayed. Click and drag batches to change the order in which they are rendered. Hit Delete to permanently remove the highlighted batch.

In the list box is a column which enables or disables each batch. Click on the Enabled column in the list box to enable or disable a batch. If it is checked, the batch is rendered; if it is blank, the batch is skipped. This way batches can be skipped without being deleted. The other columns in the batch list are informative:

EDL      The source EDL of the batch.
Elapsed  The amount of time taken to render the batch, if it is finished.
Output   The output path of the batch.

To start rendering from the first enabled batch, hit Start. Once rendering, the main window shows the progress of the batch. Once the batch finishes, the Elapsed column in the batch list is updated and the next batch is rendered, until all the enabled batches are finished. The currently rendering batch is always highlighted red. To stop rendering before the batches are finished without closing the batch render dialog, hit Stop. To stop rendering before the batches are finished and close the batch render dialog, hit Cancel. To exit the batch render dialog whether or not anything is being rendered, also hit Cancel.

You can automate Cinelerra batch renders from other programs. In the Cinelerra batch render dialog, once you have created your list of batch render jobs, you can click the Save List button and choose a file to save your batch render list to. We suggest you use a filename like ‘myrenderlist.xml’. Once you have created this file, you can start up a batch render without needing to interact with the Cinelerra user interface. From a shell prompt (or from a script, or other program), execute:

cinelerra -r myrenderlist.xml

(changing ‘myrenderlist.xml’ to whatever filename you chose for saving your batch render list). When invoked with these parameters, Cinelerra will start up and perform the rendering jobs in that list, without creating its usual windows. Make sure that no files with the same name as the output files exist before starting the render: Cinelerra in batch render mode will not overwrite an existing output file. In this case, the batch render will simply fail.
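A wrapper for such automation can be sketched as follows. The output paths and the list filename are hypothetical placeholders; the guard reflects the rule above that an existing output file makes the batch fail:

```shell
# Sketch: start a saved batch list headlessly, but only if none of the
# expected output files already exist (Cinelerra in batch render mode
# will not overwrite them -- the batch would simply fail).
outputs="/tmp/cin_demo_out1.mov /tmp/cin_demo_out2.mov"   # hypothetical

clash=0
for f in $outputs; do
    [ -e "$f" ] && { echo "output already exists: $f"; clash=1; }
done

if [ "$clash" -eq 0 ] && command -v cinelerra >/dev/null 2>&1; then
    cinelerra -r myrenderlist.xml   # run the saved batch list without a GUI
fi
```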

Programmers, please note: this is a powerful feature indeed. It means that if you can create valid Cinelerra project xml files and Cinelerra render list files from other programs (which requires just a small amount of skill with your favourite XML library), then you can gain full automated access to all of Cinelerra’s functionality without needing to interact with the Cinelerra user interface. You can leverage the power of Cinelerra and incorporate it into your own programs, and thus create your own Cinelerra automation library in your favourite programming language. It is a good idea to create simple Cinelerra project files and batch render files and study the XML format. The possibilities for this are endless.

20.5 The render farm

When bicubic interpolation and HDTV were first done in Cinelerra, the time needed to produce the simplest output became unbearable even on the fastest dual 1.7 GHz Xeon of the time. Renderfarm support, even in the simplest form, brings HDTV times back in line with SD, while making SD faster than real-time. While the renderfarm interface is not spectacular, it is simple enough to use inside an editing suite with less than a dozen nodes, without going through the same amount of hassle you would with a several hundred node farm. Renderfarm is invoked transparently for all File->Render operations when it is enabled in the preferences. Configuration of the renderfarm is described in the configuration chapter, See Section 3.6.2 [Renderfarm], page 28.

A Cinelerra renderfarm is organized into a master node and any number of slave nodes. The master node is the computer which is running the GUI. The slave nodes are anywhere else on the network and are run from the command line. Run a slave node from the command line with:

cinelerra -d

That is the simplest configuration. The default port number may be overridden by passing a port number after the ‘-d’. Type cinelerra -h to see more options.

The slave nodes traditionally read and write data to a common filesystem over a network, thus they do not need hard drives. Because of this, it is important for all the nodes to have access to the same filesystem on the same mount point for assets. If a node can not access an input asset, it will display error messages to its console but probably not die. If it can not access an output asset, it will cause the rendering to abort.

When a renderfarm is enabled, Cinelerra divides the selected region of the timeline into a certain number of jobs, which are then dispatched to the different nodes depending on the load balance. The nodes process the jobs and write their output to individual files on the filesystem. The output files are not concatenated. It should be noted that in the render dialog, the Create new file at each label option causes a new renderfarm job to be created at each label instead of by the load balancer. If this option is selected when no labels exist, only one job will be created.

Since the jobs are left in individual files, they need to be joined. Files which support direct copy can be concatenated into a single file by rendering to the same file format with renderfarm disabled; to get direct copy, the track dimensions, output dimensions, and asset dimensions must be equal. Some file formats, like MPEG, can not be direct copied and have to be concatenated with a command line utility; MPEG files can be concatenated with cat. Alternatively, you can load the individual output files by creating a new track and specifying concatenate to existing tracks in the load dialog. Most of the time you will want to bring in the rendered output and fine tune the timing on the timeline.
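Starting several slave nodes can be scripted. The sketch below only prints the ssh commands it would run (pipe it to sh to execute them); the hostnames and the starting port number are hypothetical, and per the text the port is the optional argument after ‘-d’:

```shell
# Sketch: one slave node per host, each on its own port. Hostnames and
# the base port are placeholders -- substitute your own farm machines.
port=10650
for host in render1 render2 render3; do
    echo "ssh $host cinelerra -d $port"
    port=$((port + 1))
done
```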

Ideally all the nodes on the renderfarm have similar CPU performance. Cinelerra load balances on a first come first serve basis: if the last segment is dispatched to the slowest node, all the fastest nodes may end up waiting for the slowest node to finish, while they themselves could have rendered it faster. The number of jobs that keeps every node busy is best found by trial and error.

20.6 Command line rendering

The command line rendering facility consists of a way to load the current set of batch rendering jobs and process them without a GUI. This is useful if you are planning on crashing X repeatedly, or want to do rendering on the other side of a low bandwidth network. You might have access to a supercomputer in India but still be stuck in America, exiled you might say. A command line interface is ideal for this.

To perform rendering from the command line, first run Cinelerra in graphical mode. Set up the desired renderfarm attributes in Settings->Preferences and exit Cinelerra. These settings are used the next time command line rendering is used. Go to File->Batch Render, create the batches you intend to render in the batch window, and close the window. This saves the batches in a file. Other parameters exist for specifying alternative files for the preferences and the batches; attempting to use anything but the defaults is very involved, so it has not been tested. On the command line, run:

cinelerra -r

to process the current batch jobs without a GUI. Make sure that no files with the same name as the output files exist before starting: the command line render aborts if any output files already exist.

20.7 Rendering videos for the internet

If you want to encode a video in order to put it on the internet, we recommend to render it as a Quicktime4linux file first, and then encode that file in MPEG4 or FLV format. The Quicktime4linux file rendered from Cinelerra must have the following properties:

Audio option   Twos Complement 16 bits (PCM)
Video option   DV

20.7.1 Encoding a video in MPEG4 format for the internet

To get the best quality, you should encode your Quicktime4linux file with mencoder in two passes. The command lines below give output video files whose weight is around 13 Mb for 3 minutes. First pass:

mencoder input.mov -ovc xvid -xvidencopts bitrate=600:pass=1 \
 -vf scale=320:240 -oac mp3lame -lameopts abr:br=64 -o output.avi

Second pass:

mencoder input.mov -ovc xvid -xvidencopts bitrate=600:pass=2 \
 -vf scale=320:240 -oac mp3lame -lameopts abr:br=64 -o output.avi

Do not forget to change the output size of the video, set with the ‘-vf scale=’ option.

Here are some other command lines, using the lavc MPEG4 encoder. First pass:

mencoder -oac pcm -sws 2 -vf scale=${width}:${height},hqdn3d=2:1:2 \
 -ovc lavc -lavcopts vcodec=mpeg4:vbitrate=${video_bitrate}:vlelim=-4:\
vcelim=7:lumi_mask=0.05:dark_mask=0.01:scplx_mask=0.3:naq:v4mv:mbd=2:\
trell:cmp=3:subcmp=3:mbcmp=3:aspect=4/3:sc_threshold=1000000000:\
vmax_b_frames=2:vb_strategy=1:dia=3:predia=3:cbp:mv0:preme=2:\
last_pred=3:vpass=1:cgop -ofps 25 -of avi movie.mov -o /dev/null \
 -ffourcc DIVX
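The quoted figure of about 13 Mb for 3 minutes follows from the chosen bitrates. A rough estimate for another target size can be sketched as below; all numbers are example values (13 MB target, 180 seconds, 64 kbit/s audio as in the ‘-lameopts’ above):

```shell
# Sketch: estimate the video bitrate (kbit/s) for a target file size.
target_mb=13
duration=180
audio_kbps=64
total_kbps=$(( target_mb * 8192 / duration ))   # 8192 kbit per MB
video_kbps=$(( total_kbps - audio_kbps ))
echo "video bitrate: about $video_kbps kbit/s"
```

This yields a value in the same range as the bitrate=600 used in the xvid examples.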

Second pass:

mencoder -srate 32000 -oac mp3lame -lameopts cbr:br=${audio_bitrate}:\
aq=0 -sws 2 -vf scale=${width}:${height},hqdn3d=2:1:2 -ovc lavc \
 -lavcopts vcodec=mpeg4:vbitrate=${video_bitrate}:vlelim=-4:vcelim=7:\
lumi_mask=0.05:dark_mask=0.01:scplx_mask=0.3:naq:v4mv:mbd=2:trell:\
cmp=3:subcmp=3:mbcmp=3:aspect=4/3:sc_threshold=1000000000:\
vmax_b_frames=2:dia=3:predia=3:cbp:mv0:preme=2:last_pred=3:vpass=3:\
cgop -ofps 25 -of avi movie.mov -o movie.avi -ffourcc DIVX

You probably have to adapt those command lines if your material is noisy; have a look at mencoder’s pre-processing filters. The *_mask parameters are really important when encoding at low bitrate. Those are the recommended resolutions for 4/3 PAL material: 384:288, 448:336, 512:384 or 704:528. Width and height must be multiples of 16, and you must scale the image to the right aspect ratio.

If you want your video file to be displayed properly on a well known media player which runs on Windows, you should be aware that: the aspect ratio information contained in the AVI header will not be taken into account by that player; the ‘-ffourcc’ parameter is needed for the video codec to be recognized as Divx; and the media player running on Windows will lose A/V sync if a VBR audio bitrate is used instead of CBR.

20.7.2 Encoding a video in FLV format for the internet

FLV (FLash Video) files weigh very little, and the only thing needed to play them is an internet browser with the flash plugin version 7 or higher installed. That format is really useful when one wants to share a video with a wide audience over the internet. First pass:

ffmpeg -i movie.mov -b 430k -s 320x240 -aspect 4:3 -pass 1 -ar 22050 movie.flv

Second pass:

ffmpeg -i movie.mov -b 430k -s 320x240 -aspect 4:3 -pass 2 -ar 22050 movie.flv

Pay attention to the output file extension: ffmpeg uses it to determine the output format. The audio sampling frequency to use is 22050, and the ‘-ar’ parameter must be used for the video to be properly encoded.

Ffmpeg does not write metadata information in the flv file. The duration has to be written in the metadata in order for some flash players to display a progress bar. FLVTool2 (http://www.inlet-media.de/flvtool2) can be used to insert that information:

cat input_file.flv | flvtool2 -U stdin output_file.flv

There are a number of options for embedding the flv file in a web page. You can use ming or flv2swf to create an swf file: http://klaus.geekserver.net/flash/streaming.html has detailed instructions for ming, and the flv2swf from http://search.cpan.org/~clotho/FLV-Info-0.17/bin/flv2swf can be installed with cpan> install FLV::ToSWF. Or you can use the Creative Commons Non-Commercial licenced JW FLV Player http://www.jeroenwijering.com/?item=JW_FLV_Player, or the Apache Licenced FlowPlayer http://flowplayer.org. Both of these allow you to use the flv as created above, and have controls for stopping and playing the movie etc.
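Since width and height must be multiples of 16, a desired frame size may need snapping down before being passed to ‘-s’ or ‘-vf scale=’. A small sketch, with an example input size:

```shell
# Sketch: snap a desired frame size to the nearest lower multiple of 16,
# as required by the encoders above. 420x315 is an example value.
want_w=420
want_h=315
w=$(( want_w / 16 * 16 ))
h=$(( want_h / 16 * 16 ))
echo "use -s ${w}x${h}"
```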

20.8 Quicktime for GNU/Linux compatibility chart

Scott Frase wrote a Quicktime for GNU/Linux compatibility chart. It contains an exhaustive list of all the Quicktime compression schemes available and their compatibility in Cinelerra, Mplayer and some other media players. It is available here:

http://content.serveftp.net/video/qtcompatibility.ods

That document has two main sections, one based on an HDV resolution-formatted project and another based on a DV resolution-formatted project; a comparison chart of DV/HDV mplayer/cinelerra compatibility is included. Some interesting notes: Mplayer does behave better with the smaller sizes, and Cinelerra compatibility with files rendered from a DV project is not much different than its compatibility with files rendered from an HDV project.

20.9 Making a DVD

Here is a method to export mpeg2 video for DVD. This method allows you to precisely set the encoding options you want, and produces an mpeg2 file which is 100% compatible with all DVD standalone players. For how to make a DVD from the output, See Section 20.9.3 [Authoring a DVD], page 151.

Audio and video are rendered separately and combined later in a procedure external to Cinelerra. Audio is rendered into .ac3, and video is rendered into a yuv4mpeg stream which is piped through either mpeg2enc or ffmpeg into a .m2v file. Both variants are described in detail below. (Apparently depending on footage and player engine, one or the other variant may produce better results.) Check out which one works best for you by rendering a short test edit of a few seconds length, authoring it to DVD according to the sections below, and playing it in your cheapest standalone player to really see whether it is foolproof or displays errors.

In both cases, make sure you properly defined your Cinelerra project format before rendering your video (menu Settings->Format), preferably even before loading any raw footage. TV standards: PAL is 720x576 at 25 frames per second; NTSC is 720x480 at 29.97 frames per second.

20.9.1 Rendering to mpeg2

20.9.1.1 yuv4mpeg pipe through mpeg2enc

The mplex program from mjpegtools must be installed. The mjpegtools package is built in the hvirtual distribution and the mplex utility may be extracted from there.

1. Create a script ‘~/cine_render.sh’
2. Copy in the ‘~/cine_render.sh’ file the following lines:

#!/bin/bash
mpeg2enc -v 0 -K tmpgenc -r 16 -4 1 -2 1 -D 10 -E 10 -g 15 -G 15 \
 -q 6 -b 8600 -f 8 -o $1

3. Put the execute permissions on that file: chmod 777 ~/cine_render.sh
4. Within Cinelerra, select the part of the project you want to render with the [ and ] points
5. Press SHIFT-R
6. Select the YUV4MPEG Stream file format
7. Deselect Render audio tracks and select Render video tracks

8. Click on the wrench next to "Video:"
9. Click on Use pipe and write this command: /home/<your_user>/cine_render.sh %
10. In the render dialog, indicate the name of the ‘m2v’ file you want to create, and make sure the Insertion strategy is "Create new resources only"
11. Click OK, and OK again, to render your ‘m2v’ file. That file will contain video only. Watch the progress bar in the main window’s lower right corner.
12. When the m2v file is rendered, open the rendering window again, select Render audio tracks and deselect Render video tracks, and render an AC3 file at 224kbits (example: your_audio_file.ac3)
13. Finally, combine video and audio with this command:

mplex -f 8 your_video_file.m2v your_audio_file.ac3 -o video_audio_file.mpeg

If you obtain errors while using mplex, it is recommended to keep the medium bitrate achieved (that is displayed when mplexing the audio and video files) around 10% lower than the bitrate defined with the ‘-b’ setting.

If your material is noisy (Hi8 analog material for example), increase the quantizer (‘-q’ option, see below), and consider adding some mjpegtools filters to the command line written in ‘~/cine_render.sh’ (examples at the end of this section).

You can modify the mpeg2enc parameters if you want to. Some details about the settings:

‘-b 8600’ : This is the maximum bitrate of your ‘m2v’ file (it does not include the audio bitrate). We recommend you do not increase that value, or you could get errors when mplexing the video and the audio.

‘-q 6’ : This is the quantizer setting. If you reduce it (do not go below 3), the quality increases. But the bitrate will increase.

‘-K tmpgenc’ : invokes the TMPGEnc matrices. It reduces the average bitrate by about 10% compared to the default tables. For very-high quality material, you can try removing this option.

Look at the mpeg2enc manpage for the other options. For example, those commands added at the beginning of the command line in ‘cine_render.sh’ remove the black borders around a Hi8 video:

yuvscaler -v 0 -I ACTIVE_700x560+8+8 | y4mshift -n 2 |

yuvdenoise and yuvmedianfilter can help removing noise. Example:

yuvdenoise -F | yuvmedianfilter -T 3 |

y4mshift and y4mscaler can also be used to remove the noisy borders around the video. Denoising is a complex task, and the options given above are just an example. Please read the mjpegtools’ manual and subscribe to its mailing-list for more information.
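Steps 1 to 3 above can be done in one go from a shell. This is only a convenience sketch; the mpeg2enc flags mirror the ones given in the text and mjpegtools must be installed for the script to work at render time:

```shell
# Sketch: create the pipe helper script from steps 1-3 in one command.
cat > "$HOME/cine_render.sh" <<'EOF'
#!/bin/bash
mpeg2enc -v 0 -K tmpgenc -r 16 -4 1 -2 1 -D 10 -E 10 -g 15 -G 15 \
 -q 6 -b 8600 -f 8 -o $1
EOF
chmod 755 "$HOME/cine_render.sh"
echo "wrote $HOME/cine_render.sh"
```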

20.9.1.2 yuv4mpeg pipe through ffmpeg

1. Select File->Render, or press SHIFT-R. The render dialog pops up.
2. In the render dialog you have the choice to render 1. the entire project, 2. the highlighted selection, or 3. the region from In-point "[" to Out-point "]".
3. Make sure the Insertion strategy is "Create new resources only".
4. Select Render audio tracks and deselect Render video tracks.
5. Select the AC3 audio output file format.
6. Specify the audio output file name and path (example: your-movie.ac3).
7. Click on the wrench next to "Audio:". A new dialog "Cinelerra: Audio Compression" pops up.
8. Set the bitrate to 128 kbps (or leave it there). Click OK to close the second window.
9. Click OK in the render dialog; the dialog disappears and audio is rendered. Rendering audio is much faster than rendering video, but might still take some seconds.
10. Open the rendering window again: press SHIFT-R. The render dialog pops up again.
11. Specify the video output file name and path (example: your-movie.m2v).
12. Deselect Render audio tracks and select Render video tracks.
13. Select the YUV4MPEG Stream file format.
14. Click on the wrench next to "Video:". A new dialog window "Cinelerra: YUV4MPEG stream" pops up.
15. Select "Use Pipe:". The first textbox should already contain the output filename and path you had specified in the render dialog.
16. Fill the following command line into the second textbox:

ffmpeg -f yuv4mpegpipe -i - -y -target dvd -flags +ilme+ildct %

17. Click OK in the yuv4mpeg dialog and in the render dialog to render the video output.

Note on ffmpeg command line options: -i tells ffmpeg to read from standard input (in our pipe, this means from Cinelerra’s render stream). The -y option allows to overwrite existing target files (of course, it is safer to omit this, but then you must make sure to rename or delete previous results each time you want to render a new version). The +ilme+ildct flags are for proper interlacing, bottom fields first, tested with PAL footage. Some Cinelerra versions suggest a similar command line in the ffmpeg pipe presets for DVD, however with erroneous syntax of the interlacing flags or without the flags.

The resulting .m2v can be further processed together with the .ac3 audio with the following shell command, producing a dvd-compatible mpeg stream:

ffmpeg -i your-movie.ac3 -i your-movie.m2v -target dvd -flags +ilme+ildct your-movie.mpg

(Yes, the stream is sent through ffmpeg a second time.) Before proceeding to put your rendered mpeg2 data on DVD, you might want to watch and check your-movie.mpg in mplayer or xine/kaffeine.

20.9.2 Making a DVD menu

A DVD menu is composed of: a background (still image or video), buttons, and sound/music. You can build a menu with a GUI such as qdvdauthor, dvdstyler, dvdwizard or tovid. However, using those GUIs is not perfect for the moment, since they are bugged or limited. QDVDAuthor contained a lot of bugs sometime ago, but its author fixed some of them recently, which makes QDVDAuthor more usable. If you prefer to use a GUI, we recommend you to try tovid:

http://tovid.wikia.com/wiki/Main_Page

The method we explain below is more complicated than using a GUI, however it: produces DVDs playable on all standalone players; is not subject to bugs; and will save you a lot of time, since all you have to do to author a new DVD is to modify text files.

Here are the steps needed to create your DVD menu:
- create the menu background with cinelerra
- add the buttons by creating PNG images
- combine the menu and the buttons with spumux
We suppose that you want to create a menu with an animated background, and some buttons displayed above it. Launch Cinelerra and create a project containing what you want to be the background of the menu. You can add a music if you wish. Pay attention to the fact that this menu will play in loop. Render that video into m2v and ac3 using the cine_render.sh method explained above, then combine the audio and video with mplex as you would do with any "normal" video. You obtain a mpeg2 file containing the menu background.

To draw the buttons, you have two possibilities: display them in Cinelerra, or add them later on from PNG images "added" to the mpeg2 menu file. If you draw the buttons in Cinelerra, you will be able to make animated buttons. If you do not draw the buttons in Cinelerra, you will put them in with spumux; this is the simplest method, but you won’t be able to display animated buttons.

We have to use spumux to define each button position in that mpeg2 file. Spumux is a command line utility which takes 2 arguments: an XML file explaining where the buttons are, and the mpeg2 file name (the one you rendered for the menu). Here is a spumux example XML file:

<subpictures>
  <stream>
    <spu start="00:00:00.0"
         image="buttons_normal.png"
         highlight="buttons_highlight.png"
         select="buttons_select.png">
      <button name="1" x0="94" y0="234" x1="253" y1="278" down="2" right="4" />
      <button name="2" x0="63" y0="287" x1="379" y1="331" up="1" down="3" right="5" />
    </spu>
  </stream>
</subpictures>

image="buttons_normal.png"
     This png image contains the buttons as they should appear when they are not selected nor highlighted. If you already made the buttons in Cinelerra, you have to specify empty (100% transparent) PNG images here.
highlight="buttons_highlight.png"
     This png image contains the buttons in their highlighted state.
select="buttons_select.png"
     This png image contains the buttons in their selected state.

The PNG images used in spumux must contain an alpha channel (ie support transparency) and be in 4 indexed colors. You can easily convert an image to 4 indexed colors using Gimp.

There is one line per button. Each line contains the button coordinates; for a button having a rectangular shape: x0, y0 is the upper left corner and x1, y1 the bottom right corner.

You also have to set which button to move to when using the up, down, left and right buttons of the DVD remote. Here is an example:

<button name="3" ... up="1" down="5" left="2" right="4" />

When button 3 is selected, if the "Up" key is pressed on the remote then button 1 will be highlighted; if the "Right" key is pressed on the remote, then button 4 will be highlighted. You should really pay a lot of attention to the coordinates.

When you have finished editing your spumux XML file, you have to type this command:

spumux menu.xml < menu.mpeg > menu_with_buttons.mpeg

That will make a ‘menu_with_buttons.mpeg’ file. It is an mpeg2 file with buttons.

20.9.3 Authoring a DVD

After having rendered your video files to mpeg2, and having prepared a menu with spumux, you need to "author" the DVD with dvdauthor, which is another command line application. dvdauthor uses XML files to describe the DVD structure. To help you start using dvdauthor, here are some example XML files you can copy and paste.

This is a very simple dvdauthor XML file. You need to create it in a text editor and save it as ‘simple_example.xml’ in the same folder as your ‘/the/mpeg/file.mpeg’ mpeg2 video file. Replace example filenames and paths with the ones right for your project. There are no menus, so the video file ‘/the/mpeg/file.mpeg’ will be played as soon as you insert the DVD in the player.

<dvdauthor dest="/path/to/the/folder/which/will/contain/the/dvd">
  <vmgm />
  <titleset>
    <titles>
      <pgc>
        <vob file="/the/mpeg/file.mpeg" />
        <post>
          jump chapter 1;
        </post>
      </pgc>
    </titles>
  </titleset>
</dvdauthor>

The command in the <post> tag means the video should be played in a loop: when the DVD player reaches the end of the video, it will jump to the first chapter of the video (which dvdauthor assumes to be the beginning of the video, since chapters haven’t been defined). To make the video play only once, without jumping from the end to the beginning, remove the following lines from your XML file:

<post>
  jump chapter 1;
</post>

You should really pay a lot of attention to the .xml file syntax, since it is very rigorous. The risk is the DVD being readable on some standalone players, but not on all of them. When you have finished your XML file, go to the folder that contains the video and the XML file and type the following command:

dvdauthor -x simple_example.xml

Now, let’s have a look at a more complex example: when the DVD is inserted, a menu is displayed and you can choose to play any of 4 videos.
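The spumux and dvdauthor steps above can be collected into one small script. This is a sketch only; the file names match the examples in the text, both tools must be installed, and the script simply skips the work when they are not:

```shell
# Sketch: add menu buttons, then author the DVD structure.
if command -v spumux >/dev/null 2>&1 && command -v dvdauthor >/dev/null 2>&1; then
    spumux menu.xml < menu.mpeg > menu_with_buttons.mpeg
    dvdauthor -x simple_example.xml
    status=done
else
    status=skipped   # spumux/dvdauthor not installed
fi
echo "$status"
```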

20.9.4 Burning a DVD

When you have finished authoring the DVD, you will find in the destination folder the following directories: ‘AUDIO_TS’ and ‘VIDEO_TS’. To test your DVD before burning it, cd into the folder containing them and type:

xine dvd:‘pwd‘

If your DVD plays fine on your computer, it is time to burn it. When you are in the folder containing ‘AUDIO_TS’ and ‘VIDEO_TS’, type this command (adjusting for your dvd burner device, eg /dev/dvdrw):

nice -n -20 growisofs -dvd-compat -speed=2 -Z /dev/dvd -dvd-video -V VIDEO .

We recommend you do not burn at a speed higher than 4x. Use good quality DVD-R only. To test your DVD on a standalone player without wasting several DVD-R, you can burn on DVD-RW. First, format your DVD-RW using this command:

dvd+rw-format -lead-out /dev/dvd

Then, burn the DVD-RW using the commands above.

If you have a lot of copies to do, you can first make an .iso master in the parent folder using this command:

nice -n -20 mkisofs -dvd-video -V VIDEO -o ../dvd.iso ./ && eject /dev/dvd

This ‘.iso’ file can then be burnt using this command:

nice -n -20 growisofs -dvd-compat -speed=2 -Z /dev/dvd=../dvd.iso && eject /dev/cdrom

20.10 Using background rendering
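Before burning, it is worth checking that the image fits on a single-layer disc, which holds about 4.7 GB (roughly 4482 MiB). The size below is an example value standing in for the real ‘dvd.iso’:

```shell
# Sketch: verify an image size against the single-layer DVD capacity.
iso_mb=4200          # example: size of dvd.iso in MiB
limit_mb=4482        # ~4.7 GB single-layer DVD
if [ "$iso_mb" -le "$limit_mb" ]; then
    fits=yes
else
    fits=no
fi
echo "fits=$fits"
```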
Background rendering allows impossibly slow effects to play back in real-time shortly after the effect is pasted in the timeline. It continuously renders temporary output. When renderfarm is enabled, background rendering uses the renderfarm continuously. This way, any size video can be seen in real-time merely by creating a fast enough network with enough nodes.

Background rendering is enabled in Settings->Preferences->Performance. It has one interactive function: Settings->Set background render. This sets the point where background rendering begins to where the in point is. If any video exists, a red bar appears in the timeline showing what has been background rendered. It is often useful to insert an effect or a transition and then select Settings->Set background render right before the effect, to preview it at full framerates.


Now on the timeline use Settings->Format to set a YUV colorspace.
21. Capture it using MJPEG or uncompressed Component Video if possible.5 seconds and delay either the front left or front right by 0. Pan the new track to orient the signal in the front speakers.
21 Tips

In this section, you will find ways to solve common problems using Cinelerra. The following sections are arranged in order of the problems and of the tools used to solve them.

21.1 Encoding into Dolby Pro Logic

Dolby Pro Logic is an easy way to output 6 channel audio from a 2-channel soundcard, with degraded but useful results. Rudimentary Dolby Pro Logic encoding can be achieved with clever usage of the effects.

First, create the front left and right channels. Create 2 audio tracks, each carrying either the left or the right channel. Pan the left channel to the left and the right channel to the right with the pan control.

Next, create the rear left and right channels. Create another 2 audio tracks as above: the left channel panned left and the right channel panned right. Then apply invert audio to both new channels and the signals will come out of the rear speakers.

Next, create the center channel by creating a single audio track with monaural audio from a different source. Center it with the pan control and the signal will come out of the center speaker.

If a copy of the signal in the back speakers is desired in any single front speaker, the signal in the back speakers must be delayed by at least 0.05 seconds and a single new track should be created. If the same signal is desired in all the speakers except the center speaker, create a new track and delay the back speakers by 0.2 seconds.

If you want to hear something from the subwoofer, create a new track, select a range, drop a synthesizer effect, and set the frequency below 60 Hz. The subwoofer merely plays anything below 60 Hz or so.

Other tricks you can perform to separate the speakers are parametric equalization, to play only selected ranges of frequencies through different speakers, and lowpass filtering, to play signals through the subwoofer.

21.2 Cleaning analog TV

Unless you live in a rich nation like China or are a terrorist, you probably record analog TV more than you record digital TV. The picture quality on analog TV is horrible, but you can do things in Cinelerra to make it look more like it did in the studio.

First, when capturing the video, capture it in the highest resolution possible. For Europeans it is 720x576 and for North Americans it is 720x480. Do not bother adjusting the brightness or contrast in the recording monitor, although maxing out the color is useful. If the higher quality capture modes are too demanding, then capture it using JPEG. RGB should be a last resort.

Drop a Downsample effect on the footage. Set it for

Horizontal:        2
Horizontal offset: 0
Vertical:          2
Vertical offset:   0
red                x
green              x
blue
alpha

Use the camera tool to shift the picture up or down a line to remove the most color interference from the image.

If you have vertical blanking information or crawls which constantly change in each frame, block them out with the Mask tool. This improves compression ratios. This is about all you can do without destroying more data than you would naturally lose in compression. The more invasive cleaning techniques involve deinterlacing.
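The 2x2 downsample above effectively halves the working resolution of each selected channel. A quick sanity check of the numbers (a sketch; the two capture sizes are the PAL and NTSC values quoted above):

```shell
# Effective resolution after a 2x2 downsample of full-size captures.
for SIZE in 720x576 720x480; do
  W=${SIZE%x*}
  H=${SIZE#*x}
  echo "$SIZE -> $(( W / 2 ))x$(( H / 2 ))"
done
```

This prints 360x288 for PAL and 360x240 for NTSC, which is roughly the real luminance detail an analog broadcast carries anyway.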
21.3 Defeating interlacing

Interlacing is done on most video sources because it costs too much to build progressive scanning cameras and progressive scanning CRTs. Many a consumer has been disappointed to spend 5 paychecks on a camcorder and discover what horrible jagged images it produces on a computer monitor. As for progressive scanning camcorders, forget it. Cost factors are probably going to keep progressive scanning cameras from ever equaling the spatial resolution of interlaced cameras. Interlacing is here to stay. That is why they made deinterlacing effects in Cinelerra.

We do not believe there has ever been a perfect deinterlacing effect. They are either irreversible or do not work. Cinelerra cuts down the middle by providing deinterlacing tools that are irreversible sometimes and do not work sometimes, but are neither one nor the other.

Line doubling
This one is done by the Deinterlace effect when set to Odd lines or Even lines. When applied to a track it reduces the vertical resolution by 1/2 and gives you progressive frames with stairstepping. This is only useful when followed by a scale effect which reduces the image to half its size.

Line averaging
The Deinterlace effect when set to Average even lines or Average odd lines does exactly what line doubling does, except instead of making straight copies of the lines it makes averages of the lines. This is actually useful for all scaling. There is an option for adaptive line averaging which selects which lines to line average and which lines to leave interlaced, based on the difference between the lines. It does not work.

Inverse telecine
This is the most effective deinterlacing tool when the footage is an NTSC TV broadcast of a film. See Section 14.4.28 [Inverse telecine], page 102.

Time base correction
The first three tools either destroy footage irreversibly or do not work at times. Time base correction is last because it is the perfect deinterlacing tool. It leaves the footage intact. It does not reduce resolution, perceptually at least. It does not cause jittery timing.

The Frames to Fields effect converts each frame to two frames, so it must be used on a timeline whose project frame rate is twice the footage's frame rate. In the first frame it puts a line-averaged copy of the even lines. In the second frame it puts a line-averaged copy of the odd lines. When played back at full framerates it gives the illusion of progressive video with no loss of detail. Be aware that Frames to Fields inputs frames at half the framerate of the project, and effects before Frames to Fields process at the reduced framerate. Best of all, this effect can be reversed with the Fields to frames effect, which combines two frames of footage back into the one original interlaced frame at half the framerate. Unfortunately, the output of Frames to Fields cannot be compressed as efficiently as the original because it introduces vertical twitter and a super high framerate.

Interlaced 29.97 fps footage can be made to look like film by applying Frames to Fields and then reducing the project frame rate of the resulting 59.94 fps footage to 23.97 fps. This produces no timing jitter, and the occasional odd field gives the illusion of more detail than there would be if you just line averaged the original.

HDTV exceptions
1920x1080 HDTV is encoded a special way. If it is a broadcast of original HDTV film, an inverse telecine works fine. If it is a rebroadcast of a 720x480 source, you need to use a time base and line doubling algorithm to deinterlace it. See Section [1080 to 480], page 94.
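The rounded rates quoted above come from the exact NTSC ratios; a quick check of the arithmetic (a sketch using awk):

```shell
# Exact NTSC ratios behind the rounded figures in the text:
# 30000/1001 fps interlaced doubles to 60000/1001 after
# Frames to Fields; the film-look project rate is 24000/1001.
awk 'BEGIN {
  printf "interlaced: %.2f fps\n", 30000 / 1001
  printf "fields:     %.2f fps\n", 60000 / 1001
  printf "film:       %.3f fps\n", 24000 / 1001
}'
```

This prints 29.97, 59.94 and 23.976, matching the figures used in this section.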
21. but the biggest problem with that is you will most often want to keep the field based output and the 24 fps output for posterity. Set the following parameters: Angle: 0 Inner radius: 0 Outer radius: 40 Inner color: blue 100% alpha
. Set the project framerate to 24.1 [1080 to 480]. Set project framerate to twice the video framerate. any editing in the doubled frame rate may now screw up the field order. The new track should now display more filmish and sharper images than the original footage. since this procedure can degrade high quality video just as easily as it improves low quality video. A non-realtime effect would require all that processing just for the 24 fps copy. 5. Interlaced 29. Best of all. If the wrong field is first. you can get pretty close for the money. Import the video back to a new track.94 fps footage to 23. 3. an inverse telecine works fine. Still debating that one. Lately the best thing you can do for dirt cheap consumer camcorder video is to turn it into progressive 24 fps output. Set it to sharpness: 25.Chapter 21: Tips
157
video with no loss of detail. This is what the gradient effect is for.4 Making video look like film
Video sweetening is constantly getting better. you need to use a time base and line doubling algorithm to deinterlace it. This produces no timing jitter and the occasional odd field gives the illusion of more detail than there would be if you just line averaged the original.5 Clearing out haze
You probably photograph a lot of haze and never see blue sky. See Section 14. this effect can be reversed with the Fields to frames effect.
21.6 Making a ringtone

This is how we made ringtones for the low end Motorola V180's, and it will probably work with any new phone. Go to File->Load files... and load a sound file with Insertion strategy: Replace current project. Go to Settings->Format and change Channels to 1 and Samplerate to 16000 or 22050. Either highlight a region of the timeline or set in/out points to use for the ringtone.

To improve sound quality on the cell phone, you need the maximum amplitude in as many parts of the sound as possible. Right click on track Audio 1 and select Attach effect. Highlight the Compressor effect and hit Attach in the attachment popup. Right click on track Audio 2 and select Attach effect. Highlight Audio 1: Compressor and hit Attach. Make sure the insertion point or highlighted area is in the region with the Compressor effect. Click the Audio 1 Compressor's magnifying glass to bring up the compressor GUI. Set the following parameters:

Reaction secs: -0.1
Decay secs:    0.1
Trigger Type:  Total
Trigger:       0
Smooth only:   No

Click Clear to clear the graph. Click anywhere in the grid area and drag a new point to 0 Output and -50 Input.

Go to File->Render. Specify the name of an mp3 file to output to. Set the file format to MPEG Audio for Audio, set Layer to III, and set Kbits per second to 24 or 32. Check Render audio tracks and uncheck Render video tracks. Hit OK to render the file.

The resulting ‘.mp3’ file must be uploaded to a web server. Then, the phone's web browser must download the ‘.mp3’ file directly from the URL. There also may be a size limit on the file.
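Because of the possible size limit on the phone, it is worth estimating the file size before uploading. A sketch of the arithmetic (the 15 second duration is an assumed example; 24 kbit/s is the Layer III rate suggested above):

```shell
# Approximate .mp3 ringtone size: bitrate (kbit/s) x duration (s) / 8.
KBITS=24
DURATION=15
echo "approx size: $(( KBITS * DURATION / 8 )) KB"
```

At 24 kbit/s a 15 second clip is roughly 45 KB, comfortably under the limits of most phones of that era.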

21.7 Time stretching audio

It may appear that time stretching audio is a matter of selecting a region of the audio tracks, going to Audio->Render Effect, and applying Time Stretch. In actuality there are 3 audio effects for time stretching: Time Stretch, Resample, and the Asset info dialog.

Time Stretch applies a fast Fourier transform to try to change the duration without changing the pitch, but this introduces windowing artifacts to the audio. It is only useful for large changes in time, because obvious changes in duration make the windowing artifacts less obtrusive.

For smaller changes in duration, in the range of 5%, Resample should be used. This changes the pitch of the audio, but small enough changes are not noticeable. Resample does not introduce any windowing artifacts, so this is most useful for slight duration changes where the listener is not supposed to know what is going on.

Another way to change duration slightly is to go to the Resources window, highlight the media folder, right click on an audio file, and click on Info. Adjust the sample rate in the Info dialog to adjust the duration. This method also requires left clicking on the right boundary of the audio tracks and dragging left or right to correspond to the length changes.

21.8 Video screen captures

We explain here how to record video screen captures and edit them in Cinelerra. First, you have to record the video with xvidcap. You can find that utility in most distributions' repositories, or download it here:

http://xvidcap.sourceforge.net

Make sure you properly set the video format of your project (size, frame-rate, aspect-ratio). Then, open a shell window and capture the screen:

xvidcap --fps 10 --cap_geometry 1280x1024+0+0 --file "file1.mpeg" --gui no --audio no

Do not forget to change the geometry option according to your screen size. Then, convert the ‘file1.mpeg’ file you obtained into an mpeg file suitable for Cinelerra:

ffmpeg -r 10 -i file1.mpeg -s 1280x1024 -b 3000 -aspect 1.33 -r 25 file2.mpeg

You can now load that file into Cinelerra. When you have finished editing your video, you have to render it. Render it as a jpeg sequence. It is recommended to write the jpeg files in a new folder, since there probably will be a lot of files created. Then, cd into that folder and encode the jpeg files using those commands:

First pass:

mencoder "mf://*.jpg" -mf fps=25 -oac pcm -sws 2 -vf scale=\
1280:1024,hqdn3d=2:1:2 -ovc lavc -lavcopts vcodec=mpeg4:\
vbitrate=800:aspect=4/3:vpass=1 -ofps 10 -of avi -o /dev/null \
-ffourcc DIVX

Second pass:

mencoder "mf://*.jpg" -mf fps=25 -oac pcm -sws 2 -vf \
scale=1280:1024,hqdn3d=2:1:2 -ovc lavc -lavcopts \
vcodec=mpeg4:vbitrate=800:aspect=4/3:vpass=2 -ofps 10 -of avi \
-o ./rendered_file.avi -ffourcc DIVX

You can also render the video to mpeg4 directly from Cinelerra if you wish to.

21.9 Improving performance

For the moment, GNU/Linux is not an excellent desktop. It is more of a server. Most of what you will find on modern GNU/Linux distributions are faceless, network-only programs strategically designed to counteract one Microsoft server feature or another and not to perform very well at user interaction. There are a number of parameters on GNU/Linux which ordinary people can adjust to make it behave more like a thoroughbred in desktop usage.
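Rendering as a jpeg sequence produces one file per frame, so it is worth estimating the file count before choosing a folder. A sketch (10 fps matches the xvidcap rate used above; the 5 minute duration is an assumed example):

```shell
# Number of jpeg files produced by a jpeg-sequence render.
FPS=10
MINUTES=5
echo "$(( FPS * MINUTES * 60 )) files"
```

A 5 minute capture at 10 fps already yields 3000 files, which is why a dedicated folder is recommended.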
21.9.1 Disabling swap space

On systems with lots of memory, Cinelerra sometimes runs better without a swap space. If you have 4 GB of RAM, you are probably better off without a swap space. If you have 512 MB of RAM, you should keep the swap. If you want to do recording, you should probably disable swap space in any case. There is a reason for this: GNU/Linux only allows half the available memory to be used. Beyond that, it starts searching for free pages to swap, in order to cache more disk access. In a 4 GB system, you start waiting for page swaps after using only 2 GB.

The question then is how to make GNU/Linux run without a swap space. Theoretically it should be a matter of running

swapoff -a

Unfortunately, without a swap space the kswapd tasklet normally spins at 100%. To eliminate this problem, edit ‘linux/mm/vmscan.c’. In this file, put a line saying return 0; before it says

/*
 * Kswapd main loop.
 */

Then recompile the kernel.

21.9.2 Enlarging sound buffers

In order to improve realtime performance, the audio buffers for all the GNU/Linux sound drivers were limited from 128k to 64k. For recording audio and video simultaneously, and for most audio recording, this causes dropouts. Application of low latency and preemptible kernel patches makes it possible to record more audio recordings, but it does not improve recording video with audio. This is where you need to hack the kernel.

To see if your sound buffers are suitable, run the included soundtest program with nothing playing or recording. This allocates the largest possible buffers and displays them. If the Total bytes available is under 131072, you need to see about getting the buffers enlarged in the driver. While many drivers differ, we have a hack for at least one driver. This only applies to the OSS version of the Soundblaster Live driver. Since every sound card and every sound driver derivative has a different implementation, you will need to do some searching for other sound cards.

Edit ‘linux/drivers/sound/emu10k1/audio.c’. Where it says

if (bufsize >= 0x10000)

change it to:

if (bufsize > 0x40000)

In ‘linux/drivers/sound/emu10k1/cardwi.c’, where it says

for (i = 0; i < 8; i++)
    for (j = 0; j < 4; j++)

change it to:

for (i = 0; i < 16; i++)
    for (j = 0; j < 4; j++)

In ‘linux/drivers/sound/emu10k1/hwaccess.h’, change

#define MAXBUFSIZE 65536

to

#define MAXBUFSIZE 262144

Finally, in ‘linux/drivers/sound/emu10k1/cardwi.h’, change
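To see why 64k buffers cause dropouts, convert the buffer size into seconds of audio. A sketch of the arithmetic (48 kHz, 16-bit stereo is an assumed, common format: 4 bytes per frame):

```shell
# Seconds of audio held by a buffer:
# bytes / (rate * channels * bytes-per-sample)
awk 'BEGIN {
  rate = 48000
  frame = 2 * 2   # stereo, 16 bit
  printf "65536 bytes  = %.2f s\n", 65536 / (rate * frame)
  printf "262144 bytes = %.2f s\n", 262144 / (rate * frame)
}'
```

The stock 64k buffer holds only about a third of a second of audio, while the enlarged 256k buffer holds well over a second, giving the system far more slack before a dropout.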
#define WAVEIN_MAXBUFSIZE 65536

to

#define WAVEIN_MAXBUFSIZE 262144

Then recompile the kernel modules.

21.9.3 Freeing more shared memory

The GNU/Linux kernel only allows 32MB of shared memory to be allocated by default, which is too low. This needs to be increased to do anything useful. When launched, Cinelerra may remind you of that with the following error message:

The following errors occurred:
void MWindow::init_shm0: WARNING:/proc/sys/kernel/shmmax is 0x2000000.

Before running Cinelerra, do the following as root:

echo "0x7fffffff" > /proc/sys/kernel/shmmax

For a permanent change, add to the ‘/etc/sysctl.conf’ file the following line:

kernel/shmmax=0x7fffffff

or if you prefer:

kernel.shmmax = 2147483647

The first time, to avoid restarting your computer, use the following command as root:

sysctl -p

21.9.4 Speeding up the hard drive

This is a very popular command sequence among GNU/Linux gurus, which is not done by default on GNU/Linux distributions:

hdparm -c3 -d1 -u1 -k1 /dev/hda

‘-c3’ puts the hard drive into 32 bit I/O with sync. This normally does not work due to inept kernel support for most IDE controllers. If you get lost interrupt or SeekComplete errors, quickly use ‘-c0’ instead of ‘-c3’ in your command.

‘-d1’ enables DMA, of course. This frees up the CPU partially during data transfers.

‘-u1’ allows multiple interrupts to be handled during hard drive transactions. This frees up even more CPU time.

‘-k1’ prevents GNU/Linux from resetting your settings in case of a glitch.

21.9.5 Disabling cron

GNU/Linux runs some daily operations, like compressing man pages. These may be acceptable background tasks while compiling or word processing, but not while playing video. Disable these operations by editing ‘/etc/rc.d/init.d/anacron’. Put exit before the first line not beginning in #.

In ‘/etc/rc.d/init.d/crond’ put exit before the first line not beginning in #. Then reboot. You cannot use the at command anymore, but who uses that command anyways?

21.9.6 Reducing USB mouse sensitivity

Gamers like high resolution mice, but this can be painful for precisely positioning the mouse on a timeline or video screen. XFree86 once allowed you to reduce PS/2 mouse sensitivity using commands like xset m 1 1, but you are out of luck with USB mice or KVM's. We have a way to reduce USB mouse sensitivity, but it requires editing the kernel source code. Even though USB mice have been supported for years, the kernel source code for
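The shared memory setting from Section 21.9.3 can be double-checked by converting the hex constant and comparing it against the live kernel value. A sketch:

```shell
# 0x7fffffff written to shmmax equals this decimal value:
printf '%d\n' 0x7fffffff
# Compare with the live setting (requires /proc):
# cat /proc/sys/kernel/shmmax
```

If the two numbers match, the sysctl change took effect and Cinelerra should stop printing the shmmax warning.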
How about those window keys which no GNU/Linux distribution even thinks to use? You can make the window keys provide ALT functionality by editing ‘/etc/X11/Xmodmap’. Append the following to it:

keycode 115 = Hyper_L
keycode 116 = Hyper_R
add mod4 = Hyper_L
add mod5 = Hyper_R

The actual changes to a window manager to make it recognize window keys for ALT are complex. In FVWM at least, you can edit ‘/etc/X11/fvwm/system.fvwm2rc’ and put

Mouse 0 T A move-and-raise-or-raiselower
#Mouse 0 W M move
Mouse 0 W 4 move
Mouse 0 W 5 move
Mouse 0 F A resize-or-raiselower
Mouse 0 S A resize-or-raiselower

in place of the default section for moving and resizing.

Your best performance is going to be on FVWM. Other window managers seem to slow down video with extra event trapping and are not as efficient in layout.

21.9.8 Speeding up the file system

You will often store video on an expensive, gigantic disk array separate from your boot disk. You will thus have to manually install an EXT filesystem on this disk array, using the mke2fs command. By far the fastest file system is

mke2fs -i 65536 -b 4096 my_device
tune2fs -r0 -c10000 my_device

This has no journaling, reserves as few blocks as possible for filenames, and accesses the largest amount of data per block possible. A slightly slower file system, which is easier to recover after power failures, is

mke2fs -j -i 65536 -b 4096 my_device
tune2fs -r0 -c10000 my_device

This adds a journal, which slows down the writes but makes filesystem checks faster.

21.9.9 Improving Zoran video

Video recorded from the ZORAN inputs is normally unaligned or not completely encoded on the right. This can be slightly compensated by adjusting parameters in the driver sourcecode.

In ‘/usr/src/linux/drivers/media/video/zr36067.c’, the structures defined near line 623 affect alignment. At least for NTSC, the 2.4.20 version of the driver could be improved by changing

static struct tvnorm f60ccir601 = { 858, 720, 57, 788, 525, 480, 16 };

to

static struct tvnorm f60ccir601 = { 858, 720, 57, 788, 525, 480, 17 };

In ‘/usr/src/linux/drivers/media/video/bt819.c’, more structures near line 76 affect alignment and encoding. For NTSC,

{858 - 24, 2, 523, 1, 0x00f8, 0x0000},

could be changed to

{868 - 24, 2, 523, 1, 0x00f8, 0x0000},

Adjusting these parameters may or may not move your picture closer to the center. More of the time, they will cause the driver to lock up before capturing the first frame.
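The -i 65536 option used in Section 21.9.8 sets bytes-per-inode, so the inode count scales with the array size. A sketch of the arithmetic (the 500 GB array size is an assumed example):

```shell
# Inodes created by mke2fs -i 65536 on a 500 GB array.
BYTES_PER_INODE=65536
DISK_GB=500
echo "$(( DISK_GB * 1024 * 1024 * 1024 / BYTES_PER_INODE )) inodes"
```

Even with one inode per 64 KB, a 500 GB array still gets about 8 million inodes, which is plenty for a few thousand large video files while wasting far less space than the default density.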
21.10.3 Creating a new translation

To create a new translation, edit the ‘cinelerra.pot’ file located in ‘po/’ and add the appropriate translated strings. Rename the file to ‘(lang_prefix).po’ and add the language prefix to ‘po/LINGUAS’. Then, run after ./configure:

cd po && make

Finally, submit the diff file to the cinelerra-CV team.

21.11 Panning and zooming still images

Cinelerra's powerful keyframe features allow you to use pan and zoom effects on still pictures.

1. Load and create a clip from a still image as described above. Make the clip 10 seconds long.
2. Activate the automatic generation of keyframes.
3. Using the transport controls, go to the beginning of the clip.
4. Using the compositing camera control, set the clip's initial position.
5. Using the transport controls, move forward a couple of seconds on the clip.
6. Dragging on the compositing camera, move the camera center to a new position further on.
7. Now, rewind to the beginning of the clip and play it.

You can see that the camera smoothly flows from keyframe point to next keyframe point, as Cinelerra automatically adjusts the camera movement in straight lines from point to point.

21.12 HDV 1080i editing using proxy files

Working with high definition video, which typically comes from HDV camcorders, requires a lot of processing power. HDV is in a GOP based format, and a simple cut requires decoding the whole GOP in less than 1/25 s. Even if the system is able to play a single track at full framerate, it is usually not able to play several tracks simultaneously. Thus a simple dissolve transition is slowed down to an unacceptable level.

There is no perfect solution so far, but one of the possibilities is to perform all edits on low resolution files, and use the HDV material only for the final rendering. The workflow presented below was first proposed by Hermann Vosseler.

21.12.1 Overview

The project is created with HDV resolution, e.g. 1440x1080 and 16/9 aspect. For each HDV file a proxy is created with a scale of 0.5. New resources are created with both proxies as well as HDV files. Editing is performed with the proxy files. For HDV rendering, exit Cinelerra, convert the project file with proxychange.py, and reopen the project. After rendering, if further editing is required, the project file can be back-transformed into a proxy version.

21.12.2 Grabbing HDV from camcorder

New versions of dvgrab seem to support HDV. Minimal example:

dvgrab -format mpeg2 myclip

One other possibility is to run the test-mpeg2 command, available with the sources of libiec61883. Use this syntax:

test-mpeg2 > hdv_tape.mpeg

and press Play on the camcorder. You should not run any heavy resource consuming task on your computer, since the lack of caching in test-mpeg2 causes framedrops.
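With a 0.5 scale, the 1440x1080 HDV frames become 720x540 proxies, which is the scale=720:540 target used when the proxies are encoded. A quick check of the numbers (a sketch):

```shell
# Proxy frame size at scale 0.5 for a 1440x1080 HDV project.
W=1440
H=1080
echo "proxy: $(( W / 2 ))x$(( H / 2 ))"
```

Since both dimensions halve, each proxy frame carries only a quarter of the pixels of the HDV original, which is what makes multi-track editing feasible.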
21.12.3 Using TOC and WAV files

Cinelerra works better when editing non-GOP based formats. Try using WAV files for sound, and load HDV MPEG-2 files via their generated toc files. To create toc files, use the following command:

for i in *.mpeg; do mpeg3toc $i `basename $i mpeg`toc; done

21.12.4 Making the proxy files

Proxy files can be converted in many ways and can use any format. To convert your HDV files into I-frame based mjpeg files with 50% scaling, use the following command:

for i in *.mpeg; do mencoder -mc 0 -noskip $i -ovc lavc \
-lavcopts vcodec=mjpeg -vf scale=720:540 -oac pcm \
-o `basename $i mpeg`avi; done

21.12.5 Converting HDV and proxy files

The proxychange.py script converts the project file from HDV to proxies and back. You can download that script here:

http://cvs.cinelerra.org/docs/proxychange.py

HDV -> Proxy (e.g. after rendering, if you want to go back to editing):

./proxychange.py projectfile.xml -from ‘hdv/(\w+)\.mpeg.toc‘ \
-to ‘proxyfiles/\1.avi‘ -scale 0.5

Proxy -> HDV (e.g. for rendering):

./proxychange.py projectfile.xml -from ‘proxyfiles/(\w+)\.avi‘ \
-to ‘hdv/\1.mpeg.toc‘ -scale 2.0

It overwrites the existing project files and creates a copy of the original in ‘projectfile.xml.bak’.

The project XML file is not a perfectly valid XML file. Sometimes the tags are not closed, i.e. <TAG> is not followed by </TAG>. Moreover, ACODEC contains some \001 characters. This must be corrected manually after each Cinelerra "Save". That task is long and fastidious. Edit the file manually or use the following command:

cat temp001.xml | tr -d ‘\001‘ > /tmp/1
mv /tmp/1 temp001.xml

Update: Recent versions of Cinelerra seem to produce valid XML.

21.12.6 Rendering the HDV project

HDV files can be rendered to a YUV4MPEG stream and then encoded to MPEG2 using a modified mjpegtools binary:

mpeg2enc -verbose 0 -aspect 3 -format 3 -frame-rate 3 \
-video-bitrate 25000 -nonvideo-bitrate 384 -force-b-b-p \
-video-buffer 448 -video-norm n -keep-hf -no-constraints \
-sequence-header-every-gop -min-gop-size 6 -max-gop-size 6 -o %

Render the sound as an AC3 file, and multiplex both the video and the audio with mplex.

21.12.7 Other issues

When playing MJPEG files, some problems can occur. The dissolve transition does not work properly in RGBA or YUVA modes, but it works fine in RGB or YUV.

21.13 Adding subtitles

There are two methods available for adding subtitles in a video:
- Use Cinelerra's Titler effect. See Section 14.4.54 [Title], page 117, for information about Cinelerra's titler.
- Add the subtitles with a subtitles editor after having rendered the video.

With the first method, the subtitles are actually incrusted into the image. Keep in mind the synchronization would be lost if you edit your video after having added the subtitles, so adding subtitles should be done after the video editing is finished. If you want your video to be available with subtitles in several languages, you have to render it several times, one for each language. However, that method is also the only one which is compatible with dvdauthor's subtitles feature.

The second method is the one to use if you want your video to be available with subtitles in multiple languages. A subtitle file is a simple plain text file, which contains the text and the time or frame number where each subtitle should be displayed on the screen. Subtitle text files can be displayed by any decent video player. With mplayer, one can use the following syntax:

mplayer -sub <the_subtitles_text_file> <the_video_file>

There are a lot of subtitle editors available on Linux. Most of them are fine for easing translation of subtitles, but are not appropriate to actually add and synchronize new subtitles on a video. Since video creation is what most of us focus on, the task we are mostly interested in is creating subtitles for a video. We highly recommend Subtitleeditor, which is available here:

http://kitone.free.fr/subtitleeditor

Subtitleeditor has the huge advantage of displaying the audio waveform. This feature is really important to precisely synchronize subtitles and talks.

Once the subtitle text file is created, you can:
- Distribute it with your video. People will have to load the appropriate subtitle file in their video player to actually see the subtitles. If you plan to distribute your video over the internet, one video file and several subtitle files is smaller than several video files.
- Use it with dvdauthor, to add the subtitles to a DVD. Read dvdauthor's documentation for more information.
- Incrust the subtitles into the video using mencoder. With this method it is not possible to display the rendered video without subtitles. This command line is an example; adapt the options to your needs:

mencoder -sub <your_subtitle_file> <video_file_without_subtitles> \
-ovc lavc -lavcopts vcodec=mpeg4:vhq:vbitrate=1000 -oac mp3lame \
-lameopts br=256:vol=1 -ffourcc DIVX -o <converted_video.avi>
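As an illustration, here is what such a plain text subtitle file can look like. This is the SubRip (‘.srt’) format, one common format that mplayer and Subtitleeditor both handle; the timings and text below are made up:

```
1
00:00:01,000 --> 00:00:04,000
Hello, and welcome.

2
00:00:05,500 --> 00:00:08,000
Each entry has an index, a time range, and the text to display.
```

Each block is separated by a blank line, and the times are given as hours:minutes:seconds,milliseconds.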

21.14 Creating DVD video from lower quality footage

This section is for those who want to create PAL or NTSC DVD-format videos using lower quality source footage, such as that sourced on the net, or from cheap cameras such as SD-cams, while minimising any further quality losses. Footage sourced online can have framerates as slow as 8 fps and frame sizes as small as 320x240. Typically, SD cameras produce progressive footage at framerates of 10-30 fps at frame sizes like 640x480. This section outlines a recipe for making the most of this limited quality footage. (The sad thing is that this quality loss may not even show until you have mastered your DVD and are playing it to others - embarrassing.) With care, you should be able to avoid this.

The steps we will follow are:
1. Up-sample the frame size
2. Convert to yuv4mpeg format
3. Up-sample the framerate with motion interpolation
4. Import into Cinelerra without loss
5. Apply effects as needed, such as colour corrections, zooms etc.
6. Interlace appropriately before DVD export

This technique requires that you have ffmpeg, mjpegtools and yuvmotionfps installed. You will likely already have ffmpeg and mjpegtools installed on your system. If not, you can get them from your distro feeds. But you will need to get yourself a copy of yuvmotionfps from:

http://jcornet.free.fr/linux/yuvmotionfps.html

yuvmotionfps is a great "poor man's" free/opensource equivalent of the 'Twixtor' plugin on Adobe Premiere.

Note too that we are assuming you want to create a PAL-DVD project, which is 25 fps interlaced, 720x576 framesize. Convert the figures below to the NTSC framerate and framesize if you are creating an NTSC-DVD project.

Assume you have your source footage in the file ‘myfootage.avi’. We will perform steps 1-3 with 2 shell commands. The shell command to separate out the audio is:

ffmpeg -i myfootage.avi -f wav myfootage.wav

The shell command to separate out the video, and upsample the framesize and framerate, is:

ffmpeg -i myfootage.avi -s 720x576 -f yuv4mpegpipe -vcodec pgmyuv \
- | yuvmotionfps -r 50:1 > myfootage.yuv

After executing both these commands, you should have separated audio and video files ready for Cinelerra, as temporary ‘.yuv’ and ‘.wav’ files.

Now, make sure that in your Cinelerra project options, you have set the framerate to 50 fps. This is crucial, otherwise you will get a quality loss and jerky motion after rendering. Import your new separated and converted video and audio files into Cinelerra. Step through the frames, and verify that you see motion change with every frame. If you've got this, then you're on track.

Your footage is now sitting within Cinelerra as 50 fps progressive. Apply effects as needed, such as colour corrections, zooms etc. Then, when you're ready to render, select the entire duration of your video and add one last effect to your video: the Fields to frames effect. Make sure this effect sits at the bottom of your effect stack. With your project framerate at 50 fps, this effect will correctly convert it to 25 fps interlaced. In the Fields to frames effect, I'd suggest setting Bottom fields first initially, and changing this later if it doesn't play properly on your DVD player.

To render, I'd suggest you use the recipe on the Crazed Mule Productions website:

http://crazedmuleproductions.blogspot.com/2007/06/beginners-guide-to-exporting-video-from.ht

But, contrary to this recipe, leave in the -ilme -ildct options when rendering. Depending on your ffmpeg version, you may need to change this to -flags +ilme+ildct.

Tweaking
Here are some ideas for tweaks, in case you're getting less than perfect results:
- When initially separating audio/video with the ffmpeg command, cut out the frame size upsampling - remove the -s 720x576 option - and perform the size upsampling within Cinelerra using the camera/projector settings.
- If the motion looks weird, try different settings in the yuvmotionfps command - do yuvmotionfps -h and look at the various options.
- Get the alternative version of yuvdeinterlace from http://silicontrip.net/~mark/lavtools/, and put: yuvdeinterlace -i -t | into the yuv4mpegpipe shell command, after the ffmpeg part.
- In the final Fields to frames effect, switch between Top fields first and Bottom fields first.
- Disable the Fields to frames effect in Cinelerra, and delay the reinterlacing to the render stage.

Warning
Before you release your DVD to anyone important, make sure to try it on as many consumer DVD appliances and TVs as possible. Even if it looks good on your DVD player and TV, and those of your friends, there could be some appliances out there that handle it badly. You don't want to release a DVD into the wild - for sale etc. - and have it look like crap on your customer's TV.

Conclusion
With a small amount of experimentation, you should be able to import lower-quality video into Cinelerra, process it, and render out to DVD-quality video that plays on a wide variety of consumer DVD players with good, flicker-free motion, and that looks as good as possible on the greatest possible range of DVD players and TVs. Good luck!

.

22 Troubleshooting

22.1 Reporting bugs

When you notice a bug, the first thing to do is to go to http://bugs.cinelerra.org and check if it has not been already reported. If there is no bug report for the bug you noticed, you can file a bug report. Open an account on http://bugs.cinelerra.org if you do not have one. Then, file the bug report, including the following information:

Revision number of Cinelerra CV. Example: r959
Distribution name and version. Example: Debian SID
Steps to replicate the bug. Example: 1. launch cinelerra 2. open the recording window 3. click on OK 4. Cinelerra crashes

Do not hesitate to attach any file which you think could be useful, such as a screenshot for example. Moreover, if the bug you noticed concerns a problem loading a specific file into Cinelerra-CV, uploading a small sample of such a file on the internet is appreciated. That would allow people fixing bugs to load that file themselves in Cinelerra and look at what happens.

When Cinelerra CV crashes, a debugger output is welcome. That is very important since it really helps people trying to fix bugs. The gdb output is more useful when Cinelerra is compiled with debugging symbols. See Section 2.2 [Compiling with debugging symbols], page 8, for compilation instructions. Run:

gdb cinelerra
run
(You trigger the bug and Cinelerra CV crashes)
thread apply all bt

Then copy all of the information displayed into your bug report.

22.2 Playback does not stop

If playback of audio tracks does not stop on the timeline and keeps going after the end of the video, go to Settings -> Preferences -> Playback and click on the Stop playback locks up checkbox. This checkbox is shown only if you set ALSA as audio driver.

22.3 Buz driver crashes

First, Zoran capture boards must be accessed using the Buz video driver in Preferences->Recording and Preferences->Playback. Some performance tweaks are available in another section. See Section 21.9 [Improving performance], page 159. Then, the Buz driver seems to crash if the number of recording buffers is too high. Make sure Preferences->Recording->Frames to buffer in device is below 10.

22.4 Dragging edit boundaries does not work

Sometimes there will be two edits really close together. The edit boundary selected for dragging may be next to the intended edit if those edits are too small to see at the current zoom level. Zoom in horizontally. If you think you can't drag all the edits starting at the same point on armed tracks, zoom in horizontally to check if they really start at the same point. Sometimes vertical synchronization of edits can be lost just because you did not set the project attributes properly (e.g. PAL/NTSC). Check Settings -> Format.

22.5 Locking up when loading files

The most common reason loading files locks up Cinelerra is because the codec is not supported. Another reason is because Cinelerra is building picons for the Resources window. If you load a large number of images, it needs to decompress every single image to build a picon. Go into settings->preferences->interface and disable Use thumbnails in resource window to skip this process.

22.6 Synchronization lost while recording

If the rate at which frames are captured during recording is much lower than the framerate of the source, the video will accumulate in the recording buffers over time and the audio and video will become well out of sync. Decrease the number of frames to buffer in the device in preferences->recording so the excess frames are dropped instead of buffered.

22.7 Applying gamma followed by blur does not work

The gamma effect uses the pow function while the blur effect uses a number of exp functions in the math library. For some reason, using the pow function breaks later calls to the exp functions in the math library. You need to apply gamma after blur to get it to work.

22.8 Copy/Paste of track selections does not work in the timeline

If you are running the KDE Klipper application, either disable it, or right-click its taskbar icon, select Configure Klipper and ensure Prevent empty clipboard is not selected.

22.9 Cinelerra often crashes

Do a clean install. Be sure that you do not have libraries from previous installations:

rm -f /usr/local/lib/libguicast*
rm -f /usr/lib/libguicast*
rm -f /usr/local/lib/libquicktimehv*
rm -f /usr/lib/libquicktimehv*
rm -f /usr/local/lib/libmpeg3hv*
rm -f /usr/lib/libmpeg3hv*

Then try to delete the '$HOME/.bcast/' directory.

22.10 Theme Blond not found error

If the following error message appears:

MWindow::init_theme: Theme Blond not found.
Aborted.

then you should check for the file 'defaulttheme.*' in '/usr/lib/cinelerra' or '/usr/local/lib/cinelerra'. If it does not exist, you need to install the plugins again. Look into '$HOME/.bcast/Cinelerra_rc' and find THEME; it should be => THEME Blond. Delete your '$HOME/.bcast/' directory too.
23 Plugin authoring

The plugin API in Cinelerra dates back to 1997, before LADSPA and before VST became popular. It is fundamentally the same as it was in 1997, with minor modifications to handle keyframes and GUI feedback. The GUI is not abstracted from the programmer. This allows the programmer to use whatever toolkit they want and allows more flexibility in appearance, but it costs more. There are several types of plugins, each with a common procedure of implementation and specific changes for that particular type. The easiest way to implement a plugin is to take the simplest existing one out of the group and rename the symbols.

23.1 Introducing the pull method

Originally plugins were designed with the push method: a source pushes data to a plugin, the plugin does math operations on it, and the plugin pushes it to a destination. For 6 years this was the way all realtime plugins were driven internally, but it did not allow you to reduce the rate of playback in realtime. The push method is intuitive and simple, but this is not the way plugins are processed internally anymore.

The latest evolution in Cinelerra's plugin design is the pull method. The rendering pipeline starts at the final output and every step in the rendering chain involves requesting data from the previous step; the final steps in the rendering pipeline are reading the data from disk. When the rendering pipeline eventually requests data from a plugin chain, each plugin requests data from the plugin before it. This is less intuitive than the push method but is much more powerful, and for some operations blistering fast. Realtime plugins written using the pull method can not only change the rate data is presented to the viewer but also the direction of playback. The pull method also allows plugins to take in data at a higher rate than they send it out.

While plugins can still be designed as if they are pushing data, the pull method requires plugins to know more about the data than they needed to under the push method. Plugins need to know what rate the project is at, what rate their output is supposed to be at, and what rate their input is supposed to be at. These different data rates have to be correlated for a plugin to configure itself properly. Two classes of data rates were created to handle this problem: the project rate and the requested rate. The project rate is identical for all plugins and is determined by the settings->format window. The requested rate is arbitrary and is determined by the downstream plugin requesting data from the current plugin; the plugin's output could be requested at twice the project frame rate, for example. Rate conversions are done in terms of the project rate and the requested rate. Keyframes for a plugin are stored relative to the project frame rate, and queries from a plugin for the current playback position are given relative to the project frame rate, so positions at the requested rate need to be converted to the project rate for keyframes to match up. Exactly how to use these rates is described below.

23.2 Common plugin functions

All plugins inherit from a derivative of PluginClient. This PluginClient derivative implements most of the required methods in PluginClient, but users must still define methods for PluginClient. The files they include depend on the plugin type: audio plugins include 'pluginaclient.h' and inherit PluginAClient, while video plugins include 'pluginvclient.h' and inherit PluginVClient. The most commonly used methods are predefined in macros to reduce the typing yet still allow flexibility.

All plugins define at least three objects:

Processing object
Contains pointers to all the other objects and performs the signal processing. This object contains a number of queries to identify itself and is the object you register to register the plugin.

Configuration object
This stores the user parameters and always needs interpolation, copying, and comparison functions.

User interface object
This shows data on the screen and collects parameters from the user. It is defined according to the programmer's discretion. It can either use Cinelerra's toolkit or another toolkit. The documentation refers to the usage of Cinelerra's toolkit. Depending on the user interface toolkit, a user interface thread may be created to run the user interface asynchronous of everything else. Using Cinelerra's toolkit, the only user interface object a developer needs to worry about is the Window. The window has pointers to a number of widgets, a few initialization methods, and a back pointer to the plugin's processing object.

Cinelerra instantiates all plugins at least twice when they are used in a movie. One instance is the GUI; the other instance is the signal processor. User input, through a complicated sequence, is propagated from the GUI instance to the signal processor instance. If the signal processor wants to alter the GUI, it propagates data back to the GUI instance. There are utility functions for doing all this. Synchronizing the user interface to changes in the plugin's configuration is the most complicated aspect of the plugin, so the user interface thread and object are heavily supported by macros if you use Cinelerra's toolkit.

23.2.1 The processing object

Load up a simple plugin like gain to see what this object looks like. The processing object should inherit from the intended PluginClient derivative. Its constructor should take a PluginServer argument:

MyPlugin(PluginServer *server);

In the implementation, the plugin must contain a registration line with the name of the processing object like

REGISTER_PLUGIN(MyPlugin)

The constructor should contain

PLUGIN_CONSTRUCTOR_MACRO

to initialize the most common variables. The processing object should have a destructor containing

PLUGIN_DESTRUCTOR_MACRO

to delete the most common variables. Another function which is useful but not mandatory is

int is_multichannel();

It should return 1 if one instance of the plugin handles multiple tracks simultaneously or 0 if one instance of the plugin only handles one track. The default is 0 if it is omitted. Multichannel plugins in their processing function should refer to a function called PluginClient::get_total_buffers() to determine the number of channels.
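The rate bookkeeping from the pull method can be illustrated outside of Cinelerra. This is a standalone sketch, not part of the Cinelerra API: the function name and the direction of the conversion are assumptions, but the arithmetic matches the rule that positions given at an arbitrary requested rate must be mapped back to the project rate before keyframes can be looked up.

```cpp
#include <cassert>
#include <cstdint>

// Standalone sketch (hypothetical helper, not a Cinelerra function):
// convert a position expressed at the requested rate into project-rate
// units, which is where keyframes live.
int64_t requested_to_project(int64_t position,
	double requested_rate,
	double project_rate)
{
	// e.g. position 100 at twice the project rate is position 50
	// in project units.
	return (int64_t)((double)position * project_rate / requested_rate);
}
```

A plugin asked for output at 50fps in a 25fps project would translate its playback position this way before interpolating its keyframes.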
To simplify the implementation of realtime plugins, a macro for commonly used members has been created for the class header:

PLUGIN_CLASS_MEMBERS(config_name, thread_name)

The plugin's configuration object is always called config inside PLUGIN_CLASS_MEMBERS. The macro definitions apply mainly to realtime plugins and are not useful in nonrealtime plugins. Fortunately, nonrealtime plugins are simpler. The commonly used members in PLUGIN_CLASS_MEMBERS are described below.

int load_configuration();
Loads the configuration based on surrounding keyframes and current position. This stores whatever the current configuration is inside the plugin's configuration object and returns 1 if the new configuration differs from the previous configuration. The class definition for load_configuration should contain

LOAD_CONFIGURATION_MACRO(plugin_class, config_class)

to implement the default behavior for load_configuration. The return value of load_configuration is used by another commonly used function, update_gui, to determine if the GUI really needs to be updated.

VFrame* new_picon();
Creates a picon for display in the resource window. Use

#include "picon_png.h"
NEW_PICON_MACRO(plugin_class)

to implement new_picon. The user should create a 'picon_png.h' header file from a PNG image using pngtoh. pngtoh is compiled in the 'guicast/ARCH' directory. The source PNG image should be called 'picon.png' and can be any format supported by PNG.

char* plugin_title();
Returns a text string identifying the plugin in the resource window. The string has to be unique.

int set_string();
Changes the title of the GUI window to a certain string. This is implemented with

SET_STRING_MACRO(plugin_class)

int show_gui();
Instantiate the GUI and switch the plugin to GUI mode. This is implemented with

SHOW_GUI_MACRO(plugin_class, thread_class)

void raise_window();
Raises the GUI window to the top of the stack. This is implemented with

RAISE_WINDOW_MACRO(plugin_class)

void update_gui();
Should first load the configuration, test for a return of 1, and then redraw the GUI with the new parameters. All the plugins using GuiCast have a format like

void MyPlugin::update_gui()
{
	if(thread)
	{
		if(load_configuration())
		{
			thread->window->lock_window();
			// update widgets here
			thread->window->unlock_window();
		}
	}
}

to handle concurrency and conditions of no GUI.
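The contract between load_configuration and update_gui can be exercised in a standalone sketch. None of these class or member names are Cinelerra's; they are stand-ins that mimic the behavior described above: load_configuration stores the configuration dictated by the keyframes and returns 1 only when it differs from the previous one, and update_gui redraws only in that case.

```cpp
#include <cassert>

// Standalone sketch (hypothetical names, no GuiCast): the
// load_configuration()/update_gui() contract.
struct SketchConfig
{
	float gain;
	int equivalent(const SketchConfig &that) const
	{ return gain == that.gain; }
};

struct SketchPlugin
{
	SketchConfig config;    // current configuration
	SketchConfig keyframe;  // what the surrounding keyframes dictate
	int redraws;

	SketchPlugin() : redraws(0) { config.gain = keyframe.gain = 1.0f; }

	int load_configuration()
	{
		if(config.equivalent(keyframe)) return 0;
		config = keyframe;  // copy_from in the real macro
		return 1;
	}

	void update_gui()
	{
		if(load_configuration()) redraws++; // redraw widgets here
	}
};
```

Calling update_gui repeatedly with an unchanged keyframe redraws nothing; moving a keyframe parameter triggers exactly one redraw.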
Important functions that the processing object must define are the functions which load and save configuration data from keyframes:

void read_data(KeyFrame *keyframe);
void save_data(KeyFrame *keyframe);

The read data functions translate the plugin configuration between the KeyFrame argument and the configuration object for the plugin. These functions are called by the macros, so all you need to worry about is accessing the keyframe data. The read data functions are only used in realtime plugins; be aware they are not used in nonrealtime plugins. The keyframes are stored on the timeline and can change for every project.

The load defaults functions:

int load_defaults();
int save_defaults();

translate the plugin configuration between a BC_Hash object and the plugin's configuration. The BC_Hash object stores configurations in a discrete file on disk for each plugin but does not isolate different configurations for different projects. The load defaults functions are used in realtime and non-realtime plugins. The function overriding load_defaults also needs to create the BC_Hash object. See any existing plugin to see the usage of BC_Hash. Other standard members may be defined in the processing object, depending on the plugin type.

23.2.2 The configuration object

The configuration object is critical for GUI updates, signal processing, and default settings in realtime plugins. It is merely a class containing three functions and variables specific to the plugin's parameters. The configuration object inherits from nothing and has no dependencies. Usually the configuration object starts with the name of the plugin followed by Config:

class MyPluginConfig
{
public:
	MyPluginConfig();

	int equivalent(MyPluginConfig &that);
	void copy_from(MyPluginConfig &that);
	void interpolate(MyPluginConfig &prev,
		MyPluginConfig &next,
		int64_t prev_position,
		int64_t next_position,
		int64_t current_position);

	float parameter1;
	float parameter2;
	int parameter3;
};

Following the name of the configuration class, we put in the three required functions and the configuration variables. Now you must define the three functions. Equivalent is called by LOAD_CONFIGURATION_MACRO to determine if the local configuration parameters are identical to the configuration parameters in the argument. This usage of the configuration object is the same in audio and video plugins.
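The prev/next fraction arithmetic that a configuration object's interpolate function performs can be checked with a standalone sketch. This is plain C++ with no Cinelerra types; the class name and single parameter are illustrative only.

```cpp
#include <cassert>
#include <cstdint>

// Standalone sketch of keyframe interpolation: weight the previous and
// next keyframe values by the position of the current frame between them.
struct DemoConfig
{
	float parameter1;

	void interpolate(const DemoConfig &prev,
		const DemoConfig &next,
		int64_t prev_position,
		int64_t next_position,
		int64_t current_position)
	{
		double next_scale = (double)(current_position - prev_position) /
			(next_position - prev_position);
		double prev_scale = (double)(next_position - current_position) /
			(next_position - prev_position);
		parameter1 = (float)(prev.parameter1 * prev_scale +
			next.parameter1 * next_scale);
	}
};
```

Halfway between a keyframe at 0.0 and one at 10.0 the interpolated value is 5.0; at the previous keyframe's position it is exactly the previous value.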
If equivalent returns 0, LOAD_CONFIGURATION_MACRO causes the GUI to redraw; if equivalent returns 1, it does not. Then there is copy_from, which transfers the configuration values from the argument to the local variables. This is once again used in LOAD_CONFIGURATION_MACRO to store configurations in temporaries. Once LOAD_CONFIGURATION_MACRO has replicated the configuration, it loads a second configuration. Then it interpolates the two configurations to get the current configuration. The interpolation function performs the interpolation and stores the result in the local variables. Normally the interpolate function calculates a previous and next fraction, using the arguments:

void MyPluginConfig::interpolate(MyPluginConfig &prev,
	MyPluginConfig &next,
	int64_t prev_position,
	int64_t next_position,
	int64_t current_position)
{
	double next_scale = (double)(current_position - prev_position) /
		(next_position - prev_position);
	double prev_scale = (double)(next_position - current_position) /
		(next_position - prev_position);

	this->parameter1 = (float)(prev.parameter1 * prev_scale +
		next.parameter1 * next_scale);
	this->parameter2 = (float)(prev.parameter2 * prev_scale +
		next.parameter2 * next_scale);
	this->parameter3 = (int)(prev.parameter3 * prev_scale +
		next.parameter3 * next_scale);
}

The fractions are applied to the previous and next configuration variables to yield the current values, yielding smooth interpolation. Alternatively you can copy the values from the previous configuration argument if no interpolation is desired.

In audio playback, the interpolation function is called only once for every console fragment and once every time the insertion point moves. In video playback, the interpolation function is called for every frame. This is good enough for updating the GUI while selecting regions on the timeline, but it may not be accurate enough for really smooth rendering of the effect. For really smooth rendering of audio, ignore load_configuration and write your own interpolation routine which loads all the keyframes in a console fragment and interpolates every sample. This would be really slow and hard to debug. An easier way to get smoother interpolation is to reduce the console fragment to 1 sample, yielding improvement which may not be audible. The GNU/Linux sound drivers cannot play fragments of 1 sample, however, so this would have to be rendered and played back with the console fragment set back over 2048, of course. Then of course, every country has its own weirdos. Even so, you can still use load_configuration when updating the GUI.
23.2.3 The user interface object

The user interface object at the very least consists of a pointer to a window and pointers to all the widgets in the window. Using Cinelerra's toolkit, it consists of a BC_Window derivative and a Thread derivative. This two-class system is used in realtime plugins but not in nonrealtime plugins; nonrealtime plugins create and destroy their GUI in their get_parameters function and there is no need for a Thread. It is easiest to implement the window by copying an existing plugin and renaming the symbols.

The Thread derivative is declared in the plugin header using

PLUGIN_THREAD_HEADER(plugin_class, thread_class, window_class)

Then it is defined using

PLUGIN_THREAD_OBJECT(plugin_class, thread_class, window_class)

This, in combination with the SHOW_GUI macro, does all the work in instantiating the Window. Now the window class must be declared in the plugin header:

#include "guicast.h"

class MyPluginWindow : public BC_Window
{
public:
	MyPluginWindow(MyPlugin *plugin, int x, int y);

	int create_objects();
	int close_event();

	MyPlugin *plugin;
};

The plugin header must declare the window's constructor using the appropriate arguments: a back pointer to the plugin's processing object and the window positioned at x and y. It needs two methods, create_objects and close_event, and a back pointer to the plugin. The constructor's definition should contain extents and flags causing the window to be hidden when first created. The create_objects member puts widgets in the window according to GuiCast's syntax. A pointer to each widget which you want to synchronize to a configuration parameter is stored in the window class. These are updated in the update_gui function you earlier defined for the plugin. The widgets are usually derivatives of a GuiCast widget and they override functions in GuiCast to handle events. Finally, create_objects calls

show_window();
flush();

to make the window appear all at once. The close_event member should be implemented using

WINDOW_CLOSE_EVENT(window_class)

Every widget in the GUI needs to detect when its value changes. In GuiCast the handle_event method is called whenever the value changes. In handle_event, the widget then needs to call plugin->send_configure_change() to propagate the change to any copies of the plugin which are processing data.

23.3 Realtime plugins

Realtime plugins should use PLUGIN_CLASS_MEMBERS to define the basic set of members in their headers. All realtime plugins must define an

int is_realtime();

member returning 1. This causes a number of methods to be called during live playback and the plugin to be usable on the timeline.
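The event path from a widget's handle_event to the plugin's send_configure_change can be sketched standalone. These classes are not GuiCast; they only mimic the flow described above, in which the widget stores the new value in the configuration and then propagates the change to the processing copies of the plugin.

```cpp
#include <cassert>

// Standalone sketch (hypothetical names, no GuiCast): a widget callback
// that updates the configuration and notifies the processing instances.
struct DemoPlugin
{
	float gain;
	int configure_changes;
	DemoPlugin() : gain(1.0f), configure_changes(0) {}
	void send_configure_change() { configure_changes++; }
};

struct GainSlider
{
	DemoPlugin *plugin;
	GainSlider(DemoPlugin *plugin) : plugin(plugin) {}

	// Called by the toolkit whenever the slider value changes.
	int handle_event(float value)
	{
		plugin->gain = value;             // store in the configuration
		plugin->send_configure_change();  // propagate to processing copies
		return 1;
	}
};
```

One user interaction produces one configure-change notification, which in the real system eventually reaches the signal processor instance.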
Realtime plugins must override a member called process_buffer. This function takes different arguments depending on whether the plugin handles video or audio; see an existing plugin to find out which usage applies. The main features of the process_buffer function are a buffer to store the output, the starting position of the output, and the requested output rate. The starting position of the output buffer is the lowest numbered sample on the timeline if playback is forward and the highest numbered sample on the timeline if playback is reverse. The direction of playback is determined by one of the plugin queries described below. For audio, there is also a size argument for the number of samples. The position and size arguments are all relative to the frame rate and sample rate passed to process_buffer. This is the requested data rate and may not be the same as the project data rate.

The process_buffer function should start by calling load_configuration; the LOAD_CONFIGURATION_MACRO returns 1 if the configuration changed. After determining the plugin's configuration, input media has to be loaded for processing. Call:

read_frame(VFrame *buffer,
	int channel,
	int64_t start_position,
	double frame_rate)

or

read_samples(double *buffer,
	int channel,
	int sample_rate,
	int64_t start_position,
	int64_t len)

to request input data from the object which comes before this plugin. The read function needs a buffer to store the input data in. This can either be a temporary you create in the plugin or the output buffer supplied to process_buffer if you do not need a temporary. It also needs a set of position arguments to determine when you want to read the data from. The channel argument is only meaningful if this is a multichannel plugin: multichannel plugins need to read data for each track in the get_total_buffers() value and process all the tracks. Single channel plugins should pass 0 for channel.

The start position, rate, and len passed to a read function need not be the same as the values received by the process_buffer function. This way plugins can read data at a different rate than they output data.

Additional members are implemented to maintain configuration in realtime plugins. Some of these are also needed in nonrealtime plugins. Since configuration objects vary from plugin to plugin, these functions cannot be automated.

void read_data(KeyFrame *keyframe);
Loads data from a keyframe into the plugin's configuration. Inside the keyframe is an XML string. It is most easily parsed by creating a FileXML object. Read data loads data out of the XML object and stores values in the plugin's configuration object. See an existing plugin to see how the read data function is implemented and the usage of KeyFrame and FileXML.

void save_data(KeyFrame *keyframe);
Saves data from the plugin's configuration to a keyframe. Inside the keyframe you will put an XML string which is normally created by a FileXML object. See an existing plugin to see how the save data function is implemented.

23.4 Nonrealtime plugins

Some references for non-realtime plugins are Normalize for audio and Reframe for video. The constructor for a nonrealtime plugin cannot use PLUGIN_CONSTRUCTOR_MACRO but must call load_defaults directly. The destructor, likewise, must call save_defaults and delete defaults directly instead of PLUGIN_DESTRUCTOR_MACRO. Unlike realtime plugins, the LOAD_CONFIGURATION_MACRO cannot be used in the plugin header, and there does not need to be a configuration class in nonrealtime plugins. In nonrealtime plugins, the following methods must be defined.

VFrame* new_picon();
char* plugin_title();
The usage of these is the same as realtime plugins.

int get_parameters();
Here, the user should create a GUI, wait for the user to hit an OK button or a cancel button, and store the parameters in plugin variables. Unlike the realtime plugin, this GUI need not run asynchronously of the plugin. It should block the get_parameters function until the user selects OK or Cancel. This routine must return 0 for success and 1 for failure. This way the user can cancel the effect from the GUI.

int load_defaults();
Another way the plugin gets parameters is from a defaults file. This should create a defaults object and load parameters from the defaults object into plugin variables. In nonrealtime plugins, these are not just used for default parameters but to transfer values from the user interface to the signal processor.

int save_defaults();
Saves the configuration in the defaults object. This should save plugin variables to the defaults object.

The load and save defaults routines use a BC_Hash object to parse the defaults file. The nonrealtime plugin should contain a pointer to a defaults object:

BC_Hash *defaults;

The defaults object is created in load_defaults and destroyed in the plugin's destructor. See an existing plugin to see how the BC_Hash object is used. The plugin should also have a pointer to a MainProgressBar:

MainProgressBar *progress;

The progress pointer allows nonrealtime plugins to display their progress in Cinelerra's main window.

int is_realtime();
This function must return 0 to indicate a nonrealtime plugin.

int start_loop();
This is called once to give the plugin a chance to initialize processing.
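The point that a pull-method plugin may read its input at a different rate than it outputs can be made concrete with a standalone sketch. The read_samples here is a fake stand-in for the real call (which requests data from the object before the plugin); the 2x-read-and-decimate scheme is an illustrative assumption, not a Cinelerra algorithm.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

typedef std::vector<double> Samples;

// Fake upstream source standing in for the real read_samples():
// each sample's value equals its timeline position.
void read_samples(Samples &buffer, int64_t start, int64_t len)
{
	buffer.resize(len);
	for(int64_t i = 0; i < len; i++) buffer[i] = (double)(start + i);
}

// Sketch of a process_buffer that outputs len samples but reads its
// input at twice the rate, then keeps every second sample so the
// output stays aligned with the timeline.
void process_buffer(Samples &output, int64_t start, int64_t len)
{
	Samples input;
	read_samples(input, start * 2, len * 2);
	output.resize(len);
	for(int64_t i = 0; i < len; i++) output[i] = input[i * 2];
}
```

Requesting 4 output samples starting at position 10 pulls 8 input samples starting at position 20 and decimates them, so the output carries positions 20, 22, 24, 26 of the doubled-rate input.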
In start_loop, the usage of start_progress depends on whether the plugin is multichannel or single channel. If it is multichannel, you always call start_progress. If it is single channel, you first need to know whether the progress bar has already been started in another instance of the plugin: if PluginClient::interactive is 1, you need to start the progress bar; if it is 0, the progress bar has already been started. The plugin should instantiate the progress object with a line like

progress = start_progress("MyPlugin progress...",
	PluginClient::get_total_len());

int process_loop();
This is called repeatedly until the timeline range is processed. It has either a samples or frames buffer for output and a reference to write_length to store the number of samples processed. If this is an audio plugin, write_length should contain the number of samples generated. The PluginClient defines get_total_len() and get_source_start() to describe the timeline range to be processed. The units are either samples or frames, in the timeline's rate. process_loop should return 1 when the entire timeline range is processed.

In process_loop, the plugin must use read_samples or read_frame to read the input. These functions are a bit different for a nonrealtime plugin than they are for a realtime plugin: they take a buffer and a position relative to the start of the timeline, and the user may specify any desired sample rate and start position. The user needs to call get_buffer_size() to know how many samples the output buffer can hold. Then you must process the input and put the output in the buffer argument to process_loop.

During processing, process_loop must test PluginClient::interactive and update the progress bar if it is 1:

progress->update(total_written);

The progress bar update returns 1 or 0 if the progress bar was cancelled. If it returns 1, process_loop should return 1 to indicate a cancellation. In addition to progress bar cancellation, the return value may abort failed rendering.

int stop_loop();
This is called after process_loop processes its last buffer. If PluginClient::is_interactive is 1, this should call stop_progress in the progress bar pointer and delete the pointer. Then it should delete any objects it created for processing in start_loop.

23.5 Audio plugins

The simplest audio plugin is Gain. The processing object should include 'pluginaclient.h' and inherit from PluginAClient. Realtime audio plugins need to define

int process_buffer(int64_t size,
	double **buffer,
	int64_t start_position,
	int sample_rate);

if it is multichannel, or

int process_buffer(int64_t size,
	double *buffer,
	int64_t start_position,
	int sample_rate);

if it is single channel.
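The arithmetic inside a gain-style process_buffer can be shown standalone. This is a sketch, not the Gain plugin's actual source: it just scales every sample in a buffer of doubles, which is the core of what such a plugin does after reading its input.

```cpp
#include <cassert>
#include <cstdint>

// Standalone sketch of a gain-style inner loop: multiply every sample
// in the (already filled) buffer by the configured gain.
void apply_gain(double *buffer, int64_t size, double gain)
{
	for(int64_t i = 0; i < size; i++)
		buffer[i] *= gain;
}
```

In a real plugin this loop would run over the buffer supplied to process_buffer after read_samples has filled it, with gain taken from the interpolated configuration.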
Since the GUI is optional. double frame_rate). The processing object should include ‘pluginvclient.6 Video plugins
The simplest video plugin is Flip. but so far no analogue to PLUGIN CLASS MEMBERS has been done for transitions. These use a subset of the default class members of realtime plugins. they must also manage a thread like realtime plugins.
for multi channel. you can use PLUGIN CONSTRUCTOR MACRO and PLUGIN DESTRUCTOR MACRO to initialize the processing object.
23.
It always returns 0. int64_t &write_length). Returning 1 causes the rendering to abort. Transitions may or may not have a GUI. Do this with the same PLUGIN THREAD OBJECT and PLUGIN THREAD HEADER macros as realtime plugins. you do not need to worry about updating the GUI from the processing object like you do in a realtime plugin. If the transition has a GUI. These are fixed to the project frame rate.7 Transition plugins
The simplest video transition is wipe and the simplest audio transition is crossfade. Return 1 if it does and 0 if it does not. hence the lack of a write length argument. for single channel or int process_loop(double **buffers. Non realtime plugins use a different set of read samples functions to request input data.182
Chapter 23: Plugin authoring
int64_t len).
if it is single channel.
for multi channel. Realtime video plugins need to define
int process_buffer(VFrame **frame. int64_t start_position. The amount of frames generated in a single process loop is always assumed to be 1. Returning 0 causes the rendering to continue.
if it is multichannel or
int process_buffer(VFrame *frame. int64_t &write_length). for single channel or int process_loop(VFrame **buffers). The nonrealtime video plugins need to define
int process_loop(VFrame *buffer). The user may specify any desired sample rate and start position.
.h’ and inherit from PluginVClient. You will also need a BC Hash object and a Thread object for these macros. double frame_rate). If they have a GUI.
23. A set of read frame functions exist for requesting input frames in non-realtime video plugins. Since there is only one keyframe in a transition. int64_t start_position. These are fixed to the project sample rate. The processing object for audio transitions still inherits from PluginAClient and for video transitions it still inherits from PluginVClient. Nonrealtime audio plugins need to define
int process_loop(double *buffer. overwrite a function called uses gui() to signifiy whether or not the transition has a GUI.
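The nonrealtime process_loop control flow described earlier in this section can be sketched standalone. Everything Cinelerra-specific is mocked here: MockSource and process_loop_step are hypothetical names, and the read call stands in for the real read_samples; only the loop logic (read, process, report the write length, signal completion) mirrors the manual.

```cpp
#include <cstdint>
#include <vector>

// Mocked stand-in for the plugin's input; in a real plugin the samples
// come from PluginClient::read_samples().
struct MockSource {
    std::vector<double> samples;
    int64_t position = 0;
    int64_t read_samples(double *buffer, int64_t len) {
        int64_t avail = (int64_t)samples.size() - position;
        int64_t n = len < avail ? len : avail;
        for (int64_t i = 0; i < n; i++) buffer[i] = samples[position + i];
        position += n;
        return n;
    }
};

// One iteration of a process_loop-style gain effect.
// Returns 0 to continue rendering and 1 to stop, like process_loop.
int process_loop_step(MockSource &src, double *buffer, int64_t &write_length,
                      int64_t fragment, double gain) {
    int64_t n = src.read_samples(buffer, fragment);
    for (int64_t i = 0; i < n; i++) buffer[i] *= gain;  // the actual effect
    write_length = n;               // tell the caller how much was produced
    return n < fragment ? 1 : 0;    // input exhausted -> stop rendering
}
```

A real plugin would also call progress->update() after each fragment, which is omitted from this sketch.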

Transitions need a load_defaults and save_defaults function so the first time they are dropped on the timeline they will have useful settings. A read_data and save_data function takes over after insertion to access data specific to each instance of the transition.

The most important difference between transitions and realtime plugins is the addition of an is_transition method to the processing object. is_transition should return 1 to signify the plugin is a transition.

Transitions process data in a process_realtime function:

    int process_realtime(int64_t size, double *input_ptr, double *output_ptr);

for audio or

    int process_realtime(VFrame *input, VFrame *output);

for video. The input argument to process_realtime is the data for the next edit. The output argument to process_realtime is the data for the previous edit. Transitions run in the data rate requested by the first plugin in the track. This may be different than the project data rate. Since process_realtime lacks a rate argument, use get_framerate() or get_samplerate() to get the requested rate.

Routines exist for determining where you are relative to the transition's start and end:

PluginClient::get_source_position() returns the current position of the lowest sample in the buffers, counted since the start of the transition.

PluginClient::get_total_len() returns the integer length of the transition. The units are either samples or frames, in the data rate requested by the first plugin.

Divide the source position by the total length to get the fraction of the transition the current process_realtime call is at.

23.8 Plugin GUI's which update during playback

Effects like Histogram and VideoScope need to update the GUI during playback to display information about the signal. This is achieved with the send_render_gui and render_gui methods. When the processing object wants to update the GUI it should call send_render_gui. This should only be called in process_buffer. send_render_gui goes through a search and eventually calls render_gui in the GUI instance of the plugin.

send_render_gui and render_gui use one argument, a void pointer to transfer information from the processing object to the GUI. The user should typecast this pointer into something useful. render_gui should have a sequence like:

    void MyPlugin::render_gui(void *data)
    {
        if(thread)
        {
            thread->window->lock_window();
            // update GUI here
            thread->window->unlock_window();
        }
    }
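The position arithmetic for a transition can be shown in isolation. This is a minimal sketch: transition_fraction and crossfade_sample are hypothetical helpers, with get_source_position() and get_total_len() replaced by plain integer arguments.

```cpp
#include <cstdint>

// Fraction of the transition completed, as the text describes:
// source position divided by total length.
double transition_fraction(int64_t source_position, int64_t total_len) {
    return (double)source_position / (double)total_len;
}

// Crossfade one audio sample: "output" carries the previous edit and
// "input" the next edit, matching process_realtime's arguments.
double crossfade_sample(double output, double input, double fraction) {
    return output * (1.0 - fraction) + input * fraction;
}
```

In a real crossfade, process_realtime would loop over the buffer, advancing the fraction per sample.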

23.9 Plugin queries

There are several useful queries in PluginClient which can be accessed from the processing object. They all give information about the operating system or the project which can be used to improve the quality of the processing. Some of them have different meanings in realtime and nonrealtime mode.

23.9.1 System queries

get_interpolation_type() Returns the type of interpolation the user wants for all scaling operations. This is a macro from ‘overlayframe.inc’. It can be applied to any call to the OverlayFrame object.

get_project_smp() Gives the number of CPU's on the system minus 1. If it is a uniprocessor it is 0; if it is a dual processor, it is 1. This number should be used to gain parallelism.

get_total_buffers() Gives the number of tracks a multichannel plugin needs to process.

23.9.2 Timing queries

There are two rates for media a realtime plugin has to be aware of: the project rate and the requested rate. Functions are provided for getting the project and requested rate. In addition, doing time dependent effects requires using several functions which tell where you are in the effect.

get_direction() Gives the direction of the current playback operation. This is a macro defined in ‘transportque.inc’. It is useful for calling read functions, since the read functions position themselves at the start or end of the region to read, depending on the playback operation.

get_framerate() Gives the frames per second requested by the plugin after this one. This is the requested frame rate and is the same as the frame_rate argument to process_buffer.

get_samplerate() Gives the samples per second requested by the plugin after this one. This is the requested sample rate and is the same as the sample_rate argument to process_buffer.

get_project_framerate() Gives the frames per second of the video as defined by the project settings.

get_project_samplerate() Gives the samples per second of the audio as defined by the project settings.

get_source_start() For realtime plugins it gives the lowest sample or frame in the effect range, in the requested data rate. For nonrealtime plugins it is the start of the range of the timeline to process.

get_source_position() For realtime plugins it is the lowest numbered sample in the requested region to process if playing forward and the highest numbered sample in the region if playing backward. For video it is the start of the frame if playing forward and the end of the frame if playing in reverse. The position is relative to the start of the EDL and in the requested data rate. For transitions this is always the lowest numbered sample of the region to process, relative to the start of the transition.

get_total_len() Gives the number of samples or frames in the range covered by the effect.
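A common use of get_project_smp() is deciding how to split a frame among worker engines. This hypothetical helper (split_rows is not part of the Cinelerra API) shows the arithmetic: the query returns CPUs minus one, so the engine count is that value plus one.

```cpp
#include <vector>
#include <utility>

// Divide [0, height) into one row range per engine.
// smp is what get_project_smp() would return: CPUs minus 1.
std::vector<std::pair<int,int>> split_rows(int smp, int height) {
    int engines = smp + 1;               // uniprocessor: smp == 0 -> 1 engine
    if (engines > height) engines = height;
    std::vector<std::pair<int,int>> ranges;
    int start = 0;
    for (int i = 0; i < engines; i++) {
        int end = height * (i + 1) / engines;  // even split, no rows dropped
        ranges.push_back(std::make_pair(start, end));
        start = end;
    }
    return ranges;
}
```

Each returned range would be handed to one processing thread.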

get_prev_keyframe(int64_t position, int is_local)
get_next_keyframe(int64_t position, int is_local)

These give the nearest keyframe before or after the position given. The position argument can be either in the project rate or the requested rate. Set is_local to 1 if it is in the requested rate and 0 if it is in the project rate. In each keyframe, another position value tells the keyframe's position relative to the start of the timeline, in the project rate.

local_to_edl()
edl_to_local()

These convert between the requested data rate and the project data rate. They are used to convert keyframe positions into numbers which can be interpolated at the requested data rate. The conversion is automatically based on frame rate or sample rate depending on the type of plugin. The only way to get smooth interpolation between keyframes is to convert the positions in the keyframe objects to the requested rate; do this by using edl_to_local on the keyframe positions. The macro defined version of load_configuration automatically retrieves the right keyframes, but you may want to do this on your own.

23.10 Using OpenGL

Realtime video plugins support OpenGL. Using OpenGL to do plugin routines can speed up playback greatly, since it does most of the work in hardware. Unfortunately, every OpenGL routine needs a software counterpart for rendering without a graphics card, doubling the amount of software to maintain. Fortunately, having an OpenGL routine means the software version does not need to be as optimized as it did when software was the only way. As always, the best way to design a first OpenGL plugin is to copy an existing one and alter it. The Brightness plugin is a simple OpenGL plugin to copy. There are 3 main points in OpenGL rendering and 1 point for optimizing OpenGL rendering.

23.10.1 Getting OpenGL data

The first problem is getting OpenGL-enabled plugins to interact with software-only plugins. To solve this, all the information required to do OpenGL playback is stored in the VFrame object which is passed to process_buffer. To support 3D, the VFrame contains a PBuffer and a texture, in addition to VFrame's original rows. VFrame has 3 states corresponding to the location of its video data. The opengl state is recovered by calling get_opengl_state and is set by calling set_opengl_state. The states are:

VFrame::RAM The video data is stored in the traditional row pointers. It must be loaded into a texture before being drawn using OpenGL routines.

VFrame::TEXTURE The video data is stored in texture memory. It is ready to be drawn using OpenGL routines.

VFrame::SCREEN The video data is stored in a frame buffer in the graphics card. For plugins, the frame buffer is always a PBuffer. The frame buffer is limited to 8 bits per channel, so if an OpenGL effect is used in a floating point project, it only retains 8 bits. The image on the frame buffer can not be replicated again unless it is read back into the texture and the opengl state is reset to TEXTURE.

To support OpenGL, read_frame takes a new parameter called use_opengl.
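The keyframe interpolation just described can be sketched as plain arithmetic. Both helpers are hypothetical: edl_to_local_mock approximates the real edl_to_local as a simple rate ratio, and interpolate_value does the linear blend a plugin would perform between the previous and next keyframe values.

```cpp
#include <cstdint>

// Convert a timeline (project-rate) position to the requested rate,
// mimicking what edl_to_local does for an audio plugin.
int64_t edl_to_local_mock(int64_t edl_position, double requested_rate,
                          double project_rate) {
    return (int64_t)(edl_position * requested_rate / project_rate);
}

// Linear interpolation between two keyframe values, with both keyframe
// positions already converted to the requested rate.
double interpolate_value(double prev_value, int64_t prev_pos,
                         double next_value, int64_t next_pos,
                         int64_t position) {
    if (next_pos == prev_pos) return prev_value;   // single keyframe
    double t = (double)(position - prev_pos) / (double)(next_pos - prev_pos);
    return prev_value * (1.0 - t) + next_value * t;
}
```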

In the plugin's process_buffer routine, there is normally a call to read_frame to get data from the previous plugin in the chain. The plugin passes 1 to use_opengl if it intends to handle the data using OpenGL. It passes 0 to use_opengl if it can only handle the data using software. The value of use_opengl is passed up the chain to ensure a plugin which only does software gets the data in the row pointers. The plugin must not only know if it is software-only but if its output must be software-only. Call get_use_opengl to determine if the output can be handled by OpenGL. If get_use_opengl returns 0, the plugin must pass 0 for use_opengl in read_frame and do its processing in software. If get_use_opengl is 1, the plugin can decide based on its implementation whether to use OpenGL. When the frame is read with use_opengl set to 0, the opengl state in VFrame is RAM.

The main problem with OpenGL is that all the gl... calls need to be run from the same thread. To work around this, the plugin interface has routines for running OpenGL in a common thread. run_opengl transfers control to the common OpenGL thread. Through a series of indirections, run_opengl eventually transfers control to a virtual function called handle_opengl. handle_opengl must be overridden with a function to perform all the OpenGL routines. The return value of handle_opengl is passed back from run_opengl. run_opengl is normally called by the plugin in process_buffer after it calls read_frame, and only if get_use_opengl is 1. The contents of handle_opengl must be enclosed in #ifdef HAVE_GL ... #endif to allow it to be compiled on systems with no graphics support. read_frame can not be called from inside handle_opengl; this would create a recursive lockup because it would cause other objects to call run_opengl. Once inside handle_opengl, the plugin has full usage of all the OpenGL features.

The VFrame argument to process_buffer is always available through the get_output(int layer) function. If the plugin is multichannel, the layer argument retrieves a specific layer of the output buffers. The PBuffer of the output buffer is where the OpenGL output must go if any processing is done.

23.10.2 Drawing using OpenGL

VFrame provides some functions to automate common OpenGL sequences. The sequence of commands to draw on the output PBuffer starts with getting the video in a memory area where it can be recalled for drawing:

    get_output()->to_texture();
    get_output()->enable_opengl();

to_texture transfers the OpenGL data from wherever it is to the output's texture memory and sets the output state to TEXTURE. enable_opengl makes the OpenGL context relative to the output's PBuffer.

The next step is to draw the texture, with some processing, on the PBuffer. The normal sequence of commands to draw a texture is:

    get_output()->init_screen();
    get_output()->bind_texture(0);
    get_output()->draw_texture();

VFrame::init_screen sets the OpenGL frustum and parameters to known values. VFrame::bind_texture(int texture_unit) binds the texture to the given texture unit and enables it. VFrame::draw_texture() calls the vertex functions to draw the texture normalized to the size of the PBuffer. Copy this if you want custom vertices.

The last step in the handle_opengl routine, after the texture has been drawn on the PBuffer, is to set the output's opengl state to SCREEN with a call to VFrame::set_opengl_state. The plugin should not read back the frame buffer into a texture or row pointers if it has no further processing.
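The three VFrame states above form a small state machine. OpenGL itself cannot run in a standalone example, so this sketch (MockVFrame is hypothetical) models only the legal transitions: RAM or SCREEN data must pass through TEXTURE before it can be drawn again, matching the description of to_texture and set_opengl_state.

```cpp
// The three locations video data can occupy, per the manual.
enum OpenGLState { RAM, TEXTURE, SCREEN };

struct MockVFrame {
    OpenGLState state = RAM;            // freshly read frames start in RAM
    void to_texture() { state = TEXTURE; }      // like VFrame::to_texture()
    bool can_draw() const { return state == TEXTURE; }  // drawable only here
    void set_opengl_state(OpenGLState s) { state = s; } // plugin sets SCREEN
};
```

After drawing, a real plugin calls set_opengl_state(SCREEN); the frame is then not drawable again until read back into the texture.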

23.10.3 Using shaders

Very few effects can do anything useful with just a straight drawing of the texture on the PBuffer. They normally define a shader. Shaders are written in OpenGL Shading Language. The shader is a C program which runs on the graphics card; since the graphics card is optimized for graphics, it can be much faster than running the routine on the CPU. All the shaders so far have been fragment shaders. The shader source code is contained in a string. The normal sequence for using a shader comes after a call to enable_opengl:

    char *shader_source = "...";
    unsigned int shader_id = VFrame::make_shader(0, shader_source, 0);
    glUseProgram(shader_id);
    // Set uniform variables using glUniform commands

Then draw the texture starting from init_screen. The shader program must be disabled with another call to glUseProgram with 0 as the argument.

The compilation and linking step for shaders is encapsulated by the VFrame::make_shader command. It returns a shader id which can be passed to OpenGL functions. The first and last arguments must always be 0. An arbitrary number of source strings may be put between the 0's; the source strings are concatenated by make_shader into one huge shader source. If multiple main functions are in the sources, the main functions are renamed and run in order. The shader id and source code are stored in memory as long as Cinelerra runs, so future calls to make_shader with the same source code run much faster. There are a number of useful macros for shaders in ‘playback3d.h’.

23.10.4 Aggregating plugins

Further speed improvements may be obtained by combining the OpenGL routines from two plugins into a single handle_opengl function. This is done when Frame to Fields and RGB to 601 are attached in order. Aggregations of more than two plugins are possible but very hard to get working. In aggregation, one plugin processes everything from the other plugins and the other plugins fall through. The fall through plugins must still call read_frame to propagate the data, but return right after that.

The fall through plugins must determine if the processor plugin is attached with calls to next_effect_is and prev_effect_is. These take the name of the processor plugin as a string argument and return 1 if the next or previous plugin is the processor plugin. If either returns 1, the fall through plugin must copy its parameters to the output buffer so they can be detected by the processing plugin. The VFrame used as the output buffer contains a parameter table for parameter passing between plugins; it is accessed with get_output()->get_params(). Parameters are set and retrieved in the table with calls to update and get, just like with defaults.
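The concatenation step performed by make_shader can be shown without a GL context. concat_shader_sources is a hypothetical standalone helper, not the real VFrame::make_shader; it only demonstrates joining a 0-terminated list of source strings, while compilation and caching are omitted.

```cpp
#include <cstdarg>
#include <string>

// Join a list of shader source strings terminated by a null pointer,
// the way make_shader builds one combined source before compiling.
std::string concat_shader_sources(const char *first, ...) {
    std::string result;
    va_list ap;
    va_start(ap, first);
    for (const char *s = first; s != nullptr; s = va_arg(ap, const char *))
        result += s;
    va_end(ap);
    return result;
}
```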
Plugins should only leave the output in the texture or RAM if that location results from normal processing; they should set the opengl state to RAM or TEXTURE if they do.

Aggregation is useful for OpenGL because each plugin must copy the video from a texture to a PBuffer; in software there was no copy operation.

Colormodels in OpenGL: the colormodel exposed to OpenGL routines is always floating point, since that is what OpenGL uses, but it may be YUV or RGB depending on the project settings. If it is YUV, it is offset by 0.5 just like in software. Passing YUV colormodels to plugins was necessary for speed. The other option was to convert YUV to RGB in the first step that needed OpenGL, but then every effect and rendering step would have needed a YUV to RGB routine. With the YUV retained, only the final compositing step needs a YUV to RGB routine.

The processor plugin must call next_effect_is and prev_effect_is to determine if it is aggregated with a fall through plugin. If it is, it must perform the operations of the fall through plugin in its OpenGL routine. The parameters for the fall through plugin should be available through get_output()->get_params() if the fall through plugin set them.

24 Keyboard shortcuts
Alex Ferrer started summarizing most of the keyboard shortcuts. Most of the keys work without any modifier like SHIFT or CTRL. Most windows can be closed with a CTRL-w. Most operations can be cancelled with ESC and accepted with RET.

Undo
Re-Do
Cut
Copy
Paste
Clear
Paste Silence
Mute region
Select all
When done over an edit, causes the highlighted selection to extend to the cursor position. When done over the boundary of an effect, causes the trim operation to apply to one effect.
Toggle between Drag-and-Drop and Cut-and-Paste editing modes
Toggle In point
Toggle Out point
Toggle label at current position
Go to Previous Label
Go to Next Label

24.1.2 Editing Labels and In/Out Points shortcuts

24.1.3 Navigation shortcuts

Move the timeline right (not the insertion point) *
Move the timeline left (not the insertion point) *
Zoom time out *
Zoom time in *
Expand current curve amplitude
Shrink current curve amplitude
Expand all curve amplitude
Shrink all curve amplitude
Expand curve amplitude
Shrink curve amplitude
Fit time displayed to selection
Make the range of all the automation types fit the maximum and minimum range of the current highlighted selection
Make the range of the currently selected automation type fit the maximum and minimum range of the current highlighted selection
Move the insertion point left one edit
Move the insertion point right one edit
Move the timeline up *
Move the timeline down *
Expand track height


Ctrl Page Dn  Shrink track height
Home          Move insertion point to beginning of timeline *
End           Move insertion point to end of timeline *

* You may have to click on the timeline to deactivate any text boxes or tumblers before these work.

24.2 Viewer and compositor windows shortcuts

Cut
Copy
Paste
Splice
Overwrite
Toggle In point
Toggle Out point
Toggle label at current position
Go to Previous Label
Go to Next Label
Go to beginning
Go to end
Undo
Re-Do
Zoom in
Zoom out

24.3 Playback transport shortcuts
Transport controls work in any window which has a playback transport. They are accessed through the number pad with Num Lock disabled.

+      Reverse Fast
6      Reverse
5      Reverse Slow
4      Frame back
1      Frame Forward
2      Forward Slow
3      Play
Enter  Fast Forward
0      Stop

SPACE is normal Play. Hitting any key twice is Pause. Hitting any transport control with CTRL down causes only the region between the in/out points to be played, if in/out points are defined.

24.4 Record window shortcuts
Space  Start and pause recording of the current batch
l      Toggle label at current position


) Each licensee is addressed as “you”. The precise terms and conditions for copying. the GNU General Public License is intended to guarantee your freedom to share and change free software—to make sure the software is free for all its users. or if you modify it. below. For example. a work containing the Program or a portion of it. we want its recipients to know that what they have is not the original. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish). (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead. 1991 Free Software Foundation. that you receive source code or can get it if you want it.
GNU General Public License
Version 2. Also. We protect your rights with two steps: (1) copyright the software. If the software is modified by someone else and passed on. Boston. By contrast. Fifth Floor. translation is included without limitation in the term “modification”. When we speak of free software. You must make sure that they. MA 02110-1301. To prevent this.
TERMS AND CONDITIONS FOR COPYING. too. These restrictions translate to certain responsibilities for you if you distribute copies of the software. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. Inc. too. we are referring to freedom. if you distribute copies of such a program. (Hereinafter. This General Public License applies to most of the Free Software Foundation’s software and to any other program whose authors commit to using it. The “Program”. whether gratis or for a fee. To protect your rights. so that any problems introduced by others will not reflect on the original authors’ reputations. And you must show them these terms so they know their rights. either verbatim or with modifications and/or translated into another language. USA
Everyone is permitted to copy and distribute verbatim copies of this license document. in effect making the program proprietary. we have made it clear that any patent must be licensed for everyone’s free use or not licensed at all. we want to make certain that everyone understands that there is no warranty for this free software. June 1991 Copyright c 1989. distribute and/or modify the software. Finally. we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. DISTRIBUTION AND MODIFICATION
0. not price. that you can change the software or use pieces of it in new free programs. and a “work based on the Program” means either the Program or any derivative work under copyright law: that is to say.) You can apply it to your programs.
Preamble
The licenses for most software are designed to take away your freedom to share and change it. and (2) offer you this license which gives you legal permission to copy. receive or can get the source code. refers to any such program or work.

Whether that is true depends on what the Program does.) These requirements apply to the modified work as a whole. distribution and modification are not covered by this License.194
GNU General Public License
Activities other than copying. c. that in whole or in part contains or is derived from the Program or any part thereof. in any medium.
. The act of running the Program is not restricted. You may copy and distribute verbatim copies of the Program’s source code as you receive it. to give any third party. and you may at your option offer warranty protection in exchange for a fee. thus forming a work based on the Program. You may modify your copy or copies of the Program or any portion of it. provided that you also meet all of these conditions: a. Accompany it with the complete corresponding machine-readable source code. You may copy and distribute the Program (or a work based on it. valid for at least three years. which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange. (Exception: if the Program itself is interactive but does not normally print such an announcement. then this License. you must cause it. do not apply to those sections when you distribute them as separate works. for a charge no more than your cost of physically performing source distribution. You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. whose permissions for other licensees extend to the entire whole. and its terms. and thus to each and every part regardless of who wrote it. keep intact all the notices that refer to this License and to the absence of any warranty. But when you distribute the same sections as part of a whole which is a work based on the Program. b. rather. 2. and give any other recipients of the Program a copy of this License along with the Program. the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. your work based on the Program is not required to print an announcement. In addition. You may charge a fee for the physical act of transferring a copy. 3. or. If the modified program normally reads commands interactively when run. and can be reasonably considered independent and separate works in themselves. 
it is not the intent of this section to claim rights or contest your rights to work written entirely by you. saying that you provide a warranty) and that users may redistribute the program under these conditions. Thus. they are outside its scope. and copy and distribute such modifications or work under the terms of Section 1 above. under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a. You must cause any work that you distribute or publish. mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. to be licensed as a whole at no charge to all third parties under the terms of this License. the distribution of the whole must be on the terms of this License. and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Accompany it with a written offer. when started running for such interactive use in the most ordinary way. provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty. to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else. If identifiable sections of that work are not derived from the Program. and telling the user how to view a copy of this License. b. 1.

or rights. this section has the sole purpose of protecting the integrity of the free software distribution system. they do not excuse you from the conditions of this License. unless that component itself accompanies the executable. For an executable work. sublicense or distribute the Program is void. Therefore. parties who have received copies. the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances.) The source code for a work means the preferred form of the work for making modifications to it. If any portion of this section is held invalid or unenforceable under any particular circumstance. to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange. modify. the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler. However.
6. even though third parties are not compelled to copy the source along with the object code. You are not required to accept this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims. You are not responsible for enforcing compliance by third parties to this License. nothing else grants you permission to modify or distribute the Program or its derivative works. c. then offering equivalent access to copy the source code from the same place counts as distribution of the source code. sublicense. since you have not signed it. You may not impose any further restrictions on the recipients’ exercise of the rights granted herein. Accompany it with the information you received as to the offer to distribute corresponding source code. If distribution of executable or object code is made by offering access to copy from a designated place. or. Each time you redistribute the Program (or any work based on the Program).
a complete machine-readable copy of the corresponding source code. complete source code means all the source code for all modules it contains. distribute or modify the Program subject to these terms and conditions. Any attempt otherwise to copy. These actions are prohibited by law if you do not accept this License. which is implemented by
. plus any associated interface definition files. and will automatically terminate your rights under this License. agreement or otherwise) that contradict the conditions of this License. by modifying or distributing the Program (or any work based on the Program). and all its terms and conditions for copying.
5. However. if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you. then as a consequence you may not distribute the Program at all. If. However. from you under this License will not have their licenses terminated so long as such parties remain in full compliance. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer. the recipient automatically receives a license from the original licensor to copy.GNU General Public License
195
4. conditions are imposed on you (whether by court order. you indicate your acceptance of this License to do so. as a special exception. kernel. distributing or modifying the Program or works based on it. as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues). plus the scripts used to control compilation and installation of the executable. in accord with Subsection b above. then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. For example. or distribute the Program except as expressly provided under this License. modify. You may not copy.
This section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.

This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.

8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.

9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and “any later version”, you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation.

10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.

NO WARRANTY

11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

END OF TERMS AND CONDITIONS

Appendix: How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the “copyright” line and a pointer to where the full notice is found.

    one line to give the program’s name and a brief idea of what it does.
    Copyright (C) yyyy name of author

    This program is free software; you can redistribute it and/or modify it
    under the terms of the GNU General Public License as published by the
    Free Software Foundation; either version 2 of the License, or (at your
    option) any later version.

    This program is distributed in the hope that it will be useful, but
    WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
    General Public License for more details.

    You should have received a copy of the GNU General Public License along
    with this program; if not, write to the Free Software Foundation, Inc.,
    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.

Also add information on how to contact you by electronic and paper mail.

If the program is interactive, make it output a short notice like this when it starts in an interactive mode:

    Gnomovision version 69, Copyright (C) year name of author
    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type ‘show w’.
    This is free software, and you are welcome to redistribute it under
    certain conditions; type ‘show c’ for details.

The hypothetical commands ‘show w’ and ‘show c’ should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than ‘show w’ and ‘show c’; they could even be mouse-clicks or menu items—whatever suits your program.

You should also get your employer (if you work as a programmer) or your school, if any, to sign a “copyright disclaimer” for the program, if necessary. Here is a sample; alter the names:

    Yoyodyne, Inc., hereby disclaims all copyright interest in the program
    ‘Gnomovision’ (which makes passes at compilers) written by James Hacker.

    signature of Ty Coon, 1 April 1989
    Ty Coon, President of Vice

This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License.