Programmatic Use

videoLabeler

videoLabeler opens a new session of the app, enabling you to label
ground truth data in a video or image sequence.

videoLabeler(videoFileName)

videoLabeler(videoFileName) opens the app and loads the input
video. The video file must have an extension supported by VideoReader.

Example: videoLabeler('vipmen.avi')

videoLabeler(imageSeqFolder)

videoLabeler(imageSeqFolder) opens the app and loads the image
sequence from the input folder. An image sequence is an ordered set
of images that resembles a video.

imageSeqFolder must be a string scalar or character vector that
specifies the folder containing the image files. The image files must have extensions
supported by imformats and are loaded in the order
returned by the dir function.

The images in imageSeqFolder must be the same size. If the images
vary in size, the app imports only the images that are of the same size as the first image
in the sequence. To label a collection of unordered images that vary in size, use the Image Labeler app
instead.
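
For example, to open the app with a hypothetical folder of sequentially numbered images:

videoLabeler(fullfile(tempdir,'myImageSequence'))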

videoLabeler(imageSeqFolder,timestamps)

videoLabeler(imageSeqFolder,timestamps) opens the app and loads a
sequence of images with their corresponding timestamps. timestamps must
be a duration vector of the same length as the
number of images in the sequence.

For example, load a sequence of images and their corresponding timestamps into the
app.
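
A minimal sketch, assuming a hypothetical folder of 150 images captured at 30 frames per second:

imageDir = fullfile(tempdir,'myImageSequence');    % hypothetical folder of image files
timeStamps = seconds((0:149)/30);                  % one duration value per image, assuming 30 fps
videoLabeler(imageDir,timeStamps)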

videoLabeler(sessionFile)

videoLabeler(sessionFile) opens the app and loads a saved app
session, sessionFile. The sessionFile input
contains the path and file name. The MAT-file that sessionFile points
to contains the saved session.
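
For example, a minimal sketch assuming a previously saved session in the current folder (the file name is hypothetical):

videoLabeler('myLabelingSession.mat')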

Limitations

The built-in automation algorithms support the automation of rectangular ROI labels
only. When you select a built-in algorithm and click Automate, scene
labels, pixel ROI labels, polyline ROI labels, sublabels, and attributes are not imported
into the automation session. To automate the labeling of these features, create a custom
automation algorithm. See Create Automation Algorithm for Labeling.

Pixel ROI labels do not support sublabels or attributes.

The Label Summary window does not support sublabels or attributes.

Tips

To avoid having to relabel ground truth with new labels, organize the labeling scheme
you want to use before marking your ground truth.

Algorithms

You can use label automation algorithms to speed up labeling
within the app. To create your own label automation algorithm to use within the app, see Create Automation Algorithm for Labeling. Alternatively, you can use one of
the built-in algorithms by following these steps:

Load the data you want to label, and create at least one label definition.

On the app toolstrip, click Select Algorithm, and select one
of the built-in automation algorithms.

Click Automate, and then follow the automation instructions
in the right pane of the automation window.

ACF People Detector

Detect and label people using aggregate channel features (ACF). This algorithm is based on the
peopleDetectorACF function. To use this
algorithm, you must define at least one rectangle ROI label. You do not need to draw any ROI
labels.

To help improve the algorithm results, first click Settings. You can
change any of these settings.

The pretrained people detector model that the algorithm uses — The
'inria-100x41' model was trained using the INRIA person
data set. The 'caltech-50x21' model was trained using the
Caltech Pedestrian data set.

The overlap ratio threshold, from 0 to 1, for detecting people — When
rectangle ROIs overlap by more than this threshold, the algorithm discards one
of the ROIs.

The classification score threshold for detecting people — Increase the score
to increase the prediction confidence of the algorithm. Rectangles with scores
below this threshold are discarded.
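
As a rough sketch of how these settings map onto the underlying function (this is not the app's exact code; the frame variable and both threshold values are illustrative assumptions):

detector = peopleDetectorACF('inria-100x41');          % pretrained model choice
[bboxes,scores] = detect(detector,I);                  % I is one video frame
keep = scores >= 20;                                   % hypothetical classification score threshold
[bboxes,scores] = selectStrongestBbox(bboxes(keep,:),scores(keep), ...
    'OverlapThreshold',0.65);                          % hypothetical overlap ratio threshold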

Point Tracker

Track and label one or more rectangle ROI labels over short intervals by using the
Kanade-Lucas-Tomasi (KLT) algorithm. This algorithm is based on the vision.PointTracker
System
object™. To use this algorithm, you must define at least one rectangle ROI label, but
you do not need to draw any ROI labels.

To change the feature detector used to obtain the initial points for tracking, click
Settings and select one of the available feature detectors.
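
As a rough sketch of the underlying tracking step (not the app's exact code; the frame variables are assumed to be RGB video frames, and the minimum eigenvalue detector is only one of the available options):

points  = detectMinEigenFeatures(rgb2gray(frame1));    % initial feature points from the first frame
tracker = vision.PointTracker('MaxBidirectionalError',2);
initialize(tracker,points.Location,frame1);
[trackedPoints,validity] = tracker(frame2);            % point locations propagated to the next frame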

Temporal Interpolator

Estimate rectangle ROIs between frames by interpolating the ROI locations across the time
interval. To use this algorithm, you must draw a rectangle ROI on a minimum of two
frames: one at the beginning of the interval and one at the end of the interval. The
interpolation algorithm estimates and draws ROIs in the intermediate frames.

Consider a video with 11 frames. The first frame has a rectangle ROI centered at [5, 5]. The
11th frame has a rectangle ROI centered at [25, 25]. At each frame, the algorithm
moves the ROI 2 pixels in the x-direction and 2 pixels in the
y-direction. Therefore, the algorithm centers the ROI at
[7, 7] in the second frame, [9, 9] in the third frame, and so on, up to [23, 23] in
the second-to-last frame.
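
A minimal sketch of this linear interpolation of the ROI centers using interp1 (the app's internal implementation may differ):

numFrames  = 11;
keyCenters = [5 5; 25 25];                             % centers drawn in the first and last frames
centers    = interp1([1 numFrames],keyCenters,1:numFrames);
disp(centers)                                          % rows step from [5 5] to [25 25] in increments of 2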

ACF Vehicle Detector (requires Automated Driving Toolbox)

Detect and label vehicles using aggregate channel features (ACF). This algorithm is based on
the vehicleDetectorACF function. To use this
algorithm, you must define at least one rectangle ROI label. You do not need to draw any ROI
labels.

To help improve the algorithm results, first click Settings. You can
change any of these settings.

The pretrained vehicle detector model that the algorithm uses — The
'full-view' model was trained using unoccluded images of
the front, rear, left, and right sides of vehicles. The
'front-rear-view' model was trained using images of only
the front and rear sides of the vehicle.

The overlap ratio threshold, from 0 to 1, for detecting vehicles — When
rectangle ROIs overlap by more than this threshold, the algorithm discards one
of the ROIs.

The classification score threshold for detecting vehicles — Increase the score
to increase the prediction confidence of the algorithm. Rectangles with scores
below this threshold are discarded.

You can also configure the detector with a calibrated monocular camera by importing a monoCamera object into the MATLAB workspace. Specify the length and width ranges of the vehicle in world units, such as meters.
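
Programmatically, this detector relates to the vehicleDetectorACF function in the same way that the people detector relates to peopleDetectorACF. A minimal sketch, assuming Automated Driving Toolbox is installed (filtering by score and overlap would follow as in the people detector sketch above):

detector = vehicleDetectorACF('front-rear-view');      % or 'full-view'
[bboxes,scores] = detect(detector,I);                  % I is one video frame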

Lane Boundary Detector (requires Automated Driving Toolbox)

Detect and label lane boundaries using an estimated bird's-eye-view projected image. To use
this algorithm, you must define at least one line ROI label. You do not need to draw any ROI
labels. To detect lane boundaries, the algorithm follows these steps:

It makes an initial guess at the placement of the lane boundaries in the
image.

It transforms the ROI around the lanes into a bird's-eye view image to make the
lanes parallel and remove distortion.

It uses this image to segment the lane boundaries.

To help improve the algorithm results, first click Settings. You can
change any of these settings.

The placement of the lane lines for generating the bird's-eye view
image

The ROI around the lanes, which you can expand to include more than just the
ego lane boundaries in the image

The pixel width of detected lane boundaries in the image

You can also change the number of lane boundaries that you want to detect. The default number
of lane boundaries is 2.
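
The app does not expose its implementation, but a rough sketch of this kind of bird's-eye-view pipeline, built from Automated Driving Toolbox functions, might look like the following. The camera intrinsics, mounting height, ROI, and lane-marker width are all illustrative assumptions:

focalLength    = [800 800];                            % illustrative camera intrinsics, in pixels
principalPoint = [320 240];
imageSize      = [480 640];
sensor   = monoCamera(cameraIntrinsics(focalLength,principalPoint,imageSize),1.5);  % camera 1.5 m above the road
outView  = [3 30 -6 6];                                % ROI around the lanes, in meters
birdsEye = birdsEyeView(sensor,outView,[NaN 250]);     % bird's-eye-view configuration
topView  = transformImage(birdsEye,rgb2gray(I));       % project the frame I to a bird's-eye view
laneMask = segmentLaneMarkerRidge(topView,birdsEye,0.25);  % segment markers, assuming 0.25 m wide boundaries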