Some camera sensors don't include auto white balance and/or auto exposure processing, so RidgeRun offers librraew, a library of auto exposure (AE) and auto white balance (AWB) algorithms. This library has been developed for the DM365 platform. The DM365 VPFE includes a module called H3A, designed to support control loops for auto focus, auto white balance, and auto exposure by collecting statistics about the imaging/video data. There are two blocks in this module:

Auto focus engine

Auto exposure and auto white balance engine

librraew only uses the auto exposure and auto white balance engine. This engine divides each frame into two-dimensional blocks of pixels referred to as windows. The engine can provide several image/video metrics:

Accumulation of clipped pixels along with all non-saturated pixels in each window per color.

Accumulation of the sum of squared pixels per color.

Minimum and maximum pixels values in each window per color.

The AE/AWB engine can be configured to use up to 36 horizontal windows with sum + {sum of squares or min+max} output, or up to 56 horizontal windows with sum-only output. It can also be configured to use up to 128 vertical windows. The window width and height are programmable.
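As an illustration of these limits, a configuration validity check might look like the following sketch; the enum and function names are hypothetical, not part of the H3A driver or the librraew API.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the AE/AWB engine window limits described
 * above; these names are illustrative, not the actual driver API. */
enum aew_output_mode {
    AEW_SUM_ONLY,        /* sum output only: up to 56 horizontal windows */
    AEW_SUM_OF_SQUARES,  /* sum + sum of squares: up to 36 horizontal windows */
    AEW_MIN_MAX          /* sum + min/max: up to 36 horizontal windows */
};

static bool aew_windows_valid(enum aew_output_mode mode,
                              int h_windows, int v_windows)
{
    int h_max = (mode == AEW_SUM_ONLY) ? 56 : 36;
    return h_windows >= 1 && h_windows <= h_max &&
           v_windows >= 1 && v_windows <= 128;
}
```

A 56-window horizontal layout is only valid in sum-only mode; requesting sum-of-squares output with more than 36 horizontal windows would be rejected.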

So far, librraew has only been tested with the mt9p031 camera sensor, but if you provide the appropriate functions to the library, it can work with any sensor. It is a plain C library and can be re-used and integrated with any application. RidgeRun uses ipiped (see below) for testing and demonstration.

Algorithms

Auto white balance

When an image of a scene is captured by a digital camera sensor, the sensor response at each pixel depends on the illumination. Depending on the scene illumination, a distinct color cast appears over the captured scene. This effect is caused by the color temperature of the light. If a white object is illuminated with a low color temperature light source, the object in the captured image will appear reddish. Similarly, when the white object is illuminated with a high color temperature light source, it will appear bluish. The human eye compensates for this color cast automatically through a characteristic known as color constancy, which makes perceived colors largely independent of the illumination. Auto white balance tries to reproduce color constancy for captured images.

Many AWB algorithms follow a two-stage process:

Illumination estimation: can be done explicitly, by choosing from a known set of possible illuminations, or implicitly, through assumptions about the effect of such illuminations. The algorithms implemented in librraew use implicit estimation.

Image color correction: is achieved through an independent gain adjustment of the three color signals. Commonly only the blue and red gains are adjusted.

Below is a brief description of the AWB algorithms included in librraew:

Gray World

The Gray World algorithm is based on the assumption that, given an image with a sufficient amount of color variation, the average reflectance of the scene is achromatic (gray). For the average to be achromatic, the means of the red, green, and blue channels in a given scene should be roughly equal. If one color dominates the scene, the results of the algorithm may not be satisfactory.
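The gray-world correction can be sketched as follows: scale the red and blue channels so that their means match the green mean. The types and function names here are hypothetical, not the librraew API.

```c
#include <assert.h>
#include <math.h>

/* Gray-world sketch (hypothetical names, not the librraew API):
 * derive red/blue gains that make the channel means equal to green. */
typedef struct { double r, g, b; } channel_means;
typedef struct { double r_gain, b_gain; } wb_gains;

static wb_gains gray_world_gains(channel_means m)
{
    wb_gains g = { 1.0, 1.0 };
    if (m.r > 0.0) g.r_gain = m.g / m.r;  /* make mean(R) match mean(G) */
    if (m.b > 0.0) g.b_gain = m.g / m.b;  /* make mean(B) match mean(G) */
    return g;
}
```

For example, a reddish cast (high red mean, low blue mean) yields an r_gain below the b_gain, pulling the averages back toward gray.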

White Patch

The White Patch algorithm assumes that the maximum response in an image is caused by a perfect reflector, which represents the color of the illumination. This white balancing attempts to equalize the maximum values of the three channels to produce a white patch. The algorithm should only be used on scenes where the captured image contains no saturated pixels.

White Patch 2

This is a variation of White Patch that aims to resolve the problems with saturated pixels. It uses an average of local maxima instead of the absolute maximum. The results can change depending on the number of windows in the image.
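The difference from plain White Patch can be sketched as: collect a per-window maximum for each channel, average them, and equalize red (and similarly blue) to green. These helper names are hypothetical, not the librraew API.

```c
#include <assert.h>
#include <math.h>

/* White Patch 2 sketch (hypothetical, not the librraew API): use the
 * average of per-window maxima instead of the single absolute maximum,
 * so isolated saturated pixels do not dominate the estimate. */
static double mean_of_window_maxima(const double *win_max, int n_windows)
{
    if (n_windows <= 0)
        return 0.0;
    double sum = 0.0;
    for (int i = 0; i < n_windows; i++)
        sum += win_max[i];
    return sum / n_windows;
}

/* Red gain that equalizes the averaged red maxima to the green ones. */
static double white_patch2_r_gain(const double *r_max, const double *g_max,
                                  int n_windows)
{
    double r = mean_of_window_maxima(r_max, n_windows);
    double g = mean_of_window_maxima(g_max, n_windows);
    return r > 0.0 ? g / r : 1.0;
}
```

Averaging over windows also explains why the result varies with the number of windows: more, smaller windows give more local maxima to average.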

Auto exposure

One of the main problems affecting image quality, leading to unpleasant pictures, is improper exposure to light. Exposure is the amount of light that reaches the image sensor, so it determines how light or dark the resulting image/video will be. If too much light strikes the image sensor, the image will be overexposed: washed out and faded. If too little light reaches the sensor, the image will be underexposed: dark and lacking in detail, especially in shadow areas. Auto exposure algorithms try to capture the image with the best exposure, that is, the exposure that reproduces the most important regions (according to contextual or perceptual criteria) with a level of brightness roughly in the middle of the possible range.

The AE process involves three stages:

Light metering: is generally accomplished using the camera sensor itself or an external device as an exposure detector.

Scene analysis: uses brightness metering methods to estimate the scene illumination from image metrics. Using this estimate, it calculates the brightness adjustment needed to reach the best exposure.

Image brightness correction: to ensure that the correct amount of light reaches the sensor, the image sensor parameters controlling illumination and shutter time are adjusted. The relevant CMOS sensor parameter is often called exposure time, defined as the amount of time that the sensor integrates light. In other words, it determines how long the sensor's photodiode array is exposed to light.

Librraew includes one AE algorithm that can use several brightness metering methods. The algorithm uses an electronic-centric approach based on the mid-tone idea. It takes metrics gathered from the sensor's captured image as its light metering, then calculates the exposure time required to reach an optimal image brightness, using an expression that relates the actual scene brightness, the actual exposure time, and the defined optimal image brightness. The optimal image brightness is defined as the mid-tone value.
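The mid-tone relation described above can be sketched as a simple proportional update: since measured brightness grows roughly linearly with exposure time, the exposure needed to hit the mid-tone target is the current exposure scaled by the ratio of target to measured brightness. This function is an illustration of the idea, not the librraew implementation.

```c
#include <assert.h>
#include <math.h>

/* Electronic-centric AE sketch (illustrative, not the librraew code):
 * scale the current exposure time by target/measured brightness,
 * clamping the result to the sensor's maximum exposure time. */
static double next_exposure_us(double current_us, double measured,
                               double target, double max_us)
{
    if (measured <= 0.0)
        return max_us;  /* scene too dark to meter: expose as long as allowed */
    double e = current_us * target / measured;
    return e > max_us ? max_us : e;
}
```

For instance, a frame metered at half the target brightness doubles the exposure time, subject to the clamp that keeps the frame rate achievable.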

All the metering methods define a pixel's brightness as the average of its red, green, and blue components. Below is a brief description of the brightness metering methods:

Partial

Averages the brightness of the pixels in a region at the center of the frame; the rest of the frame is ignored. It is generally used when very bright or very dark areas at the edges of the frame would otherwise distort the scene illumination estimate.

Center weighted

Averages the light information coming from the entire frame, computing the brightness averages of two regions: the pixels in a central area and the pixels in the rest of the frame (the background). The total brightness is calculated with emphasis placed on the center area: 75% of the total brightness is given by the center and 30% by the background. This method can be used when you want the whole scene to be well illuminated and unaffected by small brightness variations at the edges. The subject of the picture must be at the center of the image. However, if back light is present in the scene, the central part becomes darker than the rest of the scene and an unpleasantly underexposed foreground is produced.
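The combination step can be sketched as a weighted sum of the two regional averages. The 0.75 center weight (with the remainder assigned to the background) is an assumption for illustration; the exact split used by librraew may differ.

```c
#include <assert.h>
#include <math.h>

/* Center-weighted metering sketch (illustrative): combine the center and
 * background brightness averages with emphasis on the center. The 0.75
 * weight is an assumption, not the documented librraew value. */
static double center_weighted_brightness(double center_avg, double bg_avg)
{
    const double center_weight = 0.75;
    return center_weight * center_avg + (1.0 - center_weight) * bg_avg;
}
```

With this weighting, a dark background pulls the metered brightness down only mildly, so the centered subject dominates the exposure decision.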

Segmented

This method is designed for scenes that have a principal object in backlit conditions. It emphasizes the luminance of the main object according to the degree of backlighting, dividing the frame into six pieces and weighting them.

Average

Averages the light information coming from the entire frame without weighting any particular portion of the metered area. This method can be used on scenes that do not have a principal object, when you want an average illumination. If the scene has high contrast, the algorithm causes under- or overexposure of parts of the scene.

The center area for all the metering methods is defined as a percentage of the image size and can be set through a librraew parameter.
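Deriving the center window from that percentage could look like the following sketch, which centers a scaled rectangle in the frame; the type and function names are hypothetical, not the librraew API.

```c
#include <assert.h>

/* Sketch (hypothetical, not the librraew API): compute a centered
 * metering window whose width and height are a given percentage of the
 * image dimensions. */
typedef struct { int x, y, w, h; } center_rect;

static center_rect center_region(int img_w, int img_h, int percent)
{
    center_rect r;
    r.w = img_w * percent / 100;
    r.h = img_h * percent / 100;
    r.x = (img_w - r.w) / 2;  /* offset so the window is centered */
    r.y = (img_h - r.h) / 2;
    return r;
}
```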

Implementation Details

Three applications are used to support the auto exposure and auto white balance (AEW) adjustments in RidgeRun's SDK:

Ipiped, a D-Bus server for controlling and configuring the camera sensor, the DM365 video processor, and the AEW library.

librraew, a library that includes auto white balance and auto exposure algorithms.

Ipipe-client, a D-Bus client that can be used to invoke any of the methods supported by Ipiped.

Running Ipiped

Ipiped must run in the background. If you are using RidgeRun's SDK and enabled ipiped, it will be set up to start automatically when the system boots.

ipiped &

Ipiped registers with D-Bus and waits until ipipe-client requests to execute a method.

Running Ipipe-client

Ipipe-client is a D-Bus client that uses commands to invoke methods of ipiped, so ipiped must be running to use ipipe-client. A command may require arguments depending on its functionality. Ipipe-client has two operation modes: you can execute a single command, or you can open an interactive console to execute a group of commands.

To execute a single command, use the following command-line syntax:

ipipe-client <command> <argument 1> ... <argument n>

To enter the interactive console, run ipipe-client without any command. Then, to execute a command, just type the command and its required arguments.

Command Description
help Displays the help text for all the possible commands or a specific command.
set-debug Enables/disables debug messages.
init-aew Initializes the AEW algorithms.
stop-aew Stops the AEW algorithms.
shell Executes a shell command (shell_cmd) using the interactive console.
ping Shows whether ipipe-daemon is alive.
quit Quits the interactive console.
exit Exits the interactive console.
get-video-processor Shows the video processor that is being used.
get-sensor Shows the sensor that is being used.
run-config-script Executes a group of ipipe-client commands.
set-previewer-mode Configures the previewer in continuous or one-shot mode.
set-bayer-pattern Sets the R/Gr/Gb/B color pattern of the previewer.
set-digital-gain Sets the red (R), green (G) and blue (B) gains on the ipipe.
get-digital-gain Returns the gain value for each color component (RGB).
set-luminance Adjusts brightness (Br) and contrast (C).
get-luminance Returns the current brightness (Br) and contrast (C) values.
flip-vertical Flips the image vertically (on the sensor).
flip-horizontal Flips the image horizontally (on the sensor).
set-exposure Sets the effective shutter time of the sensor for light integration.
get-exposure Gets the exposure time of the sensor in us.
set-sensor-gain Sets the red (R), green (G) and blue (B) gains directly on the sensor.
get-sensor-gain Gets the sensor's red (R), green (G) and blue (B) gains.

If you want more detailed information about a command, execute:

ipipe-client help <command>

Controlling librraew with ipipe

Auto exposure and auto white balance adjustments can be started with the ipipe-client command init-aew. Init-aew requires arguments that define the algorithms and other parameters. To see the required arguments, request help, which shows the following list:

Command: init-aew
Syntax: init-aew <WB> <AE> <G> <EM> <T[us]> <fps> <seg> <width> <height>
Description: Initialize AEW algorithms
Arguments:
WB: white balance algorithm, the options are:
G -for gray world algorithm
W -for retinex algorithm
W2 -for variant of retinex algorithm
N -for none
AE: auto exposure algorithm, the options are
EC -for electronic centric
N -for none
G: gain type, the options are:
S -to use the sensor gain
D -to use the digital
EM: exposure metering method, the options are:
P -for partial metering that take into account the light
information of a portion in the center and the rest of
the frame is ignored. The size of the center depends of
of the parameter center_percentage
C -for center weighted metering that take into account
the light information coming from the entire frame with
emphasis placed on the center area
A -for average metering that take into account the light
information from the entire frame without weighting
SG -for segmented metering that divides the frame
on 6 pieces and weighting them to avoid backlighting
T: wait time in us, specifies the time between
algorithm adjustments, max value=1s=1000000us
fps: minimum frame rate
seg: frame segmentation factor, each frame is segmented into
regions, this factor represents the percentage of the
maximum number of possible regions
width: captured video/image horizontal size
height: captured video/image vertical size
center_percentage: defines the percentage of the image width
and height to be used as the center size

You can also stop the automatic adjustments with the stop-aew command.

Some of the init-aew arguments need to be explained in more detail:

T: the time between iterations defines how fast the algorithm can adjust the scene parameters. If you don't need fast changes, you can use a longer time to reduce CPU usage.

seg: this factor trades CPU usage against auto-adjustment precision. A higher segmentation percentage increases CPU usage but gives more precise adjustments.

Using librraew with software other than ipiped

librraew is a plain C library and can be re-used and integrated with any custom application. Please contact RidgeRun for the documentation of the librraew API.