If you do not enable install-time model downloads, the model will be
downloaded the first time you run the detector. Requests you make before the
download has completed will produce no results.

Input image guidelines

For ML Kit to accurately read barcodes, input images must contain
barcodes that are represented by sufficient pixel data. In general, the
smallest meaningful unit of the barcode should be at least 2 pixels wide
(and for 2-dimensional codes, 2 pixels tall).

For example, EAN-13 barcodes are made up of bars and spaces that are 1,
2, 3, or 4 units wide, so an EAN-13 barcode image ideally has bars and
spaces that are at least 2, 4, 6, and 8 pixels wide. Because an EAN-13
barcode is 95 units wide in total, the barcode should be at least 190
pixels wide.

Denser formats, such as PDF417, need greater pixel dimensions for
ML Kit to read them reliably. For example, a PDF417 code can have up to
34 "words" of 17 units each in a single row, so a full row would ideally
be at least 34 × 17 × 2 = 1156 pixels wide.
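The arithmetic above can be sketched as a small helper (the function name and the 2-pixels-per-module figure come from this guideline, not from the ML Kit API):

```kotlin
// Minimum image width (in pixels) for a barcode: its total width in
// "modules" (the narrowest bar/space unit) times the minimum number of
// pixels per module that ML Kit needs, about 2.
fun minBarcodeWidthPx(totalModules: Int, pxPerModule: Int = 2): Int =
    totalModules * pxPerModule

// EAN-13: 95 modules wide, so at least 190 px.
val ean13MinWidth = minBarcodeWidthPx(95)

// PDF417: up to 34 codewords of 17 modules each in one row, i.e.
// 578 modules, so at least 1156 px.
val pdf417RowMinWidth = minBarcodeWidthPx(34 * 17)
```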

If you are scanning barcodes in a real-time application, you might also
want to consider the overall dimensions of the input images. Smaller
images can be processed faster, so to reduce latency, capture images at
lower resolutions (keeping in mind the above accuracy requirements) and
ensure that the barcode occupies as much of the image as possible. Also
see Tips to improve real-time performance.

1. Configure the barcode detector

If you know which barcode formats you expect to read, you can improve the speed
of the barcode detector by configuring it to only detect those formats.
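For example, to detect only QR codes and Aztec codes, build a FirebaseVisionBarcodeDetectorOptions object with the format constants from the FirebaseVisionBarcode class (a configuration sketch; pass the options when obtaining the detector instance):

```kotlin
// Restrict the detector to the formats you expect; scanning for fewer
// formats is faster. Pass these options to
// FirebaseVision.getInstance().getVisionBarcodeDetector(options).
val options = FirebaseVisionBarcodeDetectorOptions.Builder()
    .setBarcodeFormats(
        FirebaseVisionBarcode.FORMAT_QR_CODE,
        FirebaseVisionBarcode.FORMAT_AZTEC)
    .build()
```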

Note: For a Data Matrix code to be recognized, the code must intersect the
center point of the input image. Consequently, only one Data Matrix code can be
recognized in an image.

2. Run the barcode detector

To recognize barcodes in an image, create a FirebaseVisionImage object
from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on
the device. Then, pass the FirebaseVisionImage object to the
FirebaseVisionBarcodeDetector's detectInImage method.

To create a FirebaseVisionImage object from a Bitmap object:

Kotlin

val image = FirebaseVisionImage.fromBitmap(bitmap)

The image represented by the Bitmap object must
be upright, with no additional rotation required.

To create a FirebaseVisionImage object from a
media.Image object, such as when capturing an
image from a device's camera, first determine the angle by which the
image must be rotated to compensate for both the device's
rotation and the orientation of the camera sensor in the device:

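The rotation lookup can be sketched as follows, adapted from the ML Kit quickstart for the Camera2 API; the cameraId, activity, and mediaImage values are assumed to come from the surrounding camera code:

```kotlin
// Maps the device's display rotation (Surface.ROTATION_*) to degrees,
// here for a back-facing camera mounted at the usual 90° orientation.
private val ORIENTATIONS = SparseIntArray().apply {
    append(Surface.ROTATION_0, 90)
    append(Surface.ROTATION_90, 0)
    append(Surface.ROTATION_180, 270)
    append(Surface.ROTATION_270, 180)
}

@Throws(CameraAccessException::class)
private fun getRotationCompensation(
    cameraId: String, activity: Activity, context: Context
): Int {
    // Combine the device's current rotation with the camera sensor's
    // mounting orientation.
    val deviceRotation = activity.windowManager.defaultDisplay.rotation
    var rotationCompensation = ORIENTATIONS.get(deviceRotation)
    val cameraManager =
        context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    val sensorOrientation = cameraManager
        .getCameraCharacteristics(cameraId)
        .get(CameraCharacteristics.SENSOR_ORIENTATION)!!
    rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360

    // Translate degrees into the FirebaseVisionImageMetadata constant.
    return when (rotationCompensation) {
        0 -> FirebaseVisionImageMetadata.ROTATION_0
        90 -> FirebaseVisionImageMetadata.ROTATION_90
        180 -> FirebaseVisionImageMetadata.ROTATION_180
        270 -> FirebaseVisionImageMetadata.ROTATION_270
        else -> FirebaseVisionImageMetadata.ROTATION_0
    }
}

// Then create the image from the media.Image and the rotation value,
// and pass it to the detector's detectInImage method.
val image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation)
val detector = FirebaseVision.getInstance().visionBarcodeDetector
detector.detectInImage(image)
    .addOnSuccessListener { barcodes -> /* handle detected barcodes */ }
    .addOnFailureListener { e -> /* handle the error */ }
```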

3. Get information from barcodes

If the barcode recognition operation succeeds, a list of
FirebaseVisionBarcode objects will be passed to the success listener. Each
FirebaseVisionBarcode object represents a barcode that was detected in the
image. For each barcode, you can get its bounding coordinates in the input
image, as well as the raw data encoded by the barcode. Also, if the barcode
detector was able to determine the type of data encoded by the barcode, you can
get an object containing parsed data.
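A sketch of reading these fields inside the success listener; the property names follow the FirebaseVisionBarcode API, and the Wi-Fi and URL cases shown are just two of the parsed value types:

```kotlin
for (barcode in barcodes) {
    // Bounding coordinates of the barcode in the input image.
    val bounds = barcode.boundingBox
    val corners = barcode.cornerPoints

    // Raw data encoded by the barcode.
    val rawValue = barcode.rawValue

    // Parsed data, when the detector recognized the payload type.
    when (barcode.valueType) {
        FirebaseVisionBarcode.TYPE_WIFI -> {
            val ssid = barcode.wifi?.ssid
            val password = barcode.wifi?.password
        }
        FirebaseVisionBarcode.TYPE_URL -> {
            val title = barcode.url?.title
            val url = barcode.url?.url
        }
    }
}
```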

Tips to improve real-time performance

If you want to scan barcodes in a real-time application, follow these
guidelines to achieve the best frame rates:

Throttle calls to the detector. If a new video frame becomes
available while the detector is running, drop the frame.

If you are using the output of the detector to overlay graphics on
the input image, first get the result from ML Kit, then render the image
and overlay in a single step. By doing so, you render to the display surface
only once for each input frame. See the CameraSourcePreview and GraphicOverlay classes in the quickstart sample app for an
example.

If you use the Camera2 API, capture images in
ImageFormat.YUV_420_888 format.

If you use the older Camera API, capture images in
ImageFormat.NV21 format.

Consider capturing images at a lower resolution. However, also keep in mind
this API's image dimension requirements.
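The frame-throttling tip above can be sketched as follows; the class and callback names are illustrative, not part of the ML Kit API:

```kotlin
import java.util.concurrent.atomic.AtomicBoolean

// Drops any frame that arrives while the previous frame is still being
// processed, so the detector never falls behind the camera.
class FrameThrottler(
    private val process: (frame: ByteArray, onDone: () -> Unit) -> Unit
) {
    private val busy = AtomicBoolean(false)

    // Returns true if the frame was handed to the detector,
    // false if it was dropped because the detector was busy.
    fun offer(frame: ByteArray): Boolean {
        if (!busy.compareAndSet(false, true)) return false
        process(frame) { busy.set(false) }
        return true
    }
}
```

The detector's completion callback must always clear the busy flag, including on failure; otherwise every subsequent frame would be dropped.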