This tutorial will be presented in two parts – the first being an introduction
to the command-line utilities that can be used to perform processing operations
with PDAL, and the second being an introductory C++ tutorial showing how
to use the PDAL API to accomplish similar tasks.

PDAL is both a C++ library and a collection of command-line utilities for
data processing operations. While it is similar to LAStools in some
respects, and borrows some of its lineage in others, PDAL is intended
first as a data translation library and second as an exploitation and
filtering library. PDAL exists to provide an abstract API for software developers
wishing to navigate the multitude of point cloud formats that are out there.
Its value and niche are explicitly modeled after the hugely successful GDAL
library, which provides an abstract API for data formats in the GIS raster
data space.

Conversion of one file format to another can be a hairy topic. You should
expect details of the source format to leak as data are converted to the
destination format. Metadata, file organization, and even the data
themselves may not be representable as you move from one format to
another. Conversion is by definition lossy: if not in terms of the
actual point data, then in terms of the auxiliary information the format
also carries.

It is also important to recognize that both fixed and flexible point cloud
formats exist, and converting a flexible format to a fixed one will often
leak information. Dimensions might match in type or name, but not in
width or interpretation.
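To make the width and interpretation point concrete, here is a small sketch (plain Python, not PDAL code) of what happens when a double-precision coordinate is narrowed the way many fixed-width formats store it, as a scaled integer. The scale value here is illustrative:

```python
# Illustrative only: how a fixed-width format narrows a double-precision
# coordinate into a scaled integer, discarding precision in the process.
scale = 0.01          # hypothetical precision step of the fixed format
x = 123456.78912      # double-precision source value

stored = round(x / scale)      # the integer a fixed-width format would keep
recovered = stored * scale     # what a reader of that format gets back

# The digits finer than the 0.01 step are gone; `recovered` no longer
# equals `x` exactly, even though both dimensions are "X" by name.
print(abs(recovered - x) > 0)
```

Both files would report an X dimension, but round-tripping through the narrower representation has silently changed the values.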

The text format, of course, is the ultimate flexible-definition format, at
least for the point data themselves. For the other header information, such
as the spatial reference system or the ASPRS LAS UUID, the conversion
leaks. In short, you may need to preserve additional information as part of
your conversion to keep it useful down the road.

PDAL transmits this other information in the form of Metadata that is
carried per-stage throughout the PDAL processing pipeline.
We can capture this metadata using the info utility.

$ pdal info --metadata interesting.las

This produces a block of JSON metadata. You can use your favorite JSON
manipulation tools to extract the pieces you need. For formats that do
not have the ability to preserve this metadata internally, you can keep a
.json file alongside the .txt file as auxiliary information.
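As a sketch of that workflow, the snippet below uses Python's standard json module to pull a couple of fields out of captured info output and write them to a sidecar file. The key names used here (count, comp_spatialreference) are illustrative; inspect your own `pdal info --metadata` output to see what your files actually carry.

```python
import json

def extract_metadata(info_json, keys):
    # Parse the JSON emitted by `pdal info --metadata` and keep only
    # the fields we want to carry alongside a lossy output format.
    doc = json.loads(info_json)
    metadata = doc.get("metadata", {})
    return {k: metadata[k] for k in keys if k in metadata}

# A made-up fragment standing in for real `pdal info` output:
sample = '{"metadata": {"count": 1065, "comp_spatialreference": "PROJCS[...]"}}'
sidecar = extract_metadata(sample, ["count", "comp_spatialreference"])

# Write the sidecar next to the converted file as auxiliary information.
with open("interesting.json", "w") as f:
    json.dump(sidecar, f, indent=2)
```

Keeping the sidecar in JSON means the same tools that read the info output can read the preserved metadata later.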

The full power of PDAL comes in the form of pipeline invocations.
While translate is useful for simple conversion of one format to
another, it gives the user little power to filter or alter data as they
are converted. Pipelines are the way to take
advantage of PDAL’s ability to manipulate data as they are converted. This
section will provide a basic example and demonstration of Pipeline,
but the Pipeline document contains a more detailed exposition of the
topic.

The pdal pipeline utility takes a .json file containing a pipeline
description that defines a PDAL processing pipeline. Options can be
given at each pdal::Stage of the pipeline to affect different aspects of
the processing, and stages may be chained together in many combinations
to varying effect.
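As a sketch, a minimal pipeline file might look like the following. The file names are placeholders, and filters.range is just one example of a PDAL filter stage; the reader and writer stages are inferred here from the .las file extensions:

```json
{
  "pipeline": [
      "input.las",
      {
          "type": "filters.range",
          "limits": "Z[0:100]"
      },
      "output.las"
  ]
}
```

Running pdal pipeline pipeline.json against such a file would read input.las, keep only points whose Z values fall within [0, 100], and write the result to output.las.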