This page reviews how to configure libStorage to suit any environment,
beginning with the most common use cases, exploring recommended guidelines,
and finally delving into the details of more advanced settings.

Except when specified otherwise, the configuration examples below assume the
libStorage client and server exist on the same host. However, that is not a
requirement. It is entirely possible, and in fact the whole purpose of
libStorage, for the client and server to function on different systems. One
libStorage server should be able to support hundreds of clients. Yet for the
sake of completeness, the examples below show both configurations merged.

When configuring a libStorage client and server for different systems, there
will be a few differences from the examples below:

The examples show libStorage configured with its server component hosted
on a UNIX socket. This is ideal when the client and server exist on the same
host, as it reduces security risks. However, in most real-world scenarios
the client and server do not reside on the same host, and the libStorage
server should use a TCP endpoint so it can be accessed remotely.

In a distributed configuration the actual driver configuration sections
need only occur on the server side. The entire purpose of libStorage's
distributed nature is to enable clients that have no knowledge of how to
access a storage platform to connect to a remote server that maintains that
storage platform's access information.

The first example is a simple libStorage configuration with the VirtualBox
storage driver. The below example omits the host property, but the configuration
is still valid. If the libstorage.host property is not found, the server is
hosted via a temporary UNIX socket file in /var/run/libstorage.
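A minimal configuration along those lines might be sketched as follows; the
endpoint and volumePath values are placeholders that must be adjusted per the
note that follows:

```yaml
libstorage:
  server:
    services:
      virtualbox:
        driver: virtualbox
        virtualbox:
          # The address of the VirtualBox web service
          endpoint: http://10.0.2.2:18083
          # Where volume media files are created
          volumePath: $HOME/VirtualBox/Volumes
```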

note

Please remember to replace the placeholders in the following examples
with values valid for the systems on which the examples are executed.

The example below specifies the volumePath property as
$HOME/VirtualBox/Volumes. While the text $HOME will be replaced with
the actual value for that environment variable at runtime, the path may
still be invalid. The volumePath property should reflect a path on the
system on which the VirtualBox server is running, and that is not always
the same system on which the libStorage server is running.

So please, make sure to update the volumePath property for the VirtualBox
driver to a path valid on the system on which the VirtualBox server is
running.

The same goes for the VirtualBox endpoint property, as the VirtualBox
web service is not always available at 10.0.2.2:18083.

The following example illustrates how to configure a libStorage client and
server running on the same host. The server has one endpoint on which it is
accessible - a single TCP port, 7979, bound to the localhost network interface.
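Such a configuration might be sketched as follows; the endpoint name
localhost is arbitrary, and the service definition mirrors the earlier
VirtualBox placeholder values:

```yaml
libstorage:
  # The client connects to the same local TCP endpoint
  host: tcp://127.0.0.1:7979
  server:
    endpoints:
      localhost:
        # Bound only to the localhost interface
        address: tcp://127.0.0.1:7979
    services:
      virtualbox:
        driver: virtualbox
        virtualbox:
          endpoint: http://10.0.2.2:18083
          volumePath: $HOME/VirtualBox/Volumes
```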

The following example illustrates how to configure a libStorage client and
server running on the same host. The server has one endpoint on which it is
accessible - a single TCP port, 7979, bound to all of the host's network
interfaces. This means that the server is accessible via external clients, not
just those running on the same host.
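One possible sketch of this configuration follows; omitting the interface
portion of the address (tcp://:7979) is assumed to bind the endpoint to all
of the host's network interfaces:

```yaml
libstorage:
  # The local client may still connect via localhost
  host: tcp://127.0.0.1:7979
  server:
    endpoints:
      public:
        # Bound to all of the host's network interfaces
        address: tcp://:7979
```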

Because of the public nature of this libStorage server, it is a good idea to
encrypt communications between client and server.
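A hedged sketch of such a configuration follows, introducing a client-side
TLS setting alongside endpoint-level certificate properties; the certificate
file paths are the assumed defaults under /etc/libstorage/tls:

```yaml
libstorage:
  host: tcp://127.0.0.1:7979
  client:
    # Enable TLS for the client's outbound connections
    tls: true
  server:
    endpoints:
      public:
        address: tcp://:7979
        tls:
          certFile: /etc/libstorage/tls/libstorage.crt
          keyFile: /etc/libstorage/tls/libstorage.key
```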

Please note that in the above example the property libstorage.client has
been introduced. This property is always present, even if not explicitly
specified. It exists to override libStorage properties for the client
only, such as TLS settings, logging, etc.

For the security conscious, there is no safer way to run a client/server setup
on a single system than using a UNIX socket. The socket offloads
authentication to the file system, relying on file access permissions to
ensure that only authorized users can use the libStorage API.

There may be occasions when it is desirable to provide multiple ingress vectors
for the libStorage API. In these situations, configuring multiple endpoints
is the solution. The below example illustrates how to configure three endpoints:
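A sketch of one possible shape for such a configuration follows; the endpoint
names (sock, private, public) are arbitrary, and the addresses and TLS file
paths are placeholders:

```yaml
libstorage:
  host: unix:///var/run/libstorage/localhost.sock
  server:
    endpoints:
      sock:
        # A UNIX socket for same-host clients
        address: unix:///var/run/libstorage/localhost.sock
      private:
        # A TCP endpoint bound to localhost only
        address: tcp://127.0.0.1:7979
      public:
        # A TCP endpoint bound to all interfaces, with TLS
        address: tcp://:7980
        tls:
          certFile: /etc/libstorage/tls/libstorage.crt
          keyFile: /etc/libstorage/tls/libstorage.key
```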

With all three endpoints defined explicitly in the above example, why leave the
property libstorage.host in the configuration at all? When there are no
endpoints defined, the libStorage server will attempt to create a default
endpoint using the value from the property libstorage.host. However, even
when there's at least one explicitly defined endpoint, the libstorage.host
property still serves a very important function -- it is the property used
by the libStorage client to determine which endpoint to connect to.

All of the previous examples have used the VirtualBox storage driver as the
sole measure of how to configure a libStorage service. However, it is possible
to configure many services at the same time in order to provide access to
multiple storage drivers of different types, or even different configurations
of the same driver.

The following example demonstrates how to configure three libStorage services:

| service       | driver     |
|---------------|------------|
| virtualbox-00 | virtualbox |
| virtualbox-01 | virtualbox |
| scaleio       | scaleio    |
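A sketch of those three services follows; the ScaleIO driver's own properties
are omitted, and the default create size is assumed to be expressed in GB:

```yaml
libstorage:
  server:
    services:
      virtualbox-00:
        driver: virtualbox
        virtualbox:
          volumePath: $HOME/VirtualBox/Volumes-00
      virtualbox-01:
        driver: virtualbox
        virtualbox:
          volumePath: $HOME/VirtualBox/Volumes-01
        integration:
          volume:
            operations:
              create:
                default:
                  size: 1  # assumed unit: GB
      scaleio:
        driver: scaleio
        # ScaleIO driver-specific properties omitted
```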

Notice how the virtualbox-01 service includes an added integration section.
The integration definition refers to the integration interface and parameters
specific to incoming requests through this layer. In this case we defined
libstorage.server.services.virtualbox-01 with the
integration.volume.operations.create.default.size parameter set. This enables all
create requests that come in through virtualbox-01 to have a default size of
1GB. So although it is technically the same platform below the covers,
virtualbox-00 requests may have different default values than those defined
in virtualbox-01.

A very important point to make about the relationship between services and
endpoints is that all configured services are available on all endpoints. In
the future this may change, and libStorage may support endpoint-specific
service definitions, but for now if a service is configured, it is accessible
via any of the available endpoint addresses.

Between the three services above, clearly one major difference is that two
services host one driver, VirtualBox, and the third service hosts ScaleIO.
However, why two services for one driver, in this case, VirtualBox? Because,
in addition to services being configured to host different types of drivers,
services can also host different driver configurations. In service
virtualbox-00, the volume path is $HOME/VirtualBox/Volumes-00,
whereas for service virtualbox-01, the volume path is
$HOME/VirtualBox/Volumes-01.

The location of these directories can be influenced in two ways. The first way
is via the environment variable LIBSTORAGE_HOME. When LIBSTORAGE_HOME is
defined, the normal, final token of the above paths is removed. Thus when
LIBSTORAGE_HOME is defined as /opt/libstorage the
above directory paths would be:

/opt/libstorage/etc

/opt/libstorage/etc/tls

/opt/libstorage/var/lib

/opt/libstorage/var/log

/opt/libstorage/var/run

It's also possible to override any one of the above directory paths manually
using the following environment variables:

LIBSTORAGE_HOME_ETC

LIBSTORAGE_HOME_ETC_TLS

LIBSTORAGE_HOME_LIB

LIBSTORAGE_HOME_LOG

LIBSTORAGE_HOME_RUN

Thus if LIBSTORAGE_HOME was set to /opt/libstorage and
LIBSTORAGE_HOME_ETC was set to /etc/libstorage the above paths would be:

/etc/libstorage

/etc/libstorage/tls

/opt/libstorage/var/lib

/opt/libstorage/var/log

/opt/libstorage/var/run

This section reviews the several supported TLS configuration options. The table
below lists the default locations of the TLS-related files.

| Directory                | File           | Property                        | Description                  |
|--------------------------|----------------|---------------------------------|------------------------------|
| $LIBSTORAGE_HOME_ETC_TLS | libstorage.crt | libstorage.tls.crtFile          | The public key.              |
| $LIBSTORAGE_HOME_ETC_TLS | libstorage.key | libstorage.tls.keyFile          | The private key.             |
| $LIBSTORAGE_HOME_ETC_TLS | cacerts        | libstorage.tls.trustedCertsFile | The trusted key ring.        |
| $LIBSTORAGE_HOME_ETC_TLS | known_hosts    | libstorage.tls.knownHosts       | The system known hosts file. |

If libStorage detects any of the above files, the detected files are loaded
when necessary and without any explicit configuration. However, if a file's
related configuration property is set explicitly to some other, non-default
value, the default file will not be loaded even if it is present.

TLS is disabled by default when a server's endpoint or client-side host is
configured for use with a UNIX socket. This is for two reasons:

TLS isn't strictly necessary to provide transport security via a file-backed
socket. The file's UNIX permissions take care of which users can read and/or
write to the socket.

The certificate verification step in a TLS negotiation is complicated when
the endpoint to which the client is connecting is a file path, not a FQDN or
IP address. This issue can be resolved by setting the property
libstorage.tls.serverName on the client so that the value matches the
server certificate's common name (CN) or one of its subject alternate names
(SAN).

To explicitly enable TLS for use with UNIX sockets, set the environment
variable LIBSTORAGE_TLS_SOCKITTOME to a truthy value.
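An insecure client-side TLS configuration might be sketched as follows,
assuming the tls property accepts the insecure keyword:

```yaml
libstorage:
  client:
    # Encrypt the connection but skip server certificate verification
    tls: insecure
```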

The above example instructs the client-side TLS configuration to operate in
insecure mode. This means the client is not attempting to verify the
certificate provided by the server. This is a security risk and should not
ever be used in production.

This TLS configuration example describes how to instruct the libStorage client
to validate the provided server-side certificate using a custom trusted CA file.
This avoids the perils of insecure TLS while still enabling a privately signed
or snake-oil server-side certificate.
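Assuming a custom CA bundle stored at a hypothetical path such as
/etc/ssl/custom-cacerts, such a configuration might be sketched as:

```yaml
libstorage:
  client:
    tls:
      # Verify the server certificate against this custom CA bundle
      trustedCertsFile: /etc/ssl/custom-cacerts
```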

The final TLS example explains how to configure the libStorage server to
require certificates from clients. This configuration enables the use of
client-side certificates as a means of authorization.
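Under the assumption that the server honors a clientCertRequired flag
alongside its certificate properties, such a configuration might be sketched
as:

```yaml
libstorage:
  server:
    tls:
      certFile: /etc/libstorage/tls/libstorage.crt
      keyFile: /etc/libstorage/tls/libstorage.key
      # CAs trusted to have signed the client certificates
      trustedCertsFile: /etc/libstorage/tls/cacerts
      # Reject clients that do not present a valid certificate
      clientCertRequired: true
```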

A typical scenario that employs the above example would also involve the
server certificate having a dual purpose as an intermediate signing authority
that has signed the allowed client certificates. Or at the very least, the
server certificate would be signed by the same intermediate CA used to sign
the client-side certs.

While TLS should never be configured as insecure in production, there is a
compromise that enables an encrypted connection while still providing some
measure of verification of the remote endpoint's identity -- peer verification.

When peer verification mode is enabled, TLS is implicitly configured to operate
as insecure in order to disable server-side certificate verification. This
enables an encrypted transport while delegating the authenticity of the
server's identity to the peer verification process.

The first step to configuring peer verification is to obtain the information
about the peer used to identify it. First, obtain the peer's certificate and
store it locally. This step can be omitted if the remote peer's certificate
is already available locally.

The known hosts file can be specified via the property
libstorage.tls.knownHosts. This is the system known hosts file. If this
property is not explicitly configured then libStorage checks for the file
$LIBSTORAGE_HOME_ETC_TLS/known_hosts. libStorage also looks for the user
known hosts file at $HOME/.libstorage/known_hosts.

Thus if a known hosts file is present at either of the default system or
user locations, it's possible to take advantage of them with a configuration
similar to the following:

libstorage:
  client:
    tls: verifyPeers

The above configuration snippet indicates that TLS is enabled with peer
verification. Because no known hosts file is specified the default paths are
checked for any known host files. To enable peer verification with a custom
system known hosts file the following configuration can be used:

Anyone who has used SSH has likely encountered its warning that a remote
host's identification has changed. That warning occurs when a remote host's
fingerprint is not what is stored in the client's known hosts file for that
host. libStorage behaves the exact same way. If the libStorage client is
configured to verify a remote peer's identity and that peer's fingerprint is
not what is stored in the libStorage client's known hosts file, the connection
will fail.
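A server-side token configuration might be sketched as follows; the key value
and the allow/deny entries are placeholder subjects:

```yaml
libstorage:
  server:
    auth:
      # The secret key (or a path to a file containing it)
      key: MySuperSecretSigningKey
      # The signing/verification algorithm
      alg: HS256
      # Tokens granted access
      allow:
      - alice
      # Tokens explicitly revoked
      deny:
      - mallory
      # Whether token-based access control is disabled
      disabled: false
```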

The above configuration snippet defines a new property,
libstorage.server.auth which contains the following child properties:
key, alg, allow, deny, and disabled.

The property libstorage.server.auth.key is the secret key used to verify
the signatures of the tokens included in API calls. If the property's value
is a valid file path then the contents of the file are used as the key. The
value of libstorage.server.auth.alg specifies the cryptographic algorithm
used to sign and verify the tokens. It has a default value of HS256. Valid
algorithms include the standard JWT signing algorithms, such as HS256, HS384,
HS512, RS256, RS384, RS512, ES256, ES384, and ES512.

Both the properties libstorage.server.auth.allow and
libstorage.server.auth.deny are arrays of strings.

The values can be either the subject of the token, the entire, encoded JWT, or
the format tag:encodedJWT. The prefix tag: can be any piece of text in
order to provide a friendly means of identifying the encoded JWT.

The allow property indicates which tokens are valid and the deny property
is a way to explicitly revoke access.

note

Please be aware that by virtue of defining the property
libstorage.server.auth.allow, access to the server is restricted to
requests with valid tokens only and anonymous access is no longer allowed.

There may be occasions when it is necessary to temporarily disable token-based
access restrictions. It's possible to do this without removing the token
configuration by setting the property libstorage.server.auth.disabled
to a boolean true value:
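One sketch, assuming disabled sits alongside the other auth properties:

```yaml
libstorage:
  server:
    auth:
      key: MySuperSecretSigningKey
      # Temporarily disable token-based access restrictions
      disabled: true
```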

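A per-service token configuration might be sketched as follows, with
hypothetical ebs-00 and ebs-01 services and the assumption that a service
definition accepts its own auth section:

```yaml
libstorage:
  server:
    services:
      ebs-00:
        driver: ebs
        # Token configuration scoped to this service only
        auth:
          key: MySuperSecretSigningKey
          allow:
          - alice
      ebs-01:
        driver: ebs
```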
The above configuration defines a token configuration for the service ebs-00
but not the service ebs-01. That means that API calls to the resource
/volumes/ebs-00 require a valid token whereas API calls to the resource
/volumes/ebs-01 do not.

note

If an API call without a token is submitted that acts on multiple
resources and one or more of those resources is configured for token-based
access then the call will result in an HTTP status of 401 Unauthorized.

It is also possible to configure global token-based access and service-level
token-based access at the same time. However, there are some important details
of which to be aware when doing so.

When combining global and service token configurations, only the global
token key is respected. Otherwise tokens would always be invalid either at
the global or service scope since a token is signed by a single key.

The global allow list grants access globally but can be overridden by
including a token in a service's deny list.

The global deny list restricts access globally and cannot be overridden
by including a token in a service's allow list. For example, consider the
following configuration snippet:
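One hedged sketch of such a snippet, reusing the hypothetical ebs-00 service:

```yaml
libstorage:
  server:
    auth:
      key: MySuperSecretSigningKey
      # Globally revoke this token's access
      deny:
      - cduchesne
    services:
      ebs-00:
        driver: ebs
        auth:
          # A service-level allow cannot override a global deny
          allow:
          - cduchesne
```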

The above example defines a global deny list. That means that even though
the service ebs-00 includes cduchesne in its own allow list, requests
to service ebs-00 with cduchesne's bearer token are denied because
that token is denied globally.

Up until now the discussion surrounding security tokens has been centered on
server-side configuration. However, the libStorage client can also be
configured to send tokens as part of its standard API call workflow. For
example:
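Assuming the client accepts its token via an auth section that mirrors the
server's, a sketch might be:

```yaml
libstorage:
  client:
    auth:
      # Placeholder; replace with a real, encoded JWT
      token: eyJhbGciOi...
```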

If libStorage is embedded into another application, such as
REX-Ray, then that application may
manage its own configuration and supply the embedded libStorage instance
directly with a configuration object. In this scenario, the libStorage
configuration files are ignored in deference to the embedding application.

The order of the items above is also the order of precedence when considering
options set in multiple locations that may override one another. Values set
via CLI flags have the highest order of precedence, followed by values set by
environment variables, followed, finally, by values set in configuration files.

Please note that while the user configuration file is located inside the user's
home directory, this is the directory of the user that starts libStorage. And
if libStorage is being started as a service, then sudo is likely being used,
which means that $HOME/.libstorage/config.yml won't point to your home
directory, but rather /root/.libstorage/config.yml.

The section Configuration Methods mentions there are
three ways to configure libStorage: config files, environment variables, and the
command line. However, this section will illuminate the relationship between the
names of the configuration file properties, environment variables, and CLI
flags.

Below is a simple configuration file that tells the libStorage client where
the libStorage server is hosted:

libstorage:
  host: tcp://192.168.0.20:7979

The property libstorage.host is a string. This value can also be set via
environment variables or the command line, but to do so requires knowing the
names of the environment variables or CLI flags to use. Luckily those are very
easy to figure out just by knowing the property names.

All properties that might appear in the libStorage configuration file
fall under some type of heading. For example, take the default configuration
above.

The rule for environment variables is as follows:

Each nested level becomes a part of the environment variable name, followed
by an underscore _, except for the terminating part.

The entire environment variable name is uppercase.

For example, the property libstorage.host becomes the environment variable
LIBSTORAGE_HOST.

Nested properties follow these rules for CLI flags:

The root level's first character is lower-cased with the rest of the root
level's text left unaltered.

The remaining levels' first characters are all upper-cased with the
remaining text of that level left unaltered.

All levels are then concatenated together.

For example, the property libstorage.host becomes the CLI flag
libstorageHost.

The following example builds on the previous one. In this case logging
directives have been added to the client instance, and their environment
variable and CLI flag transformations follow the same rules.
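For instance, extending the earlier host configuration with a hypothetical
client-side logging section:

```yaml
libstorage:
  host: tcp://192.168.0.20:7979
  client:
    logging:
      level: debug
```

Following the rules above, the property libstorage.client.logging.level maps
to the environment variable LIBSTORAGE_CLIENT_LOGGING_LEVEL and to the CLI
flag libstorageClientLoggingLevel.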

Referring to the section on defining
Multiple Services, there is another way
to define the TLS settings for the external TCP endpoint. The same
configuration can be rewritten and simplified in the process:

The above example may look different than the previous one, but it's actually
the same with a minor tweak in order to simplify configuration.

While there are still two VirtualBox services defined, virtualbox-00 and
virtualbox-01, neither service contains configuration information about the
VirtualBox driver other than the volumePath property. This is because the
change effected above takes advantage of inherited properties.

When a property is omitted, libStorage traverses the configuration instance
upwards, checking certain, predefined levels known as "scopes" to see if the
property value exists there. All configured services represent a valid
configuration scope as does libstorage.server.

Thus when the VirtualBox driver is initialized and it checks for its properties,
while the driver may only find the volumePath property defined under the
configured service scope, the property access attempt travels up the
configuration stack until it hits the libstorage.server scope where the
remainder of the VirtualBox driver's properties are defined.
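The inheritance described above might be sketched as follows; the endpoint
value is a placeholder, and only volumePath is defined per service:

```yaml
libstorage:
  server:
    # Driver properties defined at the libstorage.server scope
    # are inherited by every service below
    virtualbox:
      endpoint: http://10.0.2.2:18083
    services:
      virtualbox-00:
        driver: virtualbox
        virtualbox:
          # Only this property is service-specific
          volumePath: $HOME/VirtualBox/Volumes-00
      virtualbox-01:
        driver: virtualbox
        virtualbox:
          volumePath: $HOME/VirtualBox/Volumes-01
```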

Note that while the log level is defined at the root of the config, it's also
defined at libstorage.server.logging.level. The latter value of info
overrides the former value of warn. Also please remember that even if the
latter, server-specific value of info had not been defined, an attempt by
the server to access the log level would still be perfectly valid, since the
attempt would traverse up the configuration data until it found the log level
defined at the root of the configuration.

All operations received by the libStorage API are immediately enqueued into a
Task Service in order to divorce the business objective from the scope of the
HTTP request that delivered it. If a task completes before the HTTP request
times out, the result of the task is written to the HTTP response and sent to
the client. However, if the operation is long-lived and continues to execute
after the original HTTP request has timed out, the goroutine running the
operation will finish regardless.

In the case of such a timeout event, the client receives an HTTP status 408 -
Request Timeout. The HTTP response body also includes the task ID which can
be used to monitor the state of the remote call. The following resource URI can
be used to retrieve information about a task:

GET /tasks/${taskID}

For systems that experience heavy loads, the task system can also be a source
of potential resource issues. Because tasks are currently kept indefinitely,
too many tasks over a long period of time can result in massive memory
consumption, with reports of 50GB and more.

That's why the configuration property libstorage.server.tasks.logTimeout is
available to adjust how long a task is logged before it is removed from memory.
The default value is 0 -- that is, do not log the task in memory at all.

While this is in contradiction to the task retrieval example above --
obviously a task cannot be retrieved if it is not retained -- testing and
benchmarks have shown it is too dangerous to enable task retention by default.
Instead tasks are removed immediately upon completion.

The following configuration example illustrates a libStorage server that keeps
tasks logged for 10 minutes before purging them from memory:

libstorage:
  server:
    tasks:
      logTimeout: 10m

The libstorage.server.tasks.logTimeout property can be set to any value that
is parseable by the Golang
time.ParseDuration function. For
example, 1000ms, 10s, 5m, and 1h are all valid values.

The libstorage.server.libstorage.storage.driver property can be used to
activate a storage driver. That is not a typo; the libstorage key is repeated
beneath libstorage.server. This is because configuration property paths are
absolute, and when nested under an architectural component, such as
libstorage.server, the entire key path must be replicated.
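For example, activating the VirtualBox storage driver on the server, with the
repeated libstorage key shown in full:

```yaml
libstorage:
  server:
    # Note the repeated libstorage key: property paths are absolute
    libstorage:
      storage:
        driver: virtualbox
```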

That said, and this may seem to contradict the last point, the storage driver
property is valid only on the server. Well, not really. Internally the
libStorage client uses the same configuration property to denote its own
storage driver. This internal storage driver is actually how the libStorage
client communicates with the libStorage server.

The disable create feature enables you to disallow any volume creation
activity. Any requests will return successfully, but the create will not be
passed to the backend storage platform.

The disable remove feature enables you to disallow any volume removal
activity. Any requests will return successfully, but the remove will not be
passed to the backend storage platform.
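Assuming these features are toggled via disable flags beneath the integration
operations tree (the exact property names are an assumption based on the
integration.volume.operations pattern used earlier), a sketch might be:

```yaml
libstorage:
  integration:
    volume:
      operations:
        create:
          # Accept create requests but do not pass them to the backend
          disable: true
        remove:
          # Accept remove requests but do not pass them to the backend
          disable: true
```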

There is a capability to preemptively detach any existing attachments to other
instances before attempting a mount. This will enable use cases for
availability where another instance must be able to take control of a volume
without the current owner instance being involved. The operation is considered
equivalent to a power off of the existing instance for the device.
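Assuming a preempt flag beneath the mount operation, a sketch of enabling
this capability might be:

```yaml
libstorage:
  integration:
    volume:
      operations:
        mount:
          # Forcefully detach the volume from other instances before mounting
          preempt: true
```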

By default, accounting takes place during Mount, Unmount, and related
operations. This only has an impact when running as a service through the
HTTP/JSON interface, since the counts are persisted in memory. The purpose of
respecting the used count is to ensure that a volume is not unmounted until
the unmount requests have equaled the mount requests.

In the Docker use case, if there are multiple containers sharing a volume
on the same host, the volume will not be unmounted until the last container
is stopped.

The following setting should only be used if you wish to disable this
functionality. This would make sense if the accounting is being done from
higher layers and all unmount operations should proceed without control.
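A hedged sketch, assuming the unmount operation exposes an ignoreUsedCount
flag:

```yaml
libstorage:
  integration:
    volume:
      operations:
        unmount:
          # Allow unmounts to proceed regardless of the used count
          ignoreUsedCount: true
```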

Currently a restart of the service will cause the counts to be reset. This
will cause issues if multiple containers are sharing a volume. If you are
sharing volumes, it is recommended that you restart the service along with the
accompanying container runtime (if this setting is false) to ensure they are
synchronized.

In order to optimize Path requests, the paths of actively mounted volumes
returned as the result of a List request are cached. Subsequent Path
requests for unmounted volumes will not dirty the cache. Only once a volume
has been mounted will the cache be marked dirty and the volume's path retrieved
and cached once more.

The following configuration example illustrates the two path cache properties:
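A sketch, assuming the cache is controlled by enabled and async properties
beneath the path operation:

```yaml
libstorage:
  integration:
    volume:
      operations:
        path:
          cache:
            # Cache the paths of actively mounted volumes
            enabled: true
            # Populate the cache asynchronously at startup
            async: true
```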

Volume path caching is enabled and asynchronous by default, so it's possible to
entirely omit the above configuration excerpt from a production deployment, and
the system will still use asynchronous caching. Setting the async property to
false simply means that the initial population of the cache will be handled
synchronously, slowing down the program's startup time.

When volumes are mounted there can be an additional path that is specified to
be created and passed as the valid mount point. This is required for certain
applications that do not want to place data from the root of a mount point.

The default is the /data path. If a value is set via
linux.integration.volume.operations.mount.rootPath, the default will be
overridden.
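Based on the property path given above, a sketch overriding the default with
a hypothetical /mydata path might be:

```yaml
linux:
  integration:
    volume:
      operations:
        mount:
          # Created beneath the mount point and returned as the mount path
          rootPath: /mydata
```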

Normally an incoming POST request with an opts field does not copy the
key/value pairs in the opts map into the fields that match the request's
schema. For example, take a look at the following request:
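A hypothetical create request along those lines, with illustrative names and
values, and the encrypted flag buried in the free-form opts map:

```json
{
  "name": "my-new-volume",
  "size": 10240,
  "opts": {
    "encrypted": true
  }
}
```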

The above request is used for creating a new volume, and it appears that the
intention is to create the volume as encrypted. However, the encrypted field
is part of the free-form opts piece of the request. The opts map is for
data that is not yet part of the official libStorage API and schema. Certain
drivers may parse data out of the opts field, but it is driver specific.
The libStorage server does not normally attempt to parse this field's keys
and match it to the request's fields. A proper request would look like this:
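A hypothetical well-formed request, with illustrative names and values and
encrypted promoted to a first-class field rather than placed inside opts:

```json
{
  "name": "my-new-volume",
  "size": 10240,
  "encrypted": true
}
```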

In order to make things a little easier on clients, a server can be configured
to treat the first JSON request as equal to the second. This feature is disabled
by default due to the possibility of side-effects. For example, a value in the
opts map could unintentionally overwrite the intended value if both keys are
the same.

To enable this feature, the libStorage server's configuration should set the
libstorage.server.parseRequestOpts property to true. An example YAML snippet
with this property enabled resembles the following:

libstorage:
  server:
    parseRequestOpts: true

With the above property set to true, values in a request's opts map will be
copied to the corresponding key in the request proper.
