Fabric composes a couple of other libraries as well as providing its own layer
on top; user code will most often import from the fabric package, but
you’ll sometimes import directly from invoke or paramiko too:

The most basic use of Fabric is to execute a shell command on a remote system
via SSH, then (optionally) interrogate the result. By default, the remote
program’s output is printed directly to your terminal, and captured. A basic
example:

Meet Connection, which represents an SSH connection and provides the core of
Fabric’s API, such as run. Connection objects need at least a
hostname to be created successfully, and may be further parameterized by
username and/or port number. You can give these explicitly via args/kwargs:

Connection(host='web1', user='deploy', port=2202)

Or by stuffing a [user@]host[:port] string into the host argument
(though this is purely convenience; always use kwargs whenever ambiguity
appears!):

Connection objects’ methods (like run) usually return
instances of invoke.runners.Result (or subclasses thereof) exposing the sorts
of details seen above: what was requested, what happened while the remote
action occurred, and what the final result was.

Need to run things as the remote system’s superuser? You could invoke the
sudo program via run, and (if your remote system isn’t
configured with passwordless sudo) respond to the password prompt by hand, as
below. (Note how we need to request a remote pseudo-terminal; most sudo
implementations get grumpy at password-prompt time otherwise.)

Giving passwords by hand every time can get old; thankfully Invoke’s powerful
command-execution functionality includes the ability to auto-respond to program output with pre-defined input. We can use this for
sudo:

Using watchers/responders works well here, but it’s a lot of boilerplate to set
up every time - especially as real-world use cases need more work to detect
failed/incorrect passwords.

To help with that, Invoke provides a Context.sudo method which handles most of the boilerplate for you (as Connection subclasses Context, it gets this method for free). sudo doesn’t do anything users can’t do
themselves - but as always, common problems are best solved with commonly
shared solutions.

We filled in the sudo password up-front at runtime in this example; in
real-world situations, you might also supply it via the configuration system
(perhaps using environment variables, to avoid polluting config files), or
ideally, use a secrets management system.

Besides shell command execution, the other common use of SSH connections is
file transfer; Connection.put and Connection.get exist to fill this need.
For example, say you had an archive file you wanted to upload:

One-liners are good examples but aren’t always realistic use cases - one
typically needs multiple steps to do anything interesting. At the most basic
level, you could do this by calling Connection methods multiple times:

Most real use cases involve doing things on more than one server. The
straightforward approach could be to iterate over a list or tuple of
Connection arguments (or Connection objects themselves, perhaps via
map):

This approach works, but as use cases get more complex it can be
useful to think of a collection of hosts as a single object. Enter Group, a
class wrapping one-or-more Connection objects and offering a similar API;
specifically, you’ll want to use one of its concrete subclasses like
SerialGroup or ThreadingGroup.

When any individual connections within the Group encounter errors, the
GroupResult is lightly wrapped in a GroupException, which is raised. Thus
the aggregate behavior resembles that of individual Connection methods,
returning a value on success or raising an exception on failure.

Finally, we arrive at the most realistic use case: you’ve got a bundle of
commands and/or file transfers and you want to apply it to multiple servers.
You could use multiple Group method calls to do this:

That approach falls short as soon as logic becomes necessary - for example, if
you only wanted to perform the copy-and-untar above when /opt/mydata is
empty. Performing that sort of check requires execution on a per-server basis.

You could fill that need by using iterables of Connection objects (though
this foregoes some benefits of using Groups):

The only convenience this final approach lacks is a useful analogue to
Group.run - if you want to track the results of all the
upload_and_unpack calls as an aggregate, you have to do that yourself. Look
to future feature releases for more in this space!

It’s often useful to run Fabric code from a shell, e.g. deploying applications
or running sysadmin jobs on arbitrary servers. You could use regular
Invoke tasks with Fabric library
code in them, but another option is Fabric’s own “network-oriented” tool,
fab.

fab wraps Invoke’s CLI mechanics with features like host selection, letting
you quickly run tasks on various servers - without having to define host
kwargs on all your tasks or similar.

Note

This mode was the primary API of Fabric 1.x; as of 2.0 it’s just a
convenience. Whenever your use case falls outside these shortcuts, it
should be easy to revert to the library API directly (with or without
Invoke’s less opinionated CLI tasks wrapped around it).

For a final code example, let’s adapt the previous example into a fab task
module called fabfile.py:

Not hard - all we did was copy our temporary task function into a file and slap
a decorator on it. task tells the CLI machinery to expose the
task on the command line:

$ fab --list
Available tasks:
upload_and_unpack

Then, when fab actually invokes a task, it knows how to stitch together
arguments controlling target servers, and run the task once per server. To run
the task once on a single server:

$ fab -H web1 upload_and_unpack

When this occurs, c inside the task is set, effectively, to
Connection("web1") - as in earlier examples. Similarly, you can give more
than one host, which runs the task multiple times, each time with a different
Connection instance handed in:
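For example (hostnames illustrative):

$ fab -H web1,web2,mac1 upload_and_unpack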