The documentation of the NBD client driver says that you may need to explicitly select the deadline I/O scheduler in order to avoid a deadlock.
In addition, for better performance, you can optionally set the block size of nbd0 to 4096, which is the same value as the cache block size inside xNBD.
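
For example, both settings could be applied on the client node once /dev/nbd0 is connected (a sketch assuming a Linux host with sysfs; on recent kernels the scheduler may be named mq-deadline, blockdev is part of util-linux, and nbd-client's -block-size option is an alternative way to set the block size):

echo deadline > /sys/block/nbd0/queue/scheduler
blockdev --setbsz 4096 /dev/nbd0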

If you need concurrent access from multiple clients, repeat the same operations at each client node.
In normal cases, you need a cluster file system (e.g., OCFS2 or GFS) to store data safely on a shared disk.
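
For example, with two client nodes (assuming the target server set up earlier listens at 10.255.255.254:8992, as in the examples below):

# on client node A
nbd-client 10.255.255.254 8992 /dev/nbd0

# on client node B
nbd-client 10.255.255.254 8992 /dev/nbd0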

Scenario 2 (Simple proxy server, distributed Copy-on-Write)

xNBD can also work as a proxy server to another target server.
This feature is used for distributed Copy-on-Write NBD disks;
one read-only disk image is shared among multiple clients, and updated disk data is saved at each proxy.

In the proxy server mode of xNBD, all I/O requests are intercepted and redirected to the target server if needed. All updated blocks are saved at the proxy server, and read blocks are also cached there. Writes never reach the target server.
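
For example, the proxy server can be started like this (a sketch: 10.1.1.1:8992 stands for an assumed target server, and the positional arguments follow the xnbd-server --proxy synopsis of target host, target port, cache image, cache bitmap, and control socket; check xnbd-server --help for the exact options of your version):

xnbd-server --proxy --lport 8992 10.1.1.1 8992 cache.img cache.bitmap proxy.ctl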

Updated and cached blocks are saved at a cache disk file (cache.img). A bitmap file (cache.bitmap) records block numbers of updated and cached blocks.
A UNIX socket file (proxy.ctl) is created to control the proxy server (See the next example).

Then, an NBD client node connects to the proxy server.

nbd-client 10.255.255.254 8992 /dev/nbd0

If you want to add more clients to the target server, repeat these commands at each proxy server and client node.

A proxy server accepts NBD connections from other NBD proxies.
This means that you can cascade multiple NBD proxies, as shown in the figure below.
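
For instance, a second proxy could use the first proxy (10.255.255.254) as its target (same assumed xnbd-server synopsis as above; the cache2.* file names are arbitrary):

xnbd-server --proxy --lport 8992 10.255.255.254 8992 cache2.img cache2.bitmap proxy2.ctl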

Scenario 3 (Live VM & disk migration with Xen)

A proxy server is used for relocating a virtual disk to another xNBD server.
This mechanism works transparently with live migration of a VM.

In this example, four physical machines are used:

- Source host node, where a VM is started
- Destination host node, to which the VM is migrated
- xNBD target node, exporting a virtual disk to the source host node
- xNBD proxy node, exporting a virtual disk to the destination host node

1. Set up a VM with an NBD disk

Source Side

First, set up an xNBD target server (10.10.1.1), and connect to it from the source host node (10.10.1.2).
Then, create a VM with /dev/nbd0 as its virtual disk.
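
For example (a sketch: disk.img is an assumed disk image file, and port 8992 matches the other examples in this document):

# on the xNBD target node (10.10.1.1)
xnbd-server --target --lport 8992 disk.img

# on the source host node (10.10.1.2)
nbd-client 10.10.1.1 8992 /dev/nbd0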

When using Xen, the domain configuration file of the VM will include an entry like the following.

disk = [ "phy:/dev/nbd0,xvda,w" ]

xNBD is independent of VMM implementations; it also works with KVM (qemu) and others.
For instance, since KVM includes NBD client code, you do not need to set up /dev/nbd0 on the host OS.
Instead, specify the NBD disk directly on the command line.

qemu-system-x86_64 -hda nbd:10.10.1.1:8992

Destination Side

Next, set up an xNBD proxy server (10.20.1.1), and connect to it from the destination host node (10.20.1.2).
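
For example (same assumed xnbd-server synopsis as in Scenario 2; here the proxy uses the target node 10.10.1.1 as its backend):

# on the xNBD proxy node (10.20.1.1)
xnbd-server --proxy --lport 8992 10.10.1.1 8992 cache.img cache.bitmap proxy.ctl

# on the destination host node (10.20.1.2)
nbd-client 10.20.1.1 8992 /dev/nbd0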

2. Migrate the VM to the destination

Next, start live migration.

On the source host,

xm migrate -l 2 10.20.1.2 # Domain ID is 2
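
With KVM, the equivalent step would be triggered from the QEMU monitor instead (a sketch: port 4444 is an arbitrary choice, and the destination qemu must have been started with -incoming tcp:0:4444):

migrate -d tcp:10.20.1.2:4444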

After memory page relocation is completed, the VM is stopped at the source host and then resumed at the destination.
All disk I/O requests are intercepted at the xNBD proxy, and disk blocks are gradually cached (i.e., relocated) at the cache file.

3. Migrate all the disk blocks

Some blocks, however, have not yet been relocated.
Now copy them to the xNBD proxy.

On the xNBD proxy node,

xnbd-bgctl --cache-all proxy.ctl

After all blocks are cached at the proxy node, the NBD connection to the target server is no longer required.
You can now convert the xNBD proxy into a normal target server, disconnecting it from the target server.

On the xNBD proxy node,

xnbd-bgctl --switch proxy.ctl

This command shuts down the xNBD proxy server and restarts it as a normal xNBD target server.
All client NBD sessions are preserved.