var (
    // ErrLeader is returned when an operation can't be completed on a
    // leader node.
    ErrLeader = errors.New("node is the leader")

    // ErrNotLeader is returned when an operation can't be completed on a
    // follower or candidate node.
    ErrNotLeader = errors.New("node is not the leader")

    // ErrLeadershipLost is returned when a leader fails to commit a log entry
    // because it's been deposed in the process.
    ErrLeadershipLost = errors.New("leadership lost while committing log")

    // ErrAbortedByRestore is returned when a leader fails to commit a log
    // entry because it's been superseded by a user snapshot restore.
    ErrAbortedByRestore = errors.New("snapshot restored while committing log")

    // ErrRaftShutdown is returned when operations are requested against an
    // inactive Raft.
    ErrRaftShutdown = errors.New("raft is already shutdown")

    // ErrEnqueueTimeout is returned when a command fails due to a timeout.
    ErrEnqueueTimeout = errors.New("timed out enqueuing operation")

    // ErrNothingNewToSnapshot is returned when trying to create a snapshot
    // but there's nothing new committed to the FSM since we started.
    ErrNothingNewToSnapshot = errors.New("nothing new to snapshot")

    // ErrUnsupportedProtocol is returned when an operation is attempted
    // that's not supported by the current protocol version.
    ErrUnsupportedProtocol = errors.New("operation not supported with current protocol version")

    // ErrCantBootstrap is returned when an attempt is made to bootstrap a
    // cluster that already has state present.
    ErrCantBootstrap = errors.New("bootstrap only works on new clusters")
)

var (
    // ErrTransportShutdown is returned when operations on a transport are
    // invoked after it's been terminated.
    ErrTransportShutdown = errors.New("transport shutdown")

    // ErrPipelineShutdown is returned when the pipeline is closed.
    ErrPipelineShutdown = errors.New("append pipeline closed")
)

var (
    // ErrLogNotFound indicates a given log entry is not available.
    ErrLogNotFound = errors.New("log not found")

    // ErrPipelineReplicationNotSupported can be returned by the transport to
    // signal that pipeline replication is not supported in general, and that
    // no error message should be produced.
    ErrPipelineReplicationNotSupported = errors.New("pipeline replication not supported")
)

BootstrapCluster initializes a server's storage with the given cluster
configuration. This should only be called at the beginning of time for the
cluster, and you absolutely must make sure that you call it with the same
configuration on all the Voter servers. There is no need to bootstrap
Nonvoter and Staging servers.

One sane approach is to bootstrap a single server with a configuration
listing just itself as a Voter, then invoke AddVoter() on it to add other
servers to the cluster.
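The single-voter bootstrap configuration described above can be sketched as follows. The `Server` and `Configuration` types are declared locally here as stand-ins so the sketch is self-contained; in real usage they come from the raft package (see the Server and Configuration sections later in this document):

```go
package main

import "fmt"

// Local stand-ins for the raft package's configuration types.
type ServerSuffrage int

const (
	Voter ServerSuffrage = iota
	Nonvoter
	Staging
)

type ServerID string
type ServerAddress string

type Server struct {
	Suffrage ServerSuffrage
	ID       ServerID
	Address  ServerAddress
}

type Configuration struct {
	Servers []Server
}

// singleServerConfiguration builds the configuration used to bootstrap a
// one-node cluster: the local server is the only Voter. Other servers are
// then added with AddVoter once this node has become leader.
func singleServerConfiguration(id ServerID, addr ServerAddress) Configuration {
	return Configuration{
		Servers: []Server{{Suffrage: Voter, ID: id, Address: addr}},
	}
}

func main() {
	cfg := singleServerConfiguration("node1", "127.0.0.1:8300")
	fmt.Println(len(cfg.Servers), cfg.Servers[0].ID)
}
```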

RecoverCluster is used to manually force a new configuration in order to
recover from a loss of quorum where the current configuration cannot be
restored, such as when several servers die at the same time. This works by
reading all the current state for this server, creating a snapshot with the
supplied configuration, and then truncating the Raft log. This is the only
safe way to force a given configuration without actually altering the log to
insert any new entries, which could cause conflicts with other servers with
different state.

WARNING! This operation implicitly commits all entries in the Raft log, so
in general this is an extremely unsafe operation. If you've lost your other
servers and are performing a manual recovery, then you've also lost the
commit information, so this is likely the best you can do, but you should be
aware that calling this can cause Raft log entries that were in the process
of being replicated, but not yet committed, to become committed.

Note the FSM passed here is used for the snapshot operations and will be
left in a state that should not be used by the application. Be sure to
discard this FSM and any associated state and provide a fresh one when
calling NewRaft later.

A typical way to recover the cluster is to shut down all servers and then
run RecoverCluster on every server using an identical configuration. When
the cluster is then restarted, an election should occur, and then Raft will
resume normal operation. If it's desired to make a particular server the
leader, this can be used to inject a new configuration with that server as
the sole voter, and then join up other new clean-state peer servers using
the usual APIs in order to bring the cluster back into a known state.

type AppendEntriesRequest struct {
    RPCHeader

    // Provide the current term and leader
    Term   uint64
    Leader []byte

    // Provide the previous entries for integrity checking
    PrevLogEntry uint64
    PrevLogTerm  uint64

    // New entries to commit
    Entries []*Log

    // Commit index on the leader
    LeaderCommitIndex uint64
}

AppendEntriesRequest is the command used to append entries to the
replicated log.

type AppendEntriesResponse struct {
    RPCHeader

    // Newer term if leader is out of date
    Term uint64

    // LastLog is a hint to help accelerate rebuilding slow nodes
    LastLog uint64

    // We may not succeed if we have a conflicting entry
    Success bool

    // There are scenarios where this request didn't succeed
    // but there's no need to wait/back-off the next attempt.
    NoRetryBackoff bool
}

AppendEntriesResponse is the response returned from an
AppendEntriesRequest.

type AppendFuture interface {
    Future

    // Start returns the time that the append request was started.
    // It is always OK to call this method.
    Start() time.Time

    // Request holds the parameters of the AppendEntries call.
    // It is always OK to call this method.
    Request() *AppendEntriesRequest

    // Response holds the results of the AppendEntries call.
    // This method must only be called after the Error
    // method returns, and will only be valid on success.
    Response() *AppendEntriesResponse
}

AppendFuture is used to return information about a pipelined AppendEntries request.

type AppendPipeline interface {
    // AppendEntries is used to add another request to the pipeline.
    // The send may block which is an effective form of back-pressure.
    AppendEntries(args *AppendEntriesRequest, resp *AppendEntriesResponse) (AppendFuture, error)

    // Consumer returns a channel that can be used to consume
    // response futures when they are ready.
    Consumer() <-chan AppendFuture

    // Close closes the pipeline and cancels all inflight RPCs
    Close() error
}

AppendPipeline is used for pipelining AppendEntries requests. It is used
to increase the replication throughput by masking latency and better
utilizing bandwidth.
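The pipelining idea can be illustrated with a minimal, self-contained sketch (this is not the library's implementation): requests are sent without waiting for earlier responses, a bounded in-flight channel provides the back-pressure mentioned above, and completed futures appear on a consumer channel.

```go
package main

import (
	"fmt"
	"sync"
)

// appendFuture is a simplified stand-in for AppendFuture.
type appendFuture struct {
	index uint64
	err   error
	done  chan struct{}
}

// Error blocks until the simulated RPC completes.
func (f *appendFuture) Error() error {
	<-f.done
	return f.err
}

type pipeline struct {
	inflight chan *appendFuture // bounded: sends block, giving back-pressure
	doneCh   chan *appendFuture
	wg       sync.WaitGroup
}

func newPipeline(depth int) *pipeline {
	p := &pipeline{
		inflight: make(chan *appendFuture, depth),
		doneCh:   make(chan *appendFuture, depth),
	}
	p.wg.Add(1)
	go func() { // simulated follower: acknowledges requests in order
		defer p.wg.Done()
		for f := range p.inflight {
			close(f.done) // mark the RPC as finished
			p.doneCh <- f
		}
	}()
	return p
}

// AppendEntries adds another request without waiting for earlier replies.
func (p *pipeline) AppendEntries(index uint64) *appendFuture {
	f := &appendFuture{index: index, done: make(chan struct{})}
	p.inflight <- f // may block: effective back-pressure
	return f
}

func (p *pipeline) Consumer() <-chan *appendFuture { return p.doneCh }

func (p *pipeline) Close() {
	close(p.inflight)
	p.wg.Wait()
	close(p.doneCh)
}

func main() {
	p := newPipeline(4)
	for i := uint64(1); i <= 3; i++ {
		p.AppendEntries(i)
	}
	p.Close()
	for f := range p.Consumer() {
		fmt.Println("acked", f.index, f.Error())
	}
}
```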

type ApplyFuture interface {
    IndexFuture

    // Response returns the FSM response as returned
    // by the FSM.Apply method. This must not be called
    // until after the Error method has returned.
    Response() interface{}
}

type Config struct {
    // ProtocolVersion allows a Raft server to inter-operate with older
    // Raft servers running an older version of the code. This is used to
    // version the wire protocol as well as Raft-specific log entries that
    // the server uses when _speaking_ to other servers. There is currently
    // no auto-negotiation of versions so all servers must be manually
    // configured with compatible versions. See ProtocolVersionMin and
    // ProtocolVersionMax for the versions of the protocol that this server
    // can _understand_.
    ProtocolVersion ProtocolVersion

    // HeartbeatTimeout specifies the time in follower state without
    // a leader before we attempt an election.
    HeartbeatTimeout time.Duration

    // ElectionTimeout specifies the time in candidate state without
    // a leader before we attempt an election.
    ElectionTimeout time.Duration

    // CommitTimeout controls the time without an Apply() operation
    // before we heartbeat to ensure a timely commit. Due to random
    // staggering, may be delayed as much as 2x this value.
    CommitTimeout time.Duration

    // MaxAppendEntries controls the maximum number of append entries
    // to send at once. We want to strike a balance between efficiency
    // and avoiding waste if the follower is going to reject because of
    // an inconsistent log.
    MaxAppendEntries int

    // If we are a member of a cluster, and RemovePeer is invoked for the
    // local node, then we forget all peers and transition into the follower state.
    // If ShutdownOnRemove is set, we additionally shut down Raft. Otherwise,
    // we can become a leader of a cluster containing only this node.
    ShutdownOnRemove bool

    // TrailingLogs controls how many logs we leave after a snapshot. This is
    // used so that we can quickly replay logs on a follower instead of being
    // forced to send an entire snapshot.
    TrailingLogs uint64

    // SnapshotInterval controls how often we check if we should perform a snapshot.
    // We randomly stagger between this value and 2x this value to prevent the entire
    // cluster from performing a snapshot at once.
    SnapshotInterval time.Duration

    // SnapshotThreshold controls how many outstanding logs there must be before
    // we perform a snapshot. This is to prevent excessive snapshots when we can
    // just replay a small set of logs.
    SnapshotThreshold uint64

    // LeaderLeaseTimeout is used to control how long the "lease" lasts
    // for being the leader without being able to contact a quorum
    // of nodes. If we reach this interval without contact, we will
    // step down as leader.
    LeaderLeaseTimeout time.Duration

    // StartAsLeader forces Raft to start in the leader state. This should
    // never be used except for testing purposes, as it can cause a split-brain.
    StartAsLeader bool

    // LocalID is the unique ID for this server across all time. When running with
    // ProtocolVersion < 3, you must set this to be the same as the network
    // address of your transport.
    LocalID ServerID

    // NotifyCh is used to provide a channel that will be notified of leadership
    // changes. Raft will block writing to this channel, so it should either be
    // buffered or aggressively consumed.
    NotifyCh chan<- bool

    // LogOutput is used as a sink for logs, unless Logger is specified.
    // Defaults to os.Stderr.
    LogOutput io.Writer

    // Logger is a user-provided logger. If nil, a logger writing to LogOutput
    // is used.
    Logger *log.Logger
}

Configuration tracks which servers are in the cluster, and whether they have
votes. This should include the local server, if it's a member of the cluster.
The servers are listed in no particular order, but each should only appear once.
These entries are appended to the log during membership changes.

ReadPeersJSON consumes a legacy peers.json file in the format of the old JSON
peer store and creates a new-style configuration structure. This can be used
to migrate this data or perform manual recovery when running protocol versions
that can interoperate with older, unversioned Raft servers. This should not be
used once server IDs are in use, because the old peers.json file didn't have
support for these, nor non-voter suffrage types.

DiscardSnapshotStore is used to successfully snapshot while
always discarding the snapshot. This is useful for when the
log should be truncated but no snapshot should be retained.
This should never be used in production, and is only
suitable for testing.

type FSM interface {
    // Apply is invoked once a log entry is committed.
    // It returns a value which will be made available in the
    // ApplyFuture returned by the Raft.Apply method if that
    // method was called on the same Raft node as the FSM.
    Apply(*Log) interface{}

    // Snapshot is used to support log compaction. This call should
    // return an FSMSnapshot which can be used to save a point-in-time
    // snapshot of the FSM. Apply and Snapshot are not called in multiple
    // threads, but Apply will be called concurrently with Persist. This means
    // the FSM should be implemented in a fashion that allows for concurrent
    // updates while a snapshot is happening.
    Snapshot() (FSMSnapshot, error)

    // Restore is used to restore an FSM from a snapshot. It is not called
    // concurrently with any other command. The FSM must discard all previous
    // state.
    Restore(io.ReadCloser) error
}

FSM provides an interface that can be implemented by
clients to make use of the replicated log.

type FSMSnapshot interface {
    // Persist should dump all necessary state to the WriteCloser 'sink',
    // and call sink.Close() when finished or call sink.Cancel() on error.
    Persist(sink SnapshotSink) error

    // Release is invoked when we are finished with the snapshot.
    Release()
}

FSMSnapshot is returned by an FSM in response to a Snapshot call.
It must be safe to invoke FSMSnapshot methods with concurrent
calls to Apply.

FilterFn is a function that can be registered in order to filter observations.
The function reports whether the observation should be included - if
it returns false, the observation will be filtered out.
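As a sketch of that contract, the filter below passes only leadership observations. Observation and LeaderObservation are declared locally as simplified stand-ins for the raft package's types:

```go
package main

import "fmt"

// Observation is a local stand-in carrying the observed Data.
type Observation struct {
	Data interface{}
}

// LeaderObservation stands in for the raft package's leadership event.
type LeaderObservation struct {
	Leader string
}

// FilterFn reports whether the observation should be included.
type FilterFn func(o *Observation) bool

// leaderOnly returns false for anything that isn't a leadership change,
// so those observations are filtered out.
var leaderOnly FilterFn = func(o *Observation) bool {
	_, ok := o.Data.(LeaderObservation)
	return ok
}

func main() {
	fmt.Println(leaderOnly(&Observation{Data: LeaderObservation{Leader: "n1"}}))
	fmt.Println(leaderOnly(&Observation{Data: "unrelated event"}))
}
```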

type Future interface {
    // Error blocks until the future arrives and then
    // returns the error status of the future.
    // This may be called any number of times - all
    // calls will return the same value.
    // Note that it is not OK to call this method
    // twice concurrently on the same Future instance.
    Error() error
}
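The Error contract (block until the result is ready, then return the same cached value on every later call) can be implemented with a channel that is closed exactly once. This is a minimal sketch of the pattern, not the library's code:

```go
package main

import (
	"errors"
	"fmt"
)

// deferError is a minimal future: Error blocks until respond is called,
// and every subsequent call returns the same value.
type deferError struct {
	err  error
	done chan struct{} // closed exactly once when the result is ready
}

func newDeferError() *deferError {
	return &deferError{done: make(chan struct{})}
}

// respond records the result and unblocks Error.
func (d *deferError) respond(err error) {
	d.err = err
	close(d.done)
}

// Error blocks until respond has been called, then returns the cached
// error on this and every later call.
func (d *deferError) Error() error {
	<-d.done
	return d.err
}

func main() {
	f := newDeferError()
	go f.respond(errors.New("boom"))
	fmt.Println(f.Error())
	fmt.Println(f.Error()) // repeated calls return the same value
}
```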

type InstallSnapshotRequest struct {
    RPCHeader
    SnapshotVersion SnapshotVersion

    Term   uint64
    Leader []byte

    // These are the last index/term included in the snapshot
    LastLogIndex uint64
    LastLogTerm  uint64

    // Peer Set in the snapshot. This is deprecated in favor of Configuration
    // but remains here in case we receive an InstallSnapshot from a leader
    // that's running old code.
    Peers []byte

    // Cluster membership.
    Configuration []byte

    // Log index where 'Configuration' entry was originally written.
    ConfigurationIndex uint64

    // Size of the snapshot
    Size int64
}

InstallSnapshotRequest is the command sent to a Raft peer to bootstrap its
log (and state machine) from a snapshot on another peer.

LogCache wraps any LogStore implementation to provide an
in-memory ring buffer. This is used to cache access to
the recently written entries. For implementations that do not
cache themselves, this can provide a substantial boost by
avoiding disk I/O on recent entries.
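The ring-buffer idea behind this cache can be sketched as follows (a self-contained illustration, not the library's LogCache): recent entries live in a fixed-size slice indexed by log index modulo capacity, and misses fall through to the underlying store, represented here by a map standing in for a disk-backed LogStore:

```go
package main

import "fmt"

type logEntry struct {
	Index uint64
	Data  string
}

type cachedStore struct {
	cache   []*logEntry
	backing map[uint64]*logEntry // stands in for a disk-backed LogStore
}

func newCachedStore(capacity int) *cachedStore {
	return &cachedStore{
		cache:   make([]*logEntry, capacity),
		backing: map[uint64]*logEntry{},
	}
}

// StoreLog writes through to the backing store and updates the ring slot.
func (s *cachedStore) StoreLog(e *logEntry) {
	s.backing[e.Index] = e
	s.cache[e.Index%uint64(len(s.cache))] = e
}

// GetLog serves recent entries from memory, avoiding "disk" I/O.
func (s *cachedStore) GetLog(index uint64) (*logEntry, bool) {
	if c := s.cache[index%uint64(len(s.cache))]; c != nil && c.Index == index {
		return c, true // cache hit on a recent entry
	}
	e, ok := s.backing[index] // older entry: fall through to the store
	return e, ok
}

func main() {
	s := newCachedStore(4)
	for i := uint64(1); i <= 6; i++ {
		s.StoreLog(&logEntry{Index: i, Data: fmt.Sprintf("entry-%d", i)})
	}
	e, _ := s.GetLog(6) // recent: served from the ring buffer
	fmt.Println(e.Data)
	e, _ = s.GetLog(1) // old: evicted from cache, served from backing store
	fmt.Println(e.Data)
}
```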

const (
    // LogCommand is applied to a user FSM.
    LogCommand LogType = iota

    // LogNoop is used to assert leadership.
    LogNoop

    // LogAddPeerDeprecated is used to add a new peer. This should only be used with
    // older protocol versions designed to be compatible with unversioned
    // Raft servers. See comments in config.go for details.
    LogAddPeerDeprecated

    // LogRemovePeerDeprecated is used to remove an existing peer. This should only be
    // used with older protocol versions designed to be compatible with
    // unversioned Raft servers. See comments in config.go for details.
    LogRemovePeerDeprecated

    // LogBarrier is used to ensure all preceding operations have been
    // applied to the FSM. It is similar to LogNoop, but instead of returning
    // once committed, it only returns once the FSM manager acks it. Otherwise
    // it is possible there are operations committed but not yet applied to
    // the FSM.
    LogBarrier

    // LogConfiguration establishes a membership change configuration. It is
    // created when a server is added, removed, promoted, etc. Only used
    // when protocol version 1 or greater is in use.
    LogConfiguration
)

NetworkTransport provides a network based transport that can be
used to communicate with Raft on remote machines. It requires
an underlying stream layer to provide a stream abstraction, which can
be simple TCP, TLS, etc.

This transport is very simple and lightweight. Each RPC request is
framed by sending a byte that indicates the message type, followed
by the MsgPack encoded request.

The response is an error string followed by the response object,
both encoded using MsgPack.

InstallSnapshot is special, in that after the RPC request we stream
the entire state. That socket is not re-used as the connection state
is not known if there is an error.

NewNetworkTransport creates a new network transport with the given dialer
and listener. The maxPool controls how many connections we will pool. The
timeout is used to apply I/O deadlines. For InstallSnapshot, we multiply
the timeout by (SnapshotSize / TimeoutScale).

NewNetworkTransportWithLogger creates a new network transport with the given logger, dialer
and listener. The maxPool controls how many connections we will pool. The
timeout is used to apply I/O deadlines. For InstallSnapshot, we multiply
the timeout by (SnapshotSize / TimeoutScale).

type NetworkTransportConfig struct {
    // ServerAddressProvider is used to override the target address when establishing a connection to invoke an RPC
    ServerAddressProvider ServerAddressProvider

    Logger *log.Logger

    // Dialer
    Stream StreamLayer

    // MaxPool controls how many connections we will pool
    MaxPool int

    // Timeout is used to apply I/O deadlines. For InstallSnapshot, we multiply
    // the timeout by (SnapshotSize / TimeoutScale).
    Timeout time.Duration
}

These are the versions of the protocol (which includes RPC messages as
well as Raft-specific log entries) that this server can _understand_. Use
the ProtocolVersion member of the Config object to control the version of
the protocol to use when _speaking_ to other servers. Note that depending on
the protocol version being spoken, some otherwise understood RPC messages
may be refused. See dispositionRPC for details of this logic.

There are notes about the upgrade path in the description of the versions
below. If you are starting a fresh cluster then there's no reason not to
jump right to the latest protocol version. If you need to interoperate with
older, version 0 Raft servers you'll need to drive the cluster through the
different versions in order.

The version details are complicated, but here's a summary of what's required
to get from a version 0 cluster to version 3:

 1. In version N of your app that starts using the new Raft library with
    versioning, set ProtocolVersion to 1.

 2. Make version N+1 of your app require version N as a prerequisite (all
    servers must be upgraded). For version N+1 of your app set ProtocolVersion
    to 2.

 3. Similarly, make version N+2 of your app require version N+1 as a
    prerequisite. For version N+2 of your app, set ProtocolVersion to 3.

During this upgrade, older cluster members will still have Server IDs equal
to their network addresses. To upgrade an older member and give it an ID, it
needs to leave the cluster and re-enter:

 1. Remove the server from the cluster with RemoveServer, using its network
    address as its ServerID.

 2. Update the server's config to a better ID (restarting the server).

 3. Add the server back to the cluster with AddVoter, using its new ID.

You can do this during the rolling upgrade from N+1 to N+2 of your app, or
as a rolling change at any time after the upgrade.

0: Original Raft library before versioning was added. Servers running this
   version of the Raft library use AddPeerDeprecated/RemovePeerDeprecated
   for all configuration changes, and have no support for LogConfiguration.

1: First versioned protocol, used to interoperate with old servers, and begin
   the migration path to newer versions of the protocol. Under this version
   all configuration changes are propagated using the now-deprecated
   RemovePeerDeprecated Raft log entry. This means that server IDs are always
   set to be the same as the server addresses (since the old log entry type
   cannot transmit an ID), and only AddPeer/RemovePeer APIs are supported.
   Servers running this version of the protocol can understand the new
   LogConfiguration Raft log entry but will never generate one so they can
   remain compatible with version 0 Raft servers in the cluster.

2: Transitional protocol used when migrating an existing cluster to the new
   server ID system. Server IDs are still set to be the same as server
   addresses, but all configuration changes are propagated using the new
   LogConfiguration Raft log entry type, which can carry full ID information.
   This version supports the old AddPeer/RemovePeer APIs as well as the new
   ID-based AddVoter/RemoveServer APIs which should be used when adding
   version 3 servers to the cluster later. This version sheds all
   interoperability with version 0 servers, but can interoperate with newer
   Raft servers running with protocol version 1 since they can understand the
   new LogConfiguration Raft log entry, and this version can still understand
   their RemovePeerDeprecated Raft log entries. We need this protocol version
   as an intermediate step between 1 and 3 so that servers will propagate the
   ID information that will come from newly-added (or -rolled) servers using
   protocol version 3, but since they are still using their address-based IDs
   from the previous step they will still be able to track commitments and
   their own voting status properly. If we skipped this step, servers would
   be started with their new IDs, but they wouldn't see themselves in the old
   address-based configuration, so none of the servers would think they had a
   vote.

3: Protocol adding full support for server IDs and new ID-based server APIs
   (AddVoter, AddNonvoter, etc.); the old AddPeer/RemovePeer APIs are no
   longer supported. Version 2 servers should be swapped out by removing them
   from the cluster one-by-one and re-adding them with updated configuration
   for this protocol version, along with their server ID. The remove/add
   cycle is required to populate their server ID. Note that removing must be
   done by ID, which will be the old server's address.

type RPCHeader struct {
    // ProtocolVersion is the version of the protocol the sender is
    // speaking.
    ProtocolVersion ProtocolVersion
}

RPCHeader is a common sub-structure used to pass along protocol version and
other information about the cluster. For older Raft implementations before
versioning was added this will default to a zero-valued structure when read
by newer Raft versions.

NewRaft is used to construct a new Raft node. It takes a configuration, as well
as implementations of various interfaces that are required. If we have any
old state, such as snapshots, logs, peers, etc, all those will be restored
when creating the Raft node.

AddNonvoter will add the given server to the cluster but won't assign it a
vote. The server will receive log entries, but it won't participate in
elections or log entry commitment. If the server is already in the cluster,
this updates the server's address. This must be run on the leader or it will
fail. For prevIndex and timeout, see AddVoter.

AddVoter will add the given server to the cluster as a staging server. If the
server is already in the cluster as a voter, this updates the server's address.
This must be run on the leader or it will fail. The leader will promote the
staging server to a voter once that server is ready. If nonzero, prevIndex is
the index of the only configuration upon which this change may be applied; if
another configuration entry has been added in the meantime, this request will
fail. If nonzero, timeout is how long this server should wait before the
configuration change log entry is appended.

AppliedIndex returns the last index applied to the FSM. This is generally
lagging behind the last index, especially for indexes that are persisted but
have not yet been considered committed by the leader. NOTE - this reflects
the last index that was sent to the application's FSM over the apply channel
but DOES NOT mean that the application's FSM has yet consumed it and applied
it to its internal state. Thus, the application's state may lag behind this
index.

Apply is used to apply a command to the FSM in a highly consistent
manner. This returns a future that can be used to wait on the application.
An optional timeout can be provided to limit the amount of time we wait
for the command to be started. This must be run on the leader or it
will fail.
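Apply takes an opaque []byte command. A common pattern (an assumption of this sketch, not something the library mandates) is to encode a typed command on the caller side and decode it again inside FSM.Apply; JSON is used here for clarity:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// command is a hypothetical application-level operation; the field names
// are illustrative, not part of the raft package.
type command struct {
	Op    string `json:"op"`
	Key   string `json:"key"`
	Value string `json:"value"`
}

// encodeCommand produces the []byte that would be handed to Raft.Apply.
func encodeCommand(c command) ([]byte, error) {
	return json.Marshal(c)
}

// decodeCommand is what the FSM would do with log.Data inside Apply.
func decodeCommand(data []byte) (command, error) {
	var c command
	err := json.Unmarshal(data, &c)
	return c, err
}

func main() {
	data, _ := encodeCommand(command{Op: "set", Key: "color", Value: "blue"})
	// In real usage: future := r.Apply(data, 5*time.Second); err := future.Error()
	c, _ := decodeCommand(data)
	fmt.Println(c.Op, c.Key, c.Value)
}
```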

Barrier is used to issue a command that blocks until all preceding
operations have been applied to the FSM. It can be used to ensure the
FSM reflects all queued writes. An optional timeout can be provided to
limit the amount of time we wait for the command to be started. This
must be run on the leader or it will fail.

BootstrapCluster is equivalent to non-member BootstrapCluster but can be
called on an un-bootstrapped Raft instance after it has been created. This
should only be called at the beginning of time for the cluster, and you
absolutely must make sure that you call it with the same configuration on all
the Voter servers. There is no need to bootstrap Nonvoter and Staging
servers.

DemoteVoter will take away a server's vote, if it has one. If present, the
server will continue to receive log entries, but it won't participate in
elections or log entry commitment. If the server is not in the cluster, this
does nothing. This must be run on the leader or it will fail. For prevIndex
and timeout, see AddVoter.

GetConfiguration returns the latest configuration and its associated index
currently in use. This may not yet be committed. This must not be called on
the main thread (which can access the information directly).

LeaderCh is used to get a channel which delivers signals on
acquiring or losing leadership. It sends true if we become
the leader, and false if we lose it. The channel is not buffered,
and does not block on writes.
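Consuming such a channel of leadership transitions can be sketched as below; the goroutine and callback structure here is illustrative, and the buffered channel follows the Config docs' advice for NotifyCh:

```go
package main

import "fmt"

// watchLeadership drains a leadership channel, invoking onChange with
// true on gaining leadership and false on losing it.
func watchLeadership(ch <-chan bool, onChange func(isLeader bool)) {
	for isLeader := range ch {
		onChange(isLeader)
	}
}

func main() {
	ch := make(chan bool, 3) // buffered, so the sender never blocks
	done := make(chan struct{})
	var transitions []bool
	go func() {
		watchLeadership(ch, func(isLeader bool) {
			transitions = append(transitions, isLeader)
		})
		close(done)
	}()
	ch <- true  // elected
	ch <- false // deposed
	close(ch)
	<-done
	fmt.Println(transitions)
}
```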

RemovePeer (deprecated) is used to remove a peer from the cluster. If the
current leader is being removed, it will cause a new election
to occur. This must be run on the leader or it will fail.
Use RemoveServer instead.

RemoveServer will remove the given server from the cluster. If the current
leader is being removed, it will cause a new election to occur. This must be
run on the leader or it will fail. For prevIndex and timeout, see AddVoter.

Restore is used to manually force Raft to consume an external snapshot, such
as if restoring from a backup. We will use the current Raft configuration,
not the one from the snapshot, so that we can restore into a new cluster. We
will also use the higher of the index of the snapshot, or the current index,
and then add 1 to that, so we force a new state with a hole in the Raft log,
so that the snapshot will be sent to followers and used for any new joiners.
This can only be run on the leader, and blocks until the restore is complete
or an error occurs.
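The index rule described above (the restored snapshot is stamped with one more than the higher of the snapshot's index and the current last index, deliberately leaving a hole in the Raft log) works out as follows; restoreIndex is a hypothetical helper written just to show the arithmetic:

```go
package main

import "fmt"

// restoreIndex returns the index assigned to a forced external restore:
// max(snapshotIndex, lastIndex) + 1, which leaves a hole in the log so
// the snapshot is sent to followers and used for any new joiners.
func restoreIndex(snapshotIndex, lastIndex uint64) uint64 {
	if snapshotIndex > lastIndex {
		return snapshotIndex + 1
	}
	return lastIndex + 1
}

func main() {
	fmt.Println(restoreIndex(100, 42)) // snapshot ahead of the log → 101
	fmt.Println(restoreIndex(10, 42))  // log ahead of the snapshot → 43
}
```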

WARNING! This operation has the leader take on the state of the snapshot and
then sets itself up so that it replicates that to its followers though the
install snapshot process. This involves a potentially dangerous period where
the leader commits ahead of its followers, so should only be used for disaster
recovery into a fresh cluster, and should not be used in normal operations.

const (
    // Follower is the initial state of a Raft node.
    Follower RaftState = iota

    // Candidate is one of the valid states of a Raft node.
    Candidate

    // Leader is one of the valid states of a Raft node.
    Leader

    // Shutdown is the terminal state of a Raft node.
    Shutdown
)

type RequestVoteResponse struct {
    RPCHeader

    // Newer term if leader is out of date.
    Term uint64

    // Peers is deprecated, but required by servers that only understand
    // protocol version 0. This is not populated in protocol version 2
    // and later.
    Peers []byte

    // Is the vote granted.
    Granted bool
}

RequestVoteResponse is the response returned from a RequestVoteRequest.

type Server struct {
    // Suffrage determines whether the server gets a vote.
    Suffrage ServerSuffrage

    // ID is a unique string identifying this server for all time.
    ID ServerID

    // Address is its network address that a transport can contact.
    Address ServerAddress
}

Server tracks the information about a single server in a configuration.

const (
    // Voter is a server whose vote is counted in elections and whose match index
    // is used in advancing the leader's commit index.
    Voter ServerSuffrage = iota

    // Nonvoter is a server that receives log entries but is not considered for
    // elections or commitment purposes.
    Nonvoter

    // Staging is a server that acts like a nonvoter with one exception: once a
    // staging server receives enough log entries to be sufficiently caught up to
    // the leader's log, the leader will invoke a membership change to change
    // the Staging server to a Voter.
    Staging
)

Note: Don't renumber these, since the numbers are written into the log.

type SnapshotFuture interface {
    Future

    // Open is a function you can call to access the underlying snapshot and
    // its metadata. This must not be called until after the Error method
    // has returned.
    Open() (*SnapshotMeta, io.ReadCloser, error)
}

SnapshotFuture is used for waiting on a user-triggered snapshot to complete.

type SnapshotMeta struct {
    // Version is the version number of the snapshot metadata. This does not cover
    // the application's data in the snapshot, that should be versioned
    // separately.
    Version SnapshotVersion

    // ID is opaque to the store, and is used for opening.
    ID string

    // Index and Term store when the snapshot was taken.
    Index uint64
    Term  uint64

    // Peers is deprecated and used to support version 0 snapshots, but will
    // be populated in version 1 snapshots as well to help with upgrades.
    Peers []byte

    // Configuration and ConfigurationIndex are present in version 1
    // snapshots and later.
    Configuration      Configuration
    ConfigurationIndex uint64

    // Size is the size of the snapshot in bytes.
    Size int64
}

type SnapshotStore interface {
    // Create is used to begin a snapshot at a given index and term, and with
    // the given committed configuration. The version parameter controls
    // which snapshot version to create.
    Create(version SnapshotVersion, index, term uint64, configuration Configuration,
        configurationIndex uint64, trans Transport) (SnapshotSink, error)

    // List is used to list the available snapshots in the store.
    // It should return them in descending order, with the highest index first.
    List() ([]*SnapshotMeta, error)

    // Open takes a snapshot ID and provides a ReadCloser. Once close is
    // called it is assumed the snapshot is no longer needed.
    Open(id string) (*SnapshotMeta, io.ReadCloser, error)
}

SnapshotStore interface is used to allow for flexible implementations
of snapshot storage and retrieval. For example, a client could implement
a shared state store such as S3, allowing new nodes to restore snapshots
without streaming from the leader.

These are versions of snapshots that this server can _understand_. Currently,
it is always assumed that this server generates the latest version, though
this may be changed in the future to include a configurable version.

0: Original Raft library before versioning was added. The peers portion of
   these snapshots is encoded in the legacy format which requires decodePeers
   to parse. This version of snapshots should only be produced by the
   unversioned Raft library.

1: New format which adds support for a full configuration structure and its
   associated log index, with support for server IDs and non-voting server
   modes. To ease upgrades, this also includes the legacy peers structure but
   that will never be used by servers that understand version 1 snapshots.
   Since the original Raft library didn't enforce any versioning, we must
   include the legacy peers structure for this version, but we can deprecate
   it in the next snapshot version.

WithPeers is an interface that a transport may provide which allows for connection and
disconnection. Unless the transport is a loopback transport, the transport specified to
"Connect" is likely to be nil.