ATM Switching with Minimum-Depth Blocking Networks
Architectures and performance of interconnection networks for ATM switching based on the adoption of banyan networks are described in this chapter. The interconnection networks presented now have the common feature of a minimum depth routing network, that is the path(s) from each inlet to every outlet crosses the minimum number of routing stages required to guarantee full accessibility in the interconnection network and to exploit the self-routing property....

5. Providing external queueing when replicating unbuffered banyan networks, so that multiple packets addressing the same destination can be concurrently switched with success.
Section 6.1 describes the performance of unbuffered banyan networks and presents networks designed according to criteria 1 and 2; therefore networks built of a single banyan plane or of parallel banyan planes are studied. Criteria 3 and 4 are exploited in Section 6.2, which provides a thorough discussion of banyan architectures suitable for ATM switching in which each switching element is provided with an internal queueing capability. Section 6.3 discusses how a set of internally unbuffered networks can be used for ATM switching if queueing is available at the switch outlets, with an optional queueing capacity associated with the network inlets according to criterion 5. Final remarks concerning the switch performance under offered traffic patterns other than random, and other architectures of ATM switches based on minimum-depth routing networks, are given in Section 6.4.
6.1. Unbuffered Networks
The class of unbuffered networks is described now so as to provide the background necessary
for a satisfactory understanding of the ATM switching architectures to be investigated in the
next sections. The structure of the basic banyan network and its trafﬁc performance are ﬁrst
discussed in relation to the behavior of the crossbar network. Then improved structures using
the banyan network as the basic building block are examined: multiple banyan planes and mul-
tiple interstage links are considered.
6.1.1. Crossbar and basic banyan networks
The terminology and basic concepts of crossbar and banyan networks are here recalled and the
corresponding trafﬁc performance parameters are evaluated.
6.1.1.1. Basic structures
In principle, we would like any interconnection network (IN) to provide optimum performance, that is, maximum throughput ρ and minimum packet loss probability π. In unbuffered networks packets are in general lost for two different reasons: conflicts for an internal IN resource, or internal conflicts, and conflicts for the same IN outlet, or external conflicts. The loss due to external conflicts is independent of the particular network structure and is unavoidable in an unbuffered network. Thus, the "ideal" unbuffered structure is the crossbar network (see Section 2.1), which is free from internal conflicts since each of the N^2 crosspoints is dedicated to a specific I/O pair.
An N × N banyan network built out of b × b SEs includes n stages of N/b SEs each, where n = log_b N. An example of a banyan network with Baseline topology and size N = 16 is given in Figure 6.1a for b = 2 and in Figure 6.1b for b = 4. As already explained in Section 2.3.1, internal conflicts can occur in banyan networks due to the link commonality of different I/O paths. Therefore the crossbar network provides an upper bound on throughput

and dilated banyan networks to be described next. Further extensions of these results are reported by Szymanski and Hamacher [Szy87].
The analysis given here, which summarizes the main results provided in these papers, relies
on a simplifying assumption, that is the statistical independence of the events of packet arrivals
at SEs of different stages. Such a hypothesis means overestimating the offered load stage by
stage, especially for high loads [Yoo90].
The throughput and loss performance of the basic unbuffered b^n × b^n banyan network, which thus includes n stages of b × b SEs, can be evaluated by a recursive analysis of the load on adjacent stages of the network. Let p_i (i = 1, …, n) indicate the probability that a generic outlet of an SE in stage i is "busy", that is, transmits a packet (p_0 denotes the external load offered to the network). Since the probability that a packet is addressed to a given SE outlet is 1/b, we can easily write

p_0 = p
p_i = 1 − (1 − p_{i−1}/b)^b    (i = 1, …, n)    (6.2)

Thus, throughput and loss are given by

ρ = p_n
π = 1 − p_n/p_0
Figure 6.2. Switch capacity of a banyan network: maximum throughput ρ_max versus switch size N for the crossbar network and for banyan networks with b = 2, 4, 8 (p = 1.0)

The switch capacity, ρ_max, of a banyan network (Equation 6.2) with different sizes b of the basic switching element is compared in Figure 6.2 with that provided by a crossbar network (Equation 6.1) of the same size. The maximum throughput of the banyan network decreases as the switch size grows, since there are more packet conflicts due to the larger number of network stages. For a given switch size a better performance is given by a banyan network with a larger SE: as the basic b × b SE grows, fewer stages are needed to build a banyan network of a given size N.
An asymptotic estimate of the banyan network throughput is computed in [Kru83]:

ρ ≅ 2b / ((b − 1)n + 2b/p)

which provides an upper bound on the real network throughput and whose accuracy improves for moderate loads and large networks. Figure 6.3 shows the accuracy of this simple bound for a banyan network loaded by three different traffic levels. The bound overestimates the real network throughput, and its accuracy increases as the offered load p is lowered, roughly independently of the switch size.
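The quality of the [Kru83] estimate can be checked directly against the exact recursion of Equation 6.2. This short sketch (illustrative names) verifies that the bound stays above the recursive value over a range of sizes and loads:

```python
def banyan_exact(p, b, n):
    """Exact last-stage busy probability from the recursion of Equation 6.2."""
    p_i = p
    for _ in range(n):
        p_i = 1.0 - (1.0 - p_i / b) ** b
    return p_i

def kruskal_bound(p, b, n):
    """Asymptotic throughput estimate of [Kru83]."""
    return 2.0 * b / ((b - 1.0) * n + 2.0 * b / p)

# The bound overestimates the exact value; the gap shrinks as p decreases
for n in (4, 8, 12):
    for p in (1.0, 0.75, 0.5):
        assert kruskal_bound(p, 2, n) >= banyan_exact(p, 2, n)
```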
Figure 6.3. Switch capacity of a banyan network: network throughput versus switch size N for b = 2, comparing the crossbar, the exact analysis and the bound at offered loads p = 1.0, 0.75 and 0.5
It is also interesting to express π as a function of the loss probability π_i = 1 − p_i/p_{i−1} (i = 1, …, n) occurring in the single stages. Since packets can in general be lost at any stage due to conflicts for the same SE outlet, it follows that

π = 1 − ∏_{i=1}^{n} (1 − π_i)
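Since each factor is 1 − π_i = p_i/p_{i−1}, the product telescopes to p_n/p_0, so this per-stage expression agrees with π = 1 − p_n/p_0. A quick numerical check (illustrative code):

```python
def stage_loads(p, b, n):
    """[p_0, p_1, ..., p_n] from the recursion of Equation 6.2."""
    loads = [p]
    for _ in range(n):
        loads.append(1.0 - (1.0 - loads[-1] / b) ** b)
    return loads

def loss_via_stage_product(p, b, n):
    """pi = 1 - prod_i (1 - pi_i) with per-stage loss pi_i = 1 - p_i / p_{i-1}."""
    loads = stage_loads(p, b, n)
    prod = 1.0
    for i in range(1, n + 1):
        prod *= loads[i] / loads[i - 1]   # this factor is 1 - pi_i
    return 1.0 - prod

# Telescoping: identical to the direct expression 1 - p_n / p_0
loads = stage_loads(0.9, 2, 6)
assert abs(loss_via_stage_product(0.9, 2, 6) - (1.0 - loads[-1] / loads[0])) < 1e-12
```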

Figure 6.4. Replicated banyan network: 1×K splitters on the N inlets, K N×N banyan planes #0 … #(K−1) reached through interconnection patterns, and K×1 combiners on the N outlets
RBNs with random and multiple loading are characterized by full banyan networks, the
same input and output interconnection patterns, and different operations of the splitters,
whereas selective loading uses “truncated” banyan networks and two different types of inter-
connection pattern. In all these cases each combiner that receives more than one packet in a
slot discards all but one of these packets.
A replicated banyan network operating with RL or ML is represented in Figure 6.5: both interconnection patterns are of the EGS type (see Section 2.1). With random loading each splitter transmits the received packet to a randomly chosen plane out of the K = K_r planes, each with probability 1/K_r. The aim is to reduce the load per banyan network so as to increase the probability that conflicts between packets for interstage links do not occur. With multiple loading each received packet is broadcast concurrently to all the K = K_m planes. The purpose is to increase the probability that at least one copy of the packet successfully reaches its destination.
Selective loading is based on dividing the outlets into K = K_s disjoint subsets and dedicating each suitably truncated banyan network to one of these sets. Therefore one EGS pattern of size NK_s connects the splitters to the K_s banyan networks, whereas K_s suitable patterns (one per banyan network) of size N must be used to guarantee full access to all the combiners from every banyan inlet. The splitters selectively load the planes with the traffic addressing their respective outlets. In order to guarantee full connectivity in the interconnection network, if each banyan network includes n − k stages (k = log_b K_s), the splitters transmit each packet to

Figure 6.5. RBN with random or multiple loading: 1×K splitters, K N×N banyan planes #0 … #(K−1), and K×1 combiners
the proper plane using the first k digits (in base b) of the routing tag. The example in Figure 6.6 refers to the case of N = 16, b = 2 and K_s = 2, in which the truncated banyan network has the reverse Baseline topology with the last stage removed. Note that the connection between each banyan network and its combiners is a perfect shuffle (or EGS) pattern. The target of this technique is to reduce the number of packet conflicts by jointly reducing the offered load per plane and the number of conflict opportunities.
Providing multiple paths per I/O port, and hence reducing the packet loss due to conflicts for interstage links, can also be achieved by adopting a multiplicity K = K_d (K_d ≥ 2) of physical links for each "logical" interstage link of a banyan network (see Figure 4.10 for N = 16, b = 2 and K_d = 2). Now up to K_d packets can be concurrently exchanged between two SEs in adjacent stages. These networks are referred to as dilated banyan networks (DBN). Such a solution makes the SE, whose physical size is now 2K_d × 2K_d, much more complex than the basic 2 × 2 SE. In order to drop all but one of the packets received by the last-stage SEs and addressing a specific output, K_d × 1 combiners can be used that concentrate the K_d physical links of a logical outlet at stage n onto one interconnection network output. However, unlike replicated networks, this concentration function could also be performed directly by each SE in the last stage.

Figure 6.6. Example of RBN with selective loading
6.1.2.2. Performance
Analysis of replicated and dilated banyan networks follows directly from the analysis of a single
banyan network. Operating a random loading of the K planes means evenly partitioning the
offered load into K ﬂows. The above recursive analysis can be applied again considering that
the offered load per plane is now
p_0 = p/K

Throughput and loss in this case are

ρ = 1 − (1 − p_n)^K    (6.6)
π = 1 − ρ/p = 1 − (1 − (1 − p_n)^K)/p    (6.7)
For multiple loading it is difficult to provide simple expressions for throughput and delay. However, based on the results given in [Kum86], its performance is substantially the same as that of random loading. This fact can be explained by considering that replicating a packet on all

gives a switch throughput very close to the crossbar network capacity. Nevertheless the overall complexity of a dilated banyan network is much higher than in RBNs (more complex SEs are required). For example, a network with size N = 32 and K = K_d = 2 includes 160 SEs of size 2 × 2 in an RBN but 80 SEs of size 4 × 4 in a DBN. Since we have no queueing in unbuffered banyan networks, the packet delay figure is not of interest here.
Figure 6.7. Switch capacity for different banyan networks: maximum throughput ρ_max versus switch size N at p = 1.0 for the crossbar network and for networks with K_d = 2, 4 (dilated), K_s = 2, 4 (selective loading) and K_r = 2, 4 (random loading)
So far we have studied the traffic performance of unbuffered banyan networks with random offered traffic, in which both internal and external conflicts among packets contribute to determining the packet loss probability. A different pattern of offered traffic consists of a set of packets that does not cause external conflicts, that is, each outlet is addressed by at most one packet. A performance study of these patterns, referred to as permutations, is reported in [Szy87].
6.2. Networks with a Single Plane and Internal Queueing
In general the I/O paths along which two packets are transmitted are not link-independent in
a banyan network. Thus, if two or more ATM cells address the same outlet of an SE (that is the
same interstage link or the same switch output), only one of them can actually be transmitted
to the requested outlet. The other cell is lost, unless a storage capability is available in the SE.
We assume that each switching element with size b × b is provided with a queueing capacity of B cells per port, and we will examine here three different arrangements of this memory in the SE: input queueing, output queueing and shared queueing. With input and output queueing b physical queues are available in the SE, whereas only one is available with shared queueing. In this latter case the buffer is said to include b logical queues, each holding the packets addressing a specific SE outlet. In all the buffered SE structures considered here we assume FIFO cell scheduling, as suggested by simplicity requirements for hardware implementation.
Various internal protocols are considered in our study, depending on the absence or pres-
ence of signalling between adjacent stages to enable the downstream transmission of a packet
by an SE. In particular we deﬁne the following internal protocols:
• backpressure (BP): signals are exchanged between switching elements in adjacent stages
so that the generic SE can grant a packet transmission to its upstream SEs only within the
current idle buffer capacity. The upstream SEs enabled to transmit are selected according to
the acknowledgment or grant mode, whereas the number of idle buffer positions is deter-
mined based on the type of backpressure used, which can be either global (GBP) or local
(LBP). These operations are deﬁned as follows:
— acknowledgment (ack): the generic SE in stage i ( 1 ≤ i < n ) issues as many requests as
the number of SE outlets addressed by head-of-line (HOL) packets, each transmitted to
the requested downstream SE. In response, each SE in stage i ( 1 < i ≤ n ) enables the
transmission by means of acknowledgments to all the requesting upstream SEs, if their
number does not exceed its idle buffer positions, determined according to the GBP or
LBP protocol; otherwise the number of enabled upstream SEs is limited to those
needed to saturate the buffer;
— grant (gr): without receiving any requests, the generic SE in stage i ( 1 < i ≤ n ) grants the transmission to all the upstream SEs if its number of idle buffer positions, n_idle, is at least b; otherwise only n_idle upstream SEs are enabled to transmit; unlike the BP-ack protocol, with the BP-gr operations the SE can grant an upstream SE whose corresponding physical or logical queue is empty;
— local backpressure (LBP): the number of buffer places that can be ﬁlled in the
generic SE in stage i ( 1 < i ≤ n ) at slot t by upstream SEs is simply given by the num-
ber of idle positions at the end of the slot t – 1 ;
— global backpressure (GBP): the number of buffer places that can be ﬁlled in the
generic SE in stage i ( 1 < i ≤ n ) at slot t by upstream SEs is given by the number of
idle positions at the end of the slot t – 1 increased by the number of packets that are
going to be transmitted by the SE in the slot t;
• queue loss (QL): there is no exchange of signalling information within the network, so
that a packet per non-empty physical or logical queue is always transmitted downstream by
each SE, independent of the current buffer status of the destination SE; packet storage in
the SE takes place as long as there are enough idle buffer positions, whereas packets are lost
when the buffer is full.
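The difference between LBP and GBP thus reduces to how the grantable buffer positions are counted. The sketch below (a simplification with illustrative names; it models only the grant computation, not the full signalling exchange) captures the rule stated above:

```python
def upstream_grants(B, occupancy_prev, departures, mode, b):
    """Number of upstream SEs an SE may enable to transmit one packet each.

    LBP: only positions idle at the end of the previous slot count.
    GBP: positions freed by this slot's departures are also counted.
    At most b upstream SEs can be granted.
    """
    idle = B - occupancy_prev
    if mode == "GBP":
        idle += departures
    return min(b, idle)

# With at least b idle positions the two rules coincide, as noted in the text
assert upstream_grants(8, 4, 1, "LBP", 2) == upstream_grants(8, 4, 1, "GBP", 2) == 2
# With a full buffer, only GBP can exploit the positions being emptied this slot
assert upstream_grants(4, 4, 1, "LBP", 2) == 0
assert upstream_grants(4, 4, 1, "GBP", 2) == 1
```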
From the above description it is worth noting that LBP and GBP, as well as BP-ack and
BP-gr, result in the same number of upstream acknowledgment/grant signals by an SE if at
least b positions are idle in its buffer at the end of the preceding slot. Moreover, packets can be
lost for queue overﬂow only at the ﬁrst stage in the BP protocols and at any stage in the QL
protocol. In our model the selection of packets to be backpressured in the upstream SE (BP) or
to be lost (QL) in case of buffer saturation is always random among all the packets competing

for access to the same buffer. Note that such a general description of the internal protocols, when applied to a specific type of queueing, can make some cases meaningless.
The implementation of the internal backpressure requires additional internal resources to
be deployed compared to the absence of internal protocols (QL). Two different solutions can
be devised for accomplishing interstage backpressure, that is in the space domain or in the time
domain. In the former case additional internal links must connect any couple of SEs interfaced
by interstage links. In the latter case the interstage links can be used on a time division base to
transfer both the signalling information and the ATM cells. Therefore an internal bit rate, C i ,
higher than the link external rate, C (bit/s), is required. With the acknowledgment BP we
have a two-phase signalling: the arbitration phase where all the SEs concurrently transmit their
requests downstream, and the enable phase where each SE can signal upstream the enabling signal to a suitable number of requesting SEs. The enable phase can be accomplished concurrently by all SEs with local backpressure, whereas it has to be a sequential operation with global backpressure. In this last case an SE needs to know how many packets it is going to transmit in the current slot to determine how many enable signals can be transmitted upstream, but such information must first be received from the downstream SEs. Thus the enable phase of the BP-ack protocol is started by the SEs in stage n and ends with the receipt of the enable signal by the SEs in stage 1. Let l_d and l_u (bit) be the size of each downstream and upstream signalling packet, respectively, and l_c (bit) the length of an information packet (cell). Then the
internal bit rate is C_i = C for the QL protocol and C_i = (1 + η)C for the BP protocol, where η denotes the switching overhead. This factor in the BP protocol with acknowledgment is given by

η = (l_d + (n − 1) l_u) / l_c    (GBP)    (6.8)
η = (l_d + l_u) / l_c    (LBP)
In the BP protocol with grant we do not have any request phase and the only signalling is represented by the enable phase, which is performed as in the case of the BP-ack protocol. Thus the internal rate C_i of the BP-gr protocol is given by Equation 6.8 setting l_d = 0.
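Under illustrative field sizes (assumptions, not values from the text: l_c = 424 bits for a 53-byte cell, l_d = l_u = 8 bits), the overhead of Equation 6.8 and the resulting internal rate can be computed as:

```python
def overhead(l_d, l_u, l_c, n, backpressure):
    """Switching overhead eta of Equation 6.8 for the BP-ack protocol;
    set l_d = 0 to obtain the BP-gr value."""
    if backpressure == "GBP":
        return (l_d + (n - 1) * l_u) / l_c
    if backpressure == "LBP":
        return (l_d + l_u) / l_c
    raise ValueError("backpressure must be 'GBP' or 'LBP'")

def internal_rate(C, eta):
    """C_i = (1 + eta) * C for the BP protocols (C_i = C under QL)."""
    return (1.0 + eta) * C

# Example (assumed values): n = 8 stages, 424-bit cell, 8-bit signalling fields
eta_gbp = overhead(8, 8, 424, 8, "GBP")   # (8 + 7*8) / 424
eta_lbp = overhead(8, 8, 424, 8, "LBP")   # (8 + 8) / 424
```

Note that the GBP overhead grows linearly with the number of stages n, since the enable phase is sequential, whereas the LBP overhead is independent of n.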
The network is assumed to be loaded by purely random and uniform trafﬁc; that is at stage 1:
1. A packet is received with the same probability in each time slot;
2. Each packet is given an outlet address that uniformly loads all the network outlets;
3. Packet arrival events at different inlets in the same time slot are mutually independent;
4. Packet arrival events at an inlet, or at different inlets, in different time slots are mutually independent.
Even if we do not provide any formal proof, assumption 2 is likely to be true at every stage,
because of general considerations about ﬂow conservation across stages. The independence
assumption 3 holds for every network stage in the QL mode, since the paths leading to the dif-
ferent inlets of an SE in stage i cross different SEs in stage j < i (recall that one path through
the network connects each network inlet to each network outlet). Owing to the memory

device in each SE, assumption 4, as well as assumption 3 for the BP protocol, no longer holds in stages other than the first. For simplicity, assumption 3 is supposed to hold in all stages in the analytical models to be developed later. In spite of the correlation in packet arrival events at a generic SE inlet in stages 2 through n, our models assume independence of the states of SEs in different stages. Such a correlation could be taken into account by suitably modelling the upstream traffic source loading each SE inlet.
Nevertheless, in order to describe simple models, each upstream source will be represented
here by means of only one parameter, the average load.
We assume independence between the states of SEs in the same stage, so that one SE per
stage is representative of the behavior of all the elements in the same stage ( SE i will denote
such an element for stage i). For this reason the topology of the network, that is the speciﬁc
kind of banyan network, does not affect in any way the result that we are going to obtain. As
usual we consider N × N banyan networks with b × b switching elements, thus including n = log_b N stages.
Buffered banyan networks were initially analyzed by Dias and Jump [Dia81], who only
considered asymptotic loads, and by Jenq [Jen83], who analyzed the case of single-buffered
input-queued banyan networks loaded by a variable trafﬁc level. The analysis of buffered ban-
yan networks was extended by Kumar and Jump [Kum84], so as to include replicated and
dilated buffered structures. A more general analysis of buffered banyan networks was presented by Szymanski and Shaikh [Szy89], who give both separate and combined evaluations of different SE structures, such as SE input queueing, SE output queueing and link dilation. The analysis given in this section for networks adopting SEs with input queueing or output queueing is based on this last paper and takes into account the modifications and improvements described in [Pat91], mainly aimed at improving the computational precision of network throughput and cell loss. In particular, the throughput is only computed as a function of the cell loss probability and not vice versa.
As far as networks with shared-queued SEs are concerned, some contributions initially
appeared in the technical literature [Hlu88, Sak90, Pet90], basically aiming at the study of a
single-stage network (one switching element). Convolutional approaches are often used that
assume mutual independence of the packet ﬂows addressing different destinations. Analytical
models for multistage structures with shared-buffered SEs have been later developed in [Tur93]
and [Mon92]. Turner [Tur93] proposed a simple model in which the destinations of the pack-
ets in the buffer were assumed mutually independent. Monterosso and Pattavina [Mon92]
developed an exact Markovian model of the switching element, by introducing modelling
approximation only in the interstage traffic. The former model gave very inaccurate results, whereas the latter showed severe limitations on the size of the networks under study. The
model described here is the simplest of the three models described in [Gia94] in which the SE
state is always represented as a two-state variable. The other two more complex models therein,
not developed here, take into account the correlation of the trafﬁc received at any stage other
than the ﬁrst.

6.2.1. Input queueing
The functional structure of a 2 × 2 SE with input queueing, shown in Figure 6.8 in the solution with additional interstage links for signalling purposes, includes two (local) queues, each with capacity B = B_i cells, and a controller. Each of the local queues, which directly interface the upstream SEs, performs a single read and a single write operation per slot. The controller receives
signals from the (remote) queues of the downstream SEs and from the local queues when per-
forming the BP protocol. With this kind of queueing there is no need for an arbitration phase
with downstream signalling, since each queue is fed by only one upstream SE. Thus the BP
protocol can only be of the grant type. Nevertheless, arbitration must take place slot by slot by
the SE controller to resolve possible conﬂicts arising when more than one HOL cell of the
local queues addresses the same SE outlet.
Figure 6.8. SE with input queueing
Packet transmissions to downstream SEs (or network outlets) and packet receipts from upstream SEs (or network inlets) take place concurrently in the SE in each time slot. To better understand the QL and GBP protocols, we can imagine for an SE that packet transmissions occur in the first half of the slot, whereas packet receipts take place in the second half of the slot based on the buffer space left empty at the end of the first phase. With the LBP protocol there is no need for such a decomposition, as the number of packets to be received is independent of the packets to be transmitted in the slot. In this way we can define a virtual half of each time slot that separates transmissions from receipts.
In order to develop analytical models for the network, it is useful to define the following probability distributions characterizing the dynamics of the generic input queue of the SE, the tagged queue:
• d_{i,t}(m) = Pr[the tagged queue at stage i at time t contains m packets];
• d'_{i,t}(m) = Pr[the tagged queue at stage i at time t contains m packets, if we consider as removed those packets that are going to be transmitted in slot t];
• a_{i,t} = Pr[an SE at stage i at time t offers a packet to a queue at stage i + 1]; a_{0,t} = p denotes the external offered load;
• b_{i,t} = Pr[a packet offered by a queue at stage i at time t is actually transmitted by the queue];

Packet losses take place only at stage 1 with backpressure, whereas in the QL mode a packet is lost at stage i ( i ≥ 2 ) only if it is not lost in stages 1 through i − 1, that is

π = d'_1(B_i) + Σ_{i=2}^{n} d'_i(B_i) ∏_{j=1}^{i−1} (1 − d'_j(B_i))    (QL)
π = d_1(B_i)    (LBP)
π = d'_1(B_i)    (GBP)
Moreover the switch throughput, ρ, is the traffic carried by the last stage

ρ = a_n    (6.10)

and the average packet delay, T, is straightforwardly obtained through Little's formula

T = (1/n) Σ_{i=1}^{n} (1/a_i) Σ_{h=0}^{B_i} h d_i(h)    (QL)    (6.11)
T = (1/(nρ)) Σ_{i=1}^{n} Σ_{h=0}^{B_i} h d_i(h)    (BP)
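Given the steady-state distributions d_i and d'_i produced by the stage-by-stage model (assumed here to be already computed; the data structures and names below are illustrative), the closed forms above translate directly into code:

```python
def loss_probability(d, d_prime, B, mode, n):
    """pi from the queue-length distributions; d[i][m] and d_prime[i][m]
    are indexed by stage i = 1..n (index 0 unused) and occupancy m."""
    if mode == "LBP":
        return d[1][B]
    if mode == "GBP":
        return d_prime[1][B]
    # QL: a packet is lost at stage i only if it was not lost at stages 1..i-1
    pi = d_prime[1][B]
    survive = 1.0 - d_prime[1][B]
    for i in range(2, n + 1):
        pi += survive * d_prime[i][B]
        survive *= 1.0 - d_prime[i][B]
    return pi

def mean_delay(d, a, B, mode, n, rho):
    """Average packet delay T via Little's formula (Equation 6.11)."""
    if mode == "QL":
        return sum(sum(h * d[i][h] for h in range(B + 1)) / a[i]
                   for i in range(1, n + 1)) / n
    # BP (LBP or GBP): normalize by the switch throughput rho
    return sum(h * d[i][h]
               for i in range(1, n + 1) for h in range(B + 1)) / (n * rho)
```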
The accuracy of the analytical model in terms of packet loss probability is assessed in Figures 6.9-6.11 by comparing data obtained from the model with results given by computer simulation for a network with N = 256 and b = 2 (hence the network includes eight stages). In these figures the overall buffer capacity per SE, B_t = bB_i, has been chosen ranging from B_t = 4 to B_t = 32 cells. The best accuracy is attained with the GBP protocol, especially if low offered loads are considered, whereas the model for LBP and QL turns out to be less accurate.
The loss performance given by the analytical model for the three protocols GBP, LBP and QL with the same buffer size is shown in Figure 6.12. As one might expect, the GBP protocol gives the best performance and behaves significantly better than the other two protocols, especially for small buffers. If the buffer is quite large, the performance improvement of GBP over LBP, enabled by exploiting the buffer positions (at most one with IQ) being emptied in the same slot, becomes rather marginal.
6.2.2. Output queueing
With output queueing, the (local) queues of the SE, each with capacity B = B o cells, inter-
face the SE outlets, as represented in Figure 6.13 for a 2 × 2 SE in the space division solution
for the inter-stage signalling. Now switching precedes rather than following queueing so that
each queue must be able to perform up to b write and 1 read operations per slot. The SE con-
troller exchanges information with the SEs in the adjacent stages and with the local queues
when the BP protocol is operated. In case of possible saturation of any local queues, it is a task
of the SE controller to select the upstream SEs enabled to transmit a packet without overﬂow-