Throughput-Optimal Opportunistic Scheduling in the Presence of Flow-Level Dynamics

Research supported by NSF Grants 07-21286 and 08-31756, ARO MURI subcontracts, and DTRA Grants HDTRA1-08-1-0016 and HDTRA1-09-1-0055. A shorter version of this paper appears in Proc. IEEE INFOCOM 2010.

Abstract

We consider multiuser scheduling in wireless networks with channel variations and flow-level dynamics. Recently, it has been shown that the MaxWeight algorithm, which is throughput-optimal in networks with a fixed number of users, fails to achieve the maximum throughput in the presence of flow-level dynamics. In this paper, we propose a new algorithm, called workload-based scheduling with learning, which is provably throughput-optimal, requires no prior knowledge of channels and user demands, and performs significantly better than previously suggested algorithms.

Multiuser scheduling is one of the core challenges in wireless communications. Due to channel fading and wireless interference, scheduling algorithms need to dynamically allocate resources based on both the demands of the users and the channel states to maximize network throughput. The
celebrated MaxWeight algorithm developed in [3] for exploiting channel variations works as follows. Consider a network with a single base station and n users, and assume that the base station can transmit to only one user in each time slot. The MaxWeight algorithm computes the product of the queue length and the current channel rate for each user, and transmits to the user with the largest product; ties can be broken arbitrarily. The throughput-optimality of the MaxWeight algorithm was first established in [3], and the results were later extended to more general channel and arrival models in [4, 5, 6]. The MaxWeight algorithm should be contrasted with other opportunistic scheduling algorithms, such as those in [7, 8], which exploit channel variations to allocate resources fairly among continuously backlogged users, but which are not throughput-optimal when users are not continuously backlogged.
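The MaxWeight rule just described can be sketched in a few lines (a minimal illustration; the names `queues` and `rates` are ours, not from the paper):

```python
import random

def maxweight_schedule(queues, rates):
    """Return the user maximizing queue_length * current_rate.

    queues: dict user -> backlog in bits
    rates:  dict user -> current channel rate
    Ties are broken uniformly at random (any tie-breaking rule works).
    """
    best = max(queues[u] * rates[u] for u in queues)
    winners = [u for u in queues if queues[u] * rates[u] == best]
    return random.choice(winners)
```
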

While the results in [3, 4, 5] demonstrate the power of MaxWeight-based algorithms, they were obtained under the assumptions that the number of users in the network is fixed and that the traffic flow generated by each user is long-lived, i.e., each user continually injects new bits into the network. Practical networks, however, have flow-level dynamics: users arrive to transmit data and leave the network after the data are fully transmitted. In a recent paper [1], the authors show that the MaxWeight algorithm is in fact not throughput-optimal in networks with flow-level dynamics, by providing a clever example of the instability of MaxWeight scheduling. The intuition is as follows: if a long-lived flow does not receive enough service, its backlog builds up, which forces the MaxWeight scheduler to allocate more service to the flow. This interaction between user backlogs and scheduling guarantees the correctness of the resource allocation. However, if a flow has only a finite number of bits, its backlog does not build up over time, so MaxWeight may stop serving such a flow, and the flow may then stay in the network forever. Thus, in a network where finite-size flows continue to arrive, the number of flows in the network can grow to infinity. One may wonder why flow-level instability matters since, in real networks, base stations limit the number of simultaneously active flows by rejecting new flows once the number of existing flows reaches a threshold. The reason is that, if a network model without such an upper limit is unstable in the sense that the number of flows grows without bound, then the corresponding real network with an upper limit on the number of flows will experience high flow-blocking rates. This fact is demonstrated in our simulations later.

In [1], the authors address this instability of MaxWeight-based algorithms and establish necessary and sufficient conditions for the stability of networks with flow-level dynamics. They also propose throughput-optimal scheduling algorithms. However, as the authors of [1] note, the proposed algorithms require prior knowledge of the channel and traffic distributions, which is difficult and sometimes impossible to obtain in practical systems; furthermore, the performance of the proposed algorithms is not ideal. A delay-driven MaxWeight scheduler has also been proposed to stabilize the network under flow-level dynamics [2]. That algorithm, however, works only when the maximum achievable rates of the flows are identical.

Since flow arrivals and departures are common in reality, we are interested in developing practical scheduling algorithms that are throughput-optimal under flow-level dynamics. We consider a wireless system with a single base station and multiple users (flows). The network contains both long-lived flows, which keep injecting bits into the network, and short-lived flows, which have a finite number of bits to transmit. The main contributions of this paper include the following:

We obtain the necessary conditions for flow-level stability of networks with both long-lived flows and short-lived flows. This generalizes the result in [1], where only short-lived flows are considered.

We propose a simple algorithm for networks with short-lived flows only. Under this algorithm, each flow keeps track of the best channel condition that it has seen so far. Each flow whose current channel condition is equal to the best channel condition that it has seen during its lifetime is eligible for transmission. It is shown that an algorithm which uniformly and randomly chooses a flow from this set of eligible flows for transmission is throughput-optimal. Note that the algorithm is a purely opportunistic algorithm in that it selects users for transmission when they are in the best channel state that they have seen so far, without considering their backlogs.

Based on an optimization framework, we propose to use the estimated workload (the number of time slots required to transmit the remainder of a flow at the best channel condition seen by the flow so far) to measure the backlog of short-lived flows. By comparing this short-lived-flow backlog to the queue lengths and channel conditions of the long-lived flows, we develop a new algorithm, named workload-based scheduling with learning, which is throughput-optimal under flow-level dynamics. The term "learning" refers to the fact that the algorithm learns the best channel condition for each short-lived flow and attempts to transmit when the channel condition is at its best.

We use simulations to evaluate the performance of the proposed scheduling algorithm, and observe that the workload-based scheduling with learning performs significantly better than the MaxWeight scheduling in various settings.

The terminology of long-lived and short-lived flows above has to be interpreted carefully in practical situations. In practice, each flow has a finite size and thus all flows eventually leave the system if they receive sufficient service; in this sense, all flows are short-lived in reality. Our results suggest that transmitting to users who are individually in their best estimated channel state so far is then throughput-optimal. On the other hand, it is well known that real network traffic consists of many flows with only a few packets and a few flows with a huge number of packets. On the time scales required to serve the small flows, the large flows appear to be long-lived (i.e., persistent forever) in the terminology above. Thus, if one is interested in performance over short time scales, an algorithm that treats flows with a very large number of packets as long-lived may perform better, and hence we consider the more general model with both short-lived and long-lived flows. Our simulations later confirm that the algorithm which treats some flows as being long-lived leads to better performance, although throughput-optimality does not require such a model. In addition, long-lived flows partially capture the scenario in which the bits of a flow do not all arrive at the base station at once. This fact is also exploited in our simulation experiments.

Network Model: We consider a discrete-time wireless downlink network with a single base station and many flows, each associated with a distinct mobile user. The base station can serve only one flow at a time.

Traffic Model: The network consists of the following two types of flows:

Long-lived flows: Long-lived flows are traffic streams that are always in the network and continually generate bits to be transmitted.

Short-lived flows: Short-lived flows are flows that have a finite number of bits to transmit. A short-lived flow enters the network at a certain time, and leaves the system after all bits are transmitted.

We assume that the set of long-lived flows is fixed, while short-lived flows arrive and depart. We let l be the index for long-lived flows, L be the set of long-lived flows, and L=|L| be the number of long-lived flows. Furthermore, we let Xl(t) be the number of new bits injected by long-lived flow l in time slot t, where Xl(t) is a discrete random variable with finite support, independent and identically distributed (i.i.d.) across time slots. We also assume E[Xl(t)]=xl and Xl(t)≤Xmax for all l and t.

Similarly, we let i be the index for short-lived flows, I(t) be the set of short-lived flows in the network at time t, and I(t) be the number of short-lived flows at time t, i.e., I(t)=|I(t)|. We denote by fi the size (total number of bits) of short-lived flow i, and assume fi≤Fmax for all i.

It is important to note that we allow different short-lived flows to have different maximum link rates. A careful reading of our proofs shows that, if all users have the same maximum rate, the learning algorithm is unnecessary: one can simply transmit to the user with the best channel state. However, we do not believe this is a very realistic scenario, since SNR variations dictate different maximum rates for different users.

Residual Size and Queue Length: For a short-lived flow i, let Qi(t), which we call the residual size, denote the number of bits still remaining in the system at time t. For a long-lived flow l, let Ql(t) denote the number of bits stored in the queue at the base station.

Channel Model: There is a wireless link between each user and the base station. Denote by Ri(t) the state of the link between short-lived flow i and the base station at time t (i.e., the maximum rate at which the base station can transmit to short-lived flow i at time t), and by Rl(t) the state of the link between long-lived flow l and the base station at time t. We assume that Ri(t) and Rl(t) are discrete random variables with finite support. Define Rmaxi and Rmaxl to be the largest values that these random variables can take, i.e., P(Rj(t)>Rmaxj)=0 for each j∈L∪(∪tI(t)). Choose pmaxs>0 and Rmax>0 such that

Pr(Ri(t)=Rmaxi) ≥ pmaxs   ∀ i, t,

max{ maxi Rmaxi, maxl Rmaxl } ≤ Rmax.

The states of wireless links are assumed to be independent across flows and time
slots (but not necessarily identically distributed across flows). The independence assumption across time slots can be relaxed easily but at the cost of more complicated proofs.

In this section, we introduce a new scheduling algorithm called Workload-based Scheduling with Learning (WSL).

Workload-based Scheduling with Learning: For a short-lived flow i, we define

~Rmaxi(t) = max_{max{t−D, bi} ≤ s ≤ t} Ri(s),

where bi is the time short-lived flow i joins the network and D>0 is called the learning period. A key component of this algorithm is the use of Rmaxi to evaluate the workload of short-lived flows (the reason will be explained in detail in Section 5). However, Rmaxi is in general unknown, so the scheduling algorithm uses ~Rmaxi(t) as an estimate of Rmaxi.
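The estimate ~Rmaxi(t) can be maintained with a fixed-size window, as in the following sketch (the class name `RateLearner` is ours; the window covers the last D+1 slots and, before the flow has been in the network for D slots, simply holds fewer samples, matching the truncation at the arrival time bi):

```python
from collections import deque

class RateLearner:
    """Maintain ~Rmax_i(t): the largest rate seen in slots [max(t-D, b_i), t]."""

    def __init__(self, D):
        # The window [t-D, t] contains at most D+1 observations.
        self.window = deque(maxlen=D + 1)

    def observe(self, rate):
        """Record this slot's channel rate R_i(t)."""
        self.window.append(rate)

    def estimate(self):
        """Current estimate ~Rmax_i(t)."""
        return max(self.window)
```
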

During each time slot, the base station first checks the following inequality:

α ∑i∈I(t) ⌈Qi(t)/~Rmaxi(t)⌉ > maxl∈L Ql(t)Rl(t),

(1)

where α>0.

If inequality (1) holds, then the base station serves a short-lived flow as follows: if at least one short-lived flow (say flow i) satisfies Ri(t)≥Qi(t) or Ri(t)=~Rmaxi(t), then the base station selects such a flow for transmission (ties are broken according to a good tie-breaking rule, which is defined at the end of this algorithm); otherwise, the base station picks an arbitrary short-lived flow to serve.

If inequality (1) does not hold, then the base station serves a long-lived flow l∗ such that

l∗∈argmaxl∈LQl(t)Rl(t)

(ties are broken arbitrarily).

“Good” tie-breaking rule: Assume that the tie-breaking rule is applied to pick a short-lived flow in every time slot (but the flow is served only if α ∑i∈I(t) ⌈Qi(t)/~Rmaxi(t)⌉ > maxl∈L Ql(t)Rl(t)). We define Emiss(t) to be the event that the tie-breaking rule selects a short-lived flow with ~Rmaxi(t)≠Rmaxi. Define

Ws(t) = ∑i∈I(t) ⌈Qi(t)/Rmaxi⌉,

which is the total workload of the system at time t. A tie-breaking rule is said to be good if the following condition holds: Consider WSL with the given tie-breaking rule and learning period D. Given any ϵmiss>0, there exist Nϵmiss and Dϵmiss such that

Pr(Emiss(t))≤ϵmiss

if D≥Dϵmiss and Ws(t−D)≥Nϵmiss.

□
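Putting the pieces together, one time slot of WSL (with random choice among eligible short-lived flows) can be sketched as below; the dictionaries and helper names are ours, and `best_seen[i]` stands for ~Rmaxi(t):

```python
import math
import random

def wsl_slot(short, long_, best_seen, alpha):
    """One WSL scheduling decision.

    short:     dict i -> (Q_i, R_i), residual size and current rate
    long_:     dict l -> (Q_l, R_l), queue length and current rate
    best_seen: dict i -> ~Rmax_i(t), the learned best rate of flow i
    Returns ('short', i) or ('long', l).
    """
    workload = sum(math.ceil(q / best_seen[i]) for i, (q, _) in short.items())
    long_weight = max((q * r for q, r in long_.values()), default=0)
    # Inequality (1): serve a short-lived flow iff alpha * workload wins.
    if short and (not long_ or alpha * workload > long_weight):
        eligible = [i for i, (q, r) in short.items()
                    if r >= q or r == best_seen[i]]
        pool = eligible if eligible else list(short)  # else: arbitrary flow
        return ('short', random.choice(pool))
    l_star = max(long_, key=lambda l: long_[l][0] * long_[l][1])
    return ('long', l_star)
```
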

Remark 1: While all WSL scheduling algorithms with good tie-breaking rules are throughput optimal, their performances in terms of other metrics could be different depending upon the tie-breaking rules. We consider two tie-breaking rules in this paper:

Uniform Tie-breaking: Among all short-lived flows satisfying Ri(t)=~Rmaxi(t) or Ri(t)≥Qi(t), the base-station uniformly and randomly selects one to serve.

Oldest-first Tie-breaking: Let βi denote the number of time slots a short-lived flow has been in the network. The base station keeps track of τi=min{¯τ,βi} for every short-lived flow, where ¯τ is some fixed positive integer. Among all short-lived flows satisfying Ri(t)=~Rmaxi(t) or Ri(t)≥Qi(t), the tie-breaking rule selects the one with the largest τi, with further ties broken uniformly at random.

The “goodness” of these two tie-breaking rules is proved in Appendices C and D, and the impact of the tie-breaking rules on performance is studied in Section 6 using simulations.

Remark 2: The α in inequality (1) is a parameter balancing the performance of long-lived flows and short-lived flows. A large α will lead to a small number of short-lived flows but large queue-lengths of long-lived flows, and vice versa.

Remark 3: In Theorem 3, we will prove that WSL is throughput-optimal when D is sufficiently large. From purely throughput-optimality considerations, it is then natural to choose D=∞. However, in practical systems, if we choose D too large (such as ∞), a flow may stay in the system for a very long time if its best channel condition occurs extremely rarely. Thus, it is perhaps best to choose a finite D to trade off between performance and throughput.

Remark 4: If all flows are short-lived, then the algorithm simplifies as follows: If at least one short-lived flow (say flow i) satisfies Ri(t)≥Qi(t) or Ri(t)=~Rmaxi(t), then the base station selects such a flow for transmission according to a “good” tie-breaking rule; otherwise, the base station picks an arbitrary short-lived flow to serve. Simply stated, the algorithm serves one of the flows which can be completely transmitted or sees its best channel state, where the best channel state is an estimate based on past observations. If no such flow exists, any flow can be served. We do not separately prove the throughput optimality of this scenario since it is a special case of the scenario considered here. But it is useful to note that, in the case of short-lived flows only, the algorithm does not consider backlogs at all in making scheduling decisions.

We will prove in the following sections that WSL (with any α>0) is throughput-optimal, i.e., that it can support any set of traffic flows supportable by any other algorithm. In the next section, we first present the necessary conditions for stability, which also define the network throughput region.

In this section, we establish the necessary conditions for the stability of networks with flow-level dynamics. To get the necessary condition, we need to classify the short-lived flows into different classes.

A short-lived flow class is defined by a pair of random variables (^R,^F); class k is associated with random variables ^Rk and ^Fk. A short-lived flow i belongs to class k if Ri(t) has the same distribution as ^Rk and the size fi of flow i has the same distribution as ^Fk. We let Λk(t) denote the number of class-k flows joining the network at time t, where Λk(t) is i.i.d. across time slots and independent, but not necessarily identically distributed, across classes, with E[Λk(t)]=λk. Denote by K the set of distinct classes. We assume that K is finite, |K|=K, and Λk(t)≤λmax for all t and k∈K.

Let c denote an L-dimensional vector describing the state of the channels of the long-lived flows. In state c, Rc,l is the service rate that long-lived flow l receives if it is scheduled. We denote by C the set of all possible states.

Let C(t) denote the state of the long-lived flows at time t, and πc denote the probability that C(t) is in state c.

Let pc,l be the probability that the base station serves flow l when the network is in state c. Clearly, for any c, we have

∑l∈Lpc,l≤1.

Note that the sum could be less than 1 if the base station schedules a short-lived flow in this state.

Let μc,s be the probability that the base station serves a short-lived flow when the network is in state c.

Let Θk,β(t) denote the number of short-lived flows that belong to class-k and have residual size Q(t)=β. Note that β can only take on a finite number of values.

Theorem 1

Consider traffic parameters {xl} and {λk}, and suppose that there exists a scheduling policy guaranteeing

limt→∞ E[ ∑l∈L Ql(t) + ∑k∈K ∑β=1..Fmax Θk,β(t) ] < ∞.

Then there exist pc,l and μc,s such that the following inequalities hold:

xl ≤ ∑c∈C πc pc,l Rc,l   ∀ l∈L.

(2)

∑k∈K λk E[⌈^Fk/^Rmaxk⌉] ≤ ∑c∈C μc,s πc.

(3)

(∑l∈Lpc,l)+μc,s≤1∀c∈C.

(4)

□

Inequalities (2) and (3) state that the allocated service must be no less than the user demands if the flows are supportable. Inequality (4) states that the overall time used to serve long-lived and short-lived flows can be no more than the time available. To prove this theorem, one can show that, for any traffic for which no pc,l and μc,s satisfying the three inequalities exist, a Lyapunov function can be constructed whose expected drift is larger than some positive constant under any scheduling algorithm, which implies the instability of the network. The complete proof is based on the Strict Separation Theorem and follows the lines of a similar proof in [5]; it is omitted in this paper.

First, we provide some intuition into how one can derive the WSL algorithm from optimization decomposition considerations. Then, we will present our main throughput optimality results.
Given traffic parameters {xl} and {λk}, the necessary conditions for the supportability of the traffic are equivalent to the feasibility of the following constraints:

xl≤∑c∈Cπcpc,lRc,l

∀l

(5)

∑k∈K λk E[⌈^Fk/^Rmaxk⌉] ≤ ∑c∈C μc,s πc

∑l∈Lpc,l+μc,s≤1

∀c.

For convenience, we view the feasibility problem as an optimization problem with objective max A, where A is some constant. While we have not explicitly stated that the x’s and μ’s are non-negative, this is assumed throughout.
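For readability, the three feasibility constraints above can be collected in standard LaTeX notation:

```latex
\begin{aligned}
x_l &\le \sum_{c \in \mathcal{C}} \pi_c\, p_{c,l}\, R_{c,l} && \forall\, l \in \mathcal{L}, \\
\sum_{k \in \mathcal{K}} \lambda_k\, \mathbb{E}\!\left[\left\lceil \hat{F}_k / \hat{R}^{\max}_k \right\rceil\right]
  &\le \sum_{c \in \mathcal{C}} \mu_{c,s}\, \pi_c, \\
\sum_{l \in \mathcal{L}} p_{c,l} + \mu_{c,s} &\le 1 && \forall\, c \in \mathcal{C},
\end{aligned}
```

with all p, μ, and x non-negative.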

Partially augmenting the objective using Lagrange multipliers, we get

max A − ∑l∈L ql (xl − ∑c∈C πc pc,l Rc,l) − qs (∑k∈K λk E[⌈^Fk/^Rmaxk⌉] − ∑c∈C μc,s πc)

s.t.

∑l∈L pc,l + μc,s ≤ 1   ∀ c.

For the moment, let us assume Lagrange multipliers ql and qs are given. Then the maximization problem above can be decomposed into a collection of optimization problems, one for each c:

max_{pc,l, μc,s}  ∑l∈L ql Rc,l pc,l + qs μc,s

s.t.

∑l∈Lpc,l+μc,s≤1.

It is easy to verify that one optimal solution to the optimization problem above is:

if qs>maxl∈LqlRc,l, then μc,s=1 and pc,l=0(∀l);

otherwise,
μc,s=0, and pc,l∗=1 for some l∗∈argmaxqlRc,l and pc,l=0 for other l.

The complementary slackness conditions give

ql(xl−∑c∈Cπcpc,lRc,l)=0.

Since xl is the mean arrival rate of long-lived flow l and ∑c∈C πc pc,l Rc,l is its mean service rate, the condition on ql says that if the mean arrival rate is less than the mean service rate, then ql equals zero. Along with the non-negativity condition on ql, this suggests that ql behaves like a queue with these arrival and service rates. Indeed, it turns out that the mean queue lengths are proportional to the Lagrange multipliers (see the surveys in [9, 10, 11]). For long-lived flow l, we can therefore treat the queue length Ql(t) as a time-varying estimate of the Lagrange multiplier ql. Similarly, qs can be associated with a queue whose arrival rate is ∑k∈K λk E[⌈^Fk/^Rmaxk⌉], the mean rate at which workload arrives, where workload is measured by the number of slots needed to serve a short-lived flow if it is served when its channel condition is at its best. The service rate is ∑c∈C μc,s πc, the rate at which the workload can potentially decrease when a short-lived flow is picked for scheduling by the base station. Thus, the workload in the system can serve as a dynamic estimate of qs.

Letting αWs(t) (α>0) be an estimate of qs, the observations above suggest the following workload-based scheduling algorithm if Rmaxi are known.

Workload-based Scheduling (WS):
During each time slot, the base station checks the following inequality:

αWs(t)>maxl∈LQl(t)Rl(t).

(6)

If inequality (6) holds, then the base station serves a short-lived flow as follows: if at least one short-lived flow (say flow i) satisfies Ri(t)≥Qi(t) or Ri(t)=Rmaxi, then such a flow is selected for transmission (ties are broken arbitrarily); otherwise, the base station picks an arbitrary short-lived flow to serve.

If inequality (6) does not hold, then the base station serves a long-lived flow l∗ such that l∗∈argmaxl∈LQl(t)Rl(t) (ties are broken arbitrarily).

The factor α can be obtained from the optimization formulation by multiplying both sides of constraint (5) by α.

□

However, this algorithm, which was derived directly from dual decomposition considerations, is not implementable since the Rmaxi’s are unknown. WSL therefore uses ~Rmaxi(t) to approximate Rmaxi. Note that an inaccurate estimate of Rmaxi affects not only the base station’s decision on whether Ri(t)=Rmaxi, but also its computation of ⌈Qi(t)/Rmaxi⌉. However, it is not difficult to see that the error in the estimate of the total workload is a small fraction of the total workload when the total workload is large: when the workload is very large, the total number of short-lived flows is large since their file sizes are bounded. Since the arrival rate of short-lived flows is also bounded, this further implies that the majority of short-lived flows must have arrived a long time ago, which means that, with high probability, their estimates of their best channel conditions are correct.

Next we will prove that both WS and WSL can stabilize any traffic xl and λk such that (1+ϵ)xl and (1+ϵ)λk are supportable, i.e., satisfying the conditions presented in Theorem 1. In other words, the number of short-lived flows in the network and the queues for long-lived flows are all bounded. Even though WS is not practical, we study it first since the proof of its throughput optimality is easier and provides insight into the proof of throughput-optimality of WSL.

Let

M(t)=({Ql(t)}l∈L,{Θk,β(t)}k∈K,1≤β≤Fmax).

Under WS, the base station makes decisions based on M(t) and R(t)={{Ri(t)}i∈I(t),{Rl(t)}l∈L}, so it is easy to verify that M(t) is a finite-dimensional Markov chain under WS. Assume that Λk(t), ^Fk and Xl(t) are such that the Markov chain M is irreducible and aperiodic.

Theorem 2

Given any traffic xl and λk such that (1+ϵ)xl and (1+ϵ)λk are supportable, the Markov chain M(t) is positive-recurrent under WS, and

limt→∞ E[ ∑l∈L Ql(t) + ∑i∈I(t) Qi(t) ] < ∞.

{proof}

We consider the following Lyapunov function:

V(t) = α(Ws(t))^2 + ∑l∈L (Ql(t))^2,

(7)

and prove that

E[V(t+1)−V(t) | M(t)] ≤ Ud 1{M(t)∈Υ} − (ϵ/2) [α ¯λ Ws(t) + ∑l∈L Ql(t) xl] 1{M(t)∉Υ}

for some Ud>0,ϵ>0, ¯λ>0, and a finite set Υ. Positive recurrence of M then follows from Foster’s Criterion for Markov chains [12], and the boundedness of the first moment follows from [13]. The detailed proof is presented in Appendix A.

We next study WSL, where Rmaxi is estimated from the history. We define Θk,β,r(t) to be the number of short-lived flows that belong to class-k, have a residual size of β, and have ~Rmaxi(t)=r. Furthermore, we define

~M(n) = ({Ql(t)}l∈L, {Θk,β,r(t)}k∈K, 1≤β≤Fmax, 1≤r≤^Rmaxk)_{(n−1)T+1 ≤ t ≤ nT}

for some T≥D. It is easy to see that ~M(n) is a finite-dimensional Markov chain under WSL.

Theorem 3

Consider traffic xl and λk such that (1+ϵ)xl and (1+ϵ)λk are supportable. Given WSL with a good tie-breaking rule, there exists Dϵ such that the Markov chain ~M(n) is positive-recurrent under the WSL with learning period D≥Dϵ and the given tie-breaking rule. Further,

limt→∞ E[ ∑l∈L Ql(t) + ∑i∈I(t) Qi(t) ] < ∞.

{proof}

The proof of this theorem is built upon the following two facts:

When the number of short-lived flows is large, the majority of short-lived flows must have been in the network for a long time and have obtained the correct estimate of the best channel condition, which implies that

∑i∈I(t) ⌈Qi(t)/Rmaxi⌉ ≈ ∑i∈I(t) ⌈Qi(t)/~Rmaxi(t)⌉.

When the number of short-lived flows is large, the short-lived flow selected by the base station (say flow i) satisfies, with high probability, Ri(t)=Rmaxi or Ri(t)≥Qi(t).

From these two facts, we can prove that with a high probability, the scheduling decisions of WSL are the same as those of WS, which leads to the throughput optimality of WSL. The detailed proof is presented in Appendix B.

In this section, we use simulations to evaluate the performance of different variants of WSL and compare it to other scheduling policies. There are three types of flows used in the simulations:

S-flow: An S-flow has a finite size, generated from a truncated exponential distribution with mean value 30 and maximum value 150. Non-integer values are rounded to integers.

M-flow: An M-flow keeps injecting bits into the network for 10,000 time slots and stops. The number of bits generated at each time slot follows a Poisson distribution with mean value 1.

L-flow: An L-flow keeps injecting bits into the network and never leaves. The number of bits generated in each time slot follows a truncated Poisson distribution with mean value 1 and maximum value 10.

Here S-flows represent short-lived flows that have finite sizes and whose bits arrive all at once; L-flows represent long-lived flows that continuously inject bits and never leave the network; and M-flows represent flows of finite size but whose arrival rate is controlled at their sources so that they do not arrive instantaneously into the network. Our simulation will demonstrate the importance of modeling very large, but finite-sized flows as long-lived flows.

We assume that the channel between each user and the base station is distributed according to one of the following three distributions:

G-link: A G-link has five possible link rates {10,20,30,40,50}, and each of the states happens with probability 20%.

P-link: A P-link has five possible link rates {5,10,15,20,25}, and each of the states happens with probability 20%.

R-link: An R-link has five possible link rates {10,20,30,40,100}, and the probabilities associated with these link states are {0.5,0.2,0.2,0.09,0.01}.

The G, P and R stand for Good, Poor and Rare, respectively. We include these three different distributions to model the SNR variations among the users, where G-links represent links with high SNR (e.g., those users close to the base station), P-links represent links with low SNR (e.g., those users far away from the base station), and R-links represent links whose best state happens rarely. The R-links will be used to study the impact of learning period D on the network performance.
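The traffic and link models above can be sampled as follows (a sketch; the function names are ours, and truncating by clipping shifts the mean of the S-flow sizes slightly below 30):

```python
import random

def s_flow_size(mean=30.0, cap=150):
    """S-flow size: truncated exponential, rounded to an integer >= 1."""
    return max(1, min(round(random.expovariate(1.0 / mean)), cap))

def g_link():
    """G-link: rates {10,...,50}, each with probability 0.2."""
    return random.choice([10, 20, 30, 40, 50])

def r_link():
    """R-link: the best rate, 100, appears only 1% of the time."""
    return random.choices([10, 20, 30, 40, 100],
                          weights=[0.5, 0.2, 0.2, 0.09, 0.01])[0]
```
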

We name the WSL with the uniform tie-breaking rule WSLU, and the WSL with the oldest-first tie-breaking rule WSLO. In the following simulations, we will first demonstrate that the WSLU performs significantly better than previously suggested algorithms, and then show that the performance can be further improved by choosing a good tie-breaking policy (e.g., WSLO). We set α to be 50 in all the following simulations.

Simulation I: Short-lived Flow or Long-lived Flow?

We first use the simulation to demonstrate the importance of considering a flow with a large number of packets as being long-lived. We consider a network consisting of multiple S-flows and three M-flows, where the arrival of S-flows follows a truncated Poisson process with maximum value 100 and mean value λ. All the links are assumed to be G-links. We evaluate the following two schemes:

Scheme-1: Both S-flows and M-flows are considered to be short-lived flows.

Scheme-2: An M-flow is considered to be long-lived before its last packet arrives, and to be short-lived after that.

The performance of these two schemes is shown in Figure 1, where WS with the uniform tie-breaking rule is used as the scheduling algorithm. We can see that the performances are substantially different (note that the network is stable under both schemes). The number of queued bits of M-flows under Scheme-1 is larger than that under Scheme-2 by two orders of magnitude. This is because, even though an M-flow contains a huge number of bits (10,000 on average), it can be served only when the link rate is 50 under Scheme-1. This simulation suggests that when the performance we are interested in is at a small scale (e.g., acceptable queue length no more than 100) compared with the size of the flow (e.g., 10^4 in this simulation), the flow should be viewed as long-lived for performance purposes.

Simulation II: The Impact of Learning Period D

In this simulation, we investigate the impact of D on the performance of WSLU. Recall that it is natural to choose D=∞ for purely throughput-optimality considerations, but the disadvantage is that a flow may stay in the network for a very long time if its best link state occurs very rarely. We consider a network consisting of S-flows, which arrive according to a truncated Poisson process with maximum value 100 and mean λ, and three L-flows. All links are assumed to be R-links. Figure 2 depicts the mean and standard deviation of the file-transfer delays with D=16 and D=∞ under light or medium traffic load. As expected, the standard deviation under WSLU with D=∞ is significantly larger than that under WSLU with D=16 when λ is large. This occurs because the best link rate, 100, occurs with probability 0.01. This simulation confirms that in practical systems we may want to choose a finite D to obtain the desired performance.

Figure 2: The performance of WSLU with D=16 and D=∞ when the traffic load is light or medium

Further, we note that while the WSLU algorithm with a small D performs better in light or medium traffic regimes, throughput optimality is only guaranteed when D is sufficiently large. Figure 3 illustrates the average number of S-flows and the average file-transfer delay for D=16 and D=∞ in the heavy-traffic regime. We can observe that, in the heavy-traffic regime, WSLU with D=∞ still stabilizes the network but the algorithm with D=16 does not. So there is a clear tradeoff in choosing D: a small D reduces the file-transfer delay in light or medium traffic regimes, but a large D guarantees stability in the heavy-traffic regime.

Figure 3: The performance of WSLU with D=16 and D=∞ when the traffic load is heavy

Simulation III: Performance comparison of various algorithms

In the following simulations, we choose D=16. In the introduction, we pointed out that MaxWeight is not throughput-optimal under flow-level dynamics because the backlog of a short-lived flow does not build up even when it has not been served for a while. To overcome this, one could try to use the delay of the head-of-line packet, instead of the queue length, as the weight, because the head-of-line delay keeps increasing if no service is received. In the case of long-lived flows only, this algorithm is known to be throughput-optimal [5]. We will show that this Delay-based scheduling does not solve the instability problem when there are short-lived flows.

Delay-based Scheduling: At each time slot, the base station selects a flow i such that i ∈ arg max_i Di(t)Ri(t), where Di(t) is the delay experienced so far by the head-of-line packet of flow i.
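A minimal sketch of this rule in Python (the flow representation with `hol_arrival` and `rate` fields is a hypothetical data layout of our own, not from the paper):

```python
def delay_based_pick(flows, t):
    """Return the index of the flow maximizing D_i(t) * R_i(t), where the
    head-of-line delay D_i(t) is the current time minus the arrival time
    of the flow's head-of-line packet."""
    best_idx, best_weight = None, float('-inf')
    for i, flow in enumerate(flows):
        weight = (t - flow['hol_arrival']) * flow['rate']  # D_i(t) * R_i(t)
        if weight > best_weight:  # ties broken in favor of the earlier index
            best_idx, best_weight = i, weight
    return best_idx
```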

We first consider the case where all flows are S-flows, which arrive according to a truncated Poisson process with maximum value 100 and mean λ. Each S-flow is assigned a G-link or a P-link with equal probability.

Figure 4 shows the average file-transfer delay and average number of S-flows under different values of λ. We can see that WSLU performs significantly better than the MaxWeight and Delay-based algorithms. Specifically, under MaxWeight and Delay-based algorithms, both the number of S-flows and file-transfer delay explode when λ≥0.102. WSLU, on the other hand, performs well even when λ=0.12.

Figure 4: The performance of the Delay-based, MaxWeight, and WSLU algorithms in a network without L-flows

Next, we consider the same scenario with three L-flows in the network. Two of the L-flows have G-links and one has a P-link. Figure 5 shows the average number of short-lived flows and average file-transfer delay under different values of λ. We can see that the MaxWeight becomes unstable even when the arrival rate of S-flows is very small. This is because the MaxWeight stops serving S-flows when the backlogs of L-flows are large, so S-flows stay in the network forever. The delay-based scheduling performs better than the MaxWeight, but significantly worse than WSLU.

Figure 5: The performance of the Delay-based, MaxWeight, and WSLU algorithms in a network with both S-flows and L-flows

Simulation IV: Blocking probability of various algorithms

While our theory assumes that the number of flows in the network can be infinite, in reality, base stations limit the number of simultaneously active flows and reject new flows when the number of existing flows exceeds some threshold. In this simulation, we assume that the base station can support at most 20 S-flows; a new S-flow is blocked if 20 S-flows are already in the network. In this setting, the number of flows in the network is finite, so we compute the blocking probability, i.e., the fraction of S-flows rejected by the base station.

We consider the case where no long-lived flow is in the network and the case where both short-lived and long-lived flows are present. The flows and channels are selected as in Simulation III. The results are shown in Figures 6 and 7. We can see that the blocking probability under WSLU is substantially smaller than that under the MaxWeight or Delay-based scheduling. Thus, this simulation demonstrates that instability in the setting where the number of flows is allowed to be unbounded translates into high blocking probabilities in the practical scenario where the base station limits the number of flows in the network.
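The admission-control behavior can be mimicked with a toy discrete-time simulation. The Bernoulli arrivals and the per-slot completion probability of 0.1 below are placeholder assumptions of our own, standing in for the actual channel and scheduling dynamics:

```python
import random

def blocking_probability(lam, cap=20, horizon=100_000, seed=1):
    """Estimate the fraction of arriving S-flows rejected when at most
    `cap` flows may be active simultaneously."""
    rng = random.Random(seed)
    active = arrivals = blocked = 0
    for _ in range(horizon):
        if rng.random() < lam:                 # Bernoulli(lam) arrival
            arrivals += 1
            if active >= cap:
                blocked += 1                   # base station rejects the flow
            else:
                active += 1
        if active > 0 and rng.random() < 0.1:  # placeholder service model
            active -= 1
    return blocked / max(arrivals, 1)
```

A scheduler that clears flows faster keeps the number of active flows away from the cap, which is why a stabilizing policy exhibits a much smaller blocking probability.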

Figure 6: The blocking probabilities of the Delay-based, MaxWeight, and WSLU in a network without L-flows

Figure 7: The blocking probabilities of the Delay-based, MaxWeight, and WSLU in a network with L-flows

Simulation V: WSLU versus WSLO

In this simulation, we study the impact of tie-breaking rules on performance by comparing WSLU and WSLO. We first study the case where the base station does not limit the number of simultaneously active flows and there is no long-lived flow in the network. The simulation setting is the same as that in Simulation III. Figure 8 shows the average file-transfer delay and the average number of S-flows under different values of λ. We can see that WSLO reduces the file-transfer delay and the number of S-flows by nearly 75% when λ=0.13, which indicates the importance of selecting a good tie-breaking rule for improving network performance.

Figure 8: The performance of the WSLU and WSLO algorithms in a network without L-flows

Next, we study the case where the base station does not limit the number of simultaneously active flows and there are three L-flows in the network. Figure 9 shows the average number of short-lived flows and average file-transfer delay under different values of λ. We can see again that the WSLO algorithm has a much better performance than the WSLU, especially when λ is large.

Figure 9: The performance of the WSLU and WSLO algorithms in a network with both S-flows and L-flows

Finally, we consider the situation where the base station can support at most 20 S-flows; a new S-flow is blocked if 20 S-flows are already in the network. The simulation setting is the same as that in Simulation IV. We calculate the blocking probabilities, and the results are shown in Figures 10 and 11. We can see that the blocking probability under WSLO is much smaller than that under WSLU when λ is large.

Figure 10: The blocking probabilities of the WSLU and WSLO in a network without L-flows

Figure 11: The blocking probabilities of the WSLU and WSLO in a network with L-flows

In this paper, we studied multiuser scheduling in networks with flow-level dynamics. We first obtained necessary conditions for flow-level stability of networks with both long-lived and short-lived flows. Then, based on an optimization framework, we proposed workload-based scheduling with learning, which is throughput-optimal under flow-level dynamics and requires no prior knowledge about channels and traffic. Our simulations demonstrated that the proposed algorithm performs significantly better than the MaxWeight and Delay-based algorithms in various settings. Next, we discuss the limitations of our model and possible extensions.

7.1 The choice of D

According to Theorem 3, the learning period D should be sufficiently large to guarantee throughput optimality. Our simulation results, on the other hand, suggest that a small D may result in better performance. Therefore, there is a clear trade-off in choosing D, and the study of how to choose D is a potential direction for future work.

7.2 Unbounded file arrivals and file sizes

One limitation of our model is that the random variables associated with the number of file arrivals and the file sizes are assumed to be upper bounded. An interesting future research problem is to extend the results to an unbounded number of file arrivals and unbounded file sizes.

Recall that Ws(t) = ∑i∈I(t) ⌈Qi(t)/Rmaxi⌉. We define R̂maxk to be the largest achievable link rate of class-k short-lived flows, and

As(t) = ∑k∈K ∑_{i=1}^{Λk(t)} ⌈fi/R̂maxk⌉,

which is the amount of new workload (from short-lived flows) injected into the network at time t; and we define μs(t) to be the decrease of the workload at time t, i.e., μs(t)=1 if the workload of the short-lived flows is reduced by one and μs(t)=0 otherwise. With this notation, the evolution of the short-lived flows' workload can be described as:

Ws(t+1)=Ws(t)+As(t)−μs(t).
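As an illustration of the workload bookkeeping, here is a minimal helper of our own (not from the paper) that computes Ws(t) from the residual file sizes and best rates:

```python
import math

def workload(residual_sizes, best_rates):
    """Ws(t): each short-lived flow contributes ceil(Q_i / R_i^max), the
    minimum number of slots needed to drain it at its best rate."""
    return sum(math.ceil(q / r) for q, r in zip(residual_sizes, best_rates))
```

For instance, `workload([100, 50], [50, 50])` evaluates to 3: the first flow needs two slots at rate 50, the second needs one.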

Further, the evolution of Ql(t) can be described as

Ql(t+1)=Ql(t)+Xl(t)−μl(t)+ul(t),

where μl(t) is the decrease of Ql(t) due to the service that long-lived flow l receives at time t, and ul(t) is the unused service due to the lack of data in the queue.

We consider the following Lyapunov function

V(t)=α(Ws(t))2+∑l∈L(Ql(t))2.

(8)
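For concreteness, equation (8) can be evaluated numerically with the following trivial helper (our own, useful when checking the drift computations by simulation):

```python
def lyapunov(ws, long_queues, alpha):
    """V(t) = alpha * Ws(t)^2 + sum over long-lived flows of Ql(t)^2,
    i.e., equation (8) with Ws(t) = ws and {Ql(t)} = long_queues."""
    return alpha * ws ** 2 + sum(q * q for q in long_queues)
```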

We will prove that the drift of the Lyapunov function satisfies

E[V(t+1)−V(t)|M(t)] ≤ Ud 1{M(t)∈Υ} − (ϵ/2)[αλ̄Ws(t) + ∑l∈L Ql(t)xl] 1{M(t)∉Υ}

for some Ud>0, λ̄>0, and a finite set Υ (the values of these parameters will be defined in the following analysis). Positive recurrence of M(t) then follows from Foster's criterion for Markov chains [12].

First, since the number of arrivals, the sizes of short-lived flows, and the channel rates are all bounded, it can be verified that there exists U, independent of M(t), such that

E[V(t+1)−V(t)|M(t)]

= E[α(Ws(t+1))² − α(Ws(t))² + ∑l∈L (Ql(t+1))² − ∑l∈L (Ql(t))² | M(t)]

≤ U + 2αWs(t) E[As(t)−μs(t)|M(t)] + 2∑l∈L Ql(t) E[Xl(t)−μl(t)|M(t)]

≤ U + 2αWs(t)(λ̄ − E[μs(t)|M(t)]) + 2∑l∈L Ql(t)(xl − E[μl(t)|M(t)]),

where the last inequality uses E[As(t)|M(t)] = λ̄ and E[Xl(t)|M(t)] = xl.

Recall that we assume that (1+ϵ)xl and (1+ϵ)λk satisfy the supportability conditions of Theorem 1. By adding and subtracting the corresponding pc,lRc,l and μc,s, we obtain

E[V(t+1)−V(t)|M(t)] − U

≤ 2αWs(t) E[ E[μc,s − μs(t) | C(t)=c] | M(t) ]

+ 2∑l∈L Ql(t) E[ E[pc,lRc,l − μl(t) | C(t)=c] | M(t) ]

− 2ϵαWs(t)λ̄ − 2ϵ∑l∈L Ql(t)xl,

where λ̄ = E[As(t)], the expected amount of workload injected by short-lived flows in one time slot.

Next we assume C(t)=c and analyze the following quantity

αWs(t)(μc,s−μs(t))+∑l∈LQl(t)(pc,lRc,l−μl(t)).

(9)

We have the following facts:

Fact 1: Assume that there exists a short-lived flow i such that Ri(t)=Rmaxi or Ri(t)≥Qi(t). If a short-lived flow is selected to be served, then the workload of the selected flow is reduced by one and μs(t)=1. If long-lived flow l is selected, the rate flow l receives is Rc,l. Thus, we have

αWs(t)μs(t) + ∑l∈L Ql(t)μl(t)

= max{αWs(t), maxl Ql(t)Rc,l}

≥ αWs(t)μc,s + ∑l∈L Ql(t)pc,lRc,l,

where the inequality holds because ∑l pc,l + μc,s ≤ 1. Therefore, we have (9) ≤ 0 in this case.

Fact 2: Assume that there does not exist a short-lived flow i such that Ri(t)=Rmaxi or Ri(t)≥Qi(t). In this case, since μc,s ≤ 1, ∑l pc,l ≤ 1, and μs(t), μl(t) ≥ 0, we have (9) ≤ αWs(t) + Rmax maxl∈L Ql(t).

We next compute the drift of the Lyapunov function according to the value of M(t).

Case I: Assume M(t)∈Υ. According to the definition of Υ, we have

E[V(t+1)−V(t)|M(t)] ≤ U + 2αUW + 2Rmax|L|UQ.

Case II: Assume Ws(t)>UW. Since the size of a short-lived flow is upper bounded by Fmax, Ws(t)>UW implies that at least UW/Fmax short-lived flows are in the network at time t. Define S(t) to be the event that no short-lived flow satisfies Ri(t)=Rmaxi or Ri(t)≥Qi(t).

Recall that

mini Pr(Ri(t)=Rmaxi) ≥ pmaxs.

Given that at least UW/Fmax short-lived flows are in the network, we have

Pr(1S(t)=1) ≤ (1−pmaxs)^{UW/Fmax} ≤ ϵ1.
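The bound above is simply the probability that all flows simultaneously miss their best state in one slot. A quick numerical check, with hypothetical parameter names of our own:

```python
def miss_prob(p_best, n_flows):
    """(1 - p_best)^n: probability that none of n independent short-lived
    flows is in its best channel state, decaying geometrically in n."""
    return (1.0 - p_best) ** n_flows
```

So for any ϵ1 > 0 the event S(t) becomes negligible once enough short-lived flows are present, which is exactly how the threshold UW can be chosen.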

According to Facts 1 and 2, (9) is positive only if S(t) occurs, in which case its value is bounded by αWs(t) + Rmax maxl∈L Ql(t). Therefore, we can conclude that in this case (Case II),

E[V(t+1)−V(t)|M(t)]

≤ U + 2ϵ1(αWs(t) + Rmax maxl∈L Ql(t)) − 2ϵαWs(t)λ̄ − 2ϵ∑l∈L Ql(t)xl

≤ U − ϵαWs(t)λ̄ − ϵ∑l∈L Ql(t)xl    (13)

≤ −(ϵ/2)[αλ̄Ws(t) + ∑l∈L Ql(t)xl],    (14)

where inequality (13) holds due to the definition of ϵ1 in (10), and inequality (14) holds due to inequality (11).

Case III: Assume that Ws(t)≤UW and Ql(t)>UQ for some l. In this case,
if a long-lived flow is selected for a given c, we have

where Ud = U + 2αUW + 2Rmax|L|UQ and Υ is a set with a finite number of elements. Since V(t) ≥ 0 for all t, the Lyapunov function is lower bounded. Further, the drift of the Lyapunov function is upper bounded when M(t) belongs to the finite set Υ and is negative otherwise. Invoking Foster's criterion, the Markov chain M(t) is positive recurrent, and the boundedness of the first moment follows from [13].

Consider the network that is operated under WSL, and define H(t) to be

H(t) ≜ {Ql(t), Rl(t), Qi(t), Ri(t), R̃maxi(t)}.

Now given H(t), we define the following notations:

Define μ2;l(t)=Rl(t) if flow l is selected by WSL, and μ2;l(t)=0 otherwise.

Define μ2;i(t)=1 if flow i is selected by WSL and the workload of flow i can be reduced by one, and μ2;i(t)=0 otherwise.

Define μ1;l(t)=Rl(t) if flow l is selected by WS, and μ1;l(t)=0 otherwise.

Define μ1;i(t)=1 if flow i is selected by WS and the workload of flow i can be reduced by one, and μ1;i(t)=0 otherwise.

We remark that μ2;j(t) is the action selected by the base station at time t under WSL and μ1;j(t) is the action selected by the base station at time t under WS, assuming the same history H(t).

We define the Lyapunov function to be

V(n)=α(Ws(nT))2+∑l∈L(Ql(nT))2.

(17)

This Lyapunov function is similar to the one used in the proof of Theorem 2, and we will show that it is a valid Lyapunov function for workload-based scheduling with learning. It is easy to verify that there exists U1, independent of M̃(n), such that

E[V(n+1)−V(n)|M̃(n)]

≤ U1 + 2α E[ Ws(nT) ∑_{t=nT}^{(n+1)T−1} (As(t) − μ2;s(t)) | M̃(n) ]

+ 2∑l∈L E[ Ql(nT) ∑_{t=nT}^{(n+1)T−1} (Xl(t) − μ2;l(t)) | M̃(n) ].

Dividing the time into two segments [nT,nT+D−1] and [nT+D,(n+1)T−1], we obtain