To transfer voice or another application requiring real-time presentation over a packet network, a de-jitter buffer is needed to eliminate delay jitter. An important design parameter is the depth of the de-jitter buffer, since it influences two important parameters controlling voice quality, namely voice-path delay and packet loss probability. In this paper we propose and study several schemes for optimally adjusting the depth of the de-jitter buffer. In addition to de-jitter-buffer depth adjustments within a call, the initial value and rates of change of the de-jitter-buffer depth are allowed to depend on the class of the call and are adaptively adjusted (upwards or downwards) for every new call based on voice-path delay and packet loss probability measurements over one or more previous calls. Parameter adjustments are geared towards either (a) minimizing voice-path delay while maintaining a packet loss probability objective, or (b) maximizing the R-Factor, an objective measure of voice quality that depends on both the voice-path delay and the packet loss probability. Using simulation models, it is shown that adaptive schemes perform better than static ones, and adaptive schemes with learning perform better than those without.
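The per-call adaptation described above can be sketched as a simple feedback rule. This is an illustrative sketch, not the paper's actual scheme: the function name, step sizes, and thresholds are all assumptions, and the real schemes also distinguish call classes and adjust within calls.

```python
def adjust_depth(depth_ms, measured_loss, target_loss,
                 step_up_ms=10, step_down_ms=5):
    """Per-call de-jitter buffer depth update (illustrative only).

    Deepen the buffer when measured late-packet loss exceeds the
    target; otherwise shrink it to reduce voice-path delay.
    """
    if measured_loss > target_loss:
        return depth_ms + step_up_ms   # trade delay for fewer late losses
    return max(step_down_ms, depth_ms - step_down_ms)

# Successive calls drive the depth toward the loss objective.
depth = 40
depth = adjust_depth(depth, measured_loss=0.03, target_loss=0.01)   # deepen
depth = adjust_depth(depth, measured_loss=0.005, target_loss=0.01)  # shrink
```

A learning variant, as the abstract suggests, would additionally tune the initial depth and step sizes per call class from the history of previous calls.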

We consider a single node which multiplexes a large number of
traffic sources. We are concerned with the amount of buffer and
bandwidth that should be allocated to a class of i.i.d. on/off
fluid flows. We impose a maximum overflow probability on the
class, and assume that the aggregate flow can be modelled using
effective bandwidth. Unlike previous approaches which assume that
the total buffer allocated to the class is either constant or
linearly proportional to the number of sources, we determine the
minimum cost allocation given a cost per unit of each resource. We
find that the optimal bandwidth allocation above the mean rate and
the optimal buffer allocation are both proportional to the square
root of the number of sources. Correspondingly, we find that the
excess cost incurred by a fixed buffer allocation or by linear
buffer allocations is proportional to the square of the percentage
difference between the assumed number of sources and the actual
number of sources and to the square root of the number of sources.

This paper proposes a new method of bandwidth allocation during congestion, called proportional allocation of bandwidth. Traditionally, max-min fairness has been proposed for allocating bandwidth under congestion. Our allocation scheme considers the situation where flows might have different subscribed information rates, based on their origin. In proportional allocation of bandwidth, during congestion all flows get a share of the available bandwidth that is in proportion to their subscribed information rate. We suggest a method for implementing this with minimal signaling and without storing per-flow state. Our method is based on the principles of differentiated services (DiffServ). We show by simulation that it is possible to obtain proportional bandwidth allocation without storing per-flow state and with minimal signaling.
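The allocation rule itself is simple to state. The sketch below shows the target shares only; it keeps per-flow state for clarity, whereas the paper's DiffServ-based implementation specifically avoids that.

```python
def proportional_shares(subscribed_rates, capacity):
    """Target allocation under congestion: each flow's share of the
    link capacity is proportional to its subscribed information rate.
    (Per-flow state is used here only for illustration.)"""
    total = sum(subscribed_rates.values())
    return {flow: capacity * rate / total
            for flow, rate in subscribed_rates.items()}

# Flow "b" subscribed at 3x the rate of "a", so it gets 3x the share.
shares = proportional_shares({"a": 1.0, "b": 3.0}, capacity=8.0)
```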

A design methodology for an optical mesh network has recently been developed. For a given backbone packet network, we assume that a traffic matrix and an underlying optical fiber network are given. It is necessary to establish logical links between backbone nodes using optical wavelengths over the fiber network. The methodology chooses and sizes logical backbone links, chooses the physical paths for these logical links, and determines traffic routings over logical links under both normal and failure conditions. The methodology utilizes a linear programming engine embedded within a heuristic framework. This work presumes an optical networking environment utilizing dense wavelength division multiplexing (DWDM) and intelligent optical switch (IOS) equipment. It can be used to study potential future networks with high levels of traffic, e.g., in which demand entering the network at any given instant may be measured in terabits per second. In this paper, we describe the optical mesh network design problem that is addressed, outline the solution methodology and discuss some computational experience.

Traffic models with a rate varying according to a Gaussian distribution are commonly used to evaluate statistical multiplexing in telecommunication systems. The superposition of a sufficiently large number of homogeneous Markovian On-Off sources asymptotically approaches an Ornstein-Uhlenbeck process (OUP), which represents a Gaussian process with an exponential autocorrelation function. We derive a simple expression for the bandwidth demand under QoS constraints which is close to numerical OUP/D/1 analysis results over the entire parameter region of relevance to applications. In comparison, results of the fluid flow method for a fixed aggregation level are used to verify the OUP/D/1 asymptotics and to estimate its accuracy depending on the number of aggregated flows. Moreover, the OUP/D/1 asymptotics provides a useful check of the accuracy of bounds and approximations proposed in the literature to improve the effective bandwidth principle. Based on analytical evaluation, the efficiency of buffers for voice traffic is finally shown to be very limited: no more than 2% of bandwidth can be saved by buffers, given real-time constraints and a predefined loss probability as QoS demands for voice.
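For orientation, the standard bufferless Gaussian dimensioning rule (mean rate plus a quantile of the rate distribution) gives the flavor of such bandwidth-demand expressions. This is the generic textbook approximation, not the paper's OUP/D/1 expression, which additionally accounts for the autocorrelation and the buffer.

```python
from statistics import NormalDist

def gaussian_bandwidth(mean_rate, std_dev, loss_target):
    """Bufferless Gaussian approximation: provision capacity so the
    aggregate (Gaussian) rate exceeds it only with probability
    loss_target. Illustrative; not the paper's exact expression."""
    k = NormalDist().inv_cdf(1.0 - loss_target)   # Gaussian quantile
    return mean_rate + k * std_dev

# Aggregate mean 10 units, std dev 0.5, target overflow probability 1e-3:
c = gaussian_bandwidth(mean_rate=10.0, std_dev=0.5, loss_target=1e-3)
```

The abstract's conclusion that buffers save at most about 2% of bandwidth for voice means the buffer-aware OUP/D/1 demand stays close to such a bufferless estimate.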

We have developed a closed-form equation that allows calculation of the optimal bandwidth based on estimates of volume and quality-of-service criteria of user traffic. The basic assumption underlying the equation is that traffic that would be dropped at a bottleneck link should not be sent to that link in the first place; instead, such traffic should be dropped as early as possible. The equation was validated with simulations using a topology and a traffic mix typical for 3G radio access networks. The quality of service of user traffic to the routers on either side of the backup link was considerably better than with standard methods. The equation uses only a few parameters based on the traffic demand matrix and is easily incorporated in a network-planning tool to calculate backup links or bandwidth requirements for logical pipes. It can also be used as a basis for complete network planning.

With the growing popularity of real-time audio/video streaming applications
on the Internet, it is important to study the traffic characteristics of such
applications and to understand their implications on network performance.
In this paper, we present a measurement study of RealMedia streaming traffic,
where the focus is both on the application-layer view
(i.e., the output of the audio/video encoder)
and on the network-layer view
(i.e., the departure process for network packets
emanating from the RealMedia server).
Our main observation is that, although RealVideo can be compressed as
Variable-Bit-Rate (VBR) at the application layer,
it is often streamed as Constant-Bit-Rate (CBR) at the network layer.
The audio and video streams have a hierarchical traffic structure:
at large time scales (minutes), the overall bit rate is constant;
at medium time scales (seconds), the packets have an on and off
pattern due to the interleaving of audio and video; at fine-grain
time scales (sub-second), back-to-back packet trains of two or
more packets are often seen.
We also note that most CBR-coded RealVideo streams are not
long-range dependent (LRD). We attribute this difference to
the CBR nature of the coder, which dynamically changes the
video frame rate to keep the Internet traffic demands near-CBR
over moderate time scales.

A number of recent studies are based on data collected from routing tables of inter-domain routers utilizing Border Gateway Protocol (BGP) and tools, such as traceroute, to probe end-to-end paths. The goal is to infer Internet topological properties. However, as more data is collected, it becomes obvious that data intended to represent the same properties, if gathered at different points within the network, can depict significantly different characteristics. While systematic data collection from a number of network vantage points can reduce certain ambiguities, thus far, no methods have been reported for fully resolving these issues. The goal of our study was to quantify the effect these anomalies have on key Internet structural attributes. We report on our analysis of over 290,000 measurements from globally distributed sites. We contrast results obtained from router-level measurements with those obtained from BGP routing tables, and offer insights as to why certain inferred properties differ. We demonstrate that the effect on some attributes, such as the average path length and the AS degree distribution can be minimized through careful data collection techniques. We also illustrate how using this same data to model other attributes, such as the actual forwarding path between a pair of nodes, or the level of AS path asymmetry, can produce substantially misleading results.

The classic architecture of a Web-based information system (WBIS) consists of an HTML browser on the client side and Web and database servers on the server side. Information is concentrated and updated in a database, and a user of the WBIS receives it upon request. Depending on the WBIS's aim, different demands are made on its components. For high-speed communication systems, not only communication-channel throughput is important, but also the performance of the communication nodes and of the software components selected for system construction. For a real-time WBIS that monitors a relatively fast industrial, environmental, or man-made process, a permanent data flow from a number of data sources is typical. The system must process, visualize, and selectively save data. This kind of WBIS differs from e-commerce WBISs in its demands on the type of data, the data amount per time unit, and the type and time of data processing and visualization. That is why e-commerce-oriented benchmarks like TPC-W cannot simulate the behavior of a monitoring WBIS. As a result, a benchmark for Win32 software components of a monitoring WBIS that simulates different modes of its work was developed. The components are Java applets, CGI scripts, COM/DCOM objects, and Microsoft IIS 4 and SQL Server 7.0. The requirements for the engineering level of the Novosibirsk Hydroelectric Power Station automated maintenance and control system were used as a basis. In the benchmark, a system must monitor 1,000 16-bit parameters with data renewal at 1 Hz. The results obtained allow optimizing the classic WBIS architecture and the methods of interaction of its software components to achieve better system performance in monitoring tasks.

Recent progress in network tomography has provided principles and methodologies for inferring network-internal (local) characteristics from end-to-end measurements alone, and this work should now be followed by deployment in practical use. In this paper, we propose two types of user-oriented tools, based on a network tomography method, for inferring one-way packet losses on paths from/to a user host (a client) to/from a specified target host (a server or router), without any measurement on the target.
One is a stand-alone tool running on the client, and
the other is a client-server style tool running on both the client
and proxy measurement server(s) distributed in the Internet.
Both of them can infer one-way packet loss rates not only on a
path between the client and an application server, but on
a path segment (a portion of the path)
between the client and any router residing in the path, and thus
can find the congested area along the path.
We have developed prototypes of the tools
and have evaluated them in experiments in the Internet environment,
which showed that
the tools could infer one-way packet loss rates
to within 1% error under various network conditions.
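The core tomography idea behind such tools can be shown in miniature: under the usual assumption that losses on consecutive path segments are independent, the loss rate of a far segment follows from two end-to-end measurements. This is only the underlying principle; the paper's tools and one-way measurement method are more elaborate.

```python
def segment_loss(loss_full_path, loss_prefix):
    """Infer the loss rate of the remaining path segment from two
    measured one-way loss rates, assuming independent segment losses:
        (1 - p_full) = (1 - p_prefix) * (1 - p_segment)
    Illustrative sketch of the tomography principle."""
    return 1.0 - (1.0 - loss_full_path) / (1.0 - loss_prefix)

# 5% loss client->server, 2% loss client->intermediate router:
# the far segment accounts for roughly 3% loss.
p = segment_loss(0.05, 0.02)
```

Repeating this for each router along the path localizes the congested area, as the abstract describes.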

In order to provide good Quality of Service (QoS) in a Differentiated Services (DiffServ) network, a dynamic admission control scheme is needed as an alternative to overprovisioning. In this paper, we present a simple measurement-based admission control (MBAC) mechanism for DiffServ-based access networks. Instead of using active measurements only or doing purely static bookkeeping with parameter-based admission control (PBAC), the admission control decisions are based on bandwidth reservations and periodically measured, exponentially averaged link loads. If any link load on the path between two endpoints is over the applicable threshold, access is denied. Link loads are periodically sent to the Bandwidth Broker (BB) of the routing domain, which makes the admission control decisions. The information needed to calculate the link loads is retrieved from router statistics. The proposed admission control mechanism is verified through simulations. Our results show that it is possible to achieve very high bottleneck link utilization levels and still maintain good QoS.
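The decision rule described in the abstract reduces to two small pieces: an exponentially weighted load average per link, and a per-path threshold test at the Bandwidth Broker. The sketch below assumes hypothetical parameter values (averaging weight, threshold); the paper determines suitable values by simulation.

```python
def ewma(prev, sample, alpha=0.3):
    """Exponentially averaged link load (alpha is an assumed weight)."""
    return (1 - alpha) * prev + alpha * sample

def admit(path_loads, new_reservation, capacity, threshold=0.9):
    """BB-style decision: deny if any link on the path would exceed
    the load threshold after adding the new reservation (sketch)."""
    return all((load + new_reservation) / capacity <= threshold
               for load in path_loads)

load = ewma(prev=60.0, sample=80.0)                  # smoothed measurement
ok = admit([load, 40.0], new_reservation=10.0, capacity=100.0)
```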

Current world-wide web servers as well as proxy servers rely for their scheduling
on services provided by the underlying operating system. In practice, this means
that some form of first-come-first-served (FCFS) scheduling is utilised. Although
FCFS is a reasonable scheduling strategy for job sequences that do not show much
variance; in the world-wide web (WWW), however, it has been shown that the typical
object sizes requested do exhibit heavy tails. This means that the probability
of observing very long jobs (very large objects) is much higher than typically
predicted using an exponential model. Under these circumstances, job scheduling
on the basis of shortest-job-first (SJF) has been shown to perform much better,
in fact, to minimise the total average waiting time, simply by avoiding situations
in which short jobs have to wait for very long ones. However, SJF has the disadvantage
that long jobs might suffer from starvation.
In order to avoid the problems of both FCFS and SJF we present in this paper a
new scheduling algorithm called class-based interleaving weighted fair queueing
(CI-WFQ). This algorithm uses the specific characteristics of the job stream being
served, that is, the distribution of the sizes of the objects being requested, to
set its parameters such that good mean response times are obtained and starvation does
not occur. In the paper, the new scheduling approach is introduced and compared, using
trace-driven simulations, with existing scheduling approaches.

In the Internet, TCP (Transmission Control Protocol) has been used as an
end-to-end congestion control mechanism. Among the several TCP implementations,
TCP Reno is the most popular. TCP Reno uses a loss-based
approach, since it estimates the severity of congestion by detecting packet
losses in the network. In contrast, another implementation called TCP Vegas
uses a delay-based approach. The main advantage of a delay-based approach is
that, if it is properly designed, packet losses can be prevented by anticipating
impending congestion from increasing packet delays. However, TCP Vegas was
designed using not a theoretical approach but an ad hoc one. In this paper, we
therefore design a delay-based congestion control mechanism by utilizing the
classical control theory. Our rate-based congestion control mechanism
dynamically adjusts the packet transmission rate to stabilize the round-trip
time for utilizing the network resources and also for preventing packet losses
in the network. Presenting several simulation results in two network
configurations, we quantitatively show the robustness and the effectiveness of
our delay-based congestion control mechanism.
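A rate-based controller that stabilizes the round-trip time can be sketched as a single proportional-control step. This is an illustrative stand-in: the paper derives its controller and gains analytically from classical control theory, whereas the gain and target below are arbitrary assumptions.

```python
def update_rate(rate, measured_rtt, target_rtt, gain=50.0, min_rate=1.0):
    """One step of a proportional controller adjusting the sending
    rate (packets/s) to keep RTT near a target (sketch; the paper's
    controller and gain are derived, not assumed)."""
    error = target_rtt - measured_rtt   # positive: network underutilized
    return max(min_rate, rate + gain * error)

rate = 100.0
rate = update_rate(rate, measured_rtt=0.12, target_rtt=0.10)  # RTT high: back off
```

Keeping the RTT near a fixed target keeps queues short, which is how a delay-based scheme can prevent losses rather than react to them.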

AQM (Active Queue Management) mechanisms, which perform congestion control
at routers to assist the end-to-end congestion control mechanism of TCP,
have been actively studied by many researchers.
For instance, RED (Random Early Detection) is a representative AQM
mechanism, which drops arriving packets with a probability
proportional to the average queue length. The RED router has four
control parameters, and its effectiveness depends heavily on the
choice of these parameters. This is why many studies on tuning
the RED control parameters have been performed.
However, most of those studies have investigated the effect of the RED
control parameters on performance from a small number of
simulation results. In this paper, we therefore statistically
analyze a large number of simulation results using multivariate
analysis. We quantitatively show the relation between the RED control
parameters and performance.
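The standard RED drop function makes the role of the control parameters concrete: below min_th no packet is dropped, above max_th every packet is dropped, and in between the drop probability rises linearly up to max_p. (The fourth parameter, the averaging weight, governs the EWMA update of the average queue length and is not shown.)

```python
def red_drop_probability(avg_queue, min_th, max_th, max_p):
    """RED's drop probability as a function of the (exponentially
    averaged) queue length, using its standard thresholds."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

# Halfway between the thresholds, the drop probability is max_p / 2.
p = red_drop_probability(avg_queue=20.0, min_th=10.0, max_th=30.0, max_p=0.1)
```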

Sending IP packets over multiple parallel links is in extensive use in today's Internet and its use is growing due to its scalability, reliability and cost-effectiveness. To maximize the efficiency of parallel links, load balancing is necessary among the links, but it may cause the problem of packet reordering. Since packet reordering impairs TCP performance, it is important to reduce the amount of reordering. Hashing offers a simple solution to keep the packet order by sending a flow over a unique link, but static hashing does not guarantee an even distribution of the traffic amount among the links, which could lead to packet loss under heavy load. Dynamic hashing offers some degree of load balancing but suffers from load fluctuations and excessive packet reordering. To overcome these shortcomings, we have enhanced the dynamic hashing algorithm to utilize the flow volume information in order to reassign only the appropriate flows. This new method, called dynamic hashing with flow volume (DHFV), eliminates unnecessary flow reassignments of small flows and achieves load balancing very quickly without load fluctuation by accurately predicting the amount of transferred load between the links. In this paper we provide the general framework of DHFV and address the challenges in implementing DHFV. We then introduce two algorithms of DHFV with different flow selection strategies and show their performances through simulation.
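The key idea of DHFV, reassigning only the flows whose volume matches the load imbalance, can be sketched as a selection rule. This is a simplified illustration under assumed names; the paper's algorithms also predict the transferred load and use specific flow selection strategies.

```python
def pick_flow_to_move(flows_on_link, excess_load):
    """DHFV-style selection (sketch): move the single flow whose
    measured volume best matches the load excess, instead of
    reshuffling many small flows and causing extra reordering."""
    return min(flows_on_link,
               key=lambda f: abs(flows_on_link[f] - excess_load))

# The overloaded link carries three flows; 40 units must move away.
# The 38-unit flow is the best single candidate.
flow = pick_flow_to_move({"f1": 5.0, "f2": 38.0, "f3": 90.0},
                         excess_load=40.0)
```

Moving one well-sized flow rather than many small ones is what limits both load fluctuation and packet reordering.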

We study error handling methods and their integration with the MAC layer as applied to a broadband powerline communications (PLC) access network. Because of the expected unfavorable noise influence in PLC systems, various error handling mechanisms have to be applied in different network layers. On the other hand, PLC networks have to provide various telecommunication services and to ensure sufficient QoS, which is important for competition with other communication technologies used in the access area. Therefore, we propose the application of efficient reservation MAC protocols ensuring realization of QoS guarantees and providing good utilization of a shared medium with limited transmission capacity, as in PLC networks. We consider several error protection methods against disturbances for data transmission and implement them within an extended two-step reservation MAC protocol with active polling. Simulation results show that application of a fast re-signaling procedure reduces signaling delay in disturbed networks. On the other hand, application of ARQ mechanisms significantly improves network utilization and data throughput and reduces packet transmission delays. However, the combination of ARQ mechanisms and a per-packet reservation MAC protocol causes so-called transmission gaps (unused portions of the network capacity). Therefore, we implement an ARQ-plus mechanism which avoids the transmission gaps and further improves network performance.

We consider a queueing system consisting of multiple identical servers and a
common queue. The service time follows an exponential distribution and the
arrival process is governed by a semi-Markov process (SMP). The motivation
for studying a queueing system with SMP arrivals is that it can model the
auto-correlated traffic generated on high-speed networks by real-time
communication, for example, MPEG-encoded VBR video. Our analysis is
based on the theory of piecewise Markov process. We first derive the
distributions of the queue size and the waiting time. When the sojourn time of the SMP follows an exponential distribution, all the unknown constants contained in the generating function of the queue size can be determined through the zeros of
the denominator for this generating function. Based on the result of the
analysis, we propose a model to evaluate the waiting time of MPEG video
traffic on an ATM network with multiple channels. Here, the SMP corresponds to the exact MPEG sequence of frames. Finally, a numerical example using real video data is shown.

There is a brewing controversy in the traffic modeling community concerning
how to model backbone traffic. The fundamental work on self-similarity in
data traffic appears to be contradicted by recent findings that suggest that
backbone traffic is smooth.
The traffic analysis work to date has focused on high-quality but
limited-scope packet trace measurements; this limits its applicability to
high-speed backbone traffic. This paper uses more than one year's worth of
SNMP traffic data covering an entire Tier 1 ISP backbone to address the
question of how backbone network traffic should be modeled. Although the
limitations of SNMP measurements do not permit us to comment on the fine
timescale behavior of the traffic, careful analysis of the data suggests
that irrespective of the variation at fine timescales, we can construct
a simple traffic model that captures key features of the observed traffic.
Furthermore, the model's parameters are measurable using existing network
infrastructure, making this model practical in a present-day operational
network.
In addition to its practicality, the model verifies basic statistical
multiplexing results, and thus sheds deep insight into how smooth
backbone traffic really is.

Modern wireless networks are evolving to provide data and multimedia services in addition to the basic voice service. In this environment, most traffic sources are bursty in nature (e.g., Internet traffic). That is, they alternate between a random-length Talkspurt (On) period and a random-length Silence (Off) period several times during a random session time. In a wireless cell, the proper dimensioning of network resources may depend on the handover statistics, as well as on the bandwidth needed for each service. This bandwidth requirement is a function of the statistical behavior of the On-Off period lengths.
In this paper, we consider different probability distributions for the session time such as exponential, k-Erlang, and Pareto; whereas the Talkspurt and Silence periods are exponentially distributed. Thus, we compare the probability distribution and statistical moments for counting the number of cell crossings (handovers) and the random number of On-Off pairs in a session.

The performance of direct-sequence code-division
multiple access (DS-CDMA) systems in indoor wireless channels
is presented. TCP traffic, channel noise and signal
characteristics are measured using commercial products
that implement the IEEE 802.11b DS-CDMA standard.
The measured throughputs for a
low density mobile network are typically around $4.2$ Mbps
when one of the two end-systems is on the wired segment.
Multipath interference is shown to contribute to significant
degradation in TCP performance. Packet loss rates ranging from $0.02$ to
$0.2$ percent are observed in line-of-sight conditions.
The BER performance of DS-CDMA is estimated using an image source
based channel model of the rectangular room to compute the spatial
variation of the channel impulse response. These impulse responses
provide a quantitative measure of the system performance and
provide estimates of signal power levels required for
supporting high data rates in channels influenced by multipath
interference.

In this work, we present our experience with the design and
implementation of a traffic smoothing algorithm in our mobile streaming
system for broadband wireless networks. A number of smoothing
algorithms have been proposed in public forums and in the
literature (discussed in detail shortly). Each of these algorithms
claims optimality under its own metric, e.g., variance of rate
changes, number of rate changes, client buffer utilization, etc. The
ultimate objective of smoothing is to
provide a better-quality streaming service with a minimum amount of
resources by reducing the burstiness of the traffic. Packet loss and jitter are the two widely used metrics for QoS. The metrics mentioned above, e.g., client buffer utilization, number of
rate changes, and variance of the transmission rate, have no direct
relationship with QoS, but are used because they are assumed to
affect packet loss and jitter behavior in some way. Indeed, none of these works addresses how their smoothing algorithms improve packet loss and jitter behavior in an actual system. In this work, we present results obtained from actual experiments. We measure the effect of smoothing on packet loss behavior at the mobile terminal on a broadband wireless network under various system settings. This
activity requires a comprehensive streaming system, including a streaming server and a streaming client.

There is an emerging interest in integrating mobile wireless communication
with the Internet based on IPv6 technology. Many issues introduced by
the mobility of users arise when such an integration is attempted. This paper addresses the problem of mobility management, i.e., that of tracking the current IP addresses of mobile terminals and sustaining active IP connections as mobiles move. The paper presents some architectural and mobility management options for integrating wireless access to the Internet. We then present performance results for Mobile IPv4, route optimization and Mobile IPv6.

MPLS-based recovery is intended to effect rapid and complete restoration
of traffic affected by a fault in an MPLS network. Two MPLS-based recovery
models have been proposed: IP re-routing which establishes recovery paths
on demand, and protection switching which works with pre-established
recovery paths. IP re-routing is robust and frugal since no resources are
pre-committed but is inherently slower than protection switching which
is intended to offer high reliability to premium services where fault
recovery takes place at the 100 ms time scale. We present a model of
protection switching in MPLS networks. A variant of the flow deviation
method is used to find and capacitate a set of optimal label switched
paths. The traffic is routed over a set of working LSPs. Global repair
is implemented by reserving a set of pre-established recovery LSPs.
An analytic model is used to evaluate the MPLS-based recovery mechanisms
in response to bi-directional link failures. A simulation model is
used to evaluate the MPLS recovery cycle in terms of the time needed to
restore the traffic after a uni-directional link failure. The models
are applied to evaluate the effectiveness of protection switching in
networks consisting of between 20 and 100 nodes.

An analytical model is developed in this paper for evaluating the performance of bandwidth allocation algorithms, with or without preemption, when used for DiffServ-aware MPLS traffic engineering. It is shown that a major difference between the various algorithms lies in their capability to provide greater bandwidth sharing versus robust service protection/isolation.

QoS-aware applications have propelled the development of two complementary technologies, multicasting and Differentiated Services. To provide the required QoS on the Internet, either the bandwidth needs to be increased (multicasting) or the limited bandwidth must be prioritized among users (DiffServ). Although the bandwidth on the Internet is continually increasing, the backbone is still insufficient to support QoS without resource allocation. Hence, there is a need to map multicasting into a DiffServ environment to conserve network bandwidth and to provision this bandwidth appropriately. In this regard, two issues have to be addressed. First, the key difference between multicast and DiffServ routing is the structure of the multicast tree: this tree is maintained in multicast-aware routers, whereas in DiffServ the core routers maintain no state information regarding the flows. Second, the multicast tree must be restructured when members join or leave. Currently, the first issue is addressed by embedding the multicast information within the packet itself as an additional header field. In this paper, we propose a neural-network-based heuristic approach to address the second problem of routing in a dynamic DiffServ multicast environment.
Many dynamic multicast routing algorithms have been proposed. The greedy algorithm creates a near-optimal tree when a node is added but requires many query/reply messages. The PSPT algorithm cannot construct a cost-optimal tree. The VTDM algorithm requires an estimate of the number of nodes that will join and is not flexible. The problem of building an optimal tree that satisfies QoS requirements at minimum cost and with minimum network resources is NP-complete, and none of the above solutions gives an optimal solution.
We have modeled this combinatorial optimization as a nonlinear programming problem and trained an artificial neural network to solve it. The problem is tractable only when the QoS parameters are combined into DiffServ classes, because the flows are short-lived.

Modern communication networks carry several grades of data, voice and video sessions typically using single-service or multi-service platforms employing IP, ATM or MPLS protocol mechanisms. It is well established that in many instances the session duration may have a heavy-tailed distribution [1-2]. We explore the impact of such distributions on the response time performance of user sessions. We concentrate mainly on a single output link (potentially a bottleneck on the data path) of a multi-service platform. First-come-first-served and processor sharing type scheduling mechanisms are considered (weighted fair queueing and weighted round robin are implementable approximations to generalized processor sharing). The output link is modeled as a single-server (no limit on individual session rate) or multiple servers (rate limit on individual sessions either inherently as for CBR applications or for congestion avoidance as in a cable access network). Also, the impacts of bandwidth differences between input and output links are considered. It is observed that in some cases, heavy-tailed session durations have significant impacts but those impacts may be effectively neutralized using appropriate scheduling or rate control mechanisms.

Recently researchers have proposed active queue management (AQM) mechanisms as a means of better managing congestion at the bottlenecks
inside the network. The Random Early Detection (RED) mechanism has been
proposed to control the average queue size at congested routers. It has been shown that the interaction between a RED gateway and TCP connections can lead to period-doubling bifurcation and chaos. In this paper we extend this model and study the interaction of the RED gateway with TCP and UDP connections, using a discrete-time model.
First, we show that the presence of UDP traffic does much more than simply take away available capacity from the TCP connections: in fact, it fundamentally changes the dynamics of the system. Second, with the help of bifurcation diagrams, we demonstrate the existence of nonlinear phenomena, such as oscillations and chaos, as the parameters of the RED mechanism are varied. Further, the presence of UDP traffic tends to stabilize the system, in the sense that bifurcations and chaos are delayed in the parameter
region. We investigate the impact of various system parameters on the stability of the system, present numerical results, and validate our
analysis through ns-2 simulation.

The Transmission Control Protocol (TCP) provides flow control functions based on the window mechanism. Packet losses are detected by various mechanisms, such as timeouts and duplicate acknowledgements, and are then recovered using different techniques. A problem that arises with the use of window-based mechanisms is that the availability of a large number of credits at the source may cause the source to flood the network with back-to-back packets, which may drive the network into congestion, especially if multiple sources become active at the same time. In this paper we propose a new approach to congestion reduction. The approach works by shaping the traffic at the TCP source, such that the basic TCP flow control mechanism is preserved, but packet transmissions are spaced in time to prevent a sudden surge of traffic from overflowing the routers' buffers. Simulation results show that this technique can yield improved network performance in terms of reduced mean delay, delay variance, and packet dropping ratio.
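The source-side shaping described here is essentially pacing: spreading a window's packets evenly over one round-trip time instead of sending them back-to-back. The sketch below shows only that spacing computation; names and the exact mechanism are illustrative, not the paper's implementation.

```python
def pacing_interval(cwnd_packets, rtt_s):
    """Inter-packet gap that spreads one congestion window evenly
    over one RTT (illustrative pacing rule)."""
    return rtt_s / cwnd_packets

def send_times(start, cwnd_packets, rtt_s):
    """Scheduled transmission times for one paced window."""
    gap = pacing_interval(cwnd_packets, rtt_s)
    return [start + i * gap for i in range(cwnd_packets)]

# A 10-packet window paced over a 100 ms RTT: one packet every 10 ms,
# instead of a 10-packet burst that could overflow a router buffer.
times = send_times(start=0.0, cwnd_packets=10, rtt_s=0.1)
```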

Mobile Networks have been expanding and IMT-2000 further increases their available bandwidth over wireless links. However, TCP, which is a reliable end-to-end transport protocol, is tuned to perform well in wired networks where bit error rates are very low and packet loss occurs mostly because of congestion. Although a TCP sender can execute flow control to utilize as much available bandwidth as possible in wired networks, it cannot work well in wireless networks characterized by high bit error rates. In the next generation mobile systems,
sophisticated error recovery technologies of FEC and ARQ are indeed
employed over wireless links, i.e., over Layer 2, to avoid performance
degradation of upper layers. However, multiple retransmissions by Layer
2 ARQ can adversely increase the transmission delay of TCP segments, which further makes TCP unnecessarily increase its RTO (Retransmission TimeOut). Furthermore, the link bandwidth assigned to TCP flows can change in response to changing air-link conditions, so that wireless links are used efficiently. TCP thus has to adapt its transmission rate to the changing available bandwidth. The major goal of this study is to develop an effective receiver-based TCP flow control without any modification to TCP senders, which are typically connected to wired networks. To this end, we propose a TCP flow control employing Layer 2 information on the wireless link at the mobile station. Our performance evaluation of the proposed TCP shows that the receiver-based flow control can mitigate the performance degradation very well even when the FER on Layer 2 is high.
