RED vs DropTail using NS2

Hi,
I want to show that RED outperforms DropTail using NS2. I have also
followed the "Random Early Detection Gateways for Congestion Avoidance"
(1993) paper. To show that RED solves the global synchronization problem
and the bias against bursty traffic,
I have tried many Tcl scripts with different scenarios, but got no useful
results at all.
In my case, packet loss is higher with RED than with DropTail.
Someone please help.
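(For reference, the dropping decision from the 1993 paper can be sketched in a few lines of Python; the parameter values below (min_th, max_th, max_p, w_q) are illustrative assumptions, not the paper's recommended settings. Note that higher raw loss under RED is not by itself a failure: RED drops early by design, trading some loss for a shorter average queue.)

```python
import random

# Sketch of the RED dropping decision from Floyd & Jacobson (1993).
# Parameter values are illustrative assumptions, not recommended settings.

class RedQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.02, w_q=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.w_q = max_p, w_q
        self.avg = 0.0      # EWMA of the instantaneous queue length
        self.count = 0      # packets enqueued since the last drop

    def should_drop(self, current_queue_len):
        # Update the exponentially weighted moving average.
        self.avg = (1 - self.w_q) * self.avg + self.w_q * current_queue_len
        if self.avg < self.min_th:
            self.count = 0
            return False
        if self.avg >= self.max_th:
            self.count = 0
            return True
        # Drop probability grows linearly between the two thresholds ...
        p_b = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        # ... and is spread out so drops are roughly uniform in time.
        p_a = p_b / max(1e-9, 1 - self.count * p_b)
        self.count += 1
        if random.random() < min(1.0, p_a):
            self.count = 0
            return True
        return False
```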
_______________________________________________
end2end-interest mailing list
end2end-interest <at> postel.org
http://mailman.postel.org/mailman/listinfo/end2end-interest
Contact list-owner <at> postel.org for assistance.

updated E2E list advisory board

Joe Touch <touch <at> ISI.EDU>
2014-10-22 01:47:03 GMT

Hi, all,
Henning Schulzrinne and Scott Brim's terms as members of the
E2E-Interest email list Advisory Board have completed. Although we had
no significant email issues during that time, I would like to thank them
for offering their service.
I'd also like to welcome Fred Baker and Wes Eddy as new members to the
advisory board, hoping that their service is similarly uneventful.
Thanks to all,
Joe (as list admin)

Deflating excessive buffers

Martin Heusse <Martin.Heusse <at> imag.fr>
2014-09-22 19:17:44 GMT

Dear E2E list,
In case it matches your curiosity, I wanted to point to the work we just presented at ITC'26 and perhaps gather
your comments (T. Braud, M. Heusse, A. Duda: "TCP over Large Buffers: When Adding Traffic Improves
Latency"). (BTW, I don't see many people talking about their own work here, so I'm not sure it's the custom to do
this... But since my posts to this list have sometimes been sarcastic, I also thought it would be an opportunity
to contribute in a different way!)
There have been many exchanges on this list about the impact of excessively large buffers at the head of
the bottleneck link, which increase queueing delay and hurt link utilization, often in the downlink
direction, whereas the congestion is more often in the uplink direction (bufferbloat, combined with
upload/download interference).
We showed that (assuming the uplink is congested):
1- Sending a small stream of tiny packets (low bitrate, but significant packet rate) into the uplink buffer
makes them occupy the unnecessary (so to speak) buffer slots and reduces the apparent buffer size, which in turn
reduces queueing delay. The gain may be quite significant (halving the response time, for instance). Do you
know many examples where sending more packets speeds things up?
2- Actually, an intense load in the downlink direction has a similar effect: many ACKs enter the uplink
buffer at times, which is enough to make it overflow and calm down uploads. This effect may explain why a
case of "bufferbloat" may not always be as bad as it could be.
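To make point 1 concrete, here is a back-of-the-envelope Python sketch (the link rate, buffer limit, and packet sizes are made-up values, not those of the paper): a buffer limited in packets that is partly occupied by tiny packets holds far fewer bytes, so its worst-case queueing delay drops.

```python
# Back-of-the-envelope: "deflating" a packet-limited uplink buffer.
# All numbers are illustrative assumptions, not values from the paper.

LINK_RATE = 1e6 / 8          # 1 Mbit/s uplink, in bytes per second
BUF_PKTS = 100               # buffer limit, counted in packets
MSS = 1500                   # full-size data packet, bytes
TINY = 64                    # tiny "deflating" packet, bytes

def worst_case_delay(tiny_slots):
    """Max queueing delay when `tiny_slots` buffer slots hold tiny packets."""
    queued_bytes = (BUF_PKTS - tiny_slots) * MSS + tiny_slots * TINY
    return queued_bytes / LINK_RATE

print(f"no tiny packets : {worst_case_delay(0):.2f} s")   # full MSS-sized queue
print(f"50 tiny slots   : {worst_case_delay(50):.2f} s")  # roughly halved delay
```

With these (assumed) numbers, filling half the slots with tiny packets roughly halves the worst-case delay, matching the order of magnitude of the gain mentioned above.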
Incidentally, the paper pits popular variants of TCP against each other in various setups.
Best regards,
Martin

Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: [ih] When was Go Back N adopted by TCP

Joe Touch <touch <at> ISI.EDU>
2014-08-20 21:31:55 GMT

Forwarded for David Reed.
Joe (as list admin)
-------- Original Message --------
Subject: Re: [e2e] Once again buffer bloat and CC. Re: A Cute Story.
Or: How to talk completely at cross purposes. Re: [ih] When was Go Back
N adopted by TCP
Date: Wed, 20 Aug 2014 16:03:28 -0400 (EDT)
From: dpreed <at> reed.com
To: Detlef Bosau <detlef.bosau <at> web.de>, Kathleen Nichols
<nichols <at> pollere.com>
CC: end2end-interest <at> postel.org, "Joe Touch" <touch <at> isi.edu>
[Joe Touch - please pass this on to the e2e list if it is OK with you]
I'd like to amplify Detlef's reference to my position and approach to
end-to-end congestion management, which may or may not be the same
approach he would argue for:
To avoid/minimize end-to-end queueing delay in a shared internetwork, we
need to change the idea that we need to create substantial queues in
order to measure the queue length we want to reduce. This is possible,
because of a simple observation: you can detect and measure the
probability that two flows sharing a link will delay each other before
they actually do... call this "pre-congestion avoidance".
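One purely illustrative way to put a number on "delay each other before they actually do" (my own toy model, not Reed's actual proposal): assume two independent Poisson packet streams share a link, and ask how likely it is that a packet of one flow arrives while a packet of the other is still being transmitted.

```python
import math

# Toy model (an illustrative assumption, not the proposal in the post):
# two independent Poisson flows share a link; a flow-2 packet arriving
# during a flow-1 transmission would be delayed by it.

def interference_probability(rate2_pps, service_time_s):
    """P(at least one flow-2 arrival during one flow-1 transmission)."""
    return 1 - math.exp(-rate2_pps * service_time_s)

# 1500-byte packets on a 10 Mbit/s link take 1.2 ms to transmit.
service = 1500 * 8 / 10e6
print(interference_probability(100.0, service))  # flow 2 at 100 pkt/s
```

The point of such a measure is that it is nonzero, and measurable, well before any substantial queue builds up.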
Rather than leave that as an exercise for the reader (it's only a Knuth
[20] problem at most, but past suggestions have not been followed up,

Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: [ih] When was Go Back N adopted by TCP

Detlef Bosau <detlef.bosau <at> web.de>
2014-08-20 21:09:55 GMT

Am 20.08.2014 um 22:03 schrieb dpreed <at> reed.com:
>
> [Joe Touch - please pass this on to the e2e list if it is OK with you]
>
>
>
> I'd like to amplify Detlef's reference to my position and approach to
> end-to-end congestion management, which may or may not be the same
> approach he would argue for:
>
What I have in mind is different in some respects; however, the goals are
quite compatible.
>
>
>
> To avoid/minimize end-to-end queueing delay in a shared internetwork,
> we need to change the idea that we need to create substantial queues
> in order to measure the queue length we want to reduce.
>
That's what I talked about when I argued that we measure the wrong
parameters.
Particularly, when you refer to Raj Jain: Jain measures (in his
mathematical model) a queueing system's power in order to find the
workload that allows the system to work with optimum performance.
What we actually measure is: was the workload too large for the system
or not?
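For what it's worth, Jain's "power" metric (throughput divided by response time) can be sketched with the textbook M/M/1 case, where power peaks at half the service rate; the M/M/1 choice and the service rate below are my illustrative assumptions:

```python
# Jain's "power" = throughput / response time, sketched for an M/M/1
# queue. The service rate MU is an arbitrary illustrative value.

MU = 10.0  # service rate, packets per second

def power(lam):
    """Throughput lam over M/M/1 response time 1/(MU - lam), i.e. lam*(MU - lam)."""
    return lam * (MU - lam)

# Scan arrival rates below MU; power peaks at the "knee", lam = MU / 2.
best_power, best_lam = max((power(l / 100.0), l / 100.0) for l in range(1, 1000))
print(best_lam, best_power)
```

This is the sense in which Jain asks "which workload is optimal" rather than "was the workload too large".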

HotNets 2014: the Thirteenth ACM Workshop on Hot Topics in Networks
October 27-28, 2014 -- Los Angeles, California, USA
http://conferences.sigcomm.org/hotnets/2014/
Call for Papers
The 13th ACM Workshop on Hot Topics in Networks (HotNets 2014) will
bring together researchers in computer networks and systems to engage
in a lively debate on the theory and practice of networking. HotNets
provides a venue for debating future research agendas in networking and
for presenting innovative ideas that have the potential to
significantly influence the community.
We invite researchers and practitioners to submit short position
papers. In particular we are interested in papers that foster
discussions that can shape research agendas for the networking
community as a whole. Thus, we strongly encourage papers that identify
fundamental open questions, or offer a constructive critique of the
state of networking research.
We also encourage submissions of early-stage work describing enticing
but unproven ideas. Submissions can, for example, advocate a new
approach, re-frame or debunk existing work, report unexpected early
results from a deployment, or propose new evaluation methodologies.
Novel ideas need not necessarily be supported by full evaluation;
well-reasoned arguments or preliminary evaluations can be used to
support their feasibility. Once fully developed and evaluated, we

A Cute Story. Or: How to talk completely at cross purposes. Re: [ih] When was Go Back N adopted by TCP

Detlef Bosau <detlef.bosau <at> web.de>
2014-06-03 12:43:51 GMT

I presume that I'm allowed to forward some mail by DPR here to the list
(if not, DPR may kill me...), however the original mail was sent to the
Internet History list and therefore actually intended to reach the public.
A quick summary at the beginning: yes, TCP doesn't maintain a
retransmission queue with copies of sent packets; instead it keeps a
queue of unacknowledged data and basically does Go-Back-N (GBN). This
seems to be in contrast to RFC 793, but that's life.
A much more important insight into the history of TCP is the "workload
discussion" as conducted by Raj Jain and Van Jacobson.
Unfortunately, the two talk completely at cross purposes and pursue
completely different goals...
Having read the congavoid paper, I noticed that VJ refers to Jain's CUTE
algorithm in the context of how a flow shall reach equilibrium.
Unfortunately, this doesn't really make sense, because slow start and
CUTE pursue different goals:
- Van Jacobson asks how a flow should reach equilibrium,
- Raj Jain assumes a flow to be in equilibrium and asks which workload
makes the flow work with an optimum performance.
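To make the first of those two questions concrete, slow start's path to an operating point can be sketched in a few lines (window and threshold values are illustrative, and this says nothing about whether the resulting operating point is the optimal one Jain asks about):

```python
# Sketch of slow start reaching an operating point (VJ's question),
# as opposed to asking which operating point is optimal (Jain's).
# The threshold and horizon values are illustrative assumptions.

def slow_start_trace(ssthresh=32, rtts=10):
    cwnd, trace = 1, []
    for _ in range(rtts):
        trace.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2          # slow start: exponential growth
        else:
            cwnd += 1          # congestion avoidance: linear probing
    return trace

print(slow_start_trace())  # [1, 2, 4, 8, 16, 32, 33, 34, 35, 36]
```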
We often mix up "stationary" and "stable". To my understanding, for a
queueing system "being stable" means "being stationary", i.e.
the queueing system is positively recurrent, i.e., roughly, in human
speech: None of the queue lengths will stay beyond all limits for all
times but there is a probability > 0 for a queue to reach a finite

IEEE LANMAN Call for Participation

Eric Rozner <erozner1 <at> gmail.com>
2014-04-28 19:08:59 GMT

CALL FOR PARTICIPATION
IEEE Workshop on Local and Metropolitan Area Networks (LANMAN)
http://www.ieee-lanman.org/
DEADLINE APPROACHING
Early Registration: April 28, 2014
CONFERENCE DATES
May 21-23, 2014
Reno, NV USA
DESCRIPTION
IEEE LANMAN has an established tradition as a forum for presenting and
discussing the latest technical advances in local and metropolitan area
networking. Continuing that tradition, IEEE LANMAN 2014 invites
cutting-edge papers spanning both theory and experimentation. Papers are
solicited in all areas of networking, but in keeping with the current
research trend, this workshop’s central theme is data center networking.
The intimate single-track session format of the workshop encourages
stimulating exchanges between researchers. The workshop is expected to be a
forum for discussion of new and interdisciplinary ideas on architectures,
service models, pricing, and performance. Speculative and potentially
transformative ideas are particularly encouraged, as are studies reporting
measurements from real-life networks and testbeds. Papers are solicited on
any LANMAN topic including, but not limited to, the following:
PROGRAM
http://www.ieee-lanman.org/#program
KEYNOTES

A not-end-to-end question

Bob Braden <braden <at> ISI.EDU>
2014-03-27 22:32:36 GMT

Friends,
I am pondering a question that is sort of anti-end-to-end. But since I
set up this list in the first instance, I figure I have the right to
abuse it.
There is a community of electrical power engineers who are reworking the
power transmission system, starting by instrumenting it with measurement
devices called Phasor Measurement Units, or PMUs. A PMU samples the
electrical state at a particular point ("bus") O(100) times a second,
encapsulates the sample in a frame of ~100 bytes, and sends it (in
general) towards one or more control centers. Each frame carries an
absolute timestamp, currently using GPS clocks at each PMU. The frames
are passed downstream to a data sink, an application program usually
running in a control center computer.
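For scale, a quick back-of-the-envelope in Python (frame size and sample rate are taken from the figures above; the PMU count is a hypothetical assumption):

```python
# Back-of-the-envelope load of PMU streaming. Frame size and rate come
# from the post (~100-byte frames at O(100) samples/s); the deployment
# size N_PMUS is a hypothetical assumption.

FRAME_BYTES = 100
SAMPLES_PER_SEC = 100
N_PMUS = 1000            # hypothetical deployment size

per_pmu_bps = FRAME_BYTES * 8 * SAMPLES_PER_SEC
print(per_pmu_bps)                    # bits per second, per PMU
print(per_pmu_bps * N_PMUS / 1e6)     # aggregate, Mbit/s
```

Per PMU this is a modest, constant-rate 80 kbit/s stream, which is exactly the kind of smooth, long-lived flow reservation schemes were designed for.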
This PMU data transmission problem requires high availability and
controlled latency. Just throwing away packets as we commonly do in the
Internet does not work here.
There are several proposals, e.g. MPLS, to solve this problem. However, I
have been pondering the question: isn't this a nearly perfect
application for Integrated Services and RSVP? Didn't we solve this
problem more than 15 years ago?
Is there any difference in principle between streaming audio/video data
and streaming PMU data?
The major argument against Intserv and RSVP has always been with scaling
up to Internet sizes. However, the network delivering PMU data will not
suffer from a scaling problem. The population of PMUs is expected to