Sure, adding dollars and watts to the link
can buy margin, but the highest margin does not equate to the lowest cost or best
suitability for high-volume adoption (otherwise, wouldn't we be using trans-oceanic
terminal equipment for SR applications?).

Lowest cost is achieved by choosing known
low-cost technologies that meet 90+% of the application market and by ensuring the
application spec doesn't try to squeeze the technology to its limits.

That's definitely one way
it can be interpreted. There is some apples and oranges comparison going
on there, and I'm not sure how the VG got added to the 802.3 mix. ;-)

But with optics, I would
agree that the tougher choices haven't always been made, even when the writing
was on the wall. The biggest complaint with 10GbE was the number of possible
port types (thank you, WAN interface sublayer). The 10GBASE-SR PHY is
doing well in the market, though, and that's partly because it
is the only 850 nm wavelength PHY for that space. Interestingly enough,
implementations can be achieved with either linear or limiting
components. And if you put a linear part at one end and a limiting part at the
other, they will still communicate.

That's what I do like
about Ali's proposal. He has shown that it is possible to do 300 m of
MMF with a linear approach. That indicates to the task force that there is
more margin in a linear approach than in a limiting approach; therefore, having
more margin to play with, the linear approach at a 100 m MMF reach should be
able to become the lowest-cost solution for the largest volume of the MMF
market. That's a huge benefit. Rather than pushing the
limits and slowing the adoption curve, there is an implementation option which
should make 100G MMF at up to 100 m a fiscally viable option.

"Let the market
decide" was how we ended up with 100BASE-TX, instead of 100BASE-T4,
100BASE-T2, or 100BASE-VG. The 802.3 working group did a poor job of
making tough decisions and minimizing the number of options to be presented to
the industry.

What a mess.

But I think 100BASE-TX is
the most widely deployed of the various 802.3 interfaces.
There have been a few billion shipped so far.

"Let the market
decide" is a really, really bad way to write a standard. The
IEEE 802.3 working group has done a very good job of making tough decisions and
minimizing the number of options presented to the industry. To
create a reach objective that can only be satisfied by one implementation
is a poor choice, as it reduces the ability of component vendors
to compete based upon their respective implementation strategies. As the
current objective is written, the reach is achievable with both limiting and linear
TIAs, and may be achievable with lower-cost components.

They did not show a picture or say how big the server is, but based on your remarks
it is small enough to fit in a modest room.

I assume the intra-links within the Blue Gene might be proprietary or IB.
What do the clustering system's intra-links have to do
with the Ethernet network connection?

I assume some of the users in the TJW lab may still want to connect to this
server with higher-speed Ethernet, and very likely you will need
links longer than 100 m. In addition, higher-speed Ethernet may be used to
cluster several Blue Gene systems for failover,
redundancy, disaster tolerance, or higher performance, which will require links
longer than 100 m.

As you can see here, the form factors which allow you to go beyond 100 m will be
several times larger and not compatible
with the higher-density solution based on nx10G. Linear nx10G, as given
in http://www.ieee802.org/3/ba/public/jan08/ghiasi_02_0108.pdf,
can extend the reach to 300 m on OM3 fiber and relax the transmitter and jitter
budgets.

You have stated strongly that you see no need for more than 100 m, but we have
also heard from others who stated
there is a need for MMF beyond 100 m, especially if you have to
change the form factor to go beyond
100 m! Like FC and SFP+, we can define a limiting option for 100 m
and a linear option for 300 m, and
let the market decide.

Thanks,
Ali

Petar Pepeljugoski wrote:

Frank,

You
are missing my point. Even the best-case statistic, no matter how you twist it in
your favor, is based on distances from yesterday. New servers are much smaller and
require shorter interconnect distances. I wish you could come to see the room
where the current #8 on the TOP500 list of supercomputers sits (Rpeak 114 TFlops);
maybe you'll understand then.

Instead
of trying to design something that uses more power and goes unnecessarily
longer distances, we should focus our effort on designing energy-efficient,
small-footprint, cost-effective modules.

Depending on the source of link statistics, the 100 m OM3 reach
objective actually covers 70% to 90% of the links, so
100 m is not even close to 95% coverage.
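To make the coverage argument concrete, here is a minimal sketch of how a reach objective maps to a fraction of links covered. The link-length sample below is purely hypothetical, invented for illustration; the real percentages come from the survey data being debated on this reflector.

```python
# Hypothetical link-length sample in metres -- illustrative only,
# NOT data from any actual data-center survey.
lengths_m = [12, 25, 30, 45, 55, 60, 70, 80, 85, 90,
             95, 100, 110, 120, 130, 150, 180, 220, 260, 300]

def coverage(reach_m, lengths):
    """Fraction of links whose length fits within the reach objective."""
    return sum(1 for length in lengths if length <= reach_m) / len(lengths)

for reach in (100, 150, 300):
    print(f"{reach:>3} m reach covers {coverage(reach, lengths_m):.0%} of links")
```

With a different (real) length distribution the same calculation yields the 70-90% figures quoted above; the point of contention is which distribution reflects actual installations.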

Regards,
Frank

From: Petar
Pepeljugoski [mailto:petarp@US.IBM.COM]
Sent: Friday, March 14, 2008 5:09 PM
To: STDS-802-3-HSSG@listserv.ieee.org
Subject: Re: [802.3BA] Longer OM3 Reach Objective
Hello Jonathan,
While I am sympathetic with your view of the objectives, I disagree and oppose
changing the current reach objective of 100m over OM3 fiber.
From my previous standards experience, I believe that all the difficulties
arise in the last 0.5 dB or 1 dB of the power budget (and likewise of the jitter budget).
It is worthwhile to ask module vendors how much their yield would improve if
they were given back 0.5 or 1 dB. That last fraction of a dB is responsible for most
yield hits, making products much more expensive.
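The sensitivity to that last fraction of a dB can be shown with simple link-budget arithmetic. All of the numbers below are hypothetical placeholders chosen for illustration, not values from any 802.3 specification:

```python
# Illustrative optical link budget in dB -- every figure here is a
# hypothetical placeholder, not a value from any IEEE 802.3 spec.
tx_power_dbm = -1.0         # transmitter launch power (dBm)
rx_sens_dbm = -9.0          # receiver sensitivity (dBm)
fiber_loss_db_per_km = 3.5  # assumed MMF attenuation at 850 nm
connector_loss_db = 1.5     # total connector/splice loss
penalties_db = 3.0          # dispersion, jitter, and other penalties

def margin_db(reach_m):
    """Unallocated margin left after all losses and penalties."""
    budget = tx_power_dbm - rx_sens_dbm  # 8.0 dB available to spend
    loss = (fiber_loss_db_per_km * reach_m / 1000
            + connector_loss_db + penalties_db)
    return budget - loss

for reach in (100, 150):
    print(f"{reach} m: {margin_db(reach):+.2f} dB of margin")
```

Extending the reach objective eats directly into this residual margin, which in turn is what the module vendor's yield (and therefore cost) rides on.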
I believe that selecting specifications that penalize 95% of the customers to
benefit 5% is a wrong design point.
You make another point: that larger data centers have higher bandwidth needs.
While it is true that bandwidth needs increase, what you fail to mention is that
the distance needs today are shorter than for previous server generations, since
processing power today is much more densely packed than before.
I believe that 100m is more than sufficient to address our customers' needs.
Sincerely,

I
am a consultant with over 25 years' experience in data center infrastructure design and data center relocations,
including in excess of 50 data centers totaling 2 million+ sq ft. I am
currently engaged in data center projects for one of the two top credit card
processing firms and one of the two top computer manufacturers.

I'm concerned about the 100m OM3 reach objective,
as it does not cover an adequate number (>95%) of backbone
(access-to-distribution and distribution-to-core switch) channels for most of
my clients' data centers.

Based on a review of my current and past projects,
I expect that a 150m or larger reach objective would be more suitable.
It appears that some of the data presented by others to the task force, such
as Alan Flatman's Data Centre Link Survey, supports my impression.

There is a pretty strong correlation between the
size of my clients' data centers and the early adoption of new technologies
such as higher speed LAN connectivity. It also stands to reason that
larger data centers have higher bandwidth needs, particularly at the
network core.

I strongly encourage you to consider a longer OM3
reach objective than 100m.