Preparing for Power Over Ethernet Plus

The original 802.3af PoE
standard offered a fairly
straightforward way to
supply loads with 13 W
or so of usable power delivered
at 48 V dc. But IEEE 802.3at
PoE Plus, which ups usable
power to something over 50 W,
introduces some wrinkles that
designers and even IT managers
must understand.

One catch is that designers
can still supply power in a limited
fashion in some existing
Ethernet installations via a midspan
bridge. But in that case,
designers can’t implement
power negotiations between a
powered device (PD) and power
source equipment (PSE). This
implies dedicated PoE Plus ports
and relatively high duty-cycle
power supplies in midspans.

Something else to watch out for
is PDs that dynamically negotiate
power requirements with the
PSE via their Ethernet connection.
This requires more code in the PD
microcontroller and a greater
understanding of dynamic power
requirements on the part of the
engineer writing that code.

A potential pitfall for end users
is that PDs can meet the standard
by operating in a fall-back
mode if there’s not enough
power for full functionality. (For
example, a video phone could
fall back to operating voice-only,
without a video display.)
Alternatively, a PD application
could meet the standard simply
by signaling “insufficient power.”
IT managers who bought a lot of
“compliant” video phones could
find themselves embarrassed by
a system that didn’t work as
expected if a “compliant” switch
didn’t possess a sufficiently
robust power supply.

HISTORY LESSON
To get comfortable with PoE
Plus, it helps to understand its
genesis and subsequent evolution.
In the beginning, Cisco had
a proprietary approach for powering
Voice over Internet Protocol
(VoIP) business phones that
involved powering some pairs in
the router with 48 V.

The rest of the industry saw that
this was good and wished for an
open standard, which became
IEEE 802.3af. To be conservative,
the IEEE subcommittee limited
power to 15 W at the PSE, which
was enough for the non-video
VoIP phones that then dominated
the market. They also expanded
Cisco’s idea by allowing the
“spare pairs” in an Ethernet cable
to be powered by a midspan,
making it possible to retrofit PoE
to legacy Ethernet plant.

When PoE hit the streets, many
potential vendors saw its advantages
and jumped on the bandwagon.
VoIP phones would no longer need power plugs, making
them more like old-fashioned
private-branch-exchange (PBX)
phones. Wireless hotspots could
be located anywhere someone
could pull a CAT5 cable.

Supermarket shelves would twinkle
with up-to-date price tags that
would always match the prices in
the cash register. And, PoE musical
instruments, mixers, and
recording equipment would displace
the MIDI bus and revolutionize
the music business.

Obviously, some of these goals
were more realistic than others.
In the three years since basic PoE
was released, three killer applications
have taken hold: VoIP
phones, Wi-Fi hotspots, and security
cameras. Within those applications,
though, there immediately
appeared a need for power
beyond 13 W.

For example, there’s an anticipated
demand for video conferencing
using VoIP phones, and
backlighting a video screen takes
power. Simple short-range Wi-Fi
is happy with 13 W, but WiMAX
takes more power. And while
fixed security cameras don’t
require much power, once motors
are added for panning, tilting,
and zooming, power does
become an issue.

But the manufacturers of
Ethernet switches, concerned
about over-specifying power
supplies, pointed out that video
phones and pan/zoom/tilt cameras
don’t need full power all of
the time. Most of the time, the
phone is just sitting there. Even
when there’s a call, video isn’t
always necessary. Unless it’s a
formal conference, most people
would prefer to remain invisible
to the other party. Similarly,
those high-end security cameras
only move when a guard touches
a joystick. In other words, the
requirement for higher power
changes continuously.

The dynamic-power issue transformed
the questions facing the
IEEE 802.3at task force from simply
“How much current can a
bundle of CAT5 cables and their
associated RJ45 connectors safely
handle?” to “How can we create
a protocol that allows PDs to
dynamically negotiate for power
with a PSE?”


BACK TO BASICS
Before we get into that, let’s
take a look at basic PoE as
described in IEEE 802.3af,
DTE Power via MDI, which was
formally approved on June 12,
2003 (see the figure).
PoE uses either the data pairs or
the spare pairs in an Ethernet
cable to carry 48 V dc from the
PSE in an endpoint switch or
midspan hub to the PD appliance
at the other end of the
cable. Data pairs are powered
via center tap, while spare pairs
are simply paralleled. The sense
of the dc voltage doesn’t matter,
thanks to a diode bridge ahead
of the PD controller chip.

To take advantage of Power
over Ethernet, PSEs must be able to detect the presence of a PD on any port. And, PD appliances
must be able to assert their PoE
compatibility and may assert
their maximum power requirements.
Under the 802.3af standard,
PSEs may not apply power
to the Ethernet cable unless
there’s a PoE-enabled PD on the
other end. PoE PDs are identified
by the presence of a 25-kΩ resistor
across their input.

The PSE measures resistance by
applying two voltages (separated
by 1 V and a 20-ms interval) and
using the resulting currents to
determine the resistance value.
This part of the handshake is
called the “discovery” phase.
Next, there’s an optional “classification”
phase. In the 15-W maximum
world of basic PoE, classification
allows the PSE to decide
whether it has enough capacity
to supply the PD. If it doesn’t, it
can refuse to power up the
Ethernet pair.
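As a rough sketch, the discovery arithmetic amounts to taking the slope of two voltage/current probe points. The function names and the acceptance window below are illustrative assumptions for the sketch, not values lifted from the standard:

```python
# Illustrative sketch of the 802.3af discovery measurement. The PSE
# applies two probe voltages about 1 V apart, reads the resulting
# currents, and takes the slope dV/dI as the signature resistance.
# The acceptance window here is an assumed range around 25 kOhm.

SIGNATURE_WINDOW_OHMS = (19_000.0, 26_500.0)  # assumed, not from the spec

def signature_resistance(v1, i1, v2, i2):
    """Resistance inferred from two (voltage, current) probe points."""
    return (v2 - v1) / (i2 - i1)

def pd_detected(v1, i1, v2, i2):
    """True if the measured signature looks like a valid 25-kOhm PD."""
    lo, hi = SIGNATURE_WINDOW_OHMS
    return lo <= signature_resistance(v1, i1, v2, i2) <= hi

# A true 25-kOhm signature: a 1-V step moves the current by 40 uA.
print(pd_detected(4.0, 160e-6, 5.0, 200e-6))  # -> True
```

Using the slope of two measurements, rather than a single V/I reading, cancels out any fixed offset (such as a diode drop) in the PD's front end.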

During the classification phase,
the PSE briefly asserts a 15.5- to
20-V pulse on the pair, and the
PD can opt to signal the PSE by
placing a load on the line. Doing
nothing, not putting a load
online, automatically identifies
the PD as Class 0, and the PSE
would expect it to limit current to
400 mA. Class 1 PDs must self-limit
to 120 mA, Class 2 to 210
mA, and Class 3 to 310 mA. A
Class 4 was reserved for future
use. Timing relationships were
500 ms (max) for detection, 10
to 75 ms for classification, and
400 ms for power turn-on.
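The class-to-current mapping is simple enough to write down directly. The sketch below uses the per-class limits quoted above; the function and table names are mine, not taken from the standard:

```python
# Sketch of the basic-PoE classification outcome: the current, in mA,
# that the PSE expects a PD of each class to self-limit to.

CLASS_CURRENT_LIMIT_MA = {
    0: 400,  # PD placed no load during the 15.5- to 20-V pulse
    1: 120,
    2: 210,
    3: 310,
    # Class 4 was reserved for future use under basic PoE
}

def expected_current_limit_ma(pd_class):
    """Current the PSE expects the PD to self-limit to, in mA."""
    if pd_class not in CLASS_CURRENT_LIMIT_MA:
        raise ValueError(f"class {pd_class} reserved or invalid under 802.3af")
    return CLASS_CURRENT_LIMIT_MA[pd_class]

print(expected_current_limit_ma(0))  # -> 400
```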

DATA PAIRS AND SPARE PAIRS
Originally, PoE was intended
for standard Ethernet cable,
which has four twisted pairs. But
only two of those pairs carry
data. Under basic PoE, powering
is an either/or situation—only
one set of pairs may be used at a time.
This enables the seamless use of
new endpoint routers with built-in
PoE or power via midspan
bridges in legacy systems.
Midspans would only power the
spare pairs, while endpoint
equipment could power either
pair. (In practice, all endpoints
power the data pair.)

That arrangement eliminates the
possibility of midspan PoE for
legacy plant in which the spare
pairs are left unconnected.
However, the expectation was
that most PoE would be endpoint-powered
in the long term. So,
endpoints should be able to handle
any kind of infrastructure,
including legacy sites with unconnected
spare pairs.

When using the spare pairs in
basic PoE, pins 4 and 5 are paralleled
for one side of the dc supply
and pins 7 and 8 are paralleled
for the other side. When
using the data pairs, the PSE
applies dc power to the center
tap of each isolation transformer
so that pins 3 and 6 supply one
side of the dc and pins 1 and 2
supply the other. At the PD, data-pair
power is recovered via center
taps on each transformer.
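The pin assignments above can be written out as a small lookup table. The key names ("side_a"/"side_b") are deliberately polarity-neutral labels of my own, since the diode bridge in the PD makes the dc sense irrelevant:

```python
# Basic-PoE RJ45 pin assignments, as described in the text.

RJ45_POWER_PINS = {
    # spare pairs: pins simply paralleled for each side of the supply
    "spare_pairs": {"side_a": (4, 5), "side_b": (7, 8)},
    # data pairs: dc injected/recovered via transformer center taps
    "data_pairs":  {"side_a": (3, 6), "side_b": (1, 2)},
}

print(RJ45_POWER_PINS["data_pairs"]["side_a"])  # -> (3, 6)
```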

MORE POWER
With that background, it’s possible
to understand PoE Plus. Part
of the IEEE 802.3at Task Force’s
job was to decide whether the
additional power would be delivered
by simply increasing the
maximum current rating or by
paralleling the spare pairs with
the data pairs. The more challenging
part involved expanding
the classification scheme for PDs
to dynamically negotiate with the
PSE for more or less power.

Resolving the first issue was relatively
simple, once it was agreed
that CAT5 cable and RJ45 connectors
could handle more current
and that it was acceptable to use
both sets of twisted pairs. One
could have it both ways: more
current and four active pairs. That
decision impacted midspan makers,
though. If there’s no continuity
through the spare pairs, they can
only supply half as much power
as an endpoint switch.

That leads to a situation in
which PoE Plus-compliant
midspans can be configured with
a mix of basic PoE, PoE Plus, and
non-PoE ports—without the versatility
inherent in a full PoE Plus
endpoint. This isn’t a big disappointment
to midspan makers,
however. There was never going
to be a way to deliver as much
power through two pairs as was
delivered through four. All they
ever wanted was to deliver more
than 13 W on dedicated ports at
prices below those for new endpoint
switches or routers.

MAX POWER
So, what’s the real maximum
power that can be supplied by
PoE Plus? “The IEEE 802.3at current
has been established at 360
mA per conductor, 720-mA delivered
current by the TIA TR-42
working group. This current is
good for up to a 45°C environment
and must be de-rated to 0
mA at 60°C,” says Clay Stanford
of Linear Technology.

“There are concerns about the
45°C and the de-rating. Because
of this, it would be my opinion
that the 720-mA current limit isn’t
set in stone. It might be reduced
to something like 500 mA so that
a higher ambient temperature
could be allowed,” he notes.

With respect to voltage,
Stanford says, “The IEEE 802.3at
committee has tentatively established
the PSE output voltage as
50 to 57 V. The committee has
established the total round-trip
100-meter cable resistance to be
12.5 Ω max.” With this voltage
and resistance, the power is:
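From those quoted figures, the worst case works out by plain Ohm's-law bookkeeping. The sketch below is my own back-of-the-envelope check, not a number taken from the draft standard:

```python
# Worst-case PoE Plus power budget from the figures Stanford quotes:
# 50-V minimum PSE output, 720-mA delivered current, and 12.5-ohm
# round-trip resistance for 100 m of cable.

V_PSE_MIN = 50.0     # V, minimum PSE output voltage
I_DELIVERED = 0.720  # A, total delivered current
R_LOOP = 12.5        # ohms, worst-case round-trip cable resistance

v_drop = I_DELIVERED * R_LOOP              # 9.0 V lost in the cable
p_cable = I_DELIVERED ** 2 * R_LOOP        # ~6.5 W heating the cable
p_pd = (V_PSE_MIN - v_drop) * I_DELIVERED  # power reaching the PD

print(f"{p_pd:.2f} W at the PD")  # -> 29.52 W at the PD
```

Note how much of the budget the cable itself consumes: at 720 mA, nearly a fifth of the 36 W leaving the PSE is dissipated as heat in the cable run, which is exactly why the temperature de-rating Stanford mentions matters.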