A little, with an S25N. I screwed up that config pretty nicely at first, but once I got it right it's been rock solid and I haven't had any real issues with it. I don't do much with it though: about 10 filled ports with jumbo frames, no VLANs, LACP, or STP.

You need a good handle on how to configure a high-end switch. There is no GUI, and it comes with nothing activated at all (ports aren't enabled, never mind configured for actual switching...). If you come from cheaper switches, you'll have a tough time just getting it configured to pass traffic properly. If you have Cisco/HP ProCurve/Dell PowerConnect experience, you may be OK, or you may have alternatives you can more easily integrate into your environment.
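For a sense of what "nothing activated" means in practice, bringing a single port up on an FTOS box looks roughly like this (a sketch with Cisco-like syntax assumed; exact commands, prompts, and interface names vary by model and FTOS release):

```
FTOS# configure
FTOS(conf)# interface GigabitEthernet 0/1
FTOS(conf-if-gi-0/1)# switchport        ! put the port in layer-2 mode
FTOS(conf-if-gi-0/1)# no shutdown       ! ports ship administratively down
FTOS(conf-if-gi-0/1)# end
FTOS# write memory                      ! save, or the config is lost on reboot
```

Nothing happens until you do all of that per port, which is exactly the learning curve being described.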

The real question is: what do you need? How many ports? Gigabit or 10 Gigabit? Stacking? The more requirements you give, the more suggestions those more experienced than me can make.

Which previous gear was OEM from Brocade? The E series? The C series? I don't think so. There was probably an S series that was, but most of their stuff runs FTOS, not a Brocade OS.

I'm terrified of what Dell will do to Force10: support will probably start sucking, the products will be cost-"optimized" (which means cheapened), and R&D may drop into the toilet. I know Force10 is on the IEEE 802.3ba 40G/100G committee and has 40G and 100G blades in the works, but they may never see the light of day.

You need a good handle on how to configure a high-end switch. There is no GUI, and it comes with nothing activated at all (ports aren't enabled, never mind configured for actual switching...).

All kidding aside, no GUI and shut ports are high-end features? I haven't touched a switch GUI in a while, outside of an emergency where I forgot one stinkin' command and said "screw it"... and I shut down ports that aren't explicitly in use (if they aren't shut already).

Give me a stripped-down switch that only does what I tell it to do and nothing else.

I come from SMB experience, where we don't spend more than $500 a switch for 48 gigabit ports. It was a bit of a change and a learning curve for me, one that I enjoyed. I'm not sure what experience level the OP is at, but I know Dell sales are pushing these with their EqualLogics, and I wasn't sure if the OP was in the same situation as I was.

The 10G stuff in the big chassis was, for a long time, the densest and best value per port in 10G gear. It is still solid.

Elaborate, please. When I was buying my new core switch they were waaaaay overpriced compared to everyone else (Juniper, Brocade, HP, etc.).

What did you get for a core? Was it just one core switch?

How many ports of 10G? The E1200 was the only way of getting 300+ ports of line-rate 10G for a long time, and it is probably still the cheapest. It was cheap in comparison to anybody in that class: last time I went through this it was probably only 2,000 bucks a port without optics, probably 3K with. If you need that density at that speed, there wasn't much competition (Cisco had only oversubscribed stuff in that segment, same with Juniper; Brocade didn't have the density, and HP doesn't really have line-rate 10G in that class).
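Those per-port ballparks fall out of simple division. A quick sketch (the chassis price and port count below are assumptions for illustration, not quoted figures):

```python
# Rough per-port cost for a dense line-rate 10G chassis.
# All dollar amounts here are hypothetical, chosen only to
# reproduce the "~$2K bare / ~$3K with optics" ballpark above.

def cost_per_port(chassis_price, ports, optic_price=0.0):
    """Total cost per usable port, optionally including one optic per port."""
    return (chassis_price + ports * optic_price) / ports

# Assumed: a $600K loaded chassis with 300 line-rate ports,
# and ~$1K per optic.
bare = cost_per_port(600_000, 300)          # -> 2000.0
loaded = cost_per_port(600_000, 300, 1000)  # -> 3000.0
print(bare, loaded)
```

The point of the exercise: at that density the chassis premium amortizes down to a per-port figure competitive with much smaller boxes.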

The E series is also enterprise class: hitless upgrades and modular, hot-swappable parts.

If you are talking about S series stuff, many switches that are better have come out since those were released: Juniper, Cisco X series, etc.

On the C series: it was always in the middle, not quite the value of the E or the S, but still a good way to get line-rate 10G. It is similar to a 6500 (or sometimes a 4500) in price, but has line-rate 10G in denser increments than 2 or 4 line-rate ports per card.

I've dealt with Cisco here and there and am not afraid of the command line. We've been deploying 62xx PowerConnects for iSCSI and I'm not very happy with the results. Force10 looks like the next step up; I didn't know if anyone had any experience with them or not.

Keep in mind that the previous gear was OEM from Brocade... Force10 is a new beast that Dell bought.

My mistake, I finally got this. He's not saying the Force10 is OEM; he's saying the Dell stuff from before was OEM Brocade. That is true. If you are used to the older Dell gear, you will experience change.

For the E and C series, Force10 did the designs and engineering and had somebody else fab them. They did it long enough ago that Cisco copied the architecture (separate supervisor and fabric modules) for the Nexus, etc.

Force10 support was already in a mess before Dell bought them: the company had run out of VC money and most of the good talent had left. Dell got them on their deathbed.

My bigger concern is whether Dell can attract that kind of talent back to support the current Force10 line and plan an eventual, credible refresh.

We have a 2-node vSphere 5 cluster (Dell R910s) hooked to an EqualLogic PS6100XV through a 2-switch PC6248 stack. We're getting really high latency hits anytime the I/O ramps up even a little; a Windows update on a VM is enough to trigger it.

It was late last year, around the takeover, for a single C300 with ~100 gigabit ports and 16-20 10-gigabit ports. The price was something like 30-40% higher than a Brocade SX1600 or an HP 8212zl, both of which gave me a GUI and free management software. F10 didn't want to budge much; I smelled a pigheaded reaction, as if it were some spectacular, exceptional product. It is not: it's a fast, no-nonsense but very stripped-down switch without even a basic GUI, and for management software they were charging $5-10k, which I found preposterous in 2012. These people were really out of touch. Eventually I went with the ProCurve, mostly because it offered 10GBASE-T modules. It's been working fine since.

Lol. I have two Intel I350s sitting on my desk, ready to go into one of the nodes tomorrow. I'll report back with my results.

Yes, Broadcom is junk; don't even get me started. TL;DR: I run a PS6510E with a couple of R815s, each with two Broadcom 57711s (dual-port 10Gb), and 2x PowerConnect 8024F, and I could write a book about Broadcom's buggy firmware, their updaters that randomly wipe out all your settings without any warning or reason, a completely broken iSOE implementation with no response to inquiries whatsoever, etc.

Stick to Intel; they just work. Though, interestingly, their overall failure rate (when we had to RMA) exceeds Broadcom's.

The promo material on the Extreme BlackDiamond X8 is intriguing. Generally speaking, 10GBASE-T ports are available at less than $400 now, but SFP+ ports for fiber are still considerably more expensive.

Yes, but 320 ports of line rate in the same switch is something none of those have. At this point the price per port on those E chassis is probably in the $600-$800 range.

I'm not sure it makes sense to cram them all into one switch unless you are really, really space-constrained...

Performance is better if they are in the same switch, since you don't have to traverse uplinks.
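The uplink point can be made concrete: when ports are split across switches, cross-switch flows share the inter-switch links, and the oversubscription ratio is just edge bandwidth divided by uplink bandwidth. A small sketch with hypothetical numbers:

```python
# Oversubscription when 10G ports are spread across switches instead of
# sharing one chassis fabric. Port counts below are hypothetical.

def oversubscription(edge_ports, port_gbps, uplink_ports, uplink_gbps):
    """Ratio of edge bandwidth to uplink bandwidth; 1.0 means line rate."""
    return (edge_ports * port_gbps) / (uplink_ports * uplink_gbps)

# 48 x 10G edge ports fed by 4 x 10G uplinks between two switches:
ratio = oversubscription(48, 10, 4, 10)
print(ratio)  # 12.0 -> worst case, cross-switch flows contend 12:1
```

Inside a single line-rate chassis that ratio stays at 1.0 for any port pair, which is the whole argument for the big boxes.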

Also if you are aggregating a large number of 10G ports, what else would you do?

Stack, man! Stack!

(Shameless plug: the ICX series we just released has some pretty awesome stack cabling performance.)

In seriousness, though: line-rate 10G for anything past 8 ports gets problematic. You need either a purpose-built monolithic box (read: more $$) or some serious iron to support it. (We have a solution there as well... the MLXe 32.)

We buy 8024s for under $150 a port now; it's amazing how far they have come down. Decent switch for lots of 10-gig ports.

I've never heard anyone recommend PowerConnects on any basis except cost, and I've heard of them having issues like poor spanning-tree implementations in the past. Can someone elaborate on the assessment "decent switch"?