Monthly Archives: October 2012

Overview

Last week I picked up this WatchGuard Firebox x500 for cheap to experiment with. It turned out to be a great success, so it was time to try it for “real” on better, faster, production-capable hardware. I’ve been following this thread with great interest for a while; a few guys in it have spent a lot of time getting these boxes working with pfSense. If it wasn’t for their work, this conversion would be extremely time consuming, if not impossible.

I’ve bought three Fireboxes on eBay: an x550e, an x750e and an x1250e. Even though they are all different models, and WatchGuard sells them as products with increasing price/performance at each step up, the actual hardware inside these firewalls is almost identical.

The “e” series Fireboxes are significantly deeper than the x500/x700 series, which, it turns out, is too long for the 4U bracket I bought for the uplink shelf. The x750e is 14″ deep and still requires another 2″ to accommodate the power plug. The Firebox x500 comes in at 9″ plus the plug.

I’ve started with the mid-level Core x750e as the guinea pig. A bit of irony with the sticker asking to install Firebox software. It’s never gonna happen.

It’s time to put the Force10 S50 into use. Unfortunately, this means a lot of work, as it involves taking down all the virtual and physical servers, pulling all the wiring and the original switches, rewiring everything from scratch and documenting all patch panel and port changes. I use a Visio document to keep track of each patch panel and switch port mapping, and also use it to map out all my networks at home and at the various data centers.

Pulled all the patch cords out.

Pulled the two Dell 5324s and racked up the Force10 switch.

Completely rewired the back of the servers and organized the cables so that pretty much all connections to each server are sequential. In the original setup, each switch performed a different role: one was strictly for SAN/NAS traffic, the other for LAN/DMZ/Web traffic. With a single switch now, all that will be split strictly via VLANs, which makes the wiring much, much simpler and easier to follow.

Everything connected and running again. The only thing left to configure is the LAGs connecting to the two other switches. I want to make sure the switch runs fine and performance is optimal before playing with LAG/LACP and Spanning Tree.

The whole process of swapping switches and rewiring took me almost 6 hours. Though a lot of that time was spent planning the layout and documenting the ports.

Over the years I’ve been adding more and more hardware to my home network. This led to a big increase in network complexity, especially once I started adding multiple managed switches. It got to a point where I had 4 managed switches plus 4 unmanaged switches running the house. Combined with many VLANs across switches, LAG groups and Spanning Trees, things got pretty crazy. So, I’ve decided to simplify the setup a bit: get rid of some of the switches and change the network layout to a proper star topology with one core switch at the center.
For the core switch, I needed something fast. A switch that would be able to handle all the traffic I can throw at it, but also cheap, because I’m on a budget.

So I ended up buying this Force10 S50 switch on eBay, hoping to replace the two Dell 5324s I have in the main rack and have it act as my core switch. These Force10 switches are supposedly incredibly fast; they’re known for their super low latency, which is perfect for the iSCSI setup I’m planning.

To be perfectly honest, I’d never heard of the Force10 brand, but a friend of mine who’s something of a networking guru highly recommended it. The price was definitely right, and since this switch used to cost over $6K (plus a $5K L3 Routing Option), it was definitely a high-end switch in its time.

The Force10 S50 is a 48-port, gigabit, managed Layer 2 switch / Layer 3 router. This particular model is the SA-01-GE-48T, running SFTOS, which is kind of a bummer as it cannot be upgraded to the more recent FTOS, but that shouldn’t be too big of a deal if the switch works correctly.

This switch uses an RJ45 connector for console access, which I believe is the same as most Cisco switches. It’s still a regular RS-232 serial port, just with a different plug. I’m not exactly sure what the reasoning behind this is.
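For reference, talking to a console port like this mostly comes down to serial settings rather than the connector. A sketch from a Linux box with a USB-to-serial adapter (the device path and baud rate are assumptions; the common default for switches of this class is 9600 8N1 with no flow control, and PuTTY on Windows takes the same settings):

```shell
# Assumed defaults: 9600 baud, 8 data bits, no parity, 1 stop bit,
# no flow control. Adjust /dev/ttyUSB0 to your adapter's device node.
screen /dev/ttyUSB0 9600
```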

It so happened that I had an RJ45 to DB-9 cable kicking around, still in its original packaging, since I’d never had to use it before. All my current switches use standard DB-9 plugs, but I tend not to toss cables unless I have oodles of them.

However, it turns out that whatever that cable shipped with used a different pinout, as it didn’t work with the S50: the switch didn’t show any output during its bootup sequence.
Luckily, I also had an adapter that plugs into a DB-9 socket and exposes an RJ-45 jack. Using that with a standard network patch cable worked, and I was able to communicate with the switch via PuTTY.

Powered up the switch. I was surprised by how quietly it ran, considering the number of small fans inside. Not that it matters too much, as the servers occupying the same rack will drown out any other noise.
The system booted up and ran its self-diagnosis with no issues. The BPS LED is amber because there’s no Backup Power Supply hooked up. The power draw on the switch is about 90W, which seems a bit high; I’m curious to see if that usage goes up once the switch starts moving traffic.
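For perspective, a quick back-of-envelope on what 90W of continuous draw adds up to over a year (electricity rates left out, since they vary):

```shell
# 90 W running 24/7 for a year, converted to kWh (integer math rounds down).
WATTS=90
KWH_PER_YEAR=$(( WATTS * 24 * 365 / 1000 ))
echo "${KWH_PER_YEAR} kWh/year"   # -> 788 kWh/year
```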

Installing ffmpeg under CentOS with all the plugins enabled is apparently no easy task. The only way to get it working properly is to build it from scratch. This process was compiled from lists and forums on the internet.

Add the DAG repo to your Yum repo list if it’s not already there, then run an update to make sure everything is in sync.
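The repo definition lives in a file under /etc/yum.repos.d/. A minimal sketch — the baseurl and gpgkey URLs here are illustrative, so check them against the DAG/RPMforge site for your CentOS release and architecture:

```shell
# Write a hypothetical DAG repo definition; URLs are examples and should be
# verified against the repo's current documentation before use.
cat > dag.repo <<'EOF'
[dag]
name=DAG RPM Repository
baseurl=http://apt.sw.be/redhat/el5/en/$basearch/dag
gpgkey=http://dag.wieers.com/rpm/packages/RPM-GPG-KEY.dag.txt
gpgcheck=1
enabled=1
EOF
# Then, as root, move it into place and sync:
#   mv dag.repo /etc/yum.repos.d/dag.repo && yum update
```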

Overview

I’ve been using pfSense for a few years, but almost always on a dedicated PC or a virtual machine. For a while now I’ve been toying with the idea of getting pfSense running on an actual firewall box. The advantage of running it on an actual firewall is twofold: size and power draw. Plus, it’s common hardware, which makes it easier to work with.

I picked up this WatchGuard Firebox X500 Core from Kijiji. The price was great and, best of all, the seller was about 5 minutes away from me.

As soon as I got home I wasted no time taking it apart. Removing the final screw behind the Void Warranty sticker was quite satisfying.

The guts of the firewall. Ugh. Disgustingly filthy inside; it must have been running in some crappy closet.

Some good blasts of air and it looks much better. Now to analyze the components. The WatchGuard Firebox is essentially just an x86 PC. The motherboard is built around the Intel 815 chipset.
It comes with an Intel Pentium III-based Celeron M 310 1.2GHz as its processor. There’s a possibility of upgrading this CPU to a faster processor like the SL8BA/SL8BG Pentium M 1.7GHz or the SL6N5 LV version of the Pentium M 1.7GHz. The firewall comes with six 10/100Mbit Ethernet ports, driven by on-board Realtek chips. Even though one of the ports is designated as WAN, in pfSense any port or combination of ports can be used for WAN functionality.

The Firebox also comes with 256MB of PC-133 non-ECC memory. The chipset supports up to 512MB, so I asked around, and a buddy of mine happened to have a few 512MB sticks.

Installation

It’s been a while since I’ve looked at other filers, as NexentaStor pretty much satisfied all my requirements. But some of NexentaStor’s limitations still bother me, plus there are some questions regarding their rather vague licensing model.

One of the options I looked at a while ago was OpenIndiana with Napp-It. Last time I looked at Napp-It, it was still in an early stage of development. It’s been a few years since I last tried it, so hopefully the project is a lot more mature now.

First step is to download the ISO image for OpenIndiana. This time I’m doing the install entirely through the Dell R710’s iDRAC (which I admit I had completely forgotten about).

Installation is a pretty straightforward, wizard-based process.

Installation was a bit slow over iDRAC, but it finally completed, and upon reboot I was greeted with a login screen.

Time to configure networking and SSH so I can set up the rest of the box faster. Networking can be configured either via the UI or via the console. In this case I’ll configure the first interface via the UI.
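For reference, the console route on an OpenIndiana/Solaris-style system looks roughly like this (the interface name, addresses and gateway below are made-up examples, and plain ifconfig changes don’t persist across reboots):

```shell
# Plumb the first NIC and assign a static address (example values):
ifconfig e1000g0 plumb
ifconfig e1000g0 192.168.1.50 netmask 255.255.255.0 up
# Example default gateway:
echo 192.168.1.1 > /etc/defaultrouter
# Enable the SSH service via SMF:
svcadm enable network/ssh
```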

Now that NexentaStor is fully configured, it’s time for some benchmarks. From experience, random IO was always wicked fast, but I found that sequential performance was really dependent on CPU frequency. In my previous builds, typical speeds were about 300MB/s writes and 500MB/s reads. Let’s see if this new server can beat that.

All drives are 146GB 15K SAS Drives.

During these tests, deduplication and compression are turned off so that only the disks factor into the equation.

Ugh. Without a dedicated SLOG device for the ZIL, sync write performance is going to be crap. NexentaStor is basically waiting on each drive to confirm the data is committed to disk before attempting to write the next block. Great for data integrity, terrible for performance.

Now, a little warning about disabling Sync. Unless the server is on permanent power (UPS, backup genny), disabling Sync is downright dangerous for your data. A sudden power failure all but guarantees loss or corruption of uncommitted data. In my case, with my custom-built UPS system, I have a guaranteed 4 hours of uptime, which is plenty of time to commit any writes to disk and gracefully shut down all servers.
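For context, on ZFS-based filers sync behavior is a per-dataset property, so it can be flipped from the raw shell as well as the UI. A sketch — the pool name "tank" is a placeholder, and this assumes a ZFS version recent enough to have the sync property:

```shell
# Disable synchronous write semantics on a dataset -- dangerous without a UPS:
zfs set sync=disabled tank
# Check the current value, and revert to the default when done testing:
zfs get sync tank
zfs set sync=standard tank
```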

Whoa! In this case NexentaStor uses BOTH drives in a mirror for reads, resulting in INSANE read speeds.

Update: I realized during the benchmarks that one drive was being utilized 90% while others were hovering around 60%. Replacing the drive and resilvering the array yielded absolutely unbelievable numbers.

Am I impressed? Hell yeah. One thing is for certain: the NICs are definitely going to be the bottleneck, even with 4-way Active/Active MPIO. I’m going to have to start thinking about moving to a 10Gb network next year. What good is all that speed if I’m bottlenecked at the NIC?
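To put rough numbers on that bottleneck (1 Gbit/s is 125 MB/s before protocol overhead, and MPIO at best scales linearly):

```shell
# Aggregate ceiling of four gigabit links under ideal MPIO scaling.
LINKS=4
MB_PER_LINK=125
echo "$(( LINKS * MB_PER_LINK )) MB/s theoretical ceiling"   # -> 500 MB/s
```

Anything the disks can do beyond that number simply can’t reach the clients.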

Now to go back to the Web UI to complete the configuration of the filer.

Ran into an issue configuring Jumbo Frames on the Intel PRO/1000 VT card.
For whatever reason, NexentaStor will only allow ports 3 and 4 (igb2/igb3) to be set to jumbo frames (9K). Attempting to set the igb0/igb1 ports to Jumbo Frames results in this error:
SystemCallError: failed to configure igb0 with ip 192.168.91.1 netmask 255.255.255.0 mtu 9000 broadcast + up: ifconfig: setifmtu: SIOCSLIFMTU: igb0: Invalid argument
This of course doesn’t make sense, since the Intel PRO cards are on Nexenta’s HCL. To fix it, I had to get into the guts of the OS. From the console or through SSH:

# option expert_mode="1" -s
# !bash
You are about to enter the Unix ("raw") shell and execute low-level Unix command(s). Warning: using low-level Unix commands is not recommended! Execute? Yes


Now the actual Solaris shell is enabled and all commands can be accessed. From here I need to disconnect the igb0/igb1 interfaces.
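From the raw shell, the usual approach is to drop the IP interface and then raise the link MTU at the datalink layer with dladm — a sketch, assuming nothing else is holding igb0:

```shell
# Remove the IP interface so the link is free to reconfigure:
ifconfig igb0 unplumb
# Set the 9000-byte MTU on the underlying datalink:
dladm set-linkprop -p mtu=9000 igb0
# Bring the IP interface back up:
ifconfig igb0 plumb
```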

Overview

A while back I picked up an HP DL160 G6 to replace my aging Dell PowerEdge 2950 II as my NexentaStor server. The end result was less than perfect and I was never happy with the server’s performance. I was actually seeing performance decrease compared to the old PE2950 II, and the two on-board NICs were not sufficient to feed all the servers and workstations with data. So I decided to scrap it and rebuild from scratch. With that in mind, I picked up yet another Dell PowerEdge R710 from Kijiji.

This R710 comes with two X5550 2.66GHz quad-core CPUs and 96GB of RAM. I’ll pilfer some of that RAM, as it comes with 8GB sticks, and put it to good use elsewhere, replacing the 8GB sticks with 4GB sticks that I have an abundance of, for a total of 72GB of RAM.