So, yeah, another build. Another server, to be precise. Why? Well, as nice a
system as ZEUS is, it does have two major shortcomings for its use as a server.

When I originally conceived ZEUS, I did not plan on using ZFS (since it was not
yet production-ready on Linux at that point). The plan was to use ZEUS' HDDs as
single disks, backing up the important stuff. In case of a disk failure, the
loss of non-backed-up data would have been acceptable, since it's mostly media
files. As long as there's an index of what was on the disk, that data could
easily be reacquired.

But right before ZEUS was done, I found out that ZFS was production-ready on
Linux, having kept a bit of an eye on it since fall 2012 when I dabbled in
FreeBSD and ZFS for the first time. Using FreeBSD on the server was not an
option though since I was nowhere near proficient enough with it to use it for
something that important, so it had to be Linux (that's why I didn't originally
plan on ZFS).

So, I deployed ZFS on ZEUS, and it's been working very nicely so far. However,
that brought with it two major drawbacks: Firstly, I was now missing 5 TB of
space, since I had been tempted by ZFS to use those for redundancy, even for our
media files. Secondly, and more importantly, ZEUS is not an ECC-memory-capable
system. The reason this might be a problem is that when ZFS verifies the data on
the disks, a corrupted bit in your RAM could cause a discrepancy between the
data in memory and the data on disk, in which case ZFS would "correct" the data
on your disk, thereby corrupting it. This is not exactly optimal IMO. How
severe the consequences would be in practice is an ongoing debate in various
ZFS threads I've read: optimists estimate it would merely corrupt the file(s)
affected by the corrupt bit(s), pessimists are afraid it might corrupt your
entire pool.
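To make the failure mode concrete, here's a toy sketch in plain Python (nothing to do with ZFS's actual implementation, and the file contents are made up) of how a single bit flipped in RAM makes perfectly intact on-disk data fail its checksum:

```python
import hashlib

def checksum(data: bytes) -> str:
    # Stand-in for ZFS's block checksum (ZFS uses fletcher/sha256 variants).
    return hashlib.sha256(data).hexdigest()

# Good data and its checksum, as originally written to disk.
on_disk = bytearray(b"important media file contents")
stored_sum = checksum(bytes(on_disk))

# The data is read into RAM for verification; a bit flips in memory.
in_ram = bytearray(on_disk)
in_ram[0] ^= 0x01  # single bit flip in the RAM copy only

# Verification now sees a mismatch, even though the disk copy is fine.
assert checksum(bytes(on_disk)) == stored_sum  # on-disk data is still good
assert checksum(bytes(in_ram)) != stored_sum   # RAM copy looks "corrupt"

# Without ECC catching the flip, a naive "repair" could write the bad
# RAM copy back over the good on-disk data.
```

This is exactly why ECC memory is considered near-mandatory for ZFS: the checksums protect everything except the RAM the verification itself runs in.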

The main focus of this machine will be:

room to install more disks over time

ECC-RAM capable

not ridiculously expensive

low-maintenance, high reliability and availability (within reason, it's still
a home and small business server)

Hardware

The component choices as they stand now:

M/B: Supermicro X8DT3-LN4F

RAM: 12 GB ECC DDR3-1333 (Hynix)

CPUs: 2 x Intel L5630 Quad Cores, 40 W TDP each

Cooling: 2 x Noctua NH-U9DX 1366 (yes, air cooling!)

Cooling: A few nice double-ball-bearing San Ace server fans will also
be making an appearance.

Case: InWin PP689 (will be modded to fit more HDDs than in stock config)

Other: TBD

Modding

Instead of some uber-expensive W/C setup, the main part of actually building
this rig will be modifying the PP689 to fit as many HDDs as is halfway
reasonable, as neatly as possible. I have not yet decided if there will be
painting and/or sleeving and/or a window. A window is unlikely, the rest depends
mostly on how much time I'll have in the next few weeks (this is not a long-term
project, aim is to have it done way before HELIOS).

Also, since costs for this build should not spiral out of control, I will be
trying to reuse as many of the scrap and spare parts I have lying around as possible.

Teaser

More pics will follow as parts arrive and the build progresses, for now a shot of the
case:

anonymity protecting long legged ostriches. i expect nothing but the best.
i also love the combination of cheap, xeon and ecc ram.

Haha, yeah it took me a while to find something for which APOLLO could stand.

The L5630's were 60 USD apiece plus 20 USD shipping, less than a tenth of my X5680's
for HELIOS. The M/B was 200 USD plus 50 USD shipping plus 50 USD VAT, still
pretty cheap considering it once cost nearly 600 USD. Come to think of it, the Noctua
CPU coolers were actually more expensive than the CPUs themselves...

I looked around quite a bit until I found the right balance. There's no need for high
performance equipment, so originally I thought I'd go with a very low-end LGA1155
single socket Xeon, but the M/B's for that platform which have some halfway decent
features are actually still pretty expensive, and the CPUs themselves are nowhere
near as cheap as their 1366 counterparts (you can get some L5639 hexacores
for ~80 USD a pop on eBay, which would actually be pretty neat if you get them onto
an SR-2 and get a decent overclock, so if I ever burn out my X5680's...).

The one downside of LGA1366 server M/B's is that most of their integrated SAS
controllers do not support HDDs larger than 2 TB, but that can be worked around
by either having more HDDs of smaller sizes or buying a fairly cheap, slightly newer
host bus adapter card as an add-on.

I must however say I quite like the fact that there is much better vendor support for
Linux if you buy server equipment.

EDIT:
Besides the more pragmatic reasons: Dual socket systems are just cool IMO.

Also, I'm quite interested in your opinion on ZFS. I've been looking at converting my Windows file server to a FreeNAS implementation using 6 x 3 TB disks in parity; just wondering if I'd see a performance increase.

M/B, CPUs and memory have all arrived. The CPUs and M/B seem to be working OK.
One of the memory modules seems to be having a bit of trouble being recognized,
the other five work fine. I'll see if it's really defective or if it's just the
IT gods screwing with me a bit.

The Noctua NH-U9DX 1366

The Noctua NH-U9DX 1366 is a cooler from Noctua's series specifically made for
Xeon sockets. For those who don't know, LGA1366 sockets have an integrated
backplate, just like LGA2011, which makes them much more convenient than their
desktop counterparts. It's quite a nice and sturdy backplate, too, in fact it's
among the most solid backplates I've come across yet. This does, however,
require a slightly different mounting system. You just have four screws which
you bolt directly into the plate.

Aside from that, the cooler is identical to its desktop counterpart as far as I
know. Why the 92 mm version? For one thing, it was in stock, unlike the 120 mm
version of this cooler. Also, the CPUs are rated at only 40 W TDP each, so
there really is no need for high-end cooling. And as a bonus, I got supplied some
awesome San Ace fans with my case, which also happen to be 92 mm.

The Noctua fans which come with the cooler are just 3 pin fans (the newer models
of this cooler for LGA2011 come with a PWM fan I think), but the San Ace fans I
got with my case are actually PWM controlled! Since the M/B has a full set of
PWM headers (8, to be exact; how awesome is that!?) I will try the San Ace
fans and see how they behave at lower rpm (they run at 4,800 rpm at full
speed). This does not need to be a super-silent machine since it will be in its
own room, and since I really like the San Ace fans with regard to build quality
(and I'm a total sucker for build quality) I'd love to use them for this. The
Noctuas would admittedly be better suited, but I'll see how things go with the
SA's first.

The Box

Unlike its shiny desktop counterparts, the NH-U9DX comes in a nice and subtle
(but sturdy) cardboard box with a simple sticker on it. I must admit I like this
box more than the shiny ones.


Contents

How it looks packaged...


... and out in the open.


Noctua Pr0n

A few glory shots of the cooler itself...


The San Ace 9G0912P1G09

There is no info about this fan on the web; I presume it's something San Ace
makes specifically for InWin in an OEM deal.

I've hooked it up to a fan controller and got a max reading of 4,800 rpm, and
the Supermicro board turns them down to ~2,200 rpm at idle. They seem to be very
good fans; you can only really hear the sound of the air moving, no bearing or
motor noises so far. Also, they are heavy (~200 g per piece), which is always
nice for a build quality fetishist such as myself.

Note: Hooking such a fan up to a desktop board as its power source would not be
advisable; they are rated at 1.1 A and might burn out the fan circuitry on a
desktop board. Server boards usually have beefier fan power circuitry since
they are designed with high-performance fans in mind.


Compared to the Noctua fan which comes with the coolers. I might still go with
the Noctuas, but it's not the plan at the moment.


The Noctua NH-U9DX 1366 San Ace Edition

I had to improvise a bit with mounting the San Ace's to the tower. The clips
which you'd use with the Noctua fans rely on the fan having open corners, which
the San Ace's do not. Ah well, nothing a bit of cotton cord can't fix.