Very interesting. I'm impressed with how far fanless solutions have come.

However, I'm skeptical of a few things...

#1: 15 minutes doesn't seem like long enough to verify the thermal stability of a fanless heatsink. I wonder if it would be as stable running overnight (say, running an overnight render). When I was testing heatsinks, the fanless versions took AGES to stabilize ... much longer than conventional versions. I remember a fanless case in particular that never did stabilize, but kept creeping up by a degree or two every hour until the test system shut down sometime overnight.

#2: Ambient temperature in a case is likely to be 15-20° higher than in the test environment. A 62° rise from 20° ambient is tolerable. A 62° rise inside a 35° case will probably trigger a shutdown.

#3: I would really have liked to see an in-case test ... what effect *does* in-case airflow have? I'm left to speculate. Given the size of the thing, is it even possible to install a rear case fan without conflicts?
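The arithmetic behind #2 is quick to sketch. A minimal illustration, assuming a typical ~95C throttle/shutdown threshold (my assumption, not a figure from the review):

```python
# Point #2 as arithmetic: the same 62C rise lands in very different
# places depending on local ambient. The 95C trip point is an assumed
# typical throttle/shutdown threshold, not something from the review.

TRIP_C = 95  # assumed thermal-trip threshold

def cpu_temp(ambient_c, rise_c=62):
    """Steady-state CPU temperature = local ambient + measured rise."""
    return ambient_c + rise_c

for ambient in (20, 35):  # open test bench vs. warm case interior
    t = cpu_temp(ambient)
    verdict = "tolerable" if t < TRIP_C else "throttle/shutdown territory"
    print(f"{ambient}C ambient -> {t}C CPU: {verdict}")
```

With a 20C bench the CPU sits at 82C; inside a 35C case the same rise puts it at 97C, right at the edge.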

I guess I can't argue that this thing seems to beat the pants off the competition (and, I admit, I'm no longer up-to-date). But, I'm still skeptical that it really lives up to its claim to handle a 95W processor fanlessly in the real world.

I'd be curious to know how it stands up to our old anomalous Ninja sample ... my gut says the wider fan spacing would deliver a better real-world result thanks to ambient case airflow. But ... all I can do is speculate.

My thought was it would perform better in a case with airflow than the open air test bed. For example, take any of the big honking passive VGA coolers. Adding a little bit of airflow goes a long way to reducing temps. This monster has so much surface area sticking up in the middle of the case that any case airflow should help. So, #3 might more than offset #2.

My unstated assumption in #2 was a fully fanless system ... not really a good idea, but presumably something people are thinking about.

I agree #3 would offset to some degree. The question is how much. I don't hold much hope it would more than offset #2 unless you are running very near the point of thermal runaway. My sense is that it's so large (and dense) that it probably blocks a significant amount of airflow, and probably doesn't gain as much from case airflow as a looser design might.

1. P95 is a tougher load than any render, I'm sure. Holding steady at 54C for several minutes under a full P95 load on an OC'd Intel CPU with a 95W TDP clearly suggests it meets the maker's claims. That's at least 15C of headroom.

2. Ambient temp in a highly ventilated case, one that's more perforated than any normal "silent" case, would certainly not be 15-20C higher. I'd guess maybe 10C higher at most.

3. An in-case test could still be done... but it's tedious, and the choice of case & components will affect the outcome.

I'm quite sure it could make a perfectly viable fanless setup with my suggested tips (in the conclusions) along with a 90% efficient PSU -- preferably on top -- and that on-demand back panel fan (which can always be mounted on the outside if necessary) for long high loads. At least in moderate temps (say under 30C ambient).

I love this product. If I were planning a build from scratch I'd plan the system around it, but right now I know it wouldn't work with my current RAM, and I'd need to measure clearances for my graphics card and case. Happy with my heatsink for now, but maybe for the next build...

I could definitely see this used in an HTPC build with a low-profile motherboard, a low-profile case, and the whole cooler sticking up through the top. With the mounting system used, you could even put the cooler in place last of all for a super-snug fit. If nothing else, the nickel-plated version would make for a good-looking (although somewhat warm) fruit basket!

I wonder if anyone is still making motherboards with vacuum tubes such as this?

Perhaps too much in the way of compromise for many applications given the sheer size of the thing, but I'm sure that for some systems it is ideal. I wouldn't go for the copper-finished version though, as it will tarnish so badly in time. The nickel-finished version will look better long term.

Nice!! Thanks for the review! With a super-slow case fan blowing air out, I bet the performance is even better... but no longer fanless... If you want a mean machine, stick one of those on your video card as well and enjoy the copper extravaganza!!!

Or put a bowl in the middle, and enjoy your chocolate fondue!! (That's one way to get your significant other interested in quiet computing.)

At some point, when SSDs reach multi-terabyte capacity, it should be possible to combine a no-fan setup with a Seasonic fanless PSU for a truly silent PC.

Actually, I am just about there now. The only storage device in one of my PCs is a 240GB SSD, which is plenty for my OS & apps, and the old HP MicroServer on my gigabit network feels fast enough for storing all the big stuff -- audio, video, image, and software files. The CPU/board is an old Core2, but if I start with a modern graphics-integrated processor, I won't even need a separate VGA card.

Like MikeC says, many of us are already there. All that's really needed is to place multimedia data on a separate server. HTPCs and regular desktops can easily be made 100% silent these days. Probably the easiest way is simply to use a heatsink chassis. Only workstations with a regular heavy load need something like this monster. And even a 95W TDP processor probably cools better in a heatsink chassis than in a completely fanless regular chassis with a fanless CPU cooler such as this.

All that's really needed is to place multimedia data on a separate server.

Having data on a separate system is not ideal for many people, and it shifts the problems of heat, noise and energy consumption elsewhere. I don't like the idea of adding another system at home just to act as a server; it's terrible from a green perspective and money-wise. Now, having a fully SSD setup would be better! I really could just use a 512GB SSD and nothing else, but the economics of it just aren't justifiable for me at this point.

Off-topic, but here goes.

I highly approve of the "green" aspect, and not only because it reduces the heat load in my non-airconditioned home (heat being a direct consequence of imperfect energy consumption).

However, I feel the overall efficiency is highly dependent on the individual scenario, as NAS units can be very energy-efficient! For example, even a 4-bay Synology DS411slim has a 9.6 W idle consumption and a 16.8 W "access" consumption; if we add to that 4x3 TB WD Reds (4.10 W idle / 4.40 W r+w load / 0.60 W standby each), we're looking at a maximum consumption of 34.4 W according to manufacturer specs. This 34.4 W maximum is way less than the average consumption of a very frugal GPU (a GTX 650, not even the Ti, averages 48 W in the TechPowerUp charts), meaning that just by casually using an ordinary desktop with discrete graphics, you are consuming more power than a fully-loaded 4-bay NAS at full power.

Even if we consider a CPU with integrated graphics (an APU), it cannot compare: even the idle consumption of such a system will be higher than the 4-drive NAS's maximum (36-52 W with an SSD system drive and a Gold-rated PSU, according to X-Bit Labs' review of contemporary APUs), and that's without the 4 drives.

The implications are that you could actually SAVE power by using a NAS if you can replace your HTPC (APU+SSD) with it. Same goes for replacing a desktop PC acting as a data storage and distribution unit for devices like laptops, tablets, televisions/monitors (DLNA) and mp3 or video players (essentially an HTPC but with less efficient components), except you could save A LOT. Laptops can match NAS server efficiency, but they are hardly the best workstations.

In my case I could reduce my desktop's on-time about 4-6 hours daily, as it mostly acts as an HTPC in the afternoons and mornings. I have not measured my setup's consumption, but I assume the 650 Ti/i5-3570K combo uses more than enough compared to a NAS.

A NAS will still not be an economical purchase, as it can effectively double your $/GB cost, but in exchange you do get a versatile and highly efficient server. The materials and energy expended in making the device are "wasted", though, as you are adding redundant hardware - but that redundancy would be required somewhere for backups anyway, so it might as well be in your own home, where you can make the most of it.

Edit: apparently the "slim" is for 2.5" drives, something I did not expect. The math with a 3.5" model like the DS413j is as follows: 7.7 W idle/31.6 W access, 49.2 W maximum with 4 drives in r+w. Not as compelling as the previously indicated figures, but the maximum is still just about level with a frugal GPU's average consumption, and indeed efficient even compared to APU+SSD setups (55-94 W with a single-thread process, according to the X-Bit Labs review quoted earlier - again sans the 4 drives).
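For what it's worth, the consumption math above reduces to a few lines. All wattages are the manufacturer figures quoted in this post; nothing here is newly measured:

```python
# Reproducing the NAS power math from the post: chassis "access" draw
# plus every drive at read/write load, using the quoted Synology and
# WD Red manufacturer specs.

WD_RED_RW_W = 4.40  # per-drive read/write consumption

def nas_max_w(chassis_access_w, drives=4, drive_w=WD_RED_RW_W):
    """Worst case: chassis at access load plus all drives in r+w."""
    return chassis_access_w + drives * drive_w

print(round(nas_max_w(16.8), 1))  # DS411slim (2.5" bays): 34.4 W
print(round(nas_max_w(31.6), 1))  # DS413j (3.5" bays):    49.2 W
# For comparison, the post quotes a GTX 650's average at 48 W.
```

So even the 3.5" model at full tilt is roughly level with a frugal GPU's average, matching the conclusion above.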

I disagree, in that for a typical home user the amount of time that a system needs to be on is small. By increasing the number of systems to 2, even if one is a NAS, the NAS has also got to be on as well. It might be in some low-power sleep state, but it's still using power, and the most efficient computer is one that is switched off properly. I always switch my computer off when I'm not using it, and it's off at the wall overnight, which saves several watts over soft-off.
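To put rough numbers on the wall-switch habit -- a sketch with assumed values (5 W soft-off draw, 16 h/day unused, an illustrative tariff), not measurements:

```python
# Rough yearly energy wasted by soft-off standby versus switching off
# at the wall. The 5 W standby draw, 16 h/day off-time, and the tariff
# are all illustrative assumptions, not measured figures.

STANDBY_W = 5.0          # assumed soft-off draw
OFF_HOURS_PER_DAY = 16   # assumed hours/day the PC sits unused
PRICE_PER_KWH = 0.15     # assumed electricity price per kWh

kwh_per_year = STANDBY_W * OFF_HOURS_PER_DAY * 365 / 1000
print(f"{kwh_per_year:.1f} kWh/year, ~{kwh_per_year * PRICE_PER_KWH:.2f}/year")
```

Under those assumptions it's about 29 kWh a year -- a few units of currency, so the argument is more about principle than savings.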

How many people really need so much data storage that a separate system is required? True, there are work applications where you might need that much data, but I really think that a lot of people are now living in a state of data kleptomania/OCD simply because they can. The frequency of posts here along the lines of "I want to build a home server for streaming media but I'm on a tight budget" also kind of upsets me - if they were really living on a tight budget they wouldn't be able to build one at all! Are people's priorities really such that having a home streaming server comes before housing and food?!?

The notion that a single-machine household is the norm sounds a bit outdated to me, and the general population are anything but effective users when operating manually.

The young adult homes I visit contain at least 2 PCs each, often more (1 desktop, several portables), multiple multimedia devices, and at least one digital camera. Typically the desktop is used as a data dump, chat station and (highly versatile) media player. The backups are on a flaky USB drive. I know a lot of tech-savvy folk, so the desktops typically stay on for hours - sure, messenger apps, internet surfing, games, videos and music are not NECESSARY, but people certainly do enjoy them.

Compared to browsing and chatting on tablets or netbooks (or smartphones) and using a NAS for data and media, that is simply inefficient - especially with the long uptimes and lax default power-saving schemes.

Data hoarding is real, but it has its upside: shared culture is preserved culture. I would also never give up local copies for clouds or streaming - the only real ownership is full ownership, and data can only be guaranteed to exist by making multiple copies of it. Even some contemporary works - never mind your personal files - can only be found in people's private (but shared) collections.

I think it's great how we have the same goal (green, lean homes) in mind, but have different approaches: minimalistic/solitary and diversified/network.

My view is that a NAS could serve as the mainframe in a home where now a desktop behemoth is used instead - not to mention various players and flaky auxiliary storage devices. Data storage, backup, sharing, and streaming, in case of media files, are more efficiently handled by a NAS compared to a desktop (and a selection of peripherals). By using a NAS, desktops could be reduced to something like the Intel NUC, or, in case of gaming and work machines, a mini-ITX rig. This would save a lot of resources (materials, energy, even precious square feet in a home). In my own experience, many homes would benefit from this arrangement even today, as terabyte-class capacities and multi-device access are required (and power is wasted), not to mention terabyte drives having all but become the standard in HDDs.

The minimalistic approach has strong merit for single-user, low-intensity scenarios, no doubt about that. Why buy a farm if you just need tomatoes. However, for the digital home of tomorrow, I see the diversified device structure as the optimal solution, at least until we get PCs with quantum storage and infinitely adjusting performance.

Most companies will urge customers to put their data into their cloud storage, earning tons of money by selling you the storage space for a monthly fee. And once your data has reached a certain point of chaos, only a few people will bother to switch their storage provider.

Actually, my main work PC(s) go to sleep at 1~2W residual power draw while the HP Microserver runs 24/7. OK, sometimes the 40W HTPC runs all day too, if I'm recording stuff at different times. The server, with 4 Green/Red WD HDDs and an SSD, pulls typically 35~37W, maybe a little more when streaming, but barely. Torrents add hardly anything to the load, tho some spikes occur during big file transfers -- which never last more than say 30~60s on the gigabit network. I could experiment w/ letting the server sleep w/ LAN wake -- but I've found this somewhat unreliable in the past. In any case, it's certainly better than having all the capacity in a main 24/7 PC where the power draw would be double in idle. And as Das_Saunamies points out, a central server means anyone in the house can access all the photos, video, and other files from any computer on the network; my super-energy-efficient laptop on 802.11n wireless is where I do half my work these days, since every file ever created for SPCR (more or less) is on the server, too. Finally, I've got the "green luxury" of living in Vancouver BC where nearly 90% of all electricity is generated from hydro dams created decades ago: ie, minimal carbon footprint.
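Annualizing that comparison, under my reading that "double in idle" means roughly 72 W for the always-on main-PC alternative (an assumption, not a measured number):

```python
# Annual energy of a ~36 W dedicated server running 24/7 versus keeping
# all the storage in an always-on main PC. The 72 W figure is my reading
# of "double in idle" above, not a measurement.

HOURS_PER_YEAR = 24 * 365

def annual_kwh(watts):
    """Energy used over a year at a constant draw, in kWh."""
    return watts * HOURS_PER_YEAR / 1000

print(round(annual_kwh(36)))  # MicroServer w/ 4 HDDs + SSD
print(round(annual_kwh(72)))  # assumed storage-heavy main PC at idle
```

Roughly 315 vs. 630 kWh a year, so the dedicated low-power server halves the always-on cost under these assumptions.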

ps - backups: only of personal, business & SPCR data, not movies or tv shows, on to a USB 3TB external drive hooked up permanently to the Microserver. It's mostly manual: I turn it on and run the backup a couple times a week.

I have the exact same solution

240GB SSD + HP MicroServer in the hall for centralised storage. The HP isn't particularly quiet, but it lives in the hall so it doesn't bother me and I feel no need to turn it into a silencing project. Yet.

I have 2x 1TB WD Red drives in the HP. Both are pooled (meaning I only have 1TB usable) and all data is duplicated across both drives. If the system drive fails I can pull either of the drives out, stick it in my PC and have access to my data instantly. If either of the storage drives fail it will work until I replace that drive.

The server backs up to an external host-powered drive in an Icy Box every night, and as mentioned in the previous post, off-site is definitely a necessity in case of theft/fire/flooding, so I'm seriously looking into CrashPlan or an alternative for cloud back-up.

I cannot recommend CrashPlan highly enough. Proper backup with versioning (not just duplicating the files), and you can do it peer-to-peer to one or more friends' computers for free off-site backup. Been running it for a few years now (my server acts as a backup server for 10+ computers) and it has never failed, despite more than a few restores.

Thanks for the mini-review

The only thing with being in the UK though is I can't use their Restore To Your Door Service. I'd really like to do this for the initial first-time back-up and also if I had to recover the whole dataset at once. To be totally honest that's the only thing that's made me hesitate as I'd like to find one in the UK that offers this service. I'm aware some people have two HPs in different locations and use them to back each other up, but I don't think my Mum would approve of one being on 24x7 at her house!

Thanks for bringing this up (Jordan too, of course), Crashplan sounds intriguing. So far I've just used a trusted home server as my off-site backup.
