Can you supply the stacking parts from the MC-1 (plastic strips, screws and fan) separately? Or at least provide specs so we can buy them directly? It would be nice to stack 4 HC-1s the same way. The MC-1 runs cool even under load, but a moderate load on the HC-1 quickly gets to temps in the high 80s - it needs a fan.

Thank you mad_ady. The Cloudshell 2 is great, but it is also very big, and I do not need a display. I like the idea of the HC1: it is small, silent and does not need much power. I am thinking about two HC1s with software RAID 1 over the network, or using rsync to back up all files every week.

What I want is a secure, encrypted and energy-efficient NAS to save pictures and videos, make home dir backups, .... Hardware RAID 1 would be good for that. At the moment I am waiting for the release of the HC1+ or HC2. I want to place it inside a 19" rack; the Cloudshell is too high for that and would use too much space inside the rack.

Butterfly wrote:...secure, encrypted and energy-efficient NAS to save pictures and videos, make home dir backups, .... Hardware RAID 1 would be good for that. At the moment I am waiting for the release of the HC1+ or HC2. I want to place it inside a 19" rack; the Cloudshell is too high for that and would use too much space inside the rack.

RAID is not a backup; you should rsync off-site for a backup. An XU4 and a few USB drives will fit. Be aware of the crypto speed.

I know that RAID 1 is not a backup. My backup is an external USB 3.0 drive. RAID 1 is good because all new data is also on a second disk, so a hardware problem on one disk will not destroy the newest data. The USB backup disk is normally not in the same building as the NAS system.

I want to try to make a NAS with LUKS encryption. The 20 MB/s would be enough for the beginning. I hope that kernel modules that support multi-threading for the LUKS encryption will be available in the future.
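Until multi-threaded LUKS arrives, it is at least worth checking which AES implementations your kernel exposes, since a NEON- or crypto-extension-accelerated driver makes a big difference over the generic C one. A minimal sketch (nothing here is HC1-specific; it just parses `/proc/crypto` with `awk`):

```shell
#!/bin/sh
# List the AES cipher implementations the kernel knows about, so you can
# see whether an accelerated driver (names ending in -neon, -neonbs or -ce)
# is registered. Reads /proc/crypto by default; a file path can be passed
# instead, which makes the function easy to test on sample data.
list_aes_drivers() {
    awk '$1 == "name"   { n = $3 }
         $1 == "driver" { if (n ~ /aes/) print n, $3 }' "${1:-/proc/crypto}"
}

list_aes_drivers "$@"
```

If the output only shows `*-generic` drivers, encryption will run on a single core in plain C and the ~20 MB/s figure is about what to expect.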

One of my ideas is to use two HC1s. Two options:
- RAID 1 over the network
- One is the master, the second one is a backup device that runs an rsync every day

One other idea is to use one HC1 with the backup disk attached via the USB port. Sadly it is only USB 2.0, and then there is no free USB port left.

So I am waiting for the HC1+ / HC2 to see what is different. A 3.5" disk would be good to build a NAS with more than 5 TB. Otherwise, 2.5" disks with the HC1 are very energy efficient.

Just a quick note to say that I stumbled across the HC-1 last month and was really excited about its potential as a worker node in an HPCC Systems big data processing cluster (https://hpccsystems.com/). The Gigabit LAN, local SSD, low power, faster ARM cores, and more memory than similarly priced units might make it viable for this type of application.

I ordered a couple of HC1 units and paired each of them with a 128 GB SSD drive. With a few hurdles, I was able to compile the HPCC Systems software and have a running system with 16 slaves. A standard test I run is to analyse web log data to calculate which sequences of 3 pages (aka trios) are the most commonly visited. Though it took a while, the system had no problem calculating the top ten trios from a data set of 10 billion page visits.

The system runs stably, but isn't ready for serious benchmarking yet. The software is highly threaded, and there appears to be an issue with the futex() system call "appearing to do a busy wait". As a result, instead of the software running near 100% user time, more than 50% of the time reported is system time (mostly spent in futex() calls). I'll post separately concerning that issue, though I'm open to suggestions concerning which topic would be the most effective.

I did, but did not get a reply from them. That's why I asked here. I figured that if you ship (or have shipped) units to them, they'll put them in their assortment.

Yeah, sometimes it can be hard to get support from Pollin. I wrote them three times to get information about the availability schedule of the VU8C: the first two times with no reaction at all; the third time I told them that their support su*ks and that I would buy it elsewhere. After this third mail, I got an answer the same day. They told me that they did not receive my first two messages... ridiculous. What worked for me was to send my message to Pollin from within the "Verbesserungsvorschlag" (suggestion for improvement) category, while "Fragen zur Buchhaltung" (accounting questions) seems not to work at all.

I sent an email directly to service@pollin.de. Thanks, I'll try sending the message from within that category.

I see the touch display is available now. Do you recall how long it took from Hardkernel product release to availability on Pollin?

@odroid Ok, thanks! We asked Pollin that as well, but the information is actually not clear. But at least they are aware of the new product, so we can be sure that it will be available sooner or later.

campbell wrote:I don't know if this has been asked, but why are these and other new products based on the XU3/XU4 rather than the C2?

Because the XU3/XU4 has a USB 3.0 interface, which allows a SATA connection; the C2 does not. Also, the XU3/XU4's CPU is more powerful, and the GPU is more powerful too.

Add to that: the XU4 has better kernel-level software support from the chip vendor. Clustering platforms like Docker Swarm and Kubernetes play better with a mainline kernel, which is not generally available on the C2 (at least not in a stable form).

One advantage of the C2 is its ARM64 architecture. While 64-bit isn't all that important in a small SoC with a small memory footprint, there are some important software packages whose 32-bit versions are fairly hobbled (like MongoDB), and a growing set that no longer ship 32-bit binary distributions at all.

Jojo wrote:@odroid Ok, thanks! We asked Pollin that as well, but the information is actually not clear. But at least they are aware of the new product, so we can be sure that it will be available sooner or later.

I got an answer from Pollin: the HC2 will be available there in ~3 weeks.

I might be completely unqualified to make this suggestion -- after all, I just bought my first 4 XU4's for R&D. But given that, let me toss out my wish list for the MC2:

I like the 4-high. It is a good size for me to build a frame around.

Background: I liked the photo you had of the 4x4 with the one power supply, but that is where my issue started. I have built out a lot of data centers. The biggest pain in the butt is the power and network cables; that is the make-or-break of most data centers. I'm looking at the potential of building out a Beowulf-based set of clusters that would use each MC1 as a cluster. I'd have, say, a group of ten of them at each site.

What I'd love to see would be an MC1 where:
* the four boards are already connected to a single power line,
* the fan is also connected to the power line,
* a 5-port gigabit switch is connected as a 5th item (top or bottom).

The question then becomes: what kind of power connection would go to the six units (the four boards, switch, and fan)? The most universal option would be a 110/240 V unit; you could even sell one. Y'all are pretty sharp in creating the XU4 to begin with, so I'm betting y'all can come up with an elegant solution. The "neatness" of only having two cords going to the unit (the network and the power) would make it much more versatile. Perhaps not everyone would need or want it, but if it was an option, I would certainly get it. This could allow me to connect the ten MC2's to a single 12-port gigabit switch/router and one UPS, and have a clean "data center" on a shelf.

BTW: I ordered my units last night from ameridroid. I'm hoping this can become the new direction we take. Thanks!

But it should be noted that software/settings matter. The 100+ MB/s shown in the test requires good software support (kernel and low-level settings). With 'distros' that do not take care of the important stuff, performance can drop to below 40 MB/s: http://dietpi.com/phpbb/viewtopic.php?f=9&t=2686#p11652

I power my cluster off a single 180W 19V power brick. I've got a buck converter for each of the boards: http://i.imgur.com/peg0YfN.jpg - it might be a bit more DIY than you want, but it was my only solution as I also don't want multiple power bricks.

@JenniferT The only caveat I would give to others about your (quite neatly done) approach of converters is that many of those units put out an immense amount of digital noise, especially when approaching their ratings. I found that keeping this kind of cheap converter below 50% of its rating reduces the hash they give off. If you have a shortwave radio receiver around you will see the 's' meter slowly rise as you load the converter. I've had a couple of these so noisy they actually glitched my XU4.

Looks like you put in some decent bypassing. You seem to have a grasp of power tech, but a lot of SBC experimenters take power as binary - there or not. Noise and other issues sting them in the forms of glitching, write errors, video faults, etc. It's hard to tie those errors to power as they aren't so obvious.

We test the daylights out of power here as we've found it to be the root of a large percentage of issues that people think are actually data problems. Clean power rules especially with SBC's as they expect the filtering to be done at the PS level.

For those of you wanting to experiment with this kind of power converter, read about 'bypass capacitors' and try to get a unit that is rated twice what you expect to draw from it. The Chinese are notorious for ratings called 'max' that really means 'it burns out at this level.' So if your SBC is rated for 2A draw, put a 4A converter on it! (The HC1 peaks at 3A, so a '2A' converter would eventually fail at boot... http://com.odroid.com/sigong/blog/blog_list.php?bid=189)
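The 2x headroom rule is simple enough to turn into a habit. A trivial sketch of the arithmetic, using the HC1's ~3 A boot peak from the post above:

```shell
#!/bin/sh
# Rule of thumb from the post above: pick a converter rated at roughly
# twice the peak current draw of the board, so cheap "max"-rated Chinese
# modules stay well inside their comfort zone.
# Input: peak draw in milliamps; output: suggested converter rating in mA.
suggest_rating_ma() {
    echo $(( $1 * 2 ))
}

suggest_rating_ma 3000   # HC1 boot peak ~3000 mA -> suggests a 6000 mA (6 A) converter
```

By the same rule, a board rated for a 2 A draw gets a 4 A converter, matching the example in the post.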

I've played with a lot of power modules; all but two of these failed! The units on the right are inexpensive DC electronic loads that simulate the draw from your device in a way that lets you measure the power source's voltage drop, current, and noise.

[Attachment: power_exp.jpg]

I would suggest acquiring a simple DC electronic load and a decent voltmeter. It will go a long way to 'prove' your power system can handle the draw, and I bet you learn a crapload about it! A simple set of resistors (if you know Ohm's law well) or something like this https://www.ebay.com/sch/sis.html?_nkw= ... 2749.l2658 is cheap enough to give you some insight into your power.

Everyone loves the RD-65A 12v / 5v power supply from Meanwell. Here we are using two DC loads to prove this power supply exceeds its rating easily. Note the voltmeters and current meters - they stay ABOVE the rated voltage at ABOVE the rated current. The unit got fairly warm after about an hour but never faltered.

Not entirely sure - about 30 years ago an elderly Native American approached my mother out of nowhere and told her "Take this, one day it will give your son power." Some years back, I went as an underdog to a heavy-hitter meeting. I cleaned up and left a grey goatee on my face, wore a three-piece pin-striped suit and that bolo... The 'Southern Gentleman' walked away with the contract ;]

I know I've seen that bear symbol before but I have no clue where. Pewter and Turquoise, nice!

We have several HC1s and want to stack them like an MC1 COMPUTE stack that "just happens" to have attached storage.

We would like to run these HC1s as a compute farm at maximum processor speeds for days or weeks at a time and are concerned about silicon breakdown.

As a test:

We ran DietPi just for testing individual devices. DietPi has a CPU test named `stress` that will run all 8 processors at 100% while monitoring temperatures.

We ran a set of tests on one device at 85º C ambient temperatures, and then again at 65º C ambient temperatures -- with **no cooling fan**.

In both sets of tests the on-chip temperature rises consistently over the course of about 5 minutes until the chip hits 90º C, and then the on-chip temps are constant (89-91º) throughout the duration of the test.

While the test was running at 90º C we also ran the DietPi `cpu` command which tests the CPU speeds and temperature. According to this tool, the ARM is running 1400/2000 for all cores during the test.

So unless there is some other test we should run, it looks like the cores are not throttling down to maintain the 90º C temperatures.
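One other quick cross-check, without relying on a single tool, is to compare the current scaling frequency against the maximum straight from sysfs. A sketch; the cpu4 path for the big cores is an assumption (check your board), and the comparison itself is factored out so it works without real sysfs files:

```shell
#!/bin/sh
# Throttle check: if the current frequency sits below the scaling maximum
# under full load, the governor is holding the clocks back.
is_throttled() {   # is_throttled CUR_KHZ MAX_KHZ
    [ "$1" -lt "$2" ] && echo throttled || echo "at max"
}

# On the XU4/HC1 the big (A15) cores are usually cpu4-cpu7 -- an assumption;
# inspect /sys/devices/system/cpu on your own unit.
CPU=/sys/devices/system/cpu/cpu4/cpufreq
if [ -r "$CPU/scaling_cur_freq" ]; then
    is_throttled "$(cat "$CPU/scaling_cur_freq")" "$(cat "$CPU/scaling_max_freq")"
fi
```

Run while `stress` is going: a reading like 1400000 against a 2000000 maximum would confirm the 1400/2000 numbers the DietPi tool reported were indeed throttling.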

We plan to mount the devices in a custom wind-tunnel-like enclosure with a large high-CFM fan, but in preliminary tests with a fan we only saw on-chip temperatures a few degrees lower.

Based on the tests, it seems like the ARM chip in these HC1s basically lives at 90º C when it is under full load.

However, the DietPi software CPU test specifically gives the user scary warnings any time the temperature is above 50-60º:

It should be OK for extended load - some people are mining on them 24/7. The problems are: a unit running at full load will age quicker than an idle unit (e.g. you may need to clean/replace the thermal paste every year). There's an article by hominid in ODROID Magazine about long-term maintenance. One other problem will be heat going into your HDD/storage and preventing it from cooling, so see how your storage survives.

Thank you for the info about the article by hominid. He is basically running the devices at 1.7 GHz and then monitoring the temperatures to keep them in the on-chip range of 70-75º C.

Before your answer I never even considered that the heatsink thermal paste might need **changing** periodically, so thanks for that insight. I may actually go back and check some Intel CPUs that I run hard to see if they need attention.

We will try to run some tests with hominid's 1.7 GHz limit and see if we can keep the temps in his range.

Hominid was also concurrently using the GPUs for mining, but in our case we won't be using the GPUs, so we might be able to increase the clock speed and compensate with a high volume of airflow.

It's also possible to manipulate the thermal control into managing the clock speeds for you, rather than having to clamp the max clock down manually. This allows the device to reach higher clock speeds when the load/thermals allow, while aiming for a lower overall temperature than the default target (90 °C). This command should get you in the 70-75 °C range for long-term loads.
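The exact command wasn't quoted above, but the general mechanism is the kernel's thermal sysfs trip points. A heavily hedged sketch: which zone numbers exist and which trip points are writable vary by kernel and board, so inspect /sys/class/thermal on your own unit first. The script is a dry run by default:

```shell
#!/bin/sh
# Sketch: lower the thermal trip points so the kernel's thermal governor
# starts throttling around ~75 C instead of the default ~90 C target.
# Paths and trip-point indices are assumptions; verify on your own kernel.
# Dry run by default; set APPLY=1 and run as root to actually write.
TARGET_C=75
millideg() { echo $(( $1 * 1000 )); }   # sysfs expects millidegrees Celsius

for trip in /sys/class/thermal/thermal_zone*/trip_point_*_temp; do
    [ -e "$trip" ] || continue
    if [ "${APPLY:-0}" = 1 ]; then
        echo "$(millideg "$TARGET_C")" > "$trip"
    else
        echo "would write $(millideg "$TARGET_C") to $trip"
    fi
done
```

The upside over a hard clock cap is that the governor can still burst to full speed on light loads while converging on the lower steady-state temperature under sustained load.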

I am completely new to Odroid. I have some recent, limited experience with the Raspberry Pi and used Linux distros extensively in the '90s and '00s, but this is exciting stuff! My first Odroid hasn't even been delivered yet and I'm already thinking of my next purchase. Great work, and 4 GB of memory sounds about right! I have been reading about and have purchased a couple of 3.5 V Peltier cooling plates to test on a Raspberry Pi. I doubt I can use them as first envisaged, as a heat sink sitting directly on the CPU, but I wonder if you might not mount one connected to the steel cases behind the large fan on a four-stack? Would the cooling effect on the cases assist in overall temperature reduction?