Many users have been using the ODROID-XU4 for server, NAS, cluster, mining and build-farm applications thanks to its high computing performance and connectivity. They kept requesting an easier and cheaper way to scale out with a stripped-down version of the XU4, so we decided to make them happy with our solutions.

1. Developing a network-storage-server-friendly single board computer: ODROID-HC1. We've seen a lot of bad USB cables and USB-to-SATA bridge chipsets leave users struggling with physical/electrical tolerance issues as well as driver compatibility issues, so we mounted a SATA connector on the PCB with a fully tested SATA bridge controller. To lower the cost, we had to minimize the board size, because the 10-layer PCB is quite expensive. That meant removing some features like HDMI output, the eMMC connector, the USB 3.0 hub, the power button, the slide switch and so on. Any type of 2.5-inch SATA HDD/SSD storage can be installed; we've tested 7mm, 9.5mm, 12mm and 15mm thick drives. Seagate Barracuda 2TB/5TB HDDs, a Samsung 500GB HDD and 256GB SSD, Western Digital 500GB and 1TB HDDs, an HGST 1TB HDD and other drives were fully tested with the UAS and S.M.A.R.T. functions. We are going to sell this product from August 21 at $49, including the metal frame heatsink. The big metal frame body is made of aluminium for better cooling efficiency. We hope this new product can be widely used for building an affordable and powerful Home Cloud server, so we want to call it ODROID-HC1. Please enjoy the pictures of some engineering samples.

2. Developing a cluster-friendly single board computer: ODROID-MC1. We spent several days building this big cluster computer to test Kernel 4.9 stability. It was built with 200 pcs of the XU4, making up 1600 CPU cores and 400GB of RAM. We really needed a simpler solution for those who need an affordable and powerful personal cluster. A stackable computer was a great idea, and fortunately we could just remove the SATA interface circuits from the ODROID-HC1 PCB and minimize the heatsink size. Attaching a large cooling fan was very helpful for keeping the system cool under very heavy computing loads. We call it ODROID-MC1, which stands for My Cluster. I think we will be able to sell the MC1 from the middle of September at $200, including four fully assembled units with a USB cooling fan. Meanwhile, we are looking into cluster software solutions like Docker Swarm with forum members. We are willing to send free samples to forum members who have cluster computing experience.
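For anyone wondering what the Docker Swarm side of this might look like, here is a rough sketch of forming a four-node swarm on a stack like the MC1. The node addresses and the httpd service are purely illustrative assumptions, not anything Hardkernel has published:

```shell
# Hypothetical sketch: forming a Docker Swarm across four MC1 nodes.
# The 192.168.1.x addresses are placeholders for your own network.

# On the first node (it becomes the manager):
docker swarm init --advertise-addr 192.168.1.101
# This prints a "docker swarm join --token ..." command for the workers.

# On each of the other three nodes, paste the printed join command, e.g.:
docker swarm join --token <worker-token> 192.168.1.101:2377

# Back on the manager, spread a service across the cluster:
docker service create --name web --replicas 4 -p 80:80 httpd

# All four nodes should now show up as Ready:
docker node ls
```

The appeal is that the scheduler, not you, decides which of the four boards runs each replica, and it reschedules work if a node dies.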

PCB pictures. Top side: several connectors and over one hundred passive components.

Bottom side: SoC, PMIC, SATA bridge (JMS578) and so on. The SoC and PMIC are glued with epoxy resin to increase reliability. The surface of the SoC (Exynos-5422) will be covered with thermal paste and attached to the metal frame for cooling.

So you had to keep the USB 3.0 controller to get Gigabit LAN. Couldn't you have wired the USB port for USB 3.0 and ditched the USB 2.0 controller? Or does the USB 3.0 controller provide USB 2.0 as well, and it was too difficult to route the extra data lines? I'm guessing having a USB 3.0 port lets users add new drives for quick transfers, and also add a powered hub... Also, will you ship it attached to the heatsink, or will you provide paste and leave it as a DIY job?

There are two USB 3.0 hosts and one USB 2.0 host in the Exynos-5422. One is used for Gbit Ethernet and the other one is used for the SATA bridge. So there is no spare USB 3.0 port unless we add a USB 3.0 hub controller.

The heatsink (metal frame) is included by default. We will apply the thermal paste and assemble it on our production line.

odroid wrote:There are two USB 3.0 hosts and one USB 2.0 host in the Exynos-5422. One is used for Gbit Ethernet and the other one is used for the SATA bridge. So there is no spare USB 3.0 port unless we add a USB 3.0 hub controller.

The heatsink (metal frame) is included by default. We will apply the thermal paste and assemble it on our production line.

I forgot you needed to attach the SATA bridge somewhere. That makes sense then; sorry for my silly question.

I've been thinking about a concept like that. As a cloud designer, I miss one thing that could help a lot: a load switch and easy serial console access. The serial port should be on the edge, maybe with an RJ45-like connector (but not RJ45, as you'd kill the Exynos instantly when hooking it up to a switch). I've rescued a few XU4 installations using minicom and U-Boot from far, far away... The current connector is good, of course, but I really hate it: once plugged in, it doesn't really let go of the cable. A load switch on board might also fix another problem: the need to power cycle it. Of course, that can happen in two ways: have a load switch on the other side of the barrel, making the "chassis" a bit bigger, or have one on board, on the same connector as the serial port. That way you have Ethernet to the switch, power to the thick power rails, and controls to a C2- or C1-based chassis controller. On a side note: I would love to see a way to reach at least one of the GPIOs of the USB-serial bridge (to control the load switch).

As a design note, I would probably equip them with a cheap SSD and use the SSD as a bcache for an FCoE SAN (using VN2VN). I did something similar with my old gaming machine and a 60GB disk, and it had no problem handling my Steam account.
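For reference, caching a slow backing device behind an SSD with bcache is only a few commands. This is a minimal sketch, assuming bcache-tools and a bcache-enabled kernel; the device names are placeholders, not a recommendation for any particular hardware:

```shell
# Sketch only - /dev/sda and /dev/sdb are placeholders, and these
# commands destroy existing data on the devices involved.

make-bcache -B /dev/sda          # slow backing device (e.g. the FCoE LUN)
make-bcache -C /dev/sdb          # fast cache device (the cheap SSD)

# Attach the cache set to the backing device; get the cache-set UUID
# from 'bcache-super-show /dev/sdb':
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

# Optional: switch from the default write-through to write-back caching
echo writeback > /sys/block/bcache0/bcache/cache_mode

# Then use /dev/bcache0 like any other block device:
mkfs.ext4 /dev/bcache0
```

Write-back mode gives the latency win described above, at the cost of dirty data sitting on the SSD until it is flushed to the backing store.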

@odroid, I have 2 questions/suggestions:
- Home Cloud: is it possible to attach the drive using screws? I don't see any screw holes in the pictures.
- My Cluster: do you plan to provide a power supply that has 4 DC outputs? That would be a good idea, in my opinion.

@nobe, It needs a microSD card. It can't boot from Ethernet or HDD/SSD because the Exynos-5422 ROM code doesn't support it. There is a screw hole on the bottom side of the big heatsink to fasten the HDD/SSD. We will show some detailed assembly guide pictures once we are ready to sell. Yes, we are testing a PSU which has a 5V/15A output for 4 units.

@ard, Good points. We will consider your ideas when we have a plan to revise the PCB design. In fact, some other people requested a PoE-capable RJ45 too. If the PoE switching hub has power control capability, it would be very useful for power cycling. But we had to minimize the material cost, and a Gbit PoE circuit is much too expensive.

There should be a custom board designed that sits on the top and...
1) Provides a 6-port GbE switch [4 cluster + 1 LAN/WAN + 1 management (see below)]
2) Has a microcontroller or other processor invisibly attached to the switch.
3) Provides 4 serial console ports that attach to each unit in the cluster, accessed through #2.
4) Provides 4 switchable power ports that attach to each unit in the cluster, controllable through #2.

Effectively, it's a cluster management device/power supply that stacks on the top and is IP-accessible.

I like @crashoverride's idea, though the switch could be optional (e.g. connect to a larger 48-port switch). Maybe one could adapt the C0 with 4 relays for power, connected to the C0's GPIO pins. Not sure if it's possible to multiplex the serial port (a multiplexer commanded by two GPIO pins, maybe)? I'm guessing that's why Odroid doesn't use a public discussion thread for new developments - everybody keeps adding to/changing the design.

There should be lots of low-cost ARM+switch chips meeting the criteria, since they are used in home gateways/routers. The only thing I am uncertain about is the 4 serial ports. You need each one active at the same time to record a persistent log, so they should not be multiplexed.

Yes, we added it only for internal testing. Once we connected another 12V power supply to the header, we could use a 3.5-inch HDD. But we can't install any 3.5-inch HDD into the current metal frame (big heatsink).

The next question must be about another new HC2 model which can work with a 3.5-inch HDD. Yes, we are planning it for a November launch. And yes, it will have a single 12V input.

Your pictures show two different metal frames. Will the supplied frame/heatsink be the short version, or the long version with room for a hard disk?

They are 2 different offers: the HC1, with a single 2.5" disk enclosure, and the MC1, with 4 compute nodes (4x stripped-down HC1, not capable of SATA) and a fan. For best results I would advise 4x HC1 for FCoE or Ceph and an MC1 for compute.

I am just wondering: what is that backup plug? Is it really a lithium battery/UPS thing?

crashoverride wrote:There should be a custom board designed that sits on the top and...
1) Provides a 6-port GbE switch [4 cluster + 1 LAN/WAN + 1 management (see below)]
2) Has a microcontroller or other processor invisibly attached to the switch.
3) Provides 4 serial console ports that attach to each unit in the cluster, accessed through #2.
4) Provides 4 switchable power ports that attach to each unit in the cluster, controllable through #2.

Effectively, it's a cluster management device/power supply that stacks on the top and is IP-accessible.

My original idea for the U3 was to use the OTG port as Ethernet. You can mesh-connect a few ODROIDs using USB 3.0 at a slightly higher speed than a GbE NIC, and have only one or two of them bridge that to the outside world.

For the 4 serial console ports, I was thinking about an Exar 4-port USB serial bridge (xr21v141x), and using the GPIO on the Exar to control load switches that power the XU4s on/off. (My specific idea was generic though, and is waiting on the hardware designer to draw a PCB. Load switches can go as low as 4mOhm @ 5V/4A.) That way you can control 4 XU4s per USB port. The USB port can be connected to a "chassis controller" like a C1. The C1 would be able to control 16 nodes this way and serve the serial consoles. The most interesting thing with the MC1 would be to power nodes on demand. On the other hand: without all the other hardware, the power usage of an idle compute node would be almost nothing, so we only need power cycling to get a crashed Exynos back to health.

I don't think you would need to design/develop a switch. Just use off-the-shelf components. Even if you designed one, you'd probably only need an IC that handles all the switching and some components for the electrical ports. Not sure if you could get an off-the-shelf one with only the motherboard and fit your design. Or it could be completely external, but it would be nice to have some points to anchor a generic switch.

It would be self-contained, network enabled, and provide access should one board fail. But if two boards fail, you might lose serial access (without manual intervention). Some shorter USB to micro-USB cables might be needed in order not to clutter everything.

I love this. I've been waiting for more plug-and-play-style ARM board clusters. I've been building my own cluster out of cheap ARM boards - I've had my eye on ODROIDs for some time, but they're a bit more expensive than some of the cheapies I got for my cluster - but at $49, racked up and ready to take a 2.5" HDD, these are a great price for what I'm looking to do.

I originally planned to hook up my HDDs to an Orange Pi Plus, but when I found out it was SATA over USB 2.0, I threw up a little in my mouth. I ended up finding the pcDuino3 Nano Lite for $15 on clearance, with the A20, which has gigabit and SATA on the SoC, so they drive my HDDs now. But these ODROIDs with SATA on USB 3.0 should be plenty fast enough. I've got six 2TB HDDs, but I want to expand that for greater redundancy. I'd like at least 10 HDDs in the cluster, so expanding it with four of these ODROIDs should be perfect.

Here's an album of pics of my cluster:

As you can see, it's pretty messy with all the cables - the ODROID solution looks like it would be far neater. I also like the idea of the diskless nodes too, as I want to be able to easily expand compute power.

I'm still building the software for the cluster - I am using Buildroot to build the base images and looking to twist it to build container images. I plan to give HashiCorp's Nomad a try as the cluster scheduler, but maybe Kubernetes if it's not too heavyweight for these smaller nodes (they all have 1GB of RAM).

@crashoverride: I had a very similar idea for the control plane - I wanted to build a simple gigabit Ethernet switch with a microcontroller to configure VLANs (and maybe some sort of software-defined networking), and a separate power board that had the serial consoles on it. I'm not sure I could squeeze Ethernet, console and power onto one board in my form factor, and since gigabit Ethernet is sensitive to track length, I figured it's better to have that on its own. I was thinking of using one of the serial control lines to be able to remotely switch the power on and off (hence having power and console on the same board). I went as far as researching the ICs needed to build these, but it is very hard to find GigE switch ICs that have a published datasheet. I wanted a 9-10 port one (8 external ports, plus one for the microcontroller, and maybe another to chain switches). I couldn't find anything. The power and serial consoles were easier, so I might start with that.

@odroid - I'm interested in the samples you mentioned. I have quite a bit of experience with cluster computing - I was a Google SRE for 5+ years, so I know quite a bit about BIG clusters. But these days at work I'm working with Docker, Kubernetes and OpenShift. And I have my personal cluster, as I mentioned. Let me know if I can help.

@odroid - one option for network booting is to add some cheap SPI flash to the board, enough to hold U-Boot with a device tree. If U-Boot has the network drivers built in, you can easily network boot from there. Not sure if the SoC you're using supports that, though.
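To illustrate the idea, a network boot from a U-Boot console is only a handful of commands. This is a hypothetical session, assuming an SPI-resident U-Boot with working network drivers and a DHCP/TFTP/NFS server on the LAN; the server address, file names and console device are all illustrative:

```shell
# Hypothetical U-Boot console session (not verified on the HC1/MC1):
=> dhcp                                        # obtain an IP address
=> setenv serverip 192.168.1.2                 # placeholder TFTP server
=> tftpboot ${kernel_addr_r} zImage
=> tftpboot ${fdt_addr_r} exynos5422-odroidxu4.dtb
=> setenv bootargs "console=ttySAC2,115200n8 root=/dev/nfs nfsroot=192.168.1.2:/srv/rootfs ip=dhcp"
=> bootz ${kernel_addr_r} - ${fdt_addr_r}
```

With that in SPI flash, the microSD card stops being a single point of failure entirely: kernel, device tree and rootfs all come over the network.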

camh wrote:@odroid - one option for network booting is to add some cheap SPI flash to the board, enough to hold U-Boot with a device tree. If U-Boot has the network drivers built in, you can easily network boot from there. Not sure if the SoC you're using supports that, though.

Or use really small (128-256MB) microSD cards, though I'm not sure you can buy any easily.

mad_ady wrote:Or use really small (128-256MB) microSD cards, though I'm not sure you can buy any easily.

I've got a bunch of 4GB and 8GB ones that I use for that - which is close to the smallest you can get now - but they fail in two ways: 1. The SD cards seem to be cheap - I've got a handful of failed cards. 2. I've had one SD card ejector mechanism fail. Moving parts.

The nice thing about SPI flash on-board is that it allows you to put the DTB there, which means you can boot a generic kernel, like you can with PCs. If every ARM board were to start doing that, it would do wonders for making them more widespread with non-custom software. You still need to have the drivers in mainline, of course, but that's getting better every day too.

Wow, very nice designs, HK! I can't wait to get a close look at these. Looks like you put a lot of users' requests into a nice-looking and functional design with great cooling capability. Just one more way HK distinguishes itself from the competition. Can you put Lego snap-together connectors on the bottom too?

Nice offerings, HK! I am no expert, but a couple of observations:
1) One thing that concerns me is the use of only microSD cards. Due to their high failure rates, they'd have to be replaced often, making "remote" deployments impossible, not to mention the "bad PR".
2) The microSD receptacle could be turned 180 degrees so the cards do not protrude, making for a more compact case for the MC1.
3) An option to add a small 3.x" (status) display for the MC1 may be useful to get a "feel" for the health of the stack.
4) Instead of an always-on fan, an option to tie the fan to some OR'd "logic" for the 4 members, so the fan turns on only when the temperature of one or more SoCs goes up. This would reduce the power footprint.

crossover wrote:But I don't know how to use the personal cluster computer MC1. Do you guys know who needs an ARM cluster computer? Is it only for research and educational purposes?

Think more in the direction of servers and productivity. Imagine a Docker cluster: you need a new web server for testing the latest WordPress, or a new instance to test software you built booting on a "clean system", or a quick test with some NFS servers - boot up a new Docker container, do your testing, and throw it away. You don't need to worry whether one specific ODROID still has the capacity to boot it; it's automatically scheduled and distributed within the cluster. Or for people who compile large projects: distcc, or maybe a small farm for Jenkins to build your projects on - one runs CentOS, another Debian, the last Ubuntu; one click and it's built for all systems simultaneously. Things like this are some of the use cases, I guess.

I'm guessing they removed the eMMC both to save cost and board space. If you want reliable board storage you could:
1. Use an eMMC + adapter in the SD slot (though it would be slower and easily disconnected by accident).
2. Use the SD card only for boot and use an NFS rootfs (cluster setup).
3. Use the SD card only for boot and use a SATA rootfs (NAS setup). The problem with this is that your disk might not go to sleep because of constant log writes.
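For option 2, the ODROID-style boot.ini only needs its bootargs pointed at the NFS export. This is an illustrative fragment, assuming a hypothetical server at 192.168.1.2 exporting /srv/hc1-root; adjust the console and export to your own setup:

```shell
# Illustrative boot.ini fragment for an NFS root (the SD card then only
# needs to hold the boot partition). Server IP and path are placeholders.
setenv bootargs "console=ttySAC2,115200n8 root=/dev/nfs rootwait rw nfsroot=192.168.1.2:/srv/hc1-root,v3,tcp ip=dhcp"
```

The SD card then sees almost no writes after boot, which sidesteps the wear and log-write concerns raised above.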

I'm doing something similar: I have an LXD cluster of 4 machines which runs about 10 virtual machines. Currently it's on x86, but these new ODROID machines mean you could do it on ARM and use considerably less power.

Another option, for folks who want to avail themselves of the "stackable heatsink case" design with the regular XU4, would be to offer a new heatsink for the XU4 so it can attach to this new aluminium case. This would allow folks to use regular ($59) XU4s with all their powerful features (including eMMC if need be), but in a stackable config. The existing XU4 fan JSTs could be tied to a "smart" case fan, providing the needed cooling...

I, too, have concerns about the medium-term reliability of booting off microSD.

I'm actually interested in the possibility of using these for a file server cluster, but I'd also like to see you guys offer a power supply that can power at least four of these off one plug. The thought of having to plug in four bricks kinda makes this whole thing shaky to me, and since they're designed to be stacked, it makes more sense to have an arrangement that can share the power cable.

odroid wrote:@camh BTW, do you have your own blog? Or can you explain the purpose of your cluster computing?

I always meant to blog about it, but never got around to it. I think I prefer writing code over words.

My main purpose for the cluster is to explore clustered computation at a smaller scale than what I had been exposed to at work, using open source technologies. I want to replace my home server with a cluster with no single points of failure. I want a modular, expandable system that scales horizontally, both in CPU and storage. Software systems scale through low coupling and high cohesion, and microservices are the latest embodiment of that principle. I want a hardware architecture that pushes you in that direction - lots of lower-power nodes over fewer higher-power nodes.

I want a seamless transition between computation in the cloud and computation in the home (on premises) and have workloads and data migrate across this boundary as required (and stay on your side as required!).

The two products you've announced in this thread are the two elements I've wanted for a while. Nice simple stackable storage nodes, and nice simple stackable compute nodes. I've been waiting for boards without HDMI, audio, IR, etc - all those things of little use for my use case. I've been thinking about the SO-Pine as an option for compute power, but I'm not likely to build a backplane for them and I don't see one coming from elsewhere. My current storage solution is fine for now, but in looking to expand it I don't see a lot of options. I need good I/O speeds (gig ethernet, non-USB2-SATA), and there's not a lot that does that. The espressobin looked promising, but the price for the 2GB version is too high.

Having deployed lots of software on clusters, I don't want to go back to the traditional model of the OS as a platform. It's really nice to just tell borg/kubernetes "run 20 instances of this job somewhere" and not have to think about installing software, handling machine failures, moving workloads, etc.

I could probably keep going on, but that'll do for now. Got some bugs to fix...

Firehawke wrote:but I'd also like to see you guys offer a power supply that can power at least four of these off one plug. The thought of having to plug four bricks in kinda makes this whole thing shaky to me, and since they're designed to be stacked it makes more sense to have an arrangement that can share the power cable.

I power my cluster off a single 180W 19V power brick. I've got a buck converter for each of the boards: http://i.imgur.com/peg0YfN.jpg - it might be a bit more DIY than you want, but it was my only solution, as I also don't want multiple power bricks.

Great! It's funny - this board looks like the NanoPi NEO board with its NAS kit. Have you had any issues with the hard drive, like disconnections, tick-tick noises or freezes? I remember with the XU4 it was terrible. Have you planned a 4GB version? 2GB is OK, but with LXC containers, memory is quickly used up.

Are there any holes in the back for easy wall mounting? I don't see any in the pictures, but a few holes (one in each corner) would make it very easy to mount the boards on a wall. I've drilled holes through the plastic cases on some of my existing units, but making drill holes in aluminium might be difficult.

I think a four-port power (and potentially buck converter) stackable unit fitting into the same form factor would just knock this out of the park. Add in some direct management for simple control (e.g. forced power reboot), and anything else I would ask for (e.g. a 4GB option) is pure gravy. There is no doubt I will order a stack of these; the only question is when I stop ordering them, and after how many. This potentially solves some very large problems for some very large clients who are suffering from severe capital constraints.