Considering you would want to mount the "minicomputers" (as a general term, independent of the exact model you want to use) horizontally for airflow, have you also considered mounting them rotated? Either by using the network and USB ports to plug directly into your board (USB for power; I don't think they have protection to prevent that), or so that USB and network remain accessible on top. Doing so should increase the number of Pis you can fit in each row.

I don't think using the SPI-Ethernet bridge would be a problem in your case, because the SPI port is not that slow (megabit/s range) compared to an average internet connection, but having the network switch integrated on the board would be cool.

As for potential "daughter" boards: I don't think there is a reason to connect through all 40 pins; just power and the network connection should be sufficient. So even two-to-one or three-to-one adapter boards should not require big connectors, although mechanical stability might be a major concern.

If you don't require easy access to the individual "minicomputers" later on, you might also want to consider stacking two mainboards on top of each other as a second layer, instead of using 20-30 two-to-one adapters, in order to get a better "fill factor" in the case.

I chuckled when I saw Dave pointing out the bodge on the G5 motherboard. A good friend of mine who still works at Apple was responsible for that bodge. There was a timing bug in the first production run of northbridge ASICs. That wire is the proper length to insert a delay to "fix" the bug. Ah, memories of weeks spent in the lab figuring that bug out.

I know your focus tends to be hardware, but have you considered a more 'software' alternative to this project? That is, something different than Ethernet to connect your boards?

Thinking being:

* You don't need high performance, you just need connectivity
* You've got a lot of header pins, SPI, etc. to create a shared comms bus
* One Pi on the bus could act as a gateway to the rest of the world via its Ethernet (using NAT)
* The missing link would seem to be a network driver implemented on SPI (i.e., SPI between Pis, not to an Ethernet controller)

[Edit 1/7: I've learned that no SPI slave driver has been developed for the RPi, so you'd have to bit-bang your own bus+network driver using I/O pins.]
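Bit-banging your own bus means you'd also need a minimal framing layer on top of the raw bits. As a hypothetical sketch (the frame layout and names are my own invention, not any existing driver): each frame carries a destination address, a payload length, the payload, and a CRC so a node can reject corrupted traffic on the shared bus.

```python
import struct
import zlib

# Hypothetical frame layout for a homebrew Pi-to-Pi GPIO/SPI bus:
# 1-byte destination, 2-byte payload length, payload, 4-byte CRC32.
HEADER = struct.Struct(">BH")

def encode_frame(dest: int, payload: bytes) -> bytes:
    """Wrap a payload with an address, a length field and a CRC32 trailer."""
    body = HEADER.pack(dest, len(payload)) + payload
    return body + struct.pack(">I", zlib.crc32(body))

def decode_frame(frame: bytes):
    """Return (dest, payload), or raise ValueError on a corrupt frame."""
    body, (crc,) = frame[:-4], struct.unpack(">I", frame[-4:])
    if zlib.crc32(body) != crc:
        raise ValueError("bad CRC")
    dest, length = HEADER.unpack(body[:HEADER.size])
    payload = body[HEADER.size:]
    if len(payload) != length:
        raise ValueError("bad length")
    return dest, payload
```

A real driver would sit below this, clocking the bytes in and out over the I/O pins; the framing is the easy part, the timing is where the work is.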

I'm on a similar quest, but I haven't yet gone hunting for such a driver, so I don't have a turnkey solution for you. But boy, would it simplify the design - I've got to think someone has coded this by now. You could practically use an IDC ribbon cable for the bus and some standoffs to stack the Pis. (Or, for a PCB design, a right-angle female 2x20 header pairs nicely with an IDC shrouded pin header - no PCB slot required. Cost for headers from Adafruit is cheaper than Digikey, at least for the US.)

The above would have a bonus of being a low-cost solution for Pi Zero, which might make it $/GHz more attractive. If you could circumvent the need for an SD card (which would be a whole separate project), that practically cuts the Pi Zero cost in half.

Speaking of Zero, the new Orange Pi Zero looks pretty darned attractive at US$7 with a quad-core 1.2GHz CPU, 256MB RAM, and wireless. All it'd need is a couple of pins for power. OS support seems sketchy out of the gate, but I expect that'll resolve itself in due time. If it works well, it seems like game over for this kind of 'clustering' project.

And the Pi Compute Module has previously offered a small form factor, but at a higher cost. Since the CM3 is claimed to be backward-compatible at the connector with the CM1, I expect it to similarly be a subset of the Pi3. But it does have eMMC instead of SD, which is nice. It's handier for integrating into a commercial product, but if you can make a Pi3 work it'll probably be the cheaper 'Raspberry' option (i.e., if you ignore Orange).

If the point of this is to process SETI@home or other things via BOINC, then the focus on the Ethernet controllers seems to be a waste of time. It's not a component of the design that matters.

On the other hand, making a sexy board design that the Pis/Orange Pis can connect into would be great, and from that point of view, maximising density and minimising parts count (Ethernet switches, cables, USB power, etc.) would be a great goal. Especially with the use of SPI networking or similar, where the Ethernet and wireless chips can be shut down to save power and, more importantly, heat on each Pi.

I thought I would have a go at this as well (not the electronics part yet), with a bit more of a focus on the computing side, for a bit of fun. To start, I went with the following:

I set up the three nodes as rp3-0, rp3-1 and rp3-2. Currently they all attach to my wireless network and directly collect work units from the internet; my next change is to run rp3-0 as an access point with a separate SSID and route the network data via this node to my cabled network. The end result will be one Ethernet connection, with however many Pis I add as I go forward connected wirelessly, inside a metal PC case with fans and a single power supply.

The network bandwidth that the Pis need is insignificant, as they download a work unit, then process it for some time before uploading the result to the internet. In my case, my ADSL has a download rate of approx. 3 Mbit/s, so even the worst wireless network performance (11 Mbit/s) easily outpaces this.
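To put rough numbers on it (the 8 MB work-unit size is my assumption for the sake of the example, not a measured figure):

```python
# Rough transfer-time estimate: even a generous 8 MB work unit
# downloads in well under a minute on a 3 Mbit/s ADSL link.
def transfer_seconds(size_mb: float, rate_mbit: float) -> float:
    """Seconds to move size_mb megabytes at rate_mbit megabits/second."""
    return size_mb * 8 / rate_mbit

adsl = transfer_seconds(8, 3)    # ~21 s on the 3 Mbit/s ADSL link
wifi = transfer_seconds(8, 11)   # ~6 s even at 802.11b rates
```

Hours of crunching per ~21 seconds of download is why the network link really doesn't matter for BOINC-style loads.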

On the other hand, if you want to run actual cluster applications (not what has been mentioned in the videos, but something like the video below), using MPI or an equivalent, then having a common file system (NFS or a NAS share), better network performance, etc. would be important.

One item that I haven't really noticed you mention or focus on in the videos as yet is what you will do to assist with the cooling of the Pis, other than just some fans. In my case I have stacked the three Pis with 30mm risers between them and have a fan blowing across them. I found that they were running at 60-70°C when running 4 BOINC threads per Pi (it was about 26-28°C in the room) after a couple of hours. I then added some small heat sinks to the processors, dropping the processor temps to 50-52°C. Note: temps are from the system's software-reported values, not actual measurements.

Then I had a bit of a play with the overclocking parameters on the Pis, increasing the CPU to 1.325GHz, increasing the memory bus speed to 450 and increasing the core voltage setting to 4. This lifted the operating temps to 52-54°C, but with an improvement of about 17% in the processing performance of the Einstein@Home work units across all three devices.
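For reference, those settings correspond to lines like these in /boot/config.txt (values copied from the figures above; overclocking is of course at your own risk and may need per-board tweaking):

```
# /boot/config.txt overclock settings (Pi 3)
arm_freq=1325
sdram_freq=450
over_voltage=4
```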

I'm keenly following your progress. I am also starting to have a play with MPI on the cluster to see if I can use it to do something useful, other than its current task of monitoring the temps on all nodes in the cluster.
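The temp-monitoring job is basically MPI's gather pattern: every node reports a value and rank 0 collects them. Here's the shape of it sketched with the stdlib's thread pool standing in for mpi4py ranks, and placeholder temperatures instead of real sensor reads (all names and values here are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def report_temp(rank: int) -> tuple:
    """Stand-in for a per-node sensor read; returns (node name, temp in °C)."""
    fake_temps = {0: 51.0, 1: 52.5, 2: 50.5}   # placeholder readings
    return (f"rp3-{rank}", fake_temps[rank])

def gather_temps(n_nodes: int) -> dict:
    """Collect one reading per node, like a gather to rank 0."""
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        return dict(pool.map(report_temp, range(n_nodes)))
```

With real MPI the threads become processes on separate Pis and the map/collect step becomes a gather call, but the data flow is the same.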

I'm no expert in this stuff, but I'll just leave this link here, and let those who understand it decide if it's worth looking into. Basically, someone got Intel’s open source Threading Building Blocks running on the Raspberry Pi to "achieve better parallelism in his OpenCV project". Intel TBB wasn’t available in Raspbian, due to some kind of issue with creating a build to run on ARM, but this dude was able to create a working build, and made it available to download.

I don't know if it's relevant for this build, but I'll just leave that there for those interested.

As far as the build itself goes, I have to say I rather like the idea of using multi-level risers that mount at least 2 boards deep, and PCB "strips" with male RJ45 plugs acting as board-to-board interconnects. For airflow reasons, I see no reason that the Ethernet PCB strips need to be rectangular either. Routing the boards as follows:

[_]_[]_[]_[]_[]_[]_[]_[]

This would optimize airflow, since you only need the boards as wide as the traces themselves need to be. You might even be able to optimize manufacturing by having two boards mirrored against one another, so the protrusions for each male RJ45 connector intermesh during manufacture, to minimize wasted board space. One would assume one end would be larger, so as to support the Ethernet switch chip and an RJ45 connection to either an external Ethernet switch, or to another riser board designed to bridge all the Ethernet PCB strips to the motherboard.

____btw... What the heck is up with this forum not supporting extended unicode characters... I had a GREAT diagram for that PCB made up from unicode, and it failed to post properly. Had to redo it in basic ASCII. What is this? forum software from 2002?

RPF have released information on some new features in the Pi 3 bootloader; netboot is mainly what's interesting here. This means you can set up the cluster to boot off a single Pi over the network, without a mess of SD cards.
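For anyone trying it, the server side is essentially a proxy-DHCP plus TFTP setup. A minimal dnsmasq.conf along the lines of the RPF netboot tutorial (adjust the subnet and tftp-root to your own network; this is a sketch, not a tested config):

```
# dnsmasq as proxy-DHCP + TFTP server for Pi 3 netboot
port=0                       # disable DNS, we only want DHCP/TFTP
dhcp-range=192.168.1.255,proxy
log-dhcp
enable-tftp
tftp-root=/tftpboot          # holds the client Pi's boot files
pxe-service=0,"Raspberry Pi Boot"
```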

Forgive me if it's a no-no to bump this thread, but I'd really like to hear if Dave's done any more with this. I'd really like to see the finished project as it sounds like it's going to be quite the sight when done.

I originally came across these videos about a year ago and got rolled by life, and came over to have a look to see if there's something here about an update to the project, but alas, no.


I run a small IT/electronics shop out of my garage. The electronics part came about because my son is getting old enough to use a soldering station by himself and does some pretty interesting things (and he has plenty of ideas!), along with e-waste recycling for parts (except for the eBay stuff).