I had previously been running my Drakan server on an old Intel Atom netbook with Win XP.
Although its tiny power consumption (around 12-15W) was quite advantageous, the hardware was otherwise rather ancient, barely keeping up with the demands of running the server alongside the antivirus and firewall.

Also, because of its size, I didn't have a suitable spot where I could put it down and connect it with an Ethernet cable, so it was on a Wi-Fi connection instead. While that appeared to be fine for the most part, there were some random lag spikes at times, most likely caused by channel interference.

Manually setting the wireless channel helped a bit - but in the end, wireless is simply not suitable for this task.
Especially since, due to an unfixed (and possibly unfixable) bug in the Drakan netcode, some levels (e.g. Auropolis, Lava Land) fail to load for the clients when there are excessive packet delays on the server side (the dreaded "stuck on 99% loading" bug) - and wireless certainly doesn't help with that, so it had to go anyway.

So I decided to go for a proper, fixed installation; as befits a permanent server.
That also involved providing it with a cable connection to my router, which in this case meant laying a new Cat5e cable.

The design goals for this build were:

- Smallest possible physical size,
- Power consumption below 25-30W (during normal operation),
- No audible noise from fans, etc.,
- Significantly overbuilt for this task, for possible future utilization in additional duties.

The most obvious technical solution would've been a modern descendant of the Intel Atom (e.g. a NUC); however, because of my extreme prejudice against anything Intel, I decided to do this the hard way and go with AMD.
Another reason was that, at the time, there was some suspicion that the "stuck on 99% loading" bug was related to running the server on Intel CPUs; note that this has since been proven not to be the case.

But why do I say "the hard way"?
Well, for one, there aren't any "Slim Mini-ITX" motherboards for the AMD APU processors, only regular Mini-ITX ones. This already ruled out some very nice, passively cooled Slim Mini-ITX cases.
Additionally, I was unable to find any passively cooled, non-Slim Mini-ITX cases - at least, none that anyone would sell.
Finally, none of the available AMD APUs have a TDP below 35W, which didn't bode well for this build - both in terms of the cooling requirements and the "low power consumption" goal.
It was obvious that some serious underclocking & undervolting would be in order here; that's something I'd never tried before (it's always been the other way around for me).

So, without further ado, let's take a look at the selection of parts for this build:

I ended up buying the Noctua NH-L9a cooler, which the seller (a reputable computer parts shop) had claimed to be compatible with AM4 sockets (but it isn't) - and for whatever reason, I failed to double-check that before buying.
Instead of returning it, however, I elected to convert it to be compatible with AM4 after all.
Since the only real difference between AM3 and AM4 is the spacing of the cooler mounting holes, this involved cutting new mounting slots in the cooler's mounting bracket, as well as modifying the original motherboard backplate (the backplate supplied with the cooler would have required far too much grinding to convert to AM4). Unfortunately, I don't have any pictures of that.

Actually, the stock "BOX" cooler bundled with the 9800E APU would just barely fit into this Mini-ITX case, although the fan shroud would be pretty much touching the side panel.
However, the stock AMD fan is far too loud for this application, which would have defeated the "no audible noise" goal entirely.

This Mini-ITX case is very tight; there's barely enough space for anything extra inside. Even simply inserting the motherboard is quite a challenge, since there's only just enough room to do that.
Interestingly, the manufacturer provided a mounting plate for attaching 2.5" or even 3.5" HDDs - which is a complete joke, since such a drive would mechanically interfere with pretty much everything else: CPU cooler, RAM sticks, and even the voltage converter PCB.
Unfortunately, the included ATX power harness is too short to allow proper cable routing; I was barely able to prevent it from interfering with the fan blades. As it is now, the side panel can just about close - but with no clearance to spare.

I also ran into trouble with the power supply requirements: during OS installation I was still running stock speeds and voltages, which drew up to 80W at times of heavy CPU usage, overloading the 60W "brick" adapter and causing system instability/crashes due to sagging power rail voltages.
For the purpose of setting it up, I temporarily connected another, 120W power adapter, and that did the trick.
However, after I was done with the underclocking, the included 60W adapter proved more than adequate for the job.

Here's what it looks like fully assembled (minus the case side panel, obviously):

[Image: Assembled server.jpg]

And the end result, sitting in its intended location:

[Image: Server next to big tower case.jpg]

The PC next to it is in a "Big Tower"-type case - which, well, towers above the tiny server.
Yet despite the size difference, that PC is only about 4x more powerful than the server in its current configuration.

Now, as for the clock & voltage settings:
In the current configuration, it's underclocked to 2.0GHz (CPU) and 1333MHz (RAM); just under two-thirds of the stock speeds.
The voltages have been lowered almost to the minimum the BIOS will allow - with only Vsoc a few notches above the minimum, due to some observed graphics driver instability at the lowest voltage setting.
CPU and SOC loadline calibration were also set to "Low".

I've stress-tested this configuration for a few hours with Prime95 (with no issues), then increased all voltages by two notches and decreased the CPU temperature target by 5°C, just to ensure further stability.
Note that I didn't test it with 100% GPU load - but in any case, that's not relevant for the intended application, since the Drakan server doesn't actually use any 3D graphics at all.

Here's what Gigabyte's Hardware Monitor has to say about this whole setup, while it's running the Drakan dedicated server:

[Image: Server HwMon output.png]

Power consumption in that state is around 22-25W (at the wall plug - I measured about 18-20W at the power adapter output, but the adapter's only about 85% efficient at best).
At 100% CPU load, it goes up to almost 50W.
Not quite at the same level as the old Intel Atom - but still far superior on a performance-per-watt basis, considering the vastly higher performance.
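As a quick sanity check on those numbers (assuming the roughly 85% adapter efficiency mentioned above), the measured 18-20W at the adapter output should correspond to about 21-24W at the wall - consistent with the 22-25W I measured there:

```shell
# Wall draw = adapter output / efficiency; 0.85 is the assumed adapter efficiency.
awk 'BEGIN { printf "low end:  %.1f W\n", 18 / 0.85 }'   # ~21.2 W at the wall
awk 'BEGIN { printf "high end: %.1f W\n", 20 / 0.85 }'   # ~23.5 W at the wall
```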

BTW, this server normally operates without any user interface devices - it's only connected with a power cord and a LAN cable; I connect to it via Windows Remote Desktop.
It's also set up to shut down properly after pressing the Power button, and to start up automatically (including starting the Drakan dedicated server) when it gets connected to AC power (or - more relevant - after any power outages).
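For anyone wanting to replicate the headless setup, one possible way to auto-start a server process at boot on Windows is a scheduled task - a sketch only, with a hypothetical install path (the actual Drakan server location and executable name will differ on your machine):

```shell
:: Run the dedicated server at system boot, before any user logs on.
:: "C:\Games\Drakan\DrakanServer.exe" is a hypothetical path - adjust to your install.
schtasks /Create /TN "DrakanServer" /TR "C:\Games\Drakan\DrakanServer.exe" /SC ONSTART /RU SYSTEM

:: Note: starting up when AC power returns is not scriptable - that's the BIOS
:: option usually named "AC Power Loss" / "Restore on AC Power Loss", set to "Power On".
```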

I need to convert all my variable-framerate recorded gameplay videos to fixed framerate before any video editing software can handle them without desyncing the audio - but HandBrake crashes instantly on my main PC for some bizarre, undetermined reason.

So I drop the files on the server and run an encode job in HandBrake - at the lowest possible priority, so as not to interfere with other operations.
It takes much longer than it would on my main PC, but since it's a hands-off operation, I can just leave it to grind away for days on end.
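For reference, a low-priority VFR-to-CFR encode like this can be scripted with HandBrake's command-line version; this is just a sketch - the filenames and the 60 fps target are placeholders, so match them to your own footage:

```shell
:: Convert a variable-framerate recording to constant framerate, so video
:: editors can handle it without audio desync.
:: "start /low /b /wait" (cmd.exe) runs the encode at the lowest CPU priority.
start /low /b /wait HandBrakeCLI -i "gameplay_vfr.mp4" -o "gameplay_cfr.mp4" --cfr -r 60 -e x264 -q 20
```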