KPTree Home Server Setup

Home Server Web Notes Summary

The main reason for these notes is to provide a reference to assist me with maintaining my home server. This includes upgrading the existing server or setting up a new one in the future.

There are many reasons to set up a home server and many different options available. For me one of the big reasons is the tinkering and learning associated with such a setup. There are many other benefits. Perhaps the largest negative is the time invested in this endeavour; it will certainly not be for everyone!

I have published these notes on my public website KPTree.net, for my own access and for the possible benefit of others. At this time I am not interested in adding advertising to this site. As these are my personal notes, provided without cost, I assume no obligations in any way should anyone use them in full or in part. YOU USE THESE NOTES AT YOUR OWN RISK!

I have used many references from the Internet to assist me with the development of my home server and these notes. In general these reference links are provided in the relevant section of the notes. Many of them are also provided in the KPTree-Miscellaneous Links. The biggest single source of information, and arguably inspiration, has come from Havetheknowhow.com; this is certainly a good starting point if you are interested in a Linux based home server!

My Home IT Setup

A special mention goes to the OpenSprinkler sprinkler controller, probably the best network interfaced sprinkler controller available, for both home and commercial use.

Another special mention is SnapRAID. I believe this to be the best solution for a modern home server, giving the best compromise between performance, reliability and power saving. Consider that traditional full-time RAID systems require all hard disks to be spinning when in use, compromising the long term reliability of all the included disks and increasing power consumption. A key benefit of many traditional RAID systems is increased bandwidth (speed) due to the use of simultaneous disks; however a modern 3.5" hard disk has a data bandwidth similar to 1 Gb/s Ethernet, so the traditional RAID speed benefits are of little value unless a more exotic network arrangement is used. I use an SSD for my main system drive and 2x 6TB hard disks for the main datastore, plus 1 extra 6TB hard disk as a parity drive with SnapRAID. All the 6TB hard disks are programmed to spin down after 20 minutes of no access. Further to this I back up the 2x 6TB drives to external drives intermittently, and have an additional 2.5" portable drive with regularly used data and irreplaceable personal data. Some photos, the main irreplaceable data, are with other family members, giving some limited effective offsite data backup. I should consider offsite backup of the irreplaceable data; to be sure, to be sure.

My Home IT Setup - Other

Network Setup

The home server I have has 4 Intel Gigabit NICs. For the past couple of years I have only been using 1 NIC to a main 24 port gigabit switch. This is described in the Basic Network Setup below. The home server has 4 drives: 1 SSD system drive, 2 larger data storage drives and 1 drive used as a parity drive for offline RAID of the data storage drives. Most of the time a single NIC will provide sufficient bandwidth between the server and switch. However, the server has the capacity to saturate the bandwidth of a single gigabit NIC. To increase effective bandwidth there is an option to bond 2 or more NICs together to combine their bandwidth; this is called NIC bonding. To allow virtual machine NIC access, the NIC(s) must be set up in bridge mode. Furthermore, bridging can also allow the NICs to act as a switch, obviously where more than one NIC is available. The Full Network Setup section below describes setting up the system with bonded and bridged NICs. Both setups were found to operate well.

I tried earlier to use a statically assigned IP setup, but had problems with operation, so I used a DHCP setup, which worked. I then set up the DHCP server to assign a fixed IP address to eth0.

Full Network Setup

As noted in the main section I have a server with 4 built-in Intel NICs. To avoid the performance limitation of the Ethernet bandwidth of a single NIC, I propose to use 2 NICs in a bonded configuration, use bridging to allow server virtual machines access to the NICs, and use the remaining 2 NICs effectively as a switch.

802.3ad (mode 4) requires a switch that is correspondingly set up for IEEE 802.3ad dynamic link aggregation.

bond-lacp-rate, only required for 802.3ad mode. Option specifying the rate at which we ask our link partner to transmit LACPDU packets; the default is slow or 0:

slow or 0: request the partner to transmit LACPDUs every 30 seconds

fast or 1: request the partner to transmit LACPDUs every 1 second

bond-xmit-hash-policy

layer2 (default)

layer2+3

layer3+4

The layer2 and layer2+3 options are 802.3ad compliant; layer3+4 is not fully compliant and may cause problems on some equipment/configurations.

bond-slaves

bond-master

hwaddress ether xx:xx:xx:xx:xx:xx

The MAC address xx:xx:xx:xx:xx:xx must be replaced by the hardware address of one of the interfaces being bonded, or by a locally administered address (see this Wikipedia page for details). If you don't specify the Ethernet address it will default to the address of the first enslaved interface. This could be a problem, as it is possible for various reasons for the hardware address of the bond to change, and this may cause problems with other parts of your network.
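The options above fit together in /etc/network/interfaces (the pre-18.04 ifupdown style) roughly as follows. This is a minimal sketch, not my exact configuration: the eth0/eth1 names, addresses and gateway are placeholders to adjust for your system.

```
# /etc/network/interfaces sketch (ifupdown-era, pre-18.04)
# interface names, addresses and gateway are placeholders
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate fast
    bond-xmit-hash-policy layer2+3
    bond-slaves eth0 eth1
    hwaddress ether xx:xx:xx:xx:xx:xx
```

As noted above, replace the hwaddress placeholder with the MAC of one of the bonded interfaces or a locally administered address.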

Bonding Benefits and Limitations

Benefits

Increased Ethernet speed/bandwidth (with limitations)

Link Redundancy (not a feature of particular interest to me)

Limitations

More complex setup

Not as fast or flexible as a single faster Ethernet connection: each transport connection only uses one media link (to prevent packet reordering), so the maximum speed of any one connection is limited to the speed of one bond lane

Modern hard disks are generally faster than a 1 Gb/s Ethernet connection, and SSDs significantly so. Yet many individual data demands are significantly slower, e.g. video 0.5 to 30 Mb/s, audio 100 - 400 kb/s. Furthermore, most external Internet connections are still normally slower than 100 Mb/s, with only larger offices having 1 Gb/s or more bandwidth. So the biggest speed/time impact is when copying files across a speed limited Ethernet LAN connection, or where a server provides information to multiple clients. Ethernet bonding can help improve server performance by sharing multiple simultaneous client connections between the bonded Ethernet connections.

Wifi quoted speeds are particularly bogus / optimistic. The quoted speed is usually the best possible speed achievable. Wifi bandwidth is often shared between many simultaneous users, with each of n users often getting at best a 1/n share of the bandwidth. There are also latency and interference issues with Wifi that can affect performance. Wired LAN Ethernet connections tend to provide more reliable, consistent performance. That being said, Wifi is convenient and in most, but certainly not all, cases fast enough.

Full Network Setup 18.04

This is the setup for my new server with 4 built-in Intel NICs, running Ubuntu 18.04. To avoid the performance limitation of the Ethernet bandwidth of a single NIC, I propose to use 2 NICs in a bonded configuration, use bridging to allow server virtual machines access to the NICs, and use the remaining 2 NICs effectively as a switch.

To check available interfaces and names: "ip link"

Netplan does not require the bridge utilities to be loaded; however these utilities can be used on the bridge: "sudo apt install bridge-utils"

Under netplan the bonded configuration does not need the ifenslave utility loaded, as this utility depends upon ifupdown. Do not install it ("sudo apt install ifenslave").
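A netplan bond-plus-bridge arrangement can be sketched roughly as below. The interface names and the DHCP choice are assumptions for illustration; adjust them against the output of "ip link".

```
# /etc/netplan/01-netcfg.yaml sketch -- interface names and
# addressing are placeholders, adjust to the "ip link" output
network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0f0: {}
    enp1s0f1: {}
    enp2s0f0: {}
    enp2s0f1: {}
  bonds:
    bond0:
      interfaces: [enp1s0f0, enp1s0f1]
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        transmit-hash-policy: layer2+3
  bridges:
    br0:
      interfaces: [bond0, enp2s0f0, enp2s0f1]
      dhcp4: true
```

Apply with "sudo netplan try" (which reverts on timeout if the new configuration locks you out) or "sudo netplan apply".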

The new server board does not have any rear USB3 ports. No great loss, I have never used them yet.

As instructed in the system-created yaml file "/etc/netplan/50-cloud-init.yaml", create the file "/etc/cloud/cloud.cfg.d/99-disable-network-config.cfg" ("sudo vim /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg") and add the line "network: {config: disabled}"

The qemu defined networks can be listed with the command: "virsh net-list --all"

You can list networks with "networkctl list"

Some helpful commands and comments:

To see bridge status information: "brctl show"

To see bond setup status: "cat /proc/net/bonding/bond0"

To list network configuration: "ifconfig", "ip a", "ip route"

Kernel IP routing table: "route"

NetworkManager is not required on a server, as the base ifconfig and related commands provide full functionality. NetworkManager may conflict with the base configuration. Remove it with "sudo apt remove network-manager". (To see information on system network start-up and ongoing status: "sudo systemctl status NetworkManager" or, more comprehensively, "journalctl -u NetworkManager")

Time Date Related Setup

Setup NTP server

The NTP server setup is quite simple; I used the reference Setting up NTP on Ubuntu 14.04. I replaced the pool servers with my local ones: "sudo vim /etc/ntp.conf".
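The change amounts to swapping the server lines in /etc/ntp.conf. The regional pool hostnames below are examples only (the Australian pool is an assumption), not the entries from my actual file:

```
# /etc/ntp.conf fragment -- replace the default Ubuntu pool/server
# lines with regional ones; these hostnames are examples only
server 0.au.pool.ntp.org iburst
server 1.au.pool.ntp.org iburst
server 2.au.pool.ntp.org iburst
server 3.au.pool.ntp.org iburst
```

After editing, restart with "sudo service ntp restart" and check peers with "ntpq -p".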

I basically followed the Havetheknowhow instructions, but instead of "sudo apt install gnome-core", used "sudo apt install xfce4 xfce4-goodies". I have been using vnc4server, not tightvncserver. Also in ~/.vnc/xstartup, only:
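The xstartup contents are not reproduced in these notes; a common minimal form for an xfce session is something like the following sketch:

```
#!/bin/sh
# ~/.vnc/xstartup sketch for an xfce session -- a typical minimal
# form, not necessarily the exact original file contents
xrdb "$HOME/.Xresources"
startxfce4 &
```

The file must be executable ("chmod +x ~/.vnc/xstartup").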

The xfce screensaver seems to default to on and uses significant system resources, and is basically unnecessary on a headless server. To disable it, perform the following:

In the xfce desktop go to "Applications Menu > Settings > Screensaver" and disable the screensaver, then from the "File" menu choose "Kill Daemon".

Then go to "Applications Menu > Settings > Session and Startup" and un-check "Screensaver (launch screensaver and locker program)" in the "Application Autostart" tab.

(The xfce screensavers actually look quite nice, and may make sense on a standard desktop install.)

The xfce default shell seems to be sh (/bin/sh); I prefer bash (/bin/bash). To check the current shell, type "echo $SHELL". To use bash simply type "bash". To make it permanent add the line "exec /bin/bash" to the end of ~/.profile ("vim ~/.profile"). You will need to restart the VNC server for this to take effect.

Some other important tips:

To start the server: "vncserver -geometry 2200x1340". (I have 2 preferred geometries, one for smaller screens (1880x1040) and one for larger (2200x1340).)

To stop server "vncserver -kill :1" or :2

The server log files are stored in ~/.vnc: "less ~/.vnc/KPTreeServer:1.log" or ":2". (A log file may contain a number of errors and warnings; however this does not necessarily mean the vncserver will not operate correctly.)

The .pid files in ~/.vnc generally show which VNC servers are currently running; performance can be checked by viewing the log file. The running vncserver process(es) can also be checked with the command "ps -A | grep vnc"

The vncserver startup configuration file: "vim ~/.vnc/xstartup"

I set up cron to run the following script at boot: "vim ~/Myscripts/StartVNC.sh", StartVNC.sh:
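The script body is not reproduced in these notes; a minimal sketch, assuming displays :1 and :2 with the two preferred geometries mentioned above:

```shell
#!/bin/bash
# ~/Myscripts/StartVNC.sh sketch -- starts two vncservers with the
# two preferred geometries; the display numbers are assumptions
vncserver :1 -geometry 1880x1040
vncserver :2 -geometry 2200x1340
```

With a matching crontab entry ("crontab -e") along the lines of: @reboot ~/Myscripts/StartVNC.sh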

(I elected not to use the systemd setup described in the DigitalOcean instructions, as I normally run 2 vncservers with different geometries to allow better performance on tablet/laptop/desktop computers.)

Gnome file manager; package: nautilus. (CLI: "gksudo nautilus &", but be very careful if using as root...)

Gnome disk utility; package: gnome-disk-utility. (CLI: "gksudo gnome-disks &", but be careful if using as root...)

Gnome disk usage utility; package: baobab. (CLI: "baobab &")

SWAP Files

As I have a computer with enough memory I see no need or value in a SWAP partition. In fact, as I am using an SSD for the system drive, SWAP is a concern for the reliability of this drive. The following is a list of methods to check and disable the SWAP function.

To prevent a SWAP partition being mounted at boot, comment out the swap partition in /etc/fstab: "sudo vim /etc/fstab". (Another option is to instead use the "swapoff -a" command in a boot cron job. This allows the "swapon -a" option to be used later.)

The command "free -hw" shows the current memory status.
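In summary, the checks and changes can be sketched as follows (the fstab line shown is a generic example, not my actual entry):

```
# check current memory and swap usage
free -hw
swapon --show
# turn all swap off for the current session (reversible: swapon -a)
sudo swapoff -a
# make it permanent by commenting out the swap line in /etc/fstab, e.g.
#   #UUID=xxxx-xxxx  none  swap  sw  0  0
```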

Some links:

How do I disable swap? This article also refers to another that explains why turning off SWAP, even with a lot of RAM, may not be best; however this was written at a time when SSDs were not common and system RAM was in general significantly lower.

How To Add Swap Space on Ubuntu 16.04. It is interesting that this article also warns against the use of SWAP partitions on SSD storage. It also mentions the swappiness and vfs_cache_pressure settings.

Other Setup Tips

BASH Customisation

The standard BASH colour configuration uses a blue colour for listing directories (ls), which is difficult to read on a black background. While this is the "standard colour", due to the impracticality I have decided to change it.

The personal BASH user configuration file is ~/.bashrc. Simply add the following line to this file: "LS_COLORS='di=1;32' ; export LS_COLORS". The code 1;32 is for a light green colour.

The .bashrc file also has a number of other interesting "features" and options, such as aliases and colour prompts. If you turn on the colour prompt option (force_color_prompt=yes), again the dark blue colour may be difficult to read, so I change the prompt colour code from 34 to 36.
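Collected as a fragment, the two changes might look like this. The PS1 line is the stock Ubuntu colour prompt with the path colour changed from 34 (blue) to 36 (cyan); treat it as a sketch rather than an exact copy of my file:

```shell
# ~/.bashrc fragment: readable "ls" directory colour on a black
# background (1;32 = bold light green)
LS_COLORS='di=1;32' ; export LS_COLORS

# colour prompt with the working-directory colour changed 34 -> 36
force_color_prompt=yes
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;36m\]\w\[\033[00m\]\$ '
```

Reload with ". ~/.bashrc" as described below.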

To update the terminal, without logging off type: ". ~/.bashrc" or "source ~/.bashrc". The command "exec bash" will also work.

BASH History Customisation and Use

VIM Customisation

I use the VI (or VIM) editor. It comes standard on most Linux and UNIX distributions, or can otherwise be installed. A key feature I configure is the VIM colour scheme, as the standard colour scheme does not work well with the black background terminal windows I prefer to use. Simply create the file ~/.vimrc in the home directory ("vim ~/.vimrc") and add the line ":colorscheme desert".

The different VIM colour scheme definition files are located at "/usr/share/vim/vim74/colors" (the vim74 directory name varies with the installed VIM version)

VIM Text Editor

A powerful text editor, standard in most Linux distributions and available in Windows. It needs some time and effort to learn though, particularly if moving from a graphical user environment.

There are two (2) main modes: Command mode and Insertion mode. You normally only type text in Insertion mode. The Esc (escape) key enters Command mode and the i or INS (insert) keys return to Insertion mode.

If like me you use a keyboard without an insert key, e.g. Microsoft Surface, you can get into Insertion mode directly from Command mode by typing i. When you open VIM you are in Command mode, so simply type i (or insert) to get into Insertion mode.

To copy, cut and paste:

First go into command mode (ESC or CTRL-[)

Move using the cursor keys to the place to start the highlight, hit the v key and highlight the area to be copied (or cut)

Key y to copy, or d to cut

Move to the place to paste; key P to paste before the cursor or p to paste after

Rsync - File synchronisation, full featured file copy

Symlinks

A symlink is a soft link from one directory location to another directory location or file (links may be soft or hard; I am only interested in the soft link). It effectively allows a directory tree to be made from different non-structured directory locations, even across partitions.

Simple use is: 'ln -s "path/directory or file" "path/symlink name"', where option -s creates a symlink. See "ln --help" or "man ln" for more information. Another good reference is The Geek Stuff: The Ultimate Linux Soft and Hard Link Guide (10 Ln Command Examples)

To remove symlink 'rm "path/symlink name"'

To list symlink 'ls "path/symlink name"'

To list symlink directory contents 'ls "path/symlink name/"'

Symlink ownership is not particularly important, as the symlink itself has full permissions (777) and file access is determined by the real file permissions.
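A quick worked example in a scratch directory (the paths are illustrative only):

```shell
# soft-link demo in a scratch directory
rm -rf /tmp/symlink-demo
mkdir -p /tmp/symlink-demo/real
echo "data" > /tmp/symlink-demo/real/file.txt

# create a symlink pointing at the real directory
ln -s /tmp/symlink-demo/real /tmp/symlink-demo/link

# "ls -l" on the link shows where it points; a trailing slash lists
# the contents of the target directory through the link
ls -l /tmp/symlink-demo/link
ls /tmp/symlink-demo/link/
cat /tmp/symlink-demo/link/file.txt

# removing the symlink leaves the real directory untouched
rm /tmp/symlink-demo/link
```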

KVM

Use the built-in clone facility: "sudo virt-clone --connect=qemu://example.com/system -o this-vm -n that-vm --auto-clone", which will make a copy of this-vm, named that-vm, and takes care of duplicating storage devices.

Tripwire checks system files for any changes and alarms / alerts upon changes.

Set Up an Ubuntu APT Cache

The apt-cacher-ng package looks to be a self-contained apt caching server. Basically the apt cacher stores all the relevant apt update and upgrade related files and acts as a proxy server to multiple clients. A handy feature to improve speed and reduce Internet bandwidth where a virtual machine server is used with multiple clients. There is another package called apt-cacher, but it depends upon the installation of a separate webserver.

There is also apt-mirror, which retrieves all packages from the specified public repository(s), whereas apt-cacher only retrieves each package when called and stores it for subsequent use by other clients. APT caching looks the way to go and apt-cacher-ng the best overall option. I installed apt-cacher-ng on the VM server, not a VM client. The clients are set up to obtain their apt updates and upgrades via the server.
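On each client the change can be a one-line apt configuration pointing at the cache. The server address below is a placeholder; 3142 is apt-cacher-ng's default port:

```
# /etc/apt/apt.conf.d/00aptproxy on each client -- the server
# address is a placeholder for illustration; 3142 is the
# apt-cacher-ng default port
Acquire::http::Proxy "http://192.168.1.10:3142";
```

After adding this, a normal "sudo apt update && sudo apt upgrade" on the client fetches packages via the cache.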

To check for a running process: "ps -A | grep open" for openvpn (or "ps -A | grep del" for deluge)

To change time zone from command line: "sudo dpkg-reconfigure tzdata".

Networkd -> systemd-networkd

Resolved -> systemd-resolved

Some related links

Ubuntu Network Setup Links

Links relating to bridged and bonded Networking

A bridged network allows different networks to be connected, both physical, like NICs or Wifi, and virtual, allowing a virtual machine to connect to a physical network and even be assigned a LAN IP address. Bonding allows physical networking devices such as NICs or Wifi to be bonded to allow increased bandwidth or redundancy. Sadly there seems to be a lot of information out there that is either for older versions of software or for other purposes.

Disclaimer: All data and information provided on this site is for informational purposes only. kptree.net makes no representations as to accuracy, completeness, currency, suitability, or validity of any information on this site and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its display or use. All information is provided on an as-is basis. kptree.net does not collect any personal information about its visitors.