MakerForce (https://makerforce.io/, Ghost 2.1, last build Sat, 09 Mar 2019 20:27:45 GMT)

https://makerforce.io/make-your-own-linux/ (Sun, 09 Sep 2018 08:51:40 GMT)

Building your own Linux OS from scratch is no dark magic, believe me! As long as you feel comfortable using a command line, it isn't such a daunting task, only requiring a fair amount of patience.

We'll be setting up an environment, compiling the kernel, userspace tools, a root filesystem and then test booting it. I'll assume you already run Linux on a machine, or in a Virtual Machine. Let's dive right in!

Environment

We'll first install the programs necessary for building. All these tools should be available in the major Linux distributions, but I'll only give the commands to install them on Debian/Ubuntu and Alpine Linux.

After downloading the source, we can again use the menu-based configuration tool to customize our Busybox build. One thing you must enable is "Build static binary" under Settings, to ensure that we only depend on Busybox. Next, we can build it:

$ make -j12
$ make install

Busybox will produce a root filesystem in _install/ for us. Take a look; it's mostly just symbolic links to busybox:

$ ls -l _install/bin/
$ cd ..

Root filesystem

Linux can boot into a root filesystem on a hard disk, or boot using an initramfs. An initramfs is an archive that the kernel extracts into memory to provide basic utilities that can be used for system maintenance.

Linux will look for a file /init when it boots and execute it. Linux entrusts this script with booting the rest of the system. Below is an example of a very basic script that sets up the basic mounts for a working system and then starts the Busybox init daemon.
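The script itself did not survive in this extract; a minimal sketch of what such an /init script could look like (assuming a static Busybox install providing mount and /sbin/init):

```sh
#!/bin/sh
# Mount the pseudo-filesystems most userspace tools expect.
mount -t proc none /proc
mount -t sysfs none /sys
mount -t devtmpfs none /dev

# Hand over to the Busybox init daemon; PID 1 must never exit.
exec /sbin/init
```

Remember to mark it executable (chmod +x init) before packing it into the initramfs, or the kernel will panic when it fails to run /init.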

Now we can go ahead and build the entire root filesystem and busybox into an image. This image is packed using the CPIO file format and then compressed, but Linux provides tools to hide all that for us.
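The exact commands aren't preserved here, but a common way to do the packing by hand (assuming the Busybox _install/ directory from the previous step, with GNU cpio and gzip available) is:

```shell
# Pack the root filesystem as a newc-format CPIO archive and compress it.
cd _install
find . -print0 | cpio --null -o --format=newc | gzip -9 > ../initramfs.cpio.gz
```

Alternatively, the kernel build can generate the archive itself if you point CONFIG_INITRAMFS_SOURCE at the directory.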

Test
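The original test commands aren't included in this extract; one common way to boot-test such a build is QEMU (a sketch; assumes qemu-system-x86_64 is installed and that the kernel image and initramfs paths below match your build):

```sh
qemu-system-x86_64 \
    -kernel arch/x86/boot/bzImage \
    -initrd initramfs.cpio.gz \
    -nographic -append "console=ttyS0"
```

Using -nographic with console=ttyS0 routes the kernel console to your terminal, so you can interact with the Busybox shell directly.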

If it boots successfully to a shell, you've successfully compiled the kernel and initial filesystem from scratch!

I put all these commands into a Git repository here, nicely wrapped up in scripts.

In the upcoming second part to this article, I will show you how to install a bootloader and put your tiny OS onto a flash drive to be bootable.

https://makerforce.io/setting-up-a-ad-hoc-network/ (Wed, 22 Aug 2018 23:59:53 GMT)

Ad-Hoc networks are useful when you just need a connection between a few machines with wireless networking and no routers. It's an underused feature, but it sometimes comes in handy.

Setting up an Ad-Hoc network is pretty simple. In macOS, open the wireless panel in the menu bar and select "Create Network..."

After that, set your network name and choose any channel.

Your menu bar should then show the Ad-Hoc icon, and you'll be connected to your Ad-Hoc network.

From here on, you can connect other devices to the network the typical way.

https://makerforce.io/networking-on-the-cheap/ (Mon, 20 Aug 2018 13:47:53 GMT)

It is not hard to get your hands dirty with computer networking basics and operating networking equipment. I've been running my own home network for the past 4 years, messing around with IPv6, VLANs and multiple networks, all without expensive racked routers or switches.

Router

The most important device you'll need is a router. Any computer with at least one Ethernet port is already a router. You can even use an old laptop as a router. (It comes with a free keyboard and mouse too!)

Alternatively, if you have an old consumer router and access point combination (wireless router) that comes from your ISP, it too can be used as an advanced router.

Usually, your computer or wireless router won't support advanced features like VLANs and will have restrictive configuration options, so software is the next step in bending these devices to your will. There are a ton of router operating systems that you can install onto these devices, or, if you want to spend the extra effort, you can also use a Linux distribution.

Here are some examples of wireless router operating systems you can flash onto your router. In the process, you'll also void your warranty and may end up with a bricked device, so be careful. Sadly, some routers aren't supported by these firmwares, so when you get your next router, consider checking for compatibility.

pfSense, OpenWRT and DD-WRT come with a web user interface for easy configuration and monitoring, but if you want to take your router to the next level, you can use your favourite Linux distribution (without a desktop) and Vim/Emacs.

For computers, plug your WAN into the Ethernet port and configure it. I bought an Intel box PC from Taobao and have been using it as my main router for quite a while; it's pretty okay.

Switch

Another thing you'll definitely need if you want to experiment with more than just one network is a network switch. For those of you using a wireless router, it fortunately comes free of charge! Based on my limited experience with wireless routers, some of the switches in them are reconfigurable or run in software, and those are amazing because you get to configure VLANs.

If you plan to use a computer, try to get a smart/managed switch because that will give you VLANs and other QoS management functions. I got myself a TP-LINK SG108E Easy Smart Switch via Carousell for a pretty good price ($30).

If your computer has only one Ethernet port, you can actually use a managed/smart switch to connect your computer to WAN and LANs at the same time using VLANs which is also pretty nifty.
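As a sketch of that single-port setup on a Linux-based router (interface names and VLAN IDs below are made up for illustration; your switch must tag the same IDs on its trunk port):

```sh
# eth0 is the single physical port, trunked to the smart switch.
ip link add link eth0 name eth0.10 type vlan id 10   # WAN VLAN
ip link add link eth0 name eth0.20 type vlan id 20   # LAN VLAN
ip link set eth0.10 up
ip link set eth0.20 up
```

Each virtual interface then behaves like a separate physical port, so the router can treat one VLAN as its WAN uplink and another as the LAN.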

If you need more switches, old wireless routers from Carousell or thrift shops are better value than standard switches because they're mass-produced in larger quantities. Another side effect is that you also get more access points to play with!

Cables

Finally, cables. My rule of thumb is: don't buy new cables in local stores. Find cables on Carousell or online shops, and if you plan to have more than five devices plugged in, you can even get yourself a reel of CAT5e cable, a crimp tool and Ethernet connectors to make your own. In my opinion it's more fun to accidentally crimp them incorrectly and watch them fail, and it may even save some money if you're looking on the right websites. Beyond the cables themselves, you will also need to figure out how to run them from your ISP's terminal to your router and other devices.

With consumer routers and switches and free software, you can gain the freedom of messing with networking for a really low cost. My router box and switch cost under $200, and with pfSense running on it, I have a really configurable setup. pfSense gave me easy access to a static IPv6 range from Hurricane Electric, and makes setting up OpenVPN and port forwarding to servers easy. It also lets me run my server network separate from my home network and set up firewall rules between them.

Have fun!

https://makerforce.io/infrastructure-2017-configuring-coreos-and-kubernetes-part-1/ (Sat, 07 Oct 2017 02:29:28 GMT)

With the new infrastructure, I wanted the entire setup to be reproducible from a set of configuration files and scripts. This would mean that I can restore a fresh state anytime in the future, and provide a way to track changes in configuration. This also means that I can quickly spawn another Kubernetes setup to test new features safely.

Before I created installation automation scripts, I spent a while learning about how Kubernetes works by manually running the generic multi-node scripts Kubernetes provides, and failing repeatedly. A while back Sudharshan gave me one of his old OEM desktops, and it became really useful for testing Kubernetes installs. CoreOS is best installed using an Ignition configuration file. It's a JSON file with a specific format that CoreOS reads on first boot, installing the files and configurations specified. Usually, you write these files in YAML and then transpile them to JSON with a special tool. A great thing about CoreOS is that its ISO images boot from RAM (and the ISO image itself can read an Ignition configuration), and they include a tool that downloads the image directly from the web and writes it to a disk together with the Ignition configuration.
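For illustration, a Container Linux Config in YAML might look like this before being transpiled to Ignition JSON with the ct (config transpiler) tool; the user name, SSH key and file contents are placeholders:

```yaml
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - "ssh-rsa AAAA... user@example"
storage:
  files:
    - path: /etc/hostname
      filesystem: root
      mode: 0644
      contents:
        inline: node1
```

Running something like `ct < config.yml > config.ign` produces the JSON that CoreOS reads on first boot.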

I spent time playing with the CoreOS configuration options. Testing it, however, is a slow and painful process, but I learnt a lot from it. My scripts are on GitHub but are mostly not ready.

It's been months since I last experimented with CoreOS, as school responsibilities took over. A lot has changed, including the deprecation of the coreos-kubernetes repository in favour of Tectonic. In the follow-up, I'll describe the final setup I will be using on my production cluster.

https://makerforce.io/quick-start-to-https-with-caddy/ (Sun, 27 Aug 2017 13:57:00 GMT)

Caddy is an easy-to-use web server and reverse proxy. You can use it to enable HTTPS on your self-hosted app with little effort.

To start off, first download caddy for your platform. Place the executable in a nice folder. We'll call that your working directory.

Now in the same folder, you will need to write a Caddyfile, which is just a text file. Open your text editor and paste this:

:2015 {
proxy / localhost:8080
}

Save it as Caddyfile without any file extension.

This will start a normal HTTP server at port 2015 and proxy all requests to your app at port 8080.

Now, in the terminal or command prompt, cd into the working directory and run Caddy. On Windows you would type caddy.exe, and on macOS or Linux, ./caddy.

Next, you have to get a domain to point to your server. You can get free domains from freenom.tk, or use your dynamic DNS provider. You can test that your domain works by visiting your-domain.com:2015 over mobile data or from a different network.

After that, enabling HTTPS is really simple. Open your Caddyfile and modify it to look like this:

your-domain.com {
proxy / localhost:8080
}

Save the file and restart Caddy. You may have to run Caddy as root (with sudo) or in an administrator command prompt. Caddy will ask for your email the first time you run it, and then automatically verify that you own the domain and obtain a signed HTTPS certificate for you! Caddy makes it so convenient to set up HTTPS.

https://makerforce.io/infrastructure-2017-dns-setup/ (Fri, 23 Jun 2017 16:29:55 GMT)

Previously, DNS has been a pain to maintain. I was using a cloud DNS service, so for every subdomain, I would have to log on to CloudNS and use their web interface to update DNS records. This made it hard to switch DNS providers and easily edit records. Their three-domain limit also made it impossible to add my other domains, so I had to run another DNS server. I decided to run BIND, but I had to run a service to update my zone files whenever my IP address changed, because I didn't purchase a static IP.

To simplify my complex setup, I decided to host all my DNS locally and solve the dynamic IP problem at once. I made the switch from BIND to CoreDNS due to its extensibility. To ensure that parent DNS servers always point to the right IP, I got a couple of free domains (an example is "ns.makerforce-infrastructure.gq") and pointed them to HE's DNS service, where I set up dynamic DNS updates. In my local zone files, I could then use CNAME records pointing to the same dynamically updated DNS record, so the address only needs updating in one place. However, CNAME records can't be put on the zone apex, so I solved this by creating a CoreDNS plugin that replaces CNAME records on the zone apex with the resolved records. This means a request for "makerforce.io" returns the actual IP address instead of a CNAME record, keeping it compliant with DNS standards. And since all my records are stored locally, I can painlessly edit them, rather than having some of them hosted on a cloud service.
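As an illustrative fragment of such a zone (the record names besides makerforce.io and the free nameserver domain are placeholders; SOA and NS records are omitted):

```text
$ORIGIN makerforce.io.
@     IN  CNAME  ns.makerforce-infrastructure.gq.  ; apex CNAME, flattened by the custom CoreDNS plugin
blog  IN  CNAME  ns.makerforce-infrastructure.gq.  ; ordinary CNAMEs are fine below the apex
```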

After getting the hardware and installing embedded pfSense to a flash drive comes configuration. My initial intention was to have my servers and home network (which guests use) on separate VLANs. However, I quickly realised that enabling VLANs caused poorer network performance, so I went back to a single network and used static DHCP allocations.

But I was still getting speeds lower than 500Mbps, and my CPU was running at 100%. While messing with settings, I found an odd solution: enabling PowerD in System > Advanced > Miscellaneous. With it enabled, I could finally get close-to-gigabit speeds on wired clients!

My guess would be that PowerD allowed the CPU to run at higher clock rates.

After using this machine for a few days, I'm satisfied with the performance it delivers, for only $120 SGD! I'd recommend it as a low-cost, low-power setup for building a home server or router.

The case for this PC is not rackmounted, so I created a small shelf using a slab of wood and L brackets, and used zipties to secure the router, power brick and switch to the shelf. It now sits in my rack happily eating packets.

Next infrastructure upgrade: migrations to CoreOS! Stay tuned.

https://makerforce.io/infrastructure-2017-router-hardware/ (Wed, 19 Apr 2017 18:02:57 GMT)

October last year, I switched my routing to a virtual machine running pfSense, in the hopes of having better control over my home network. Turns out, many hiccups have occurred since the move: issues with OpenVPN (which I have since disabled), Linux bridges being reassigned after software updates, and other seemingly random issues. The virtual network card also caused a reduction in maximum throughput, saturating at 200Mbps instead of the 800Mbps previously achieved on the RT-N56U.

Since then, I've also been wanting to make the switch from services (like this blog and GitLab) running in user accounts and virtual machines to containers. Containers are isolated environments for processes to run in, providing the isolation of a virtual machine with close to native performance.

So here's the start to a series on the upgrade of our infrastructure to a new setup powered by containers! I'll also be documenting progress and code on GitHub.

To start off the upgrade, I needed a better router (because my siblings were complaining about the frequent loss of internet connectivity). A friend of mine suggested a N3050 box he discovered that had two Ethernet ports built-in, and both of us bought one each. For the price of around $90 SGD without RAM and the added cost of shipping from China, it was well priced. It has a sturdy aluminium casing around it, also for heat dissipation, two HDMI ports and six USB ports. It's powered off an external power brick.

https://makerforce.io/linuxbash-reverse-proxying-a-webapp/ (Mon, 20 Mar 2017 08:00:00 GMT)

In this blog post we will be setting up nginx to reverse proxy your webapp. You'll need nginx set up, and your webapp running and listening on a known port.

Let's edit the default site. Here's the default configuration, with fewer comments:

sudo nano /etc/nginx/sites-enabled/default

server {
    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm index.nginx-debian.html;

    server_name _;

    include hhvm.conf;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ /index.php?$args;
    }
}

Reverse proxying in nginx is done with the proxy_pass directive (a configuration option). The documentation describes the syntax as:
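The quoted documentation text was lost in this copy; nginx's reference declares the directive as `proxy_pass URL;`, valid in location context (as well as if-in-location and limit_except blocks). A minimal example, with the backend port as an assumption:

```nginx
server {
    listen 80;

    location / {
        # Forward every request to the webapp listening on port 8080.
        proxy_pass http://localhost:8080;
    }
}
```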

https://makerforce.io/linuxbash-running-a-webapp/ (Mon, 20 Mar 2017 08:00:00 GMT)

In this blog post we will be setting up a systemd unit for our webapp. systemd helps manage background system services and mount points. A systemd unit is the definition for such a service. systemd can also do dependency management.

There are many types of units in systemd, but I will go through the most relevant unit, the service unit.

Service units define how to start, stop and reload the process, when to restart the process in the case of an error, dependencies the service requires to have started, and many other options.

Take a look at the manual page for systemd service unit files:

man systemd.service

Unit files are placed in /etc/systemd/system/, and there are three main sections [Unit], [Install] and [Service].

[Unit] defines information about the unit that is independent of the type of unit, such as its description and dependencies. This is an example from MariaDB:
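The original example did not survive extraction; MariaDB's [Unit] section looks roughly like this (reconstructed from memory, details vary by version):

```ini
[Unit]
Description=MariaDB database server
Documentation=man:mysqld(8)
After=network.target
```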

Most user-defined unit files you create will have the option WantedBy=multi-user.target. This option would cause the unit file to be started when the system enters the multi-user target.

[Service] is the section where you define how to start and restart a service. There are multiple types of mechanisms that services can be monitored and restarted, but we will stick with the simplest type of service.

This is a modified extract from the unit file that commongoods uses to run its webapp. Running services as root can open the system up to security issues, so the User and Group options ensure that the application runs as the user myuser.

WorkingDirectory specifies the starting directory to run the service in. It's similar to running cd in bash before running a command.

Environment sets environmental variables. They are another way, besides by arguments, you can pass data to an application. In this example, I set the variable PORT to be 8011 and within my application I would look up this environmental variable to determine the port to listen on. You can read more about environmental variables at the Arch Linux wiki.

ExecStartPre are some commands to run before the actual service. These can be initialisation commands. I run npm install in ExecStartPre to ensure that all the Node.js dependencies are installed.

ExecStart, when Type=simple, would be the command that is run in the background. Here, I am running npm start to run the Node.js application in the background.

Restart=always restarts the application when it exits. RestartSec defines the number of seconds to wait before retrying.
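Putting the options described above together, a complete unit file along these lines (the file paths are assumptions; the user name, port and npm commands come from the description above) could be saved as /etc/systemd/system/mywebapp.service:

```ini
[Unit]
Description=Example Node.js webapp
After=network.target

[Service]
Type=simple
User=myuser
Group=myuser
WorkingDirectory=/home/myuser/webapp
Environment=PORT=8011
ExecStartPre=/usr/bin/npm install
ExecStart=/usr/bin/npm start
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

After editing a unit file, run systemctl daemon-reload before starting the service with systemctl start mywebapp.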

Now you can put together your first unit file! We will be using a Node.js webapp as an example. First, let's install Node.js and the webapp.

Press Control-C to stop the webapp. (Control-C sends a SIGINT to the process, asking it nicely to exit. Signals are another way to communicate with running processes, and there are many other types of signals.)

https://makerforce.io/linuxbash-getting-wordpress/ (Mon, 06 Mar 2017 00:01:00 GMT)

Now that we have nginx, HHVM and MariaDB installed, we can get into installing WordPress! WordPress is a comprehensive blogging platform and content management system (CMS) written in PHP. (This blog runs on Ghost, which is good if you're only blogging.) If you're building a content-driven website, WordPress is something to consider.

As mentioned in the previous posts, HHVM and nginx give us a faster-performing PHP webserver compared to a default LAMP installation. With some more effort, HHVM and nginx can be tuned even further for better performance, but that's an article for another day.

WordPress is known for its ease of installation. First, let's download WordPress:

Inside the resulting wordpress folder, you can see that WordPress is just a bunch of PHP files.

To install it, simply copy the relevant files into your webroot:

sudo cp -r ~/wordpress/* /var/www/html/

The asterisk in the command is a shell expansion. Shell expansions let us type less. The asterisk in this command tells the shell to search for the files in ~/wordpress/ and replace ~/wordpress/* with the list of files it finds, which is equivalent to typing out each file name yourself.
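You can watch the shell do this expansion with echo, which just prints its arguments after expansion (using a throwaway directory so nothing real is touched):

```shell
rm -rf /tmp/wp-demo && mkdir -p /tmp/wp-demo
touch /tmp/wp-demo/index.php /tmp/wp-demo/wp-login.php
echo /tmp/wp-demo/*
# prints: /tmp/wp-demo/index.php /tmp/wp-demo/wp-login.php
```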

You can call the user whatever you wish, and do use a different password. All the SQL queries should run successfully. You can now proceed with the installer.
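The queries referred to above weren't preserved in this copy; a typical WordPress database setup in the MariaDB prompt looks like this (the database name, user and password are placeholders):

```sql
CREATE DATABASE wordpress;
CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'use-a-strong-password';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost';
FLUSH PRIVILEGES;
```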

Click on "Run the install", and set up your blog right away!

Fill up your information, remember the password, and click on "Install WordPress". You're done! Visit the admin page (http://localhost/wp-admin/) and log in to the admin panel to get started.

https://makerforce.io/linuxbash-installing-mariadb/ (Mon, 06 Mar 2017 00:00:00 GMT)

In this blog post we will be installing MariaDB, continuing off the previous blog post where we installed HHVM and nginx. MariaDB is a fork of MySQL dedicated to keeping MariaDB open. It is fully compatible with MySQL, apart from the extra features it introduces.

As mentioned in the previous article, this imports the public keys used to sign the packages published in the repository, and then adds the repository to the list of repositories for Ubuntu to look up.

sudo apt update
sudo apt install mariadb-server

During the installation, the command will prompt you to set the password for the root database user. Set one and remember it. DigitalOcean has an article on changing it if you lost access as the root user.

To secure your MariaDB installation, you should run:

mysql_secure_installation

This ensures that your database is secure. Now you can log in to your database as root with mysql -u root -p. The -p flag makes MariaDB prompt you for a password.

https://makerforce.io/linuxbash-running-php/ (Mon, 27 Feb 2017 00:01:00 GMT)

In this blog post I'll guide you through installing HHVM (HipHop VM) to run Hack/PHP on a web server. I'll be using nginx and Debian/Ubuntu. HHVM requires a 64-bit operating system, so be sure to download the 64-bit edition of Ubuntu.

Firstly, install nginx. nginx is a high-performance web server and load balancer. You will be using the load balancing feature in a future article, but for now, nginx will help to serve your static files (typically CSS and JS) and pass your dynamic files (PHP files, in this case) to HHVM to process.

sudo apt install nginx

Here, you are using Ubuntu's package manager apt to install the package nginx. You can search for packages on the command line by doing apt search <query>.

To ensure nginx starts on bootup, you need to enable it.

sudo systemctl enable nginx
sudo systemctl start nginx

The second command starts nginx immediately, so you do not need to reboot right now.

Your Linux machine is now a web server! You can replace the default page with your own. The default location that stores the files is /var/www/html/. The /var/ directory in Linux is for variable files, like database files, that are changed either by programs or by you.

cd /var/www/html/
ls

You can see that the package manager installs some files to show the default HTML page.

sudo nano index.html

This opens up the nano text editor to edit the file index.html. The index file is the file that is served up when a URL points to a folder. For example, http://localhost/some-folder/ would serve up /var/www/html/some-folder/index.html if it exists.

You'll notice the use of the sudo command here to open the file as root. This is because the directory is owned by the user root:
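The directory listing that belonged here was lost in extraction; checking the ownership yourself would look something like this (output is illustrative and varies by system):

```shell
ls -la /var/www/html/
# drwxr-xr-x 2 root root 4096 Feb 27 00:00 .
# drwxr-xr-x 3 root root 4096 Feb 27 00:00 ..
# -rw-r--r-- 1 root root  612 Feb 27 00:00 index.nginx-debian.html
```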

The entry for . refers to the current folder (/var/www/html/), while .. refers to the parent folder (/var/www/).

Enter some HTML into the editor, write to the file by pressing Ctrl-O then the Enter key, and exit by pressing Ctrl-X. Reload the web browser to see your new page.

HipHop Virtual Machine (HHVM) is a just-in-time (JIT) PHP-compatible virtual machine that executes the PHP or Hack language, developed by Facebook. It performs faster than PHP's default interpreter due to its JIT compilation of source code into HipHop bytecode. You can add the HHVM repository to Ubuntu with these commands:

FastCGI is a protocol for passing requests from a web server to a backend; nginx's fastcgi module speaks it. In this case, we are passing any files that end with the .hh or .php extension to the HHVM FastCGI service running on port 9000.
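The configuration block itself wasn't preserved here; a typical hhvm.conf for this setup (matching the port 9000 described above, and the include hhvm.conf; line in the earlier server block) looks like:

```nginx
location ~ \.(hh|php)$ {
    fastcgi_keep_conn on;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
```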

Now, you can test that PHP works by creating a new PHP file in the web root, /var/www/html/.

sudo nano /var/www/html/info.php

You can call the phpinfo() function to get information about your PHP environment.

<?php
phpinfo();
?>

If you try to access the page right now, you may run into a 502 error. This is because the /var/www/ folder is owned by root and HHVM needs to write to a file in the folder to run. You can fix the file permissions of /var/www/:

sudo chown -R www-data:www-data /var/www

www-data is the user that HHVM runs as, so by changing the owner of /var/www to www-data, HHVM can write that file.

https://makerforce.io/linuxbash-git-hosting-over-ssh/ (Mon, 27 Feb 2017 00:00:00 GMT)

In this blog post, I'll guide you through setting up a Git remote repository on a Linux server. This guide assumes that you have SSH set up, and understand the basics of Git.

Git remotes are just minimal Git repositories without the working tree. The working tree enables you to edit the files within the repository, while the actual revision history is hidden in the .git folder. Minimal Git repositories, formally called bare repositories, are created like so:
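The command itself was elided in this copy; a bare repository is created with git init --bare (the magician.git name matches the example that follows):

```shell
# Create a bare repository; the conventional .git suffix marks it as bare.
git init --bare ~/magician.git
```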

Now, if you were to share your bare repository at ~/magician.git with someone else (maybe via a flash drive), he can clone the repository and make changes at the same time as you. To merge your commits with his, he could push to the repository, share it back with you and then you could pull and push to merge the changes.

Instead of relying on passing flash drives around, you could have your friend SSH into a server that hosts the Git repository, similar to GitHub, but local.

Since you already have the bare repository at ~/magician.git, cloning it over SSH is easy!

git clone user@hostname:~/magician.git

user would be your username and hostname the IP address of the machine that hosts your Git repository. If you are using port forwarding with VirtualBox, the following command would specify the SSH port to use when cloning:

git clone ssh://user@hostname:1022/~/magician.git

As long as someone has access to the same user on the machine, he can clone the repository. If you do not want the people you share your repository with to have access to your other personal files, you could create a user account just for Git repository hosting, or move to an advanced solution like GitLab, which serves as your own personal GitHub.

In this blog post I'll go through the basics of using SSH.

SSH stands for Secure Shell. It is a "cryptographic network protocol for operating network services securely over an unsecured network" (Wikipedia). Today it is mostly used to connect to Linux servers for management, and occasionally used by programs for secure connections between machines.

SSH is a service that runs on Linux. Most Linux distributions don't come with SSH installed by default. You can install it in Ubuntu with the following command:

sudo apt install openssh-server

Now that SSH is up, we can try SSHing into our Linux system from another machine on the same network. If the machine is a physical one, you need to obtain the local IP address of the machine. If it's virtual, you can either port forward it or obtain the local IP address of the virtual interface.

Windows

The most common SSH client for Windows is PuTTY. You can get it by visiting the website and downloading putty.exe.

In the "Host Name" box, enter the IP address. If using port forwarding for a virtual machine running on the same machine, use "localhost". In the "Port" box, you can leave it as 22 or if you configured a different port when port forwarding or installing SSH, use that port.

Click on "Open" and you can log in with your username and password.

macOS, Linux

macOS comes with a built-in command-line SSH client, exactly the same as the one in most Linux systems.

To use the client, open a Terminal and type:

ssh username@hostname

Hostname is the IP address of the Linux machine, or localhost if using port forwarding. If a non-default port is used, you can specify it with the -p flag:

ssh -p 1022 username@hostname