When you first log in to a Unix system from a terminal, the system normally starts a login shell. A login shell is typically the top-level shell in the “tree” of processes that starts with the init process. Many characteristics of a process are passed from parent to child down this “tree” — especially environment variables, such as the search path. The changes you make in a login shell will affect all the other processes that the top-level shell starts — including any subshells.

So, a login shell is where you do general setup that’s done only the first time you log in — initialize your terminal, set environment variables, and so on. […]

So you could think of a login shell as a shell that is started at boot by the init process (or systemd nowadays), or as a shell that logs you into the system after you provide a username and a password. A nonlogin shell, by contrast, is a shell that is invoked without logging anybody in.

Is My Current Shell a Login Shell?

There are two ways to check whether your current shell is a login shell. First, you can check the output of echo $0: if it starts with a dash (like -bash), it’s a login shell. Be aware, however, that you can start a login shell with bash --login, in which case echo $0 will output just bash without the leading dash, so this is not a surefire way of finding out whether you are running a login shell. Second, Bash sets the read-only shell option login_shell when it starts as a login shell, and you can query that option with shopt.
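
For example, in a shell started by logging in at the console or over SSH, both checks agree:

$ echo $0
-bash
$ shopt login_shell
login_shell     on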

The Difference Between Login and Nonlogin That Actually Matters

Practically speaking, the difference between a login shell and a nonlogin shell is in the configuration files that Bash reads when it starts up. In particular, according to man bash:

[…] it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable.

You can observe this behavior by putting echo commands in /etc/profile, ~/.bash_profile, ~/.bash_login and ~/.profile. Upon invoking bash --login you should see:

echo from /etc/profile
echo from ~/.bash_profile
$

If the shell is a nonlogin shell, Bash reads and executes commands from ~/.bashrc instead. Since we are starting a nonlogin shell from within a login shell, it will inherit the environment. Sometimes this leads to confusion when we inadvertently get a login shell and find that our configuration from ~/.bashrc is not loaded. This is why many people put something like the following in their .bash_profile:

[[ -r ~/.bashrc ]] && source ~/.bashrc

This tests whether .bashrc is readable and, if so, sources it.

Why You Sometimes Want a Login Shell

When you switch users using su, you take the environment of the calling user with you. To prevent this, use su -, which is short for su --login. This acts like a clean login for the new user, so the environment will not be cluttered with values from the calling user. Just as before, a login shell will read /etc/profile and the .bash_profile of the user you are switching to, but not that user’s .bashrc. This post on StackOverflow shows why you might prefer to start with a clean environment (spoiler: your $PATH might be “poisoned”).
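
A small illustration of the difference (alice is just an example user): the first command prints the PATH that the nonlogin shell inherits from you, the second prints the PATH that a login shell builds from /etc/profile and alice’s own profile files.

$ su alice -c 'echo $PATH'
$ su - alice -c 'echo $PATH'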

Conclusion

In this article we saw that the main difference between a login and a nonlogin shell is the set of configuration files that are read upon startup. We then looked at the benefits of a login shell over a nonlogin shell.

Say you have a drive that you want to encrypt and use in Linux and other OSes. Then VeraCrypt, the successor to TrueCrypt, is a good choice. The prerequisite for this tutorial is that you have already created a partition on the drive. See my previous blog post on how to accomplish that. Creating a volume on a partition with data on it will permanently destroy that data, so make sure you are encrypting the correct partition (fdisk -l is your friend).

Encrypt a volume interactively from the command line using Veracrypt…

(The # sign at the beginning of the code examples indicates that the command should be executed as root. You can either use su - or sudo to accomplish this.)

# veracrypt -t --quick -c /dev/sdXX

-t is short for --text (meaning you don’t want the GUI) and should always be used first after the command name. The --quick option is explained in the docs:

If unchecked, each sector of the new volume will be formatted. This means that the new volume will be entirely filled with random data. Quick format is much faster but may be less secure because until the whole volume has been filled with files, it may be possible to tell how much data it contains (if the space was not filled with random data beforehand). If you are not sure whether to enable or disable Quick Format, we recommend that you leave this option unchecked. Note that Quick Format can only be enabled when encrypting partitions/devices.

So, using --quick is less secure, but not specifying it could take (a lot) longer, especially on traditional hard drives (we’re talking hours for 500GB).

Finally, the -c or --create command allows us to specify on which partition we want to create a VeraCrypt volume. Make sure you change /dev/sdXX in the example above to the appropriate output of fdisk -l (for example, /dev/sdc1).

The missing $HOME/.lessrc

I often wondered how I could make certain options for less permanent, like -I, for example, which makes searches case insensitive. In GNU/Linux, preferences are often stored in rc files: for Vim we have .vimrc, for Bash .bashrc, and so on.
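
As it turns out, less has no rc file of its own: it takes its default options from the LESS environment variable. So one way to make -I permanent is to export that variable from a file that is read at startup, such as ~/.bashrc:

export LESS="-I"

Any option you would normally pass to less on the command line can be added to this variable.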

See the overall progress of rsync

By default, rsync will show the progress of the individual files that are being copied. If you want the overall progress, you have to add some flags:

$ rsync -a --info=progress2 --no-i-r src dst

--info=progress2 shows the total transfer progress. (To see all available options for --info, execute rsync --info=help.) --no-i-r is short for --no-inc-recursive and disables incremental recursion, forcing rsync to do a complete scan of all directories before starting the file transfer. This is needed to get an accurate progress report; otherwise rsync doesn’t know how much work is left.

Human-readable output can be obtained by passing the -h or --human-readable option.
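
Putting it all together, a typical invocation could look like this (src and dst are placeholders for the source and destination):

$ rsync -ah --info=progress2 --no-i-r src dst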

Say we bought an external hard drive to back up some stuff from a crashed computer. We can use a Live USB to get at the data and put it on the external hard drive. Because the data needs to be accessible from Windows, we are going to format the drive with NTFS.

Create partition

Connect the external hard disk to your computer. Use sudo fdisk -l to find the device name. Output should look something like this:
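
The exact output depends on your hardware; a trimmed, made-up example for a 1 TB drive:

Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x4d3f2a10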

As can be seen above, the name of the device is /dev/sdb. We use this name to run fdisk:

$ sudo fdisk /dev/sdb

Notice how we use the name of the device, and not the name of the partition (so /dev/sdb without any numbers attached at the end).

After entering the command above, you will be presented with an interactive menu. Type a letter and press Enter to confirm. Changes are only applied when you type w, so if you make a mistake, just stay calm and press q: you will exit fdisk with your pending changes discarded.

Delete all your existing partitions by pressing d. Depending on the number of partitions, you might have to repeat this several times. If you want to check the current partition table, press p.

After all old partitions are deleted, add a new partition by pressing n. If you just want to create a single partition on your drive, accept all the defaults by pressing Enter on each prompt. This will leave you with a single partition that will take up all space on the drive.

Back in the main menu, type t to change the partition type. Press L to see all partition types. Here we are going to choose 7 (HPFS/NTFS/exFAT). “The partition type […] is a byte value intended to specify the file system the partition contains and/or to flag special access methods used to access these partitions” (source). Linux does not care about the partition type, but Windows does, so we have to change it.

Press w to write your changes to the disk and exit fdisk.
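
For reference, the whole interactive session boils down to a handful of keystrokes (for a drive that ends up with a single NTFS partition):

d    (delete a partition; repeat until none are left)
n    (new partition; accept the defaults to use the whole drive)
t    (change the partition type)
7    (HPFS/NTFS/exFAT)
w    (write the changes and exit)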

Format partition with NTFS

Now we create the actual NTFS file system on the drive:

$ sudo mkfs.ntfs -Q -L label /dev/sdX1

(If you don’t have mkfs.ntfs installed, use your distro’s package manager to install it (on Arch Linux it’s in a package called ntfs-3g)).

Breakdown:

-Q is the same as --quick, -f or --fast. This will perform a quick format, meaning that it will skip both zeroing of the volume and bad sector checking. So obviously, leave this option out if you want the volume to be zeroed or you want error checking. Depending on the size of your partition, that might take quite a while.

-L is the same as --label: it’s the identifier you’ll see in Windows Explorer when your drive is connected.

/dev/sdX1: change the X to the actual letter of the drive we found earlier in this tutorial. You always format a partition, not a drive, so make sure that you put the correct number of the partition you want formatted at the end.

Check the output and find the device name of the USB drive (for instance /dev/sdc). Make sure this device is not mounted, otherwise the next command will fail. Also make sure you note the device name, not a partition (indicated by a numeral at the end: /dev/sdc1, for example).
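
A quick way to check both things: lsblk lists the devices, their partitions and any mount points, and a mounted partition can be unmounted before we continue (the device names here are examples):

$ lsblk
$ sudo umount /dev/sdc1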

Copy Arch Linux image to USB drive
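
The copy itself is a single dd invocation. A sketch, assuming the downloaded image is called archlinux-x86_64.iso and the USB drive is /dev/sdX:

# dd if=archlinux-x86_64.iso of=/dev/sdX bs=64K oflag=sync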

if points to the input file, the image we downloaded; of, likewise, points to the output file, which is a device in this case. Note that /dev/sdX needs to be replaced with the device name we found in the previous step.

bs=64K indicates the block size, which means that dd will read and write up to 64K bytes at a time. The default is 512 bytes. The optimal block size really depends on your hardware, but several sources indicate that 64K is a good bet on somewhat modern to modern hardware.

oflag stands for “output flag”. The sync flag will make sure that all data is written to the USB stick when the dd command exits, so it will be safe to remove the USB stick.

Notice that the device does not need to be partitioned or empty before this operation. When dd writes to a device rather than a partition, all data on the drive – including partitions – will be erased anyway.

TL;DR

Export your contacts from Google in vCard version 3 format, split the contacts file and use cadaver to upload all files individually to your address book.

The struggle

Last week, I did a fresh install of LineageOS 14.1 on my OnePlus X and decided not to install any GApps. I have been slowly moving away from using Google services and, having found replacements in the form of open-source apps or web interfaces, I felt confident I would be able to use my phone without the Google Play Store or Play Services. (F-Droid is now my sole source of apps.)

To tackle the problem of storing contacts and a calendar that could be synced, I installed a Nextcloud instance on a Raspberry Pi 3. Having installed DAVdroid, I got my phone to sync contacts with Nextcloud, but not all of them: it would stop synchronizing after some 120 contacts, while I had more than 400.

I decided to try a different approach, so I exported the contacts on my phone in vCard format and tried to upload them to Nextcloud using the aptly named Contacts app. However, this also failed unexpectedly. I’m using Nextcloud version 12.0.3 and version 2.0.1 of the Contacts app, but it refuses to accept vCard version 2.1 (HTTP response code 415: Unsupported media type). This, naturally, is the version Android 6 uses to export contacts.

After some searching, I found out that if you go to contacts.google.com, you can download your contacts in vCards version 3. Problem fixed? Well, not so fast: importing 400+ contacts into Nextcloud using the web interface on a Raspberry Pi 3 with an SD card for storage will take a long time. In fact, it never finished over the course of a couple of hours (!), so I needed yet another approach.

Fortunately, you can approach your Nextcloud instance through the WebDAV protocol using tools such as cadaver:

$ cadaver https://192.168.1.14/nextcloud/remote.php/dav

Storing your credentials in a .netrc file in your home directory will enable cadaver to verify your identity without prompting, making it suitable for scripting:

machine 192.168.1.14
login foo
password correcthorsebatterystaple

cadaver allows you to traverse the directories of the remote file system over WebDAV. To put a single local contacts file (from your working machine) to the remote Raspberry Pi, you could tell it to:
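
(The collection path below is just an example; the exact path depends on your username and the name of your address book.)

dav:/nextcloud/remote.php/dav/> cd addressbooks/users/foo/contacts
dav:/nextcloud/remote.php/dav/addressbooks/users/foo/contacts/> put contacts.vcf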

I had a single vcf file with 400+ contacts in it, but after uploading it this way, only a single contact was displayed. Apparently, Nextcloud’s Contacts app assumes a single vcf file contains only a single contact. New challenge: we need to split this single vcf file containing multiple contacts into separate files that we can then upload to Nextcloud.
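
One way to do the split is a small Bash loop; here is a sketch that uses GNU mktemp to generate the random file names:

while IFS= read -r line; do
  [ -z "$out" ] && out=$(mktemp --suffix=.vcf ./contact-XXXXXX)
  printf '%s\n' "$line" >> "$out"
  case $line in END:VCARD*) out= ;; esac
done < contacts.vcf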

This separates the contacts on the record separator END:VCARD and generates a random filename to store the individual contact in. (I also wrote a Java program to do the same thing, which is faster when splitting large files).

Obviously, it would be convenient now if we could upload all these files in one go. cadaver does provide the mput action to do so, but I did not get it to work with wildcards. So instead, I created a file with put commands:
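
A sketch of how such a file can be generated and then fed to cadaver non-interactively (the address book path is, again, an example):

$ for f in contact-*.vcf; do echo "put $f addressbooks/users/foo/contacts/$f"; done > put-contacts.txt
$ cadaver https://192.168.1.14/nextcloud/remote.php/dav < put-contacts.txt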

This may take a while (it took around an hour for 400+ contacts), but at least you get to see the progress as each request is made and processed. And voilà, all the contacts are displayed correctly in Nextcloud.

I was attending a meetup that took place in a bar in Utrecht. The first thing you want to do in such a place is connect to the internet and get started. The location used a captive portal, however. You know the drill: you have the name of the wireless network (SSID) and the password, but when you try to open any web page, you are redirected to a login page where you have to accept the terms and conditions of whoever is operating the network.

But what if you use an always-on VPN? You can join the wireless network, but you cannot reach the internet, because your MAC and IP address have not yet been whitelisted by the operator. And you cannot get to the login page, because you do not allow any traffic outside your VPN.

The captive portal page.

UFW

I use ufw (uncomplicated firewall) as my firewall of choice, mainly because the alternative, iptables, always looked too complicated and ufw served its purpose. The rules I have for ufw are currently:
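
Roughly, they amount to something like this (the 192.168.1.0/24 range stands in for my LAN):

# ufw default deny incoming
# ufw default deny outgoing
# ufw allow in on wlp1s0 from 192.168.1.0/24
# ufw allow out on wlp1s0 to 192.168.1.0/24
# ufw allow out on wlp1s0 to any port 1194
# ufw allow out on tun0-unrooted from any to any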

By default, I deny all incoming and outgoing connections. I only allow incoming connections from hosts on the same network. As for outgoing connections, I only allow them to other hosts on the LAN (to any port), and everywhere else only over port 1194 (the OpenVPN port). (wlp1s0 is the name of my wireless interface, tun0-unrooted that of the VPN tunnel. These rules were inspired by this Arch Linux article.)

So we need to allow some traffic outside of the VPN tunnel to accept the terms and conditions and to register our machine at the captive portal. The best thing to do would be to allow a single, trusted application to access this portal, one that is used exclusively for this task. If you allowed your regular browser to bypass the VPN, it would send all kinds of traffic over the untrusted network for the rest of the world to freely sniff around in (think add-ons, other browser tabs, automatic updates). So we need a dedicated web browser for this task. I’m on Linux using Firefox as my default browser, so GNOME Web would be a good choice for this purpose. (GNOME Web was previously known as Epiphany, and is still available under that name in a lot of distributions.)

First, we need to determine what kind of traffic we want to allow. The application will need to have outbound access to ports 80 (HTTP) and 443 (HTTPS) for web traffic, and it will also need to be able to resolve domain names using DNS, so port 53 should also be opened.

UFW Profiles

However, it’s not that easy to allow one particular application access to the internet if you use UFW. When you look at the man page for UFW, you see you can specify “apps”. Apps (or application profiles) are basically just text files in INI-format that live in the /etc/ufw/applications.d/ folder.

To list all (predefined) profiles:

# ufw app list

To create a profile for our purposes, we put the following in a file called ufw-webbrowser:
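
Something like the following should do; the section name (Epiphany) is the name we will refer to in the ufw commands later on:

[Epiphany]
title=GNOME Web (Epiphany)
description=Dedicated browser for captive portal logins
ports=80,443/tcp|53

The man page describes the syntax of the ports field: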

The ‘ports’ field may specify a ‘|’-separated list of ports/protocols where the protocol is optional. A comma-separated list or a range (specified with ‘start:end’) may also be used to specify multiple ports, in which case the protocol is required.

In our case we allow TCP traffic over ports 80 (HTTP) and 443 (HTTPS) and both UDP and TCP traffic over 53 (DNS). We can now use this profile:
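
For example, mirroring the delete command shown further down:

# ufw insert 1 allow out to any app Epiphany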

(Note that we use insert 1 to make sure the rule is placed in the first position. With UFW, the first rule matched wins.)

Now we can use our dedicated browser to go to the captive portal page and accept the terms and conditions.

Checking traffic with Wireshark

You need to be careful, however, not to use any other applications during this time. If you launch Firefox, for example, it can also use the opened ports to communicate with the outside world. I like to use Wireshark to see what communication is taking place during this time.

Use Wireshark to see what packets are sent in the open.

When you are registered with the WiFi provider and are done with the captive portal, you should first disable the profile again with UFW. We can do this by specifying the rule we added earlier, but prepending delete:

# ufw delete allow out to any app Epiphany

Another way to delete rules from UFW is by first doing ufw status numbered and then ufw delete <number>. However, since we have added 6 rules, this may take a while. Also, if you can’t remember the exact rule that was used, you can use ufw show added to show all added rules and their syntax.

Better solutions: beyond UFW

Now, we see that using UFW isn’t exactly ideal for dealing with an always-on VPN and captive portals. What if you have an email application (or something else) running in the background while you have allowed all those ports to bypass the VPN tunnel? You also have to enable and disable the application profile every time you encounter a captive portal. It would be better if we could allow only a single, named, demarcated application to bypass the VPN.

One solution I’ve read about makes use of what I like to call “the Android way”: every installed application is a user with its own home directory. This means that applications don’t have access to each other’s files, but more importantly, it gives us the opportunity to allow only a specific user to access the internet outside of the VPN. This way, we could create a user epiphany that runs GNOME Web to access the captive portal.
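
A rough sketch of that idea with plain iptables (I have not put this into practice; the user and interface names are the ones from this post):

# useradd -m epiphany
# iptables -A OUTPUT -o wlp1s0 -m owner --uid-owner epiphany -j ACCEPT

Running GNOME Web as that user (for example with sudo -u epiphany epiphany, which needs some extra setup for graphical applications) would then be the only path around the tunnel.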

AFWall+, an open-source Android application, uses this method to implement a pretty effective firewall. It also uses iptables as a back-end. I might have to finally bite the bullet and learn iptables after all…