Monthly Archives: November 2016

firewalld (Dynamic Firewall Manager) provides a dynamically managed firewall. The tool uses network/firewall zones to define the trust level of network connections and/or interfaces. It supports both IPv4 and IPv6 firewall settings, as well as Ethernet bridges, and it allows you to separate runtime from permanent configuration options. Finally, it offers an interface through which services or applications can add firewall rules directly.

Disable firewalld

To disable firewalld, execute the following command as root or using sudo:

systemctl disable firewalld

Enable firewalld

To enable firewalld, execute the following command as root or using sudo:

systemctl enable firewalld

Stop firewalld

To stop (or deactivate) firewalld, execute the following command as root or using sudo:

systemctl stop firewalld

Start firewalld

To start (or activate) firewalld, execute the following command as root or using sudo:

systemctl start firewalld

Status of firewalld

To check the status of firewalld, execute the following command as root or using sudo:

systemctl status firewalld

CONCEPTS

systemd provides a dependency system between various entities called “units” of 12 different types. Units encapsulate various objects that are relevant for system boot-up and maintenance. The majority of units are configured in unit configuration files, whose syntax and basic set of options is described in systemd.unit(5), however some are created automatically from other configuration, dynamically from system state or programmatically at runtime.

Units may be “active” (meaning started, bound, plugged in, …, depending on the unit type, see below), or “inactive” (meaning stopped, unbound, unplugged, …), as well as in the process of being activated or deactivated, i.e. between the two states (these states are called “activating”, “deactivating”).

A special “failed” state is available as well, which is very similar to “inactive” and is entered when the service failed in some way (process returned error code on exit, or crashed, or an operation timed out). If this state is entered, the cause will be logged, for later reference. Note that the various unit types may have a number of additional substates, which are mapped to the five generalized unit states described here. — From man systemd

The above, in a nutshell:

enabled is a service that is configured to start when the system boots

disabled is a service that is configured to not start when the system boots

active is a service that is currently running

inactive is a service that is currently stopped (it may also be disabled), but it can be started and become active
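As a sketch, assuming a unit name such as firewalld, these states can be queried directly with systemctl:

```shell
# Prints "enabled" or "disabled": is the unit configured to start at boot?
systemctl is-enabled firewalld
# Prints "active", "inactive" or "failed": is the unit currently running?
systemctl is-active firewalld
```

Both commands also set their exit code accordingly, which makes them convenient in scripts.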

The following program pins the current process to a specific CPU core using sched_setaffinity and then verifies the assignment with sched_getaffinity:

// _GNU_SOURCE must be defined before any include to expose cpu_set_t and friends.
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sched.h>
#include <errno.h>
#include <unistd.h>

// <errno.h> defines the integer variable errno, which is set by system calls
// and some library functions in the event of an error to indicate what went wrong.
#define print_error_then_terminate(en, msg) \
  do { errno = en; perror(msg); exit(EXIT_FAILURE); } while (0)

int main(void) {
  // We want to camp on the 2nd CPU. The ID of that core is #1.
  const int core_id = 1;
  const pid_t pid = getpid();

  // cpu_set_t is a bitset where each bit represents a CPU.
  cpu_set_t cpuset;
  // CPU_ZERO initializes the CPU set to be the empty set.
  CPU_ZERO(&cpuset);
  // CPU_SET adds the CPU core_id to the set.
  CPU_SET(core_id, &cpuset);

  // sched_setaffinity installs the sizeof(cpu_set_t) bytes long affinity mask
  // pointed to by &cpuset for the process with the ID pid. If successful it
  // returns zero and the scheduler will in future take the affinity
  // information into account; on failure it returns -1 and sets errno.
  if (sched_setaffinity(pid, sizeof(cpu_set_t), &cpuset) != 0) {
    print_error_then_terminate(errno, "sched_setaffinity");
  }

  // Check the actual affinity mask that was assigned to the process.
  // sched_getaffinity stores the CPU affinity mask of the process with the ID
  // pid in the bitmap pointed to by &cpuset. If successful, it initializes
  // all bits in the cpu_set_t object and returns zero.
  if (sched_getaffinity(pid, sizeof(cpu_set_t), &cpuset) != 0) {
    print_error_then_terminate(errno, "sched_getaffinity");
  }

  // CPU_ISSET returns a nonzero value (true) if core_id is a member of the
  // CPU set, and zero (false) otherwise.
  if (CPU_ISSET(core_id, &cpuset)) {
    fprintf(stdout, "Successfully set process %d affinity to CPU %d\n", pid, core_id);
  } else {
    fprintf(stderr, "Failed to set process %d affinity to CPU %d\n", pid, core_id);
  }
  return 0;
}

The <pre> tag is often used when displaying code blocks because it preserves indentation and line breaks.
Text identified by a <pre> tag is rendered with all spaces and line breaks intact.
By default, <pre> does not support wrapping, so when you render a ‘large’ line it will force your browser to show a horizontal scroll bar.
Using that scroll bar you will be able to read the whole line part by part, but this is far less convenient than having the line wrap onto the following lines like in a book, since you can never see the whole line on one screen.

To mitigate the problem, we used the following CSS snippet, which instructs most browsers to wrap the contents of all <pre> tags.
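A minimal snippet that achieves this, shown here as an illustration (selectors and properties may need adjusting to your theme), is:

```css
/* Ask browsers to wrap long lines inside <pre> blocks. */
pre {
  white-space: pre-wrap;  /* preserve whitespace and line breaks, but allow wrapping */
  word-wrap: break-word;  /* break very long unbroken tokens if needed */
}
```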

The following commands search the .tar archives found in the specified folder and print on screen all files whose paths or filenames match the search token. We provide multiple solutions, one for each type of .tar archive, depending on the compression used.
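A self-contained sketch of one such solution (the archive under /tmp is created just for the demo; adapt the folder and token to your case):

```shell
# Demo fixture: create a sample .tar archive containing one file.
mkdir -p /tmp/archives/src
echo 'demo' > /tmp/archives/src/config.txt
tar -cf /tmp/archives/a.tar -C /tmp/archives src

# Search every plain .tar archive for member paths matching the token.
TOKEN='config'
find /tmp/archives -name '*.tar' -print0 | while IFS= read -r -d '' archive; do
  if tar -tf "$archive" | grep "$TOKEN"; then
    echo "Found in: $archive"
  fi
done
# For gzip-compressed archives (*.tar.gz) list the contents with `tar -tzf`,
# and for bzip2-compressed archives (*.tar.bz2) with `tar -tjf`.
```

Listing the archive with `tar -t` avoids extracting anything to disk, so the search stays read-only.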

Background

Recently we started using the UC232A USB-to-Serial Converter to connect to a board.
The software we used was TeraTerm on a 64-bit Windows 10, without installing custom drivers.

Our serial port configuration was the following:

Baud rate: 115200

Data: 8 bit

Parity: none

Stop: 1 bit

Flow control: none

Transmit delay:
5 msec/char
5 msec/line

The problem

We noticed that something was wrong with the process as the terminal would not operate consistently.
Sometimes keystrokes did not appear on screen, at other times results did not appear correctly (they could be truncated or mixed with other data), and in general the system acted like it was possessed by a ghost.

Troubleshooting

We played around with the configuration parameters, hoping it was an issue such as needing a larger transmit delay, but nothing changed; the communication with the board remained unstable.
Afterwards, we switched to another cable, from a different company, and everything worked as expected. The data on the screen was consistent and the ghost was banished. Since the UC232A was brand new, we also tested it on a GNU/Linux machine, where it turned out to be OK. These two tests led us to the conclusion that, since the cable operates properly on GNU/Linux and the board operates properly using the other cable, the issue was the automatically installed Windows 10 drivers.

Solution

While the cable was unplugged, we installed the official drivers we found here.
To find the drivers on that page, click on Support and Download tab at the bottom and then click on the Software & Drivers panel.
From the new table that will appear, under the category Windows Legacy Software & Driver, we used the latest version that was available at the time this post was written, which was v1.0.082, dated 2016-01-27 (uc232a_windows_setup_v1.0.082.zip, retrieved on the 23rd of November 2016).
After the download was finished, we restarted the machine, plugged in the cable and gave it another go.
The system was working as expected.

Below you will find the screenshots from the Device Manager, taken after we got the cable working correctly.

Recently, we did some cleanup in certain GNU/Linux virtual machines, hoping that VirtualBox would release the unused disk space and shrink the size of the VDI files.
Unfortunately, that did not happen, even after freeing more than 100GB of space on the guest machine.

We did manage though to reclaim the empty space manually, using the zerofree and VBoxManage utilities.

Part 1: Clean up the guest machine using zerofree

We needed to find the unallocated blocks with non-zero content in the ext2, ext3 or ext4 filesystem (e.g. /dev/sda1) and fill them with zeroes.
Since the filesystem has to be unmounted or mounted as read-only for zerofree to work, we decided to use a Live CD to complete this task as it would be the simplest solution to follow.

Step 3: Perform the cleanup

Give it some time to complete the task; the larger the partition, the more time it will take.
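As a sketch, the cleanup itself is a single command; the device name /dev/sda1 is an assumption and must be replaced with the partition that holds your filesystem:

```shell
# Fill all unallocated, non-zero blocks of the ext2/3/4 filesystem with zeroes.
# -v prints progress. Run this from the Live CD, as root, with the
# filesystem unmounted or mounted read-only.
zerofree -v /dev/sda1
```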

Part 2: Shrink the guest machine disk images using VBoxManage

We used VBoxManage with the parameter --compact, which is used to compact disk images, i.e. remove blocks that contain only zeroes. It shrinks dynamically allocated images by reducing the physical size of the image without affecting the logical size of the virtual disk. Compaction works both for base images and for diff images created as part of a snapshot. For this operation to be effective, the free space in the guest system must first be zeroed out, which is why we had to perform Part 1 using zerofree beforehand.
Please note that compacting is currently only available for VDI images.

To use it, just issue the command "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyhd --compact <DISK_PATH>, pointing to the disk you just cleaned up using zerofree. Please note that the virtual machine should be stopped before starting this operation.

Press the key combination Win + R to pop up the Run prompt.
Type cmd in the input box and hit the Enter key.

When you try to find all files that contain a certain string value, it can be very costly to check binary files that you might not want to check.
To automatically skip testing whether the binary files contain the needle, you can add the parameter -I (capital i).
With -I, grep processes a binary file as if it did not contain matching data; this is equivalent to the --binary-files=without-match option.

-l (lowercase L) or --files-with-matches: Suppress normal output; instead print the name of each input file from which output would normally have been printed. The scanning stops on the first match.

-I (capital i) or --binary-files=without-match: Process a binary file as if it did not contain matching data.
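A self-contained sketch of both flags in action (the files under /tmp are made up for the demo):

```shell
# Demo fixture: one text file and one binary file, both containing the needle.
mkdir -p /tmp/grepdemo
printf 'the needle is here\n' > /tmp/grepdemo/notes.txt
printf 'needle\000binary' > /tmp/grepdemo/data.bin

# -r: recurse, -l: print only matching file names, -I: skip binary files.
grep -rlI 'needle' /tmp/grepdemo
```

Only notes.txt is reported; the NUL byte in data.bin makes grep classify it as binary, and -I treats it as non-matching.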

Since you are searching for this issue, you must have realised that git does not support storing empty folders/directories.

Currently the design of the Git index (staging area) only permits files to be listed, and nobody competent enough to make the change to allow empty directories has cared enough about this situation to remedy it.

All the content is stored as tree and blob objects, with trees corresponding to UNIX directory entries and blobs corresponding more or less to inodes or file contents. A single tree object contains one or more tree entries, each of which contains a SHA-1 pointer to a blob or subtree with its associated mode, type, and filename. — From https://git-scm.com/book/en/v2/Git-Internals-Git-Objects

Below we propose two solutions, depending on how you want to use those empty folders.

Solution A – The folders will always be empty

There are scenarios where the empty folders should always remain empty on git no matter what the local copy has inside them.
Such a scenario would be, wanting to add on git folders where you will build your objects and/or put temporary/cached data.
In such scenarios it is important to have the structure available but never to add those files in git.

To achieve that, we add a .gitignore file in every empty folder containing the following:

# git does not allow empty directories.
# Yet, we need to add this empty directory on git.
# To achieve that, we created this .gitignore file, so that the directory will not be empty thus enabling us to commit it.
# Since we want all generated files/folders in this directory to be ignored by git, we add a rule for this.
*
# And then add an exception for this specific file (so that we can commit it).
!.gitignore

The above .gitignore file instructs git to add this file to the repository, and thus the folder itself, while ignoring ALL other files.
You can always update your .gitignore file to allow additional files to be added to the repository at any time.
This way you will never get prompted about temporary files that were modified/created as part of the status of your repository.

Automation

A way to achieve this automatically, placing a copy of the .gitignore file in every empty folder, is to use the following command, which copies an existing .gitignore file into all empty folders.
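A sketch of such a command, wrapped in a self-contained demo (all /tmp paths and file names are made up for illustration):

```shell
# Demo fixture: a template .gitignore in the current folder and a toy
# repository with one empty and one non-empty directory.
mkdir -p /tmp/myrepo/build /tmp/myrepo/src
touch /tmp/myrepo/src/main.c
cd /tmp
printf '*\n!.gitignore\n' > .gitignore

# Copy the .gitignore from the current folder into every empty directory
# of the repository.
PATH_TO_REPOSITORY='/tmp/myrepo'
find "$PATH_TO_REPOSITORY" -type d -empty -exec cp .gitignore '{}' \;
```

Because cp is given a directory as its destination, the file keeps its .gitignore name inside each empty folder.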

The above command assumes that there is a .gitignore file in the folder we are executing from, which it will copy into every empty directory inside the folder that the variable $PATH_TO_REPOSITORY points to.

Solution B – Files will be added eventually to the folders

There are scenarios where the empty folders will be filled at a later stage and we want to allow those files on git.
Such a scenario would be wanting to add to git folders that are empty right now but will later receive new source code or resources.

To achieve that, we add an empty .gitkeep file in every empty folder.

The above .gitkeep file is nothing more than a placeholder.
It is not documented, because it is not a feature of Git.
It is a dummy file that keeps the directory non-empty so git will track it, since git tracks only files.
Once you add other files to the folder, you can safely delete it.
This way you will always get prompted about files that were modified/created as part of the status of your repository.

Automation

A way to achieve this automatically, placing an empty .gitkeep file in every empty folder, is to use the following command, which creates an empty .gitkeep file in all empty folders.
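A sketch of such a command, wrapped in a self-contained demo (all /tmp paths and file names are made up for illustration):

```shell
# Demo fixture: a toy repository with one empty and one non-empty directory.
mkdir -p /tmp/keeprepo/assets /tmp/keeprepo/src
touch /tmp/keeprepo/src/main.c

# Create an empty .gitkeep file in every empty directory of the repository.
PATH_TO_REPOSITORY='/tmp/keeprepo'
find "$PATH_TO_REPOSITORY" -type d -empty -exec sh -c 'touch "$1/.gitkeep"' _ '{}' \;
```

The `sh -c` wrapper is used so that the directory name can be combined with the /.gitkeep suffix portably, without relying on embedded {} substitution.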