Fedora People

Recently I have been working to clean up the configuration file syntax
and parsing in rpminspect. Several months back there were suggestions
on fedora-devel to improve things with the configuration files. The
ideas were good improvements, so I added them to my to-do list and am
now at a point where I can work on making those changes. The main ideas:

Move the configuration files out of /etc and into /usr/share, and have
these be the defaults.

Let local overrides exist in /etc.

Allow for multiple rpminspect-data vendor packages to be
concurrently installed.

In addition to the above, I was planning on implementing support for a
local configuration file to be sourced last. Sort of like having
pylintrc in a Python project to drive pylint. I wanted the ability to
have rpminspect read a final configuration file for local package
configuration. My thinking is that package maintainers could put a
per-package rpminspect configuration file in the dist-git repo.

Picking A Parser

Before doing this rearrangement, I was looking at the syntax of the
configuration file. It has evolved over the past year as new features
have been added. The configuration file follows an INI-style layout,
that is, ‘key = value’ syntax. This is a long-established practice
for configuration files and spans many different formats. INI
syntax is widely understood and easy to follow. I have
been using libiniparser in
rpminspect to handle reading the file. This works but has presented a
challenge for two types of settings I need to represent in the
configuration file. The first is a simple list. INI syntax does not
really allow for this in a well-defined way. I get around the
limitation by making my lists space-delimited strings which I then
tokenize in the source code. Not ideal, because the obvious limitation
is that I have now made it difficult to have a list member with a
space in it. The second data type is a hash table. I want to capture
user-defined key=value settings for a particular category. I get
around this by making the setting be the section name (e.g.,
‘[products]’) and within that section reading every key and value and
adding them to my hash table. It’s not entirely clear in the
configuration file and the syntax could lead to confusion. So
cleaning all of this up has been on the to-do list.
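The two workarounds can be illustrated with a stock INI parser. This sketch uses Python’s standard-library configparser instead of libiniparser, purely for illustration; the section and key names here are hypothetical, not rpminspect’s actual settings:

```python
import configparser

# A hypothetical INI fragment using both workarounds described above.
ini_text = """
[settings]
# A "list" is really one space-delimited string...
badfuncs = gethostbyname gethostbyaddr inet_ntoa

[products]
# ...and a user-defined hash table is faked with a whole section.
fc32 = ^\\.fc32$
fc33 = ^\\.fc33$
"""

config = configparser.ConfigParser()
config.read_string(ini_text)

# The "list" must be tokenized by hand, so members cannot contain spaces.
badfuncs = config.get("settings", "badfuncs").split()

# Every key/value pair under [products] becomes a hash-table entry.
products = dict(config.items("products"))

print(badfuncs)  # ['gethostbyname', 'gethostbyaddr', 'inet_ntoa']
```

A list member containing a space simply cannot be expressed here, which is exactly the limitation described above.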

What to do? The program has existed in the wild for over a year so
the existing format is now established. I need to either honor the
existing format or make a flag-day style change and migrate
everything. The latter is possible since the configuration data for
rpminspect lives almost exclusively in the vendor data packages. If I had
already established the per-package configuration file functionality,
this would be a harder change.

Looking at options, here’s how I broke down things:

Continue using the INI style format, possibly switching libraries.
libconfini offers a
bit more on top of the INI format, but still does not get me all the
way there. There are other libraries and I could extend one of the
existing ones. I would want any extensions to go upstream and that
may or may not happen.

Investigate new formats and switch everything over to something else.

Define a new format and implement a lexer and parser in rpminspect.

I spent a lot of time looking at different INI libraries available.
They all more or less provide the same type of functionality, which left
me with limited or no list or hash table options. I then looked at
defining a new configuration file format based on what I was already
doing and implementing a parser in yacc. While this is possible, I
was not really interested in going down this path because I didn’t
want to run into situations where the config file format was limiting
a feature for some reason and then get stuck. Basically, I don’t want
to be in the business of defining a config file format. Lastly, I
moved on to looking at different existing options for configuration
file formats. Here’s what I looked at:

JSON - Already in use for
the license database (inherited from another project). Already
using the json-c library.
The syntax is frustrating, which would make it a pain for a
configuration file. A brief survey of applications shows that JSON is
not really used in this capacity.

XML - I have used XML for
configuration data and libxml provides a
reasonable API for this. But it suffers from the same problem JSON
has in that it’s a pain to edit and maintain by hand.

YAML - My experience with
YAML is limited and what YAML files I have seen, I do not like. The
files I’ve seen tend to be very brief and cryptic and offer no real
clue as to what is a setting and what is a value. Short files that
might look like this:
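For instance, a made-up file in that terse style (the keys and values here are purely hypothetical, not from any real program):

```yaml
- host: build01
  checks:
    - rpm
    - log
- host: build02
  checks: [rpm]
```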

What is the significance of the hyphen? What are possible options?
What am I even looking at? This file is not really helpful for
discovering what you can do with a program, which is one thing I
expect out of configuration files.

TOML - This looked exactly
like what I wanted. It looks like INI style but adds more types,
lists and things like that. The downside here is the lack of
available libraries. I found libtoml on GitHub, which may or may not
completely implement the specification, and it has made no releases. I
consider this specification still evolving and may look at it again in a few months.

There are other things to consider for the configuration file format.
Who are the target users? In the case of rpminspect it would be
developers and package maintainers. The program runs in a CI capacity
in Fedora. Of the formats above, YAML has been established for a
number of scenarios, many driven by the use of Ansible. What about my
concerns with YAML? I decided to look into things a bit more.

I found that YAML does allow comments, so that’s a huge win. And
indentation can be more than the nearly unreadable 2 spaces that I see
commonly used. Sections are denoted by indentation and hyphens are
used for list members. Key=value pairs are of the form ‘key: value’.
I rewrote rpminspect.conf as a YAML file and looked at the result. I
kept comments and used 4 space indentation. The result was very
readable to me so I decided to use this format. The libyaml library provides an entirely
usable API for working with YAML data streams.
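As a sketch of the resulting style (the keys shown here are illustrative, not the actual rpminspect.yaml contents): comments survive, indentation is 4 spaces, lists are real lists, and user-defined tables are plain mappings:

```yaml
# Vendor-specific settings for rpminspect.
common:
    # Working directory for temporary files.
    workdir: /var/tmp/rpminspect

badfuncs:
    # A real list, so members could even contain spaces.
    - gethostbyname
    - inet_ntoa

products:
    # User-defined key/value pairs become a hash table.
    fc32: ^\.fc32$
```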

Making The Change

Because of the parser change, I decided to rename the configuration
file to rpminspect.yaml. This both reflects the specification used
and keeps it distinct from the existing configuration file format.
I bumped the major version of rpminspect to ‘1’, as well as that of
the data packages, to account for this change. The profile
configuration files will also end with ‘.yaml’.

I rearranged the rpminspect.yaml file as well and broke up what used
to be the [settings] section. I give each inspection its own block in
the configuration file for more clarity. Some sections do not tie to
a specific inspection but apply to the entire run of the program. I
may move those into a larger section of their own, but I am not sure yet.

The file parsing happens in lib/init.c so that was where the bulk of
the changes went. And moving to YAML meant a lot of this code could
be deleted. That is always satisfying even though it’s code that I
wrote in the first place.

The project also drops a dependency on the libiniparser library, so I
updated the documentation and the meson.build files. With all of
these changes in place, I built the program and ran the test suite. I
fixed up various things until the test suite passed and pushed the
commits. The first big part of this change was now complete.

These changes have been pushed to the master branch and current Copr
builds now use YAML configuration files for both the main
configuration file and profiles. The next steps are adjusting things
to allow for concurrent data package installation and honoring an
rpminspect.yaml file in the current directory.

I like the new configuration file layout. libyaml is easy to use and
I like having fewer runtime dependencies. I do feel that there will
come a time when we talk about using YAML for these types of files
the way we talk about XML for config files now. There is not a lot I can
do about that though, so we will stick with YAML for now.

Here’s your report of what has happened in Fedora this week. Elections voting is open through 11 June. I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. […]

So, the other shoe is about to drop. The government is planning on providing everyone a wearable contact tracing device to counter the limitations of apps that run on mobile phones (from the Apple ecosystem) – like TraceTogether (the downstream of OpenTrace).

It also seems that a “newer” version of TT will now ask for NRIC (National Registration Identity Card) in addition to mobile phone number at registration.

This report claims that the hardware device will be like TT/OT and only do contact tracing. What the device would do is implement the BlueTrace protocol. This device can take on the form factor of a watch, a pen, or a key fob. It should be easy to design and build. And once built, put the designs, schematics, etc. under an Open Hardware License and publish them on Thingiverse (or anywhere else). There are plenty of examples of wearables there. No need to reinvent the wheel.

Why does this matter? First, we need to build trust in these devices. This is the same effort as the open sourcing of TraceTogether that was done in April, which helped significantly to raise the level of trust in the application. There are challenges in adoption of the app because of battery and application run issues in the Apple mobile phone ecosystem. We need a usage population of about 65% of the local population in Singapore for it to be useful. TT is apparently at about 1.5 million downloads, but there is no way to know if it is actually running.

If there is a separate device that runs the same BlueTrace protocol, it will interoperate with devices that run TT (or OT), so we have a good chance of getting highly reliable contact tracing.

E-waste

I can already see that perhaps a year or two from now, there will be millions of these devices thrown away, adding to the enormous waste – batteries etc. The device has to be designed with recycling as a default. This is 2020 and we must, by default, build devices that can be recycled trivially.

We don’t have to wait for the government to do the design, build and distribution. The local open source community can step up and do this. We can design something that can then be sliced and diced as needed with different form factors.

If you are keen to work on this, please leave a comment or send me email at h dot pillay at ieee dot org. I will be calling for an online meeting of interested developers, designers, engineers soon.

The Fedora CoreOS team released the first Fedora CoreOS testing release based on Fedora 32. They expect that this release will promote to the stable channel in two weeks, on the usual schedule. As a result, the Fedora CoreOS and QA teams have organized a test day on Monday, June 08, 2020. Refer to the wiki page for links to the test cases and materials you’ll need to participate. Read below for details.

How does a test day work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

Download test materials, which include some large files

Read and follow directions step by step

The wiki page for the test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.


No, SELinux is not the cause of all permission troubles on Linux. For example, syslog-ng makes use of the capabilities system on Linux to drop as many privileges as possible, as early as possible. But this might cause problems in some corner cases: even when running as root, syslog-ng cannot read files owned by a different user. Learn from this blog post how to figure out whether you have an SELinux or a capabilities problem, and how to fix it if you do.

SELinux

Yes, SELinux is the primary suspect when something does not work as expected on your RHEL or CentOS system. It can cause any kind of mysterious file permission problems and can even prevent network connections when the configuration contains an unusual port number. To verify if a problem is caused by SELinux, check the audit logs on your system, normally in /var/log/audit/audit.log. If it is SELinux preventing syslog-ng from running as expected, you will see one or more related messages in that file. Check my earlier blog at https://www.syslog-ng.com/community/b/blog/posts/using-syslog-ng-with-selinux-in-enforcing-mode for more information about how to resolve these problems.

No, turning off SELinux does not solve your problems; it merely treats the symptoms. Unless you are just quickly testing something, you should take the time and create the additional rules for SELinux. Even though SELinux comes from the NSA, it actually enhances the security of your systems. Just search for “vulnerabilities stopped by SELinux” on Google or your favorite search engine.

Capabilities

Spotting a Linux capabilities problem is not as easy as for SELinux, since there are no audit logs mentioning them. Right now, I am only aware of a file permission problem. Even when syslog-ng is running as root, it cannot read files owned by a different user.

This problem was reported to me as something related to TLS. So – using the TLS guide I created many years ago – I configured and tested an encrypted connection between syslog-ng instances. Then, I started to play with file ownership and permissions, and it turned out that the problem is not limited to certificates, but is more generic. At that moment, Linux capabilities came to my mind, and a minute later I had a working solution for the problem.

Why might you have files with different owners? A typical source of the problem is when you compile syslog-ng as a regular user and then try to run it as root to gain additional privileges, like opening network ports under 1024. Another case is when syslog-ng is running as root, but certificates or configuration are managed by scripts running as a regular user. In either case, Linux capabilities support enabled in syslog-ng prevents reading these files.

Most syslog-ng packages on Linux have capabilities support enabled. You can check it from the command line by running syslog-ng with the -V option and looking at the build options it prints.

The “Enable-Linux-Caps: on” line shows that capabilities support is enabled. This way, syslog-ng can drop most of its privileges on start.

Workaround / fix

Just like with SELinux, there are multiple ways of resolving this problem. One way is to disable capabilities support in syslog-ng completely. You can do this with the --no-caps command line option of syslog-ng. Even though there have been no known security problems in syslog-ng for a long time, I do not recommend using it. Just like SELinux, capabilities support can protect against unknown problems.

If you take a look at the syslog-ng manual page, you can see a nice, long list of capabilities. You can get file reading working by adding a single “e” to the cap_fowner parameter in that list and passing the modified list to syslog-ng on the command line.

Checking something quickly from the command line using --no-caps is definitely easier. For production environments, I would rather recommend using the longer form, as it enables just a single additional privilege instead of everything.

Depending on your Linux distribution, the configuration of services might be different. In CentOS 7, you can pass command line parameters to syslog-ng using the /etc/sysconfig/syslog-ng file and adding the following line to it:
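For example, assuming the CentOS 7 package reads a SYSLOGNG_OPTS variable from that file (the conventional pattern for its init scripts), disabling capabilities entirely would look like:

```
# /etc/sysconfig/syslog-ng
SYSLOGNG_OPTS="--no-caps"
```

For the longer, production-friendly form, substitute the --caps option with the adjusted capability list from the manual page.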

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik.

It is becoming more popular to read content on smartphones. Every phone comes with its own ebook reader. Believe it or not, it is very easy to create your own ebook files on Fedora.

This article shows two different methods to create an EPUB. The EPUB format is one of the most popular formats and is supported by many open source applications.

Most people will ask “Why bother creating an EPUB file when PDFs are so easy to create?” The answer is: “Have you ever tried reading a sheet of paper when you can only see a small section at a time?” In order to read a PDF you have to keep zooming and moving around the document or scale it down to a small size to fit the screen. An EPUB file, on the other hand, is designed to fit many different screen types.

Method 1: ghostwriter and pandoc

This first method creates a quick ebook file. It uses a Markdown editor named ghostwriter and a command-line document conversion tool named pandoc.

You can either search for them and install them from the Software Center or you can install them from the terminal. If you are going to use the terminal to install them, run this command: sudo dnf install pandoc ghostwriter.

For those who are not aware of what Markdown is, here is a quick explanation. It is a simple markup language created a little over 15 years ago. It uses simple syntax to format plain text. Markdown files can then be converted to a whole slew of other document formats.

Now for the tools. ghostwriter is a cross-platform Markdown editor that is easy to use and does not get in the way. pandoc is a very handy document converting tool that can handle hundreds of different formats.

To create your ebook, open ghostwriter, and start writing your document. If you have used Markdown before, you may be used to making the title of your document Heading 1 by putting a pound sign in front of it. Like this: # My Man Jeeves. However, pandoc will not recognize that as the title and put a big UNTITLED at the top of your ebook. Instead put a % in front of your title. For example, % My Man Jeeves. Sections or chapters should be formatted as Heading 2, i.e. ## Leave It to Jeeves. If you have subsections, use Heading 3 (###).
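Putting those rules together, a minimal document for the pandoc converter might start like this (the subsection title is made up):

```markdown
% My Man Jeeves

## Leave It to Jeeves

### Jeeves intervenes

The chapter text goes here.
```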


Once your document is complete, click File -> Export (or press Ctrl + E). In the dialog box, select between several options for the Markdown converter. If this is the first time you have used ghostwriter, the Sundown converter will be picked by default. From the dialog box, select pandoc. Next click Export. Your EPUB file is now created.

Note: If you get an error saying that there was an issue with pandoc, turn off Smart Typography and try again.

Method 2: calibre

If you want a more polished ebook, this is the method that you are looking for. It takes a few more steps, but it’s worth it.


First, install an application named calibre. calibre is not just an ebook reader, it is an ebook management system. You can either install it from the Software Center or from the terminal via sudo dnf install calibre.

In this method, you can either write your document in LibreOffice, ghostwriter, or another editor of your choice. Make sure that the title of the book is formatted as Heading 1, chapters as Heading 2, and sub-sections as Heading 3.

Next, export your document as an HTML file.

Now add the file to calibre. Open calibre and click “Add books“. It will take calibre a couple of seconds to add the file.


Once the file is imported, edit the file’s metadata by clicking on the “Edit metadata” button. Here you can fill out the title of the book and the author’s name. You can also upload a cover image (if you have one) or calibre will generate one for you.


Next, click the “Convert books” button. In the new dialog box, select the “Look & Feel” section and the “Layout” tab. Check the “Remove spacing between paragraphs” option. This will tighten up the contents as well as indent each paragraph.


Now, set up the table of contents. Select the “Table of Contents” section. There are three options to focus on: Level 1 TOC, Level 2 TOC, and Level 3 TOC. For each, click the wand at the end. In this new dialog box, select the HTML tag that applies to the table of contents entry. For example, select h1 for Level 1 TOC and so on.


Next, tell calibre to include the table of contents. Select the “EPUB output” section and check the “Insert inline Table of Contents” option. To create the EPUB file, click “OK”.

Mark your calendars for next Monday, folks: 2020-06-08 will be the very first Fedora CoreOS test day! Fedora QA and the CoreOS team are collaborating to bring you this event. We'll be asking participants to test the bleeding-edge next stream of Fedora CoreOS, run some test cases, and also read over the documentation and give feedback.

All the details are on the Test Day page. You can join in on the day on Freenode IRC; we'll be using #fedora-coreos rather than #fedora-test-day for this event. Please come by and help out if you have the time!

IMPORTANT: this is a medium-sized release which includes
minor security fixes, many improvements & bug fixes, and translations
in several new languages. It is the second release to include
contributions via our
open source bounty program.
You can explore everything at
https://public.tenant.kiwitcms.org!

Reverting to an older historical version via the Admin panel now redirects
to the object which was reverted. Fixes
Issue #1074

Documentation updates

Important

Starting from v8.4 all supported bug trackers now feature
1-click bug report integration! Here's an example of what it looks like
for GitHub and JIRA:

Note

Some external bug trackers like Bugzilla & JIRA provide more
flexibility over which fields are required for a new bug report.
The current functionality should work for vanilla installations and would
fall back to manual bug reporting if it can't create a new bug
automatically!

Database

Force creation of missing permissions for m2m fields from the tcms.bugs app:

bugs.add_bug_tags

bugs.change_bug_tags

bugs.delete_bug_tags

bugs.view_bug_tags

bugs.add_bug_executions

bugs.change_bug_execution

bugs.delete_bug_execution

bugs.view_bug_executions

Warning

TCMS admins of existing installations will have to assign these by hand
to users/groups who will be allowed to change tags on bugs!

Some of the translations in Chinese and German and all of the strings in
Japanese and Korean have been contributed by a non-native speaker and are
sub-optimal, see
OpenCollective #18663.
If you are a native in these languages and spot strings which don't
sit well with you we kindly ask you to
contribute a better translation
via the built-in translation editor!

Vote for Kiwi TCMS

Our website has been nominated in the 2020 .eu Web Awards and
we've promised
to do everything in our power to greet future FOSDEM visitors with
an open source billboard advertisement at BRU airport. We need your help
to do that!

(Figure caption: I used the Toy Story 3 trailer as a test video and saw it a thousand times during the VA-API debugging. I should definitely watch the movie one day.)

Yes, it’s finally here. One and a half years after Tom Callaway, Engineering Manager @ Red Hat, added the patch to Chromium, we also get hardware-accelerated video playback in Firefox. It’s a shame it took so long, but I’m still learning.

The VA-API support in Firefox is a bit specific, as it works under Wayland only right now. There isn’t any technical reason for that; I just don’t have enough time to implement it for X11, so Bug 1619523 is waiting for brave hackers.

The contributor list is not exhaustive, as I mentioned only the most active ones who come to mind right now. There are a lot of people who contribute to Firefox/Wayland. You’re the best!

How to enable it in Fedora?

When you run a GNOME Wayland session on Fedora, you get Firefox with the Wayland backend by default. Make sure you have the latest Firefox 77.0 for Fedora 32 / Fedora 31.

You also need working VA-API acceleration and the ffmpeg (libva) packages. They are provided by the RPM Fusion repository. Enable it and install ffmpeg, libva and libva-utils.

Intel graphics card

There are two drivers for Intel cards: libva-intel-driver (provides i965_drv_video.so) and libva-intel-hybrid-driver (iHD_drv_video.so). Firefox works with libva-intel-driver only; intel-media-driver is broken due to sandboxing issues (Bug 1619585). I strongly recommend avoiding it at all costs, and do not disable the media sandbox for it.

AMD graphics card

AMD open source drivers decode video with the radeonsi_drv_video.so library, which is provided by the mesa-dri-drivers package and comes with Fedora by default.

NVIDIA graphics cards

I have no idea how NVIDIA cards are supported because I don’t own any. Please refer to the Fedora VA-API page for details.

Test VA-API state

When you have the driver set up, it’s time to prove it. Run vainfo in a terminal and check which media formats are decoded in hardware.

On my laptop with integrated Intel UHD Graphics 630, vainfo loads the i965_drv_video.so driver and reports decoding of the H.264/VP8/VP9 video formats. I don’t expect much more from it – it seems to be up.

Configure Firefox

It’s time to whip up the lazy fox. In about:config, set gfx.webrender.enabled and widget.wayland-dmabuf-vaapi.enabled to true. Restart the browser, go to about:support and make sure WebRender is enabled…

…and Window Protocol is Wayland/drm.
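Spelled out, both preferences are booleans (set them in about:config):

```
gfx.webrender.enabled = true
widget.wayland-dmabuf-vaapi.enabled = true
```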

Right now you should be able to decode and play clips on your graphics card alone, without any CPU interaction.

Get more info from Firefox log

VA-API video playback may not work for various reasons: an incompatible video codec, a large video size, missing system libraries and so on. All those errors can be diagnosed with the Firefox media log. Run in a terminal

MOZ_LOG="PlatformDecoderModule:5" MOZ_ENABLE_WAYLAND=1 firefox

and you should see something like

A “VA-API FFmpeg init successful” line means the VA-API is up and running, VP9 is the video format, and a “Got one VAAPI frame output…” line confirms that frame decoding works.

VA-API and Youtube

Unfortunately, YouTube tends to serve various video formats, from H.264 to AV1. The actual codec info is shown after right-clicking on a video, under the “Stats for nerds” option.

VA-API with stock Mozilla binaries

Stock Mozilla Firefox 77.0 is missing some important stability/performance VA-API fixes which hit Firefox 78.0 and have been backported to the Fedora Firefox package. You should grab the latest nightly binaries or Developer/Beta versions and run them under Wayland as

MOZ_ENABLE_WAYLAND=1 ./firefox

Mozilla binaries perform VP8/VP9 decoding with the bundled libvpx library, which is missing the VA-API decode path. If your hardware supports it and you want to use VA-API for VP8/VP9 decoding, you need to disable the bundled libvpx and force external ffmpeg. Go to about:config and set media.ffvpx.enabled to false. Fedora sets that by default when VA-API is enabled.

Yesterday Tor Browser 9.5 was released. I am excited about this release because of
some user-focused updates.

Onion-Location header

If your webserver provides this one extra header, Onion-Location, the Tor
Browser will ask the user if they want to visit the onion site itself. The user
can even choose to visit every such onion site by default. See it in action here.

To enable this, in Apache, you need a configuration line like below for your
website’s configuration.
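A sketch of that configuration, following the pattern in the Tor Project’s documentation; it requires mod_headers, and the onion address is a placeholder:

```apache
# Inside the HTTPS <VirtualHost> block of the website:
Header set Onion-Location "http://youronionaddress.onion%{REQUEST_URI}s"
```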

This is the first in what I hope to make a monthly series summarizing the past month on the Community Blog. Please leave a comment below to let me know what you think. Stats: In May, we published 31 posts. The site had 4,964 visits from 2,392 unique viewers. Readers wrote 13 comments. 202 visits […]

Also there is the spam problem: there is still a lot of spam out there. And since this is an ongoing fight, mail server admins have to constantly adjust their systems to the newest tricks and requirements. Think of SPF, DKIM, DMARC and DANE here.

Last but not least the market is more and more dominated by large corporations. If your email is tagged as spam by one of those, you often have no way to figure out what the problem is – or how to fix it. They simply will not talk to you if you are not of equal size (or otherwise important). In fact, if I have a pessimistic look into the future of email, it might happen that all small mail service providers die and we all have to use the big services.

Thus the question is whether anyone should run their own mail server at all. Frankly, I would not recommend it if you are not really motivated to do so. So be warned.

However, if you do decide to do it on your own, you will learn a lot about the underlying technology, about how a core technology of “the internet” works, and how companies work and behave, and you will have huge control over a central piece of today’s communication: mail is still a cornerstone of today’s communication, even if we all hate it.

My background

To better understand my motivation it helps to know where I come from: In my past job at credativ I was project manager for a team dealing with large mail clusters. Like, really large. The people in the team were and are awesome folks who *really* understand mail servers. If you ever need help running your own open source mail infrastructure, get them on board, I would vouch for them anytime.

And while I never reached and never will reach the level of understanding the people in my team had, I got my fair share of knowledge: about the technological components, the developments in the field, the challenges and so on. Out of this I decided at some point that it would be fun to run my own mail server (yeah, not the brightest day of my life, in hindsight…).

Thus at some point I set up my own domain and mail server. And right from the start I wanted more than a mail server: I wanted a groupware server. Calendars, address books, and such like. I do not recall how it all started, or what the first setup looked like, but I know that there was a Zarafa instance once, in 2013. Also I used OpenLDAP for a while, munin was in there as well, even a trac service to host a git repository. Certificates were shipped via StartSSL. Yeah, good times.

In summer 2017 this changed: I moved Zarafa out of the picture; in came SOGo. Also, trac was replaced by Gitlab and that again by Gitea. The mail server was completely based on Postfix, Dovecot and the like (Amavisd, SpamAssassin, ClamAV). OpenLDAP was replaced by FreeIPA, StartSSL by Let's Encrypt. All this was set up via docker containers, for easier separation of services and for simpler management. Nginx was the reverse proxy. Besides the groupware components and the git server there was also an OwnCloud (later Nextcloud) instance. Some of the container images were upstream, some I built myself. There was even a secondary mail server for emergencies, though that one was always somewhat out of date in terms of configuration.

This all served me well for years. Well, more or less. It was never perfect and missed a lot of features. But most mail got through.

Why the restart?

If it all served me well, why did I have to re-create the setup? Well, a few days ago I had to run an update of the certificates (still manually at that time). Since I had to bring down the reverse proxy for it, I decided to run a full update of the underlying OS and also of the docker images and to reboot the machine.

It went fine, came back up – but something was wrong. Postfix had problems accepting mails. The more I dug down, the deeper the rabbit hole got. Postfix simply didn’t answer after the “DATA” part in the SMTP communication anymore. Somehow I got that fixed – but then Dovecot didn’t accept the mails for unknown reasons, and bounces were created!

I debugged for hours. But every time I thought I had figured it out, another problem came up. At one point I realized that the underlying FreeIPA service was restarting erratically and I had no idea why.

After three or four days I still had no idea what was going on or why my system was behaving so badly. Even with a verified working configuration from backup, things still broke randomly. I was not able to receive or send mails reliably. My three major suspects were:

FreeIPA had a habit in the past to introduce new problems in new images – maybe this image was broken as well? But I wasn’t able to find overly obvious issues or reports.

Docker was updated from an outdated version to something newer – and Docker never was a friend of CentOS firewall rules. Maybe the recent update screwed up my delicate network setup?

Faulty RAM? Weird, hard-to-reproduce, ever-changing errors in known-working setups can be a sign of faulty RAM. Maybe the hardware was done for.

I realized I had to make a decision: abandon my own mail hosting approaches (the more sensible option) – or get a new setup running fast.

Well – guess what I did?

Running your own mail server: there is a project for that!

I decided to re-create my setup. And this time I decided to not do it all by myself: Over the years I noticed that I was not the only person with the crazy idea to run their own mail server in containers. Others started entire projects around this with many contributors and additional tooling. I realized that I would lose little by using code from such existing projects, but would gain a lot: better tested code, more people to ask and discuss if problems arise, more features added by others, etc.

Two projects had caught my interest over time, and I had already followed them on GitHub for quite a while: Mailu and mailcow. Indeed, my original plan was to migrate to one of them in the long term, like in 2021 or something, and maybe even hosted on Kubernetes or at least Podman. However, with the recent outage of my mail server I had to act quickly, and decided to go with a Docker based setup again.

Both projects mentioned above are basically built around Docker Compose, Postfix, Dovecot, RSpamd and some custom admin tooling to make things easier. If you look closer they both have their advantages and special features, so if you are thinking of running your own mail server I suggest you look into them yourself.

For me the final decision was to go with mailu: mailu does support Kubernetes and I wanted to be prepared for a Kubernetes-based future.

What’s next?

So with all this background you already know what to expect from the next posts: how to bring up mailu as a mail server, how to add Nextcloud and Gitea to the picture, and a few other gimmicks.

This will all be tailored to my needs – but I will try to keep it all as close to the defaults as possible. First to keep it simple but also to make this content reusable for others. I do hope that this will help others to start using their own setups or fine tuning what they already have.

FastAPI is a modern Python web framework that leverages the latest Python improvements in asyncio. In this article you will see how to set up a container based development environment and implement a small web service with FastAPI.

Getting Started

The development environment can be set up using the Fedora container image. The following Dockerfile prepares the container image with FastAPI, Uvicorn and aiofiles.
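The Dockerfile itself is not reproduced in this extract; a minimal sketch matching the description (the base image tag, the use of dnf to get pip, and the uvicorn flags are all assumptions) might look like:

```dockerfile
FROM registry.fedoraproject.org/fedora:32

# python3-pip is not part of the base image
RUN dnf install -y python3-pip && dnf clean all \
    && pip3 install fastapi uvicorn aiofiles

WORKDIR /app
EXPOSE 8000

# --reload restarts the server whenever main.py changes
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--reload"]
```

It could then be built and started with something like `podman build -t fastapi .` followed by `podman run -d --name fastapi -p 8000:8000 -v "$PWD":/app:Z fastapi` (both commands are assumptions based on the surrounding text).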

You now have a running web service using FastAPI. Any changes to main.py will be automatically reloaded. For example, try changing the “Hello Fedora Magazine!” message.

To stop the application, run the following command.

$ podman stop fastapi

Building a small web service

To really see the benefits of FastAPI and the performance improvements it brings (see the comparison with other Python web frameworks), let’s build an application that performs some I/O. You can use the output of the dnf history command as data for that application.

First, save the output of that command in a file.

$ dnf history | tail --lines=+3 > history.txt

The command is using tail to remove the headers of dnf history which are not needed by the application. Each dnf transaction can be represented with the following information:

id : number of the transaction (increments every time a new transaction is run)

command : the dnf command run during the transaction

date: the date and time the transaction happened

Next, modify the main.py file to add that data structure to the application.

This function makes use of the aiofiles library which provides an asyncio API to manipulate files in Python. This means that opening and reading the file will not block other requests made to the server.

Finally, change the root function to return the data stored in the transactions list.

Conclusion

FastAPI is gaining a lot of popularity in the Python web framework ecosystem because it offers a simple way to build web services using asyncio. You can find more information about FastAPI in the documentation.

Josh and Kurt talk about a grab bag of topics. A DNS security flaw, port scanning your machine from a web browser, and CSV files running arbitrary code. All of these things end up being the result of corner cases. Letting a corner case be part of a default setup is always a mistake. Yes always, not even that one time.

Show Notes

On January 2nd, 2020, I started as the Head of Software Services at Spearline, an audio quality testing company based out of Skibbereen, Ireland. At Spearline, most of our infrastructure is in Amazon’s cloud, but we do have over a hundred callservers around the world. These servers are the ones that actually place the phone calls that we use to check the audio quality on the lines that we’re testing. One of my tasks is to improve security, and one way I’ve done that is to move our callservers behind a VPN that’s connected to our primary Amazon VPC.

Now, to give a bit of background, most of our work and all of our data processing happens in the eu-west-1 region, but we do actually have VPCs with one or two servers set up in most of the available AWS regions. Each region is connected to all the other regions with a Peering Connection, which allows us to, for example, have a server in Singapore connect to one of our servers in Ireland using private IP addresses only.

The problem is that we have many callservers that aren’t in AWS, and, traditionally, these servers would have been whitelisted in our infrastructure based on their public IP address. This meant that we sometimes had unencrypted traffic passing between our callservers and the rest of our infrastructure, and that there was work to do when a callserver changed its public IP address. It looked like the best solution was to set up a Wireguard VPN server and have our callservers connect using Wireguard.

Since the VPN server was located in eu-west-1, this had the unfortunate side effect of dramatically increasing the latency between the callserver and servers in other regions. For example, we have a non-AWS callserver located in Singapore that was connecting to a server in the AWS region ap-southeast-1 (Singapore) to figure out where it was supposed to call. The latency between the two servers was about 3 ms, but when going through our VPN server in Ireland, the latency jumped to almost 400ms.

The other problem is that Amazon VPC peering agreements do not allow you to forward traffic from a non-VPC private IP address. So, if the private IP range for our Ireland VPC was 10.2.0.0/16 and the private range for our callservers was 10.50.0.0/16, Singapore would only allow traffic coming from the Ireland VPC if it was from 10.2.0.0/16 and drop all traffic originating from a VPN client. AWS does allow you to create Transit Gateways that will allow extra ranges through, but they cost roughly $36 a month per region, which was jacking up the cost of this project significantly.

Diagram of VPN server per region configuration

My solution was to setup a VPN server (mostly t3.nano instances) in each region that we have servers. These VPN servers communicate with each other over a “backbone” VPN interface, where they forward traffic from the VPN client to the appropriate VPN server for the region. So, for example, if a VPN client connected to the vpn-ireland server wanted to connect to a server in the ap-southeast-1 region, the vpn-ireland server would forward the traffic to the vpn-singapore server, which would then send the traffic into our ap-southeast-1 VPC. The server in the VPC would respond, and since its target is a VPN address, the traffic would go back to the vpn-singapore server, which would send it back to vpn-ireland, which would then pass it back to the VPN client.

Traffic route from VPN client in Ireland to server in Singapore

I then wrote a simple script to run on the VPN servers to compare each client’s latest handshake with the other VPN servers and automatically route traffic to the appropriate server. This led me to my final optimization. I did some investigation and found that Amazon has a product, the AWS Global Accelerator, that allows you to forward a single public IP address to different servers in different regions, depending on where the client connecting to the IP is located. Because Wireguard is stateless, this allows us to have clients automatically connect to the closest VPN server, and, within about five seconds, all the VPN servers will be routing traffic appropriately.

Using the Singapore example above, this setup allows our non-AWS Singapore server to once again ping a server in AWS region ap-southeast-1 with a latency of 3 ms, without affecting its latency to Ireland in any significant way. And the best part is that we don’t have to tell the Singapore server which VPN server is closest. It goes to the closest one automatically.

Building the VPN

To set up your own multi-region Wireguard VPN network, do the following. Note that we use ansible to do most of it.

Set up a VPC in each region you care about. For each VPC, set up a peering connection with all of the other VPCs. Make sure each VPC uses a different subnet (I’d suggest using something like 10.1.0.0/16, 10.2.0.0/16, etc). Creating a VPC is beyond the scope of this blog entry.

Set up a t3.nano instance in each region you care about in the VPC you created above. I would suggest using a distribution with a new enough kernel that Wireguard is built in, something like Fedora. Make sure each instance has an Elastic IP.

Verify that each VPN server can ping the other VPN servers using their private (in-VPC) IPs

Turn on IP forwarding (net.ipv4.ip_forward=1) and turn off the return path filter (net.ipv4.conf.all.rp_filter=0). Also, make sure to disable the “Source destination check” in AWS.
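Assuming the settings are persisted with a sysctl drop-in file (the file name is arbitrary), this might look like:

```
# /etc/sysctl.d/90-wg-vpn.conf -- load with `sysctl --system`
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
```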

Set up a new route table called backbone in /etc/iproute2/rt_tables
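The route table is a single extra line in /etc/iproute2/rt_tables; the table number 200 here is an arbitrary unused value:

```
# /etc/iproute2/rt_tables
200     backbone
```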

Open up UDP ports 51820-51821 and TCP port 51819 in the firewall.

Set up a “backbone” Wireguard interface on each VPN server, using the config here as a starting point. Each server must have a unique key and unique IP address, but they should all use the same port. Each server should have entries for all the other servers with their public key and (assuming you want to keep AWS traffic costs down) private IP address. AllowedIPs for each entry should include the server’s backbone IP address (10.50.0.x/32) and the server’s VPC IP range (10.x.0.0/16). This will allow traffic to be forwarded through the VPN server to the attached VPC. Ping the other VPN servers’ backbone IP addresses to verify connectivity over the VPN.
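The linked starting-point config is not reproduced in this extract; a sketch of what the backbone interface on one server might contain (all keys, addresses and the port are placeholder values, not the real configuration):

```
# /etc/wireguard/backbone.conf on vpn-ireland (illustrative)
[Interface]
PrivateKey = <vpn-ireland-private-key>
Address = 10.50.0.1/24
ListenPort = 51821

# Peer entry for vpn-singapore: its backbone IP plus its whole VPC range
[Peer]
PublicKey = <vpn-singapore-public-key>
Endpoint = 10.3.0.10:51821
AllowedIPs = 10.50.0.3/32, 10.3.0.0/16
```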

Add the backbone interface to your firewall’s trusted zone

Set up a “client” Wireguard interface on each VPN server, using the config here as a starting point. This should contain the keys and IP addresses for all your VPN clients, and should be identical on all the VPN servers

Start the wg-route service from wg-route on GitHub on all the VPN servers. The service will automatically detect the other VPN servers and start exchanging routes to the VPN clients. Please note that the clocks need to be fairly well synchronized across all the VPN servers

Connect a VPN client to one of the VPN servers. Within five to ten seconds, all the servers should be routing any traffic to that VPN client through the server that it’s connected to. Test by pinging the different VPN servers’ backbone IP addresses from the client

Start the wg-status service from wg-route on GitHub on all the VPN servers. This service will let the Global Accelerator know that this VPN server is ready for connections

Set up an AWS Global Accelerator and add a listener for the UDP port used in your “client” Wireguard interface. For the listener, add an endpoint group for each region in which you’ve set up a VPN server, with a TCP health check on port 51819. Then, in each endpoint group, add the VPN server in the region as an endpoint.

Point your VPN client to the Global Accelerator IP. You should be able to ping any of the VPN servers. If you log in to one of the VPN servers and run journalctl -f -u wg-route -n 100, you should see a log message telling you which VPN server your client connected to.

Problems and limitations

If you bring down a VPN server (by running systemctl stop wg-status), any clients connected to that server will continue to stay connected to that server until there’s been 30 seconds of inactivity on the UDP connection. If you’re using a persistent keep-alive of less than 30 seconds, that means the client will always stay connected, even though a new client will be connected to a different server. This is due to a bug in the AWS Global Accelerator, and, according to the Amazon technician I spoke to, they are working on fixing it. For now, a script on the VPN client that re-initializes the UDP connection when it’s unable to ping the VPN server is sufficient.

If a VPN server fails, the VPN clients should switch to another VPN server (see the limitation above), but they will be unable to access any servers in the VPC that the failed VPN server is in. There are two potential solutions. Either move all servers in the VPC onto the VPN, removing the speed and cost benefits of using the VPC, or setup a network load balancer in the VPC, and spin up a second VPN server. Please note that the second solution would require some extra work on backbone routing that hasn’t yet been done.

Did you ever need to write an XML parser from scratch? You can have a parser ready in a few minutes! Let me introduce you to xsd2go.

Why bother?

Most of my readers will probably have experience with the widespread XML applications like RSS or Atom feeds, SVG, XHTML. For those well known XML applications you will find a good library encapsulating the parsing for you. You just include an existing parser in your project and you are done with it. However, what would you do if you cannot use it (think of a license mismatch), or what would you do if there was no parsing library at all?

What is XSD?

You already know that, but let me briefly run through it. XSD stands for XML Schema Definition. For a given XML application (think of RSS) it describes what a well-formed document looks like, describing the structure really well. XSD will tell you what attributes each element has, what sub-elements can be found in each element, and what the cardinalities are, meaning how many sub-elements of a certain type you can expect, which are optional and which are not.

In effect (and by design), XSD can be used to automatically assess a document’s adherence to a given standard.

Little snark: XSD is a true XML application and hence it is expressive: you will find many ways to achieve the same descriptive result.

What is XSD2Go?

XSD2Go is a minimalistic project that converts XSD to Go code. For a given set of XSD files, xsd2go produces Go code containing the structures/models relevant for parsing the given XML format. The produced Go code contains XML parsing hints that can be used together with the standard encoding/xml Go package to parse the XMLs.

⚠️ You should run xsd2go, before ever importing encoding/xml to your project. ⚠️

I cannot stress this enough.

I mean, xsd2go is 6 days old and thus very unfinished, but I still believe it already presents good value compared to starting from scratch.

How does it work?

Just briefly. Xsd2go will

parse your master XSD file

and process all xsd:import elements, parsing the imported XSD files

effectively building a workspace and a dependency tree of all relevant XSDs.

and lastly, kudos
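As an illustration of the kind of models such a tool enables, here is a hand-written sketch in the style of encoding/xml structs — the schema, type names and tags are invented for this example, not actual xsd2go output:

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// Item and Channel model a tiny RSS-like document; real generated
// code would mirror the XSD instead.
type Item struct {
	Title string `xml:"title"`
	Link  string `xml:"link"`
}

type Channel struct {
	XMLName xml.Name `xml:"channel"`
	Title   string   `xml:"title"`
	Items   []Item   `xml:"item"`
}

// parseChannel unmarshals an XML document into the model using
// the struct tags as parsing hints.
func parseChannel(doc []byte) (Channel, error) {
	var c Channel
	err := xml.Unmarshal(doc, &c)
	return c, err
}

func main() {
	doc := []byte(`<channel><title>Example</title><item><title>Post</title><link>https://example.com/</link></item></channel>`)
	c, err := parseChannel(doc)
	if err != nil {
		panic(err)
	}
	fmt.Println(c.Title, len(c.Items))
}
```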

Here’s your report of what has happened in Fedora this week. Fedora 30 has reached end-of-life. Elections voting is open through 11 June. I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. Announcements Help wanted Upcoming meetings Releases CPE update […]

I did not realise that there are plenty of videos of events at which I spoke, interviews I gave and panels I was part of. For my own purposes, I thought it best to bring them together in a playlist. All the videos are on YouTube (and I am sure there would be some on Vimeo as well) and I will have to do this housekeeping every now and then.

I’ve found 36 videos so far. I would like to place a link to each here (extracted from the playlist). Will do that soon.

In the last few releases, new features were delivered to make Cockpit meet the Common Criteria, making it possible to undergo the certification process in the near future.
This certification is often required for large organizations, particularly in the public sector, and also gives users more confidence in using the Web Console without risking their security.

This article provides a summary of these new changes with reference to the given CC norms.

Cockpit session tracking

There is a multitude of tools to track logins. Cockpit sessions are now correctly registered in
utmp, wtmp and btmp, allowing them to be displayed in tools like who, w, last and lastlog.
Cockpit also works correctly with pam_tally2 and pam_faillock.

Support for banners on the login page

Companies or agencies may need to show a warning stating that use of the computer is for lawful purposes only, that the user is subject to surveillance, and that trespassers will be prosecuted.
This must be shown before login so that users have fair warning.
Like SSH, Cockpit can optionally show the content of a banner file on the login screen.

This needs to be configured in /etc/cockpit/cockpit.conf. For example, to show the content of /etc/issue.cockpit on the login page:
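Assuming the option is named Banner, as in the cockpit.conf documentation, the configuration would look like:

```
[Session]
Banner=/etc/issue.cockpit
```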

Session timeouts

To prevent abuse of forgotten Cockpit sessions, Cockpit can be set up to automatically log users out of their current session after some time of inactivity.
The timeout (in minutes) can be configured in /etc/cockpit/cockpit.conf. For example, to log out the user after 15 minutes of inactivity:

[Session]
IdleTimeout=15

Delivered in version 209 (with a default timeout of 15 minutes; since version 218 the default timeout is disabled).

Show “last login” information upon login

Cockpit displays information about the last time the account was used and how many failed login attempts for this account have occurred since the last successful login.
This is an important and required security feature so that users are aware if their account has been logged into without their knowledge or if someone is trying to guess their password.

Event logging is a central source of information both for IT security and operations, but different teams use different tools to collect and analyze log messages. The same log message is often collected by multiple applications. Having each team use different tools is complex, inefficient and makes systems less secure. Using a single application to create a dedicated log management layer independent of analytics instead, however, has multiple benefits.

Using syslog-ng is a lot more flexible than most log aggregation tools provided by log analytics vendors. This is one of the reasons why my talks and blogs focused on how to make your life easier using its technical advantages. Of course, I am aware of the direct financial benefits as well. If you are interested in that part, talk to my colleagues on the business side. They can help you to calculate how much you can save on your SIEM licenses when syslog-ng collects log messages and ensures that only relevant messages reach your SIEM and only at a predictably low message rate. You can learn more about this use case on our Optimizing SIEM page.

In this blog, I will focus on a third aspect: simplifying complexity. This was the focus of many of my conference discussions before the COVID-19 pandemic. If we think a bit more about it, we can see that this is not really a third aspect, but a combination of the previous two instead. Using the flexibility of syslog-ng, we create a dedicated log management layer in front of different log analytics solutions. By reducing complexity, we can save in many ways: on computing and human resources, and on licensing when using commercial tools for log analysis as well.

Back to basics

While this blog is focusing on how to consolidate multiple log aggregation systems that are specific to analytics software into a common log management layer, I also often see that many organizations still do not see the need for central log collection. So, let’s quickly jump back to the basics: why central log collection is important. There are three major reasons:

Convenience: a single place to check logs instead of many.

Availability: logs are available even when the sender machine is down or offline.

Security: you can check the logs centrally even if a host was breached and logs were deleted or falsified locally.

Reducing complexity

Collecting system logs with one application locally, forwarding the logs with another one, collecting audit logs with a different app, buffering logs with a dedicated server, and processing logs with yet another app centrally means installing several different applications on your infrastructure. This is the architecture of the Elastic stack, for example. Many others are simpler, but still separate system log collection (journald and/or one of the syslog variants) from log shipping. This is the case with the Splunk forwarder and many of the different Logging as a Service agents. And on top of that, you might need a different set of applications for different log analysis software. Using multiple software solutions makes a system more complex and difficult to update, and it needs more computing, network and storage resources as well.

All these features can be implemented using a single application, which in the end can feed multiple log analysis software. A single app to learn and to follow in bug & CVE trackers. A single app to push through the security and operations teams, instead of many. Fewer resources needed, both on the human and the technical side.

Implementing a dedicated log management layer

The syslog-ng application collects logs from many different sources, performs real-time log analysis by processing and filtering them, and finally, it stores the logs or routes them for further analysis.

In an ideal world, all log messages come in a structured format, ready to be used for log analysis, alerting or dashboards. But in the real world, only part of the logs fall into this category. Traditionally, most log messages come as free-format text messages. These are easy for humans to read, which was the original use of log messages. However, nowadays logs are rarely processed by the human eye. Fortunately, syslog-ng has several tools to turn unstructured (and many of the structured) message formats into name-value pairs, and thus delivers the benefits of structured log messages.

Once you have name-value pairs, log messages can be further enriched with additional information in real time, which helps respond to security events faster. One way of doing that is adding geo-location based on IP addresses. Another way is adding contextual data from external files, like the role of a server based on its IP address or the role of a user based on the name. Data from external files can also be used to filter messages (for example, to check firewall logs to determine whether certain IP addresses are contained in various blacklists of malware command centers, spammers, and so on).

Logging is subject to an increasing number of compliance regulations. PCI-DSS and many European privacy laws require removing sensitive data from log messages. Using syslog-ng, logs can be anonymized in a way that they are still useful for security analytics.

With log messages parsed and enriched, you can now make informed decisions where to store or forward log messages. You can already do basic alerting in syslog-ng, and you can receive critical log messages on a Slack channel. There are many ready-to-use destinations within syslog-ng, like Kafka, MongoDB or Elasticsearch. Also, you can easily create your own custom destination based on the generic network or HTTP destinations, and using templates to log in a format as required by a SIEM or a Logging as a Service solution, like Sumo Logic.
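As a sketch of such routing (the source name and all option values are placeholders following syslog-ng 3.x configuration syntax; a real setup would differ):

```
# Parse incoming JSON logs into name-value pairs and route
# them to Elasticsearch; s_net is a placeholder source.
destination d_elastic {
    elasticsearch-http(
        url("http://localhost:9200/_bulk")
        index("logs-${YEAR}.${MONTH}.${DAY}")
        type("")
    );
};

log {
    source(s_net);
    parser { json-parser(prefix(".json.")); };
    destination(d_elastic);
};
```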

What is next?

Many of these concepts were covered before in earlier blogs, and the individual features are covered well in the documentation. If you want to learn more about them and see some configuration examples, join me at the Pass the SALT conference, where among many other interesting talks, you can also learn more in depth about creating a dedicated log management layer.

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik.

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, a perfect solution for such tests, and also as base packages.

RPMs of PHP version 7.4.7RC1 are available as SCL in the remi-test repository, and as base packages in the remi-test repository for Fedora 32 or the remi-php74-test repository for Fedora 30-31 and Enterprise Linux 7-8.

RPMs of PHP version 7.3.19RC1 are available as SCL in the remi-test repository, and as base packages in the remi-test repository for Fedora 30-31 or the remi-php73-test repository for Enterprise Linux.

PHP version 7.2 is now in security mode only, so no more RC will be released.

Voting in the Fedora 32 elections is now open. Go to the Elections app to cast your vote. Voting closes at 23:59 UTC on Thursday 11 June. Don’t forget to claim your “I Voted” badge when you cast your ballot. Links to candidate interviews are below. Fedora Council There is one seat open on the […]

This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with Maria Leandro Fedora account: tatica IRC nick: tatica (found in fedora-social – fedora-latam – fedora-ambassadors – fedora-design – […]

This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with Alessio Ciregia Fedora account: alciregi IRC nick: alciregi (found in fedora-join #fedora-it #fedora-ask #fedora others) Fedora user wiki […]

This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with Daniel Lara Fedora account: danniel IRC nick: danniel (found in #fedora #fedora-ambassadors #fedora-br #fedora-latam) Fedora user wiki page […]

This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with Sumantro Mukherjee Fedora account: sumantrom IRC nick: sumantrom (found in fedora-qa #fedora-test-day #fedora-classroom #fedora-india #fedora-meeting #fedora-join #fedora-devel #fedora-kernel […]

This is a part of the Council Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with James Cassell Fedora account: cyberpear IRC nick: cyberpear (I tend to idle in very many channels, participating in […]

This is a part of the Council Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with Aleksandra Fedorova Fedora account: bookwar IRC nick: bookwar (found in #fedora-devel, #fedora-ci) Fedora user wiki page Questions Why […]

This is a part of the Council Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with Till Maas Fedora account: till IRC nick: tyll (found in #fedora-devel, #fedora-de, #fedora-meeting-1, #nm, #nmstate, #systemroles) Fedora user […]

This is a part of the Council Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with Alberto Rodriguez Sanchez Fedora account: bt0dotninja IRC nick: bt0 (found in fedora-commops #fedora-mktg #fedora-ambassadors #fedora-latam #fedora-join #fedora-mindshare #fedora-neuro) […]

This is a part of the FESCo Elections Interviews series. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with Frantisek Zatloukal Fedora account: frantisekz IRC nick: frantisekz (found in fedora-qa, fedora-devel, fedora-admin) Fedora user wiki page Questions Why do you want to be a […]

Due to an invalid TLS certificate on MITRE’s CVE request form, I have — ironically — been unable to request a new CVE for a TLS certificate verification vulnerability for a couple weeks now. (Note: this vulnerability does not affect WebKit and I’m only aware of one vulnerable application, so impact is limited; follow the link if you’re curious.) MITRE, if you’re reading my blog, your website’s contact form promises a two-day response, but it’s been almost three weeks now, still waiting.

Update May 29: I received a response today stating my request has been forwarded to MITRE’s IT department, and less than an hour later the issue is now fixed. I guess that’s score +1 for blog posts. Thanks for fixing this, MITRE.

Of course, the issue is exactly the same as it was five years ago, the server is misconfigured to send only the final server certificate with no chain of trust, guaranteeing failure in Epiphany or with command-line tools. But the site does work in Chrome, and sometimes works in Firefox… what’s going on? Again, same old story. Firefox is accepting incomplete certificate chains based on which websites you’ve visited in the past, so you might be able to get to the CVE request form or not depending on which websites you’ve previously visited in Firefox, but a fresh profile won’t work. Chrome has started downloading the missing intermediate certificate automatically from the issuer, which Firefox refuses to implement for fear of allowing the certificate authority to track which websites you’re visiting. Eventually, we’ll hopefully have this feature in GnuTLS, because Firefox-style nondeterministic certificate verification is nuts and we have to do one or the other to be web-compatible, but for now that is not supported and we reject the certificate. (I fear I may have delayed others from implementing the GnuTLS support by promising to implement it myself and then never delivering… sorry.)

We could have a debate on TLS certificate verification and the various benefits or costs of the Firefox vs. Chrome approach, but in the end it’s an obvious misconfiguration and there will be no further CVE requests from me until it’s fixed. (Update May 29: the issue is now fixed. :) No, I’m not bypassing the browser security warning, even though I know exactly what’s wrong. We can’t expect users to take these seriously if we skip them ourselves.

This blog post originally started out as a way to point out why the NVD CVSS scores are usually wrong. One of the amazing things about having easy access to data is that you can ask a lot of questions, questions you didn’t even know you had, and find answers right away. If you haven’t read it yet, I wrote a very long series on security scanners. One of the struggles I have is that there are often many “critical” findings in those scan reports that aren’t actually critical. I wanted to write something that explained why that was, but because the data took me somewhere else, this is the post you get. I knew CVSSv3 wasn’t perfect (even the CVSS folks know this), but I found some really interesting patterns in the data. The TL;DR of this post is: it may be time to start talking about CVSSv4.

It would have been easy to write a post that made a lot of assumptions and made up facts to suit whatever argument I was trying to make (which was the first draft of this). I decided to crunch some data to make sure my hypotheses were correct, and because graphs are fun. It turns out I learned a lot of new things, which of course also means it took me way longer to do this work. The scripts I used to build all these graphs can be found here if you want to play along at home. You can save yourself a lot of suffering by using my work instead of trying to start from scratch.

Firstly, we’re going to do most of our work with the whole-integer parts of the CVSSv3 scores. The scores are generally an integer and one decimal place, for example ‘7.3’. Keeping the decimal place makes the data much harder to read in this post, and the results using only integers were the same. If you don’t believe me, try it yourself.
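The bucketing step described above can be sketched in a few lines of Python. The sample scores here are hypothetical, just to show the shape of the transformation, not real NVD data:

```python
from collections import Counter

def bucket_scores(scores):
    """Truncate CVSSv3 scores (e.g. 7.3) to whole integers and count
    how many scores fall into each integer bucket."""
    return Counter(int(score) for score in scores)

# Hypothetical sample scores, not from the real dataset
sample = [7.3, 7.8, 5.0, 9.8, 7.1, 4.3, 5.5]
print(bucket_scores(sample))  # Counter({7: 3, 5: 2, 9: 1, 4: 1})
```

Graphing the bucket counts for the full dataset gives the distribution discussed next.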

So this is the distribution of CVSSv3 scores NVD has logged for CVE IDs. Not every ID has a CVSSv3 score, which is OK. It’s roughly a bell-curve shape, which should surprise nobody.

Just for the sake of completeness, and because someone will ask, here is the CVSSv2 graph. This doesn’t look as nice, which was one of the problems with CVSSv2: it tended to favor certain scores. CVSSv3 was built to fix this. I show this graph simply to point out that progress is being made; please don’t assume I’m trying to bash CVSSv3 here (I am, a little). I’m using this opportunity to explain some things I see in the CVSSv3 data. We won’t be looking at CVSSv2 again.

Now I wanted something to compare this data to: how can we decide if the NVD data is good, bad, or something in the middle? I decided to use the Red Hat CVE dataset. Red Hat does a fantastic job capturing things like severity and CVSS scores; their data is insanely open, it’s really good, and it’s easy to download. I would like to do this with some other large datasets someday, like Microsoft, but getting access to that data isn’t so simple and I have limited time.

Here are the Red Hat CVSSv3 scores. It looks a lot like the NVD CVSSv3 data, which, given how CVSSv3 was designed, is basically what anyone would expect.

Except, it turns out, it’s not quite the same. If we take the Red Hat score, subtract it from the NVD score for every CVE ID, and graph the result, we get something that shows NVD likes to score higher than Red Hat does. For example, let’s look at CVE-2020-10684. Red Hat gave it a CVSSv3 score of 7.9, while NVD gave it 7.1. This means in our dataset the score would be 7.1 – 7.9 = -0.8.

This data is more similar than I expected. About 41 percent of the scores are within 1 point of each other. The zero bucket doesn’t mean the scores match; very few match exactly. It’s pretty clear from that graph that the NVD scores are generally higher than the Red Hat scores. This shouldn’t surprise anyone, as NVD will generally err on the side of caution, where Red Hat has a deeper understanding of how a particular vulnerability affects their products.
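The comparison can be sketched in a few lines of Python. The CVE-2020-10684 scores come from the example above; the other IDs and scores are hypothetical stand-ins, not the real datasets:

```python
def score_deltas(nvd, redhat):
    """For every CVE ID scored by both sources, subtract the Red Hat
    CVSSv3 score from the NVD score. Negative means Red Hat scored higher."""
    return {cve: round(nvd[cve] - redhat[cve], 1)
            for cve in nvd.keys() & redhat.keys()}

# CVE-2020-10684 uses the scores from the post; the rest are made up.
nvd_scores = {"CVE-2020-10684": 7.1, "CVE-A": 9.8, "CVE-B": 5.3}
redhat_scores = {"CVE-2020-10684": 7.9, "CVE-A": 7.5, "CVE-B": 5.5}

deltas = score_deltas(nvd_scores, redhat_scores)
print(deltas["CVE-2020-10684"])  # -0.8

# Fraction of scores within 1 point of each other
# (about 41 percent in the real data; 2 of 3 in this toy sample)
within_one = sum(abs(d) <= 1 for d in deltas.values()) / len(deltas)
```

Graphing the histogram of these deltas gives the skew described above, with the bulk of the mass above zero.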

By itself, this could be a post about how NVD scores are often higher than they should be. If you receive security scanner reports, you’re no doubt used to a number of “critical” findings that aren’t very critical at all. Those ratings almost always come from this NVD data. I didn’t think this data was compelling enough to stand on its own, so I kept digging: what other relationships existed?

The graph that really threw me for a loop was when I graphed the Red Hat CVSSv3 scores versus the Red Hat assigned severity. Red Hat doesn’t use the CVSSv3 scores to assign severity; they use something called the Microsoft Security Update Severity Rating System. This rating system predates CVSS and in many ways is superior, as it is very simple to score and simple to understand. If you clicked that link and read the descriptions, you can probably score vulnerabilities using this scale now. Learning CVSSv3 will take a few days to get started and a long time to get good at.

If we look at the graph, we can see lows are generally on the left side, moderates in the middle, and highs toward the right, but what’s the deal with those critical flaws? Red Hat’s CVSSv3 scores place them in the moderate to high range, but the Microsoft scale says they’re critical. I looked at some of these; strangely, Flash Player accounts for about 2/3 of those critical issues. That’s a name I thought I would never hear again.

The reality is there shouldn’t be a lot of critical flaws; they are meant to be rare occurrences, and generally are. So I kept digging. What is the relationship between the Red Hat severity and the NVD severity? The NVD severity is based on the CVSSv3 score.

This is where my research sort of fell off the rails. The ratings provided by NVD and the ratings Red Hat assigns have some substantial differences. I have a few more graphs that help drive this home. If we look at the NVD ratings versus the Red Hat ratings, we see the inconsistency.

I think the most telling graph here is the one showing that Red Hat Low vulnerabilities are mostly rated medium, high, and critical by the NVD CVSSv3 scoring. That strikes me as a problem. I could maybe understand a lot of low and moderate issues, but there’s something very wrong with this data. There shouldn’t be this many high and critical findings.

Even if we graph the Red Hat CVSSv3 scores for their low issues, the graph doesn’t look like it should, in my opinion. There’s a lot of scoring at 4 or higher.

Again, I don’t think the problem is Red Hat or NVD; I think they’re using the tools they have as best they can. It should be noted that I only have two sources of data, NVD and Red Hat. I really need to find more data to see if my current hypothesis holds, so we can determine whether what we see from Red Hat is repeated elsewhere, or whether Red Hat is an outlier.

There are also some more details that can be dug into. Are there certain CVSSv3 fields where Red Hat and NVD consistently score differently? Are there certain applications and libraries that create the most inconsistency? It will take time to work through this data; I’m not sure how to start looking at this just yet (if you have ideas or want to try it out yourself, do let me know). I view this post as the start of a journey, not a final explanation. CVSS scoring has helped the entire industry. I have no doubt some sort of CVSS scoring will always exist and should always exist.

The takeaway here was going to be an explanation of why the NVD CVSS scores shouldn’t be used to make decisions about severity. I think the actual takeaway now is that the problem isn’t NVD (well, they sort of are); the real problem is CVSSv3. CVSSv3 scores shouldn’t be trusted as the only source for calculating vulnerability severity.

New navigation with integrated switching of hosts

The navigation has been redesigned and brings four major improvements:

One-level navigation: The previous two-level navigation has been collapsed into a single level for better use of space and better discoverability

Integrated host switching: Switching between hosts, as well as editing them, can now be done directly from the navigation without using the ‘Dashboard’ component

Better discoverability of applications: Applications are shown as the first group in the menu and are also searchable

Access level for all hosts: You can change between Administrative and Limited access on every host, right from the navigation

Logs: Inline help for filtering

The previous release introduced new advanced search features for logs.
This release adds a help button that shows an overview of accepted options, and the journalctl command corresponding to the current filter.

Storage: Improve side panel on details page

The side panel on the storage details page has been unified and uses the same layout as on the storage overview page.

Try it out

The 2020 elections for the GNOME Foundation Board of Directors are underway, so it’s a good time to look back over the past 12 months and see what the current board has been up to. This is intended as a general update for members of the GNOME project, as well as a potential motivator for those who might be interested in running in the election!

Who’s on the board?

Rob’s been president, I’ve been the vice-president and chair, Carlos has been treasurer, Philip has been secretary, and Federico has been vice-secretary.

In addition to these formal roles, each of our board members has brought their existing expertise and areas of interest: Britt has brought a concern with marketing and engagement, Federico has been our code of conduct expert, Rob has brought his knowledge of all things Flatpak and Flathub, Carlos knows everything Gitlab, and Philip and Tristan have both been able to articulate the needs and interests of the GNOME developer community.

This year we made greater use of our Gitlab issue tracker for planning meeting agendas. A good portion of the issues there are private, but anyone can interact with the public ones.

Making the board into a board

Historically, the GNOME Foundation Board has performed a mix of different roles, some operational and some strategic. We’ve done everything from planning and approving events, to running fundraisers, to negotiating contracts.

Much of this work has been important and valuable, but it’s not really the kind of thing that a Board of Directors is supposed to do. In addition to basic legal responsibilities such as compliance, annual filings, etc, a Board of Directors is really supposed to focus on governance, oversight and long-term planning, and we have been making a concerted effort to shift to this type of role over the past couple of years.

This professionalising trend has continued over the past year, and we even had a special training session about it in January 2020, when we all met in Brussels. Concrete steps that we have taken in this direction include developing high-level goals for the organisation, and passing more operational duties over to our fantastic staff.

This work is already having benefits, and we are now performing a more effective scrutiny role. Over the next year, the goal is to bring this work to its logical conclusion, with a schedule for board meetings which better reflects its high-level governance and oversight role. As part of this, the hope is that, when the new board is confirmed, we’ll switch from weekly to monthly meetings.

This is also the reason behind our change to the bylaws last year, which is taking effect for the first time in this election. As a result of this, directors will have a term of two years. This will provide more consistency from one year to the next, and will better enable the Foundation and staff to make long-term plans. There has been a concern that people would be unwilling to serve as a Director for a two-year period, but we have significantly reduced the time commitment required of board members, and hope that this will mitigate any concerns prospective candidates might have.

Notable events

The GNOME Foundation has had a lot going on over the last 12 months! Much of this has been “operational”, in the sense that the board has been consulted and has provided oversight, but hasn’t actually been doing the work. These things include hiring new staff, the coding education challenge that was recently launched, and the Rothschild patent case which was settled only last week.

In each case the board has been kept informed, has given its view and has had to give formal approval when necessary. However, the areas where we’ve been most actively working have, in some respects, been more prosaic. This includes things like:

Code of conduct. The board was involved with the review and drafting of the new GNOME code of conduct, which we subsequently unanimously approved in September 2019. We also set up the new Code of Conduct Committee, which is responsible for administering the code of conduct.

Linux App Summit 2019, which happened in Barcelona. This event happened due to the joint support of the GNOME Foundation and KDE e.V, and the board was active in drafting the agreement that allowed this joint support to take place.

Guidelines for committees. As the board takes a more strategic oversight role, we want our committees to be run and report more consistently (and to operate according to the bylaws), so we’ve created new guidelines.

2020 budget. The foundation has had a lot going on (the coding challenge, patent case, etc) and all of this impacted the budget, and made financial scrutiny particularly important.

GNOME software definition and “Circle” proposal. This is a board-led initiative which addresses a long-standing confusion around which projects should be included within GNOME and make use of our infrastructure and branding, and whether the teams involved are eligible for Foundation membership. The initiative was announced on Discourse last week for initial community feedback.

Updated conference policy. This primarily involved passing responsibility for conference approvals to our staff, but we have also clarified the rules for conference bidding processes (see the policy page).

In addition to this, the board has been involved with its usual events and workload, including meeting with our advisory board, the AGM, and voting on any issues which require an OK from the board.

Phew.

2020 Elections

As I mentioned at the beginning of this post, the 2020 board elections are currently happening. Candidates have until Friday to announce their interest. As someone who has served on the board for a while, it’s definitely something that I’d recommend! If you’re interested and want more information, don’t hesitate to reach out. Or, if you’re feeling confident, just throw your hat in the ring.

Tor provides a SOCKS proxy so that any application can use it to connect to the
Onion network. The default port is 9050. The Tor Browser also provides the same
service, on port 9150. In this post, we will see how we can use that SOCKS
proxy to access the Internet using Rust.