
As an avid mutt-kz user, I have always found it quite annoying
to have to use a web browser or my phone to check my work calendar or upcoming birthdays.
I have slowly started using khal, which is shaping up to be
a very nice calendaring application for use within terminals.

For Fedora users I have created a COPR repo. As root, simply run:

dnf copr enable mbaldessari/khal

and then launch:

dnf install khal

This will install the following packages: python-atomicwrites, vdirsyncer and khal.
Once installed, we need to tell vdirsyncer where to fetch the CalDAV entries from.
My ~/.vdirsyncer/config is as follows and contains my birthday list from Facebook and my work calendar:
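The actual config is not reproduced here, but a minimal sketch of such a file could look like the following (the server URL, paths and credentials are placeholders, and the exact syntax varies between vdirsyncer versions):

[general]
status_path = ~/.vdirsyncer/status/

[pair work]
a = work_local
b = work_remote

[storage work_local]
type = filesystem
path = ~/.calendars/work/
fileext = .ics

[storage work_remote]
type = caldav
url = https://calendar.example.com/dav/
username = someuser
password = somepassword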

Introduction

One of the many reasons to love Performance Co-Pilot is that it is a
fully fledged framework for doing performance analysis. It makes it extremely simple to extend
and to build anything on top of it. In this post we shall explore how simple it is to analyze your
performance data using IPython and pandas.

Setup

To start we will need some PCP archives containing metrics collected from a system. In
this post I will use the data I collect on my home firewall and will try to analyze some of
the data therein. To learn how to store performance metrics in an archive, take a look at
pmlogger and the Quickstart guide.
For this example I collected data over the course of a day at a one-minute interval.

IPython and PCP

First of all, you need to import a small Python module that bridges PCP and pandas/numpy:

At this point the data is fully parsed in memory and we can start analyzing it, using
all the standard tools like pandas and matplotlib.
Let’s start by looking at how many metrics are present in the archive:

In [4]: metrics = p.get_metrics()

In [5]: len(metrics)
Out[5]: 253

Pandas and PCP

Now we can get a pandas object out of a metric. Let’s take incoming and outgoing network traffic
expressed in bytes over time.
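Since the archive itself is not available here, a small self-contained sketch with synthetic per-minute samples shows the shape of data you would work with (column names and values are made up for illustration):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for per-minute network byte counters
# as they would come out of a PCP archive.
idx = pd.date_range("2013-01-01", periods=60, freq="min")
rng = np.random.default_rng(42)
df = pd.DataFrame(
    {
        "in_bytes": rng.integers(1_000, 50_000, size=60),
        "out_bytes": rng.integers(1_000, 50_000, size=60),
    },
    index=idx,
)

# Downsample to 10-minute means, e.g. before plotting with matplotlib
summary = df.resample("10min").mean()
print(summary.shape)
```

With a real archive, df would of course be filled from the bridging module rather than from a random generator.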

Another very interesting aspect is the plethora of statistical functions that come for free through
the use of pandas. For example, to compute covariance and correlation we can use the cov() and corr() methods:
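As a self-contained illustration (synthetic series standing in for the firewall data), the two methods work like this:

```python
import numpy as np
import pandas as pd

# Two synthetic, loosely coupled "traffic" series (placeholder data).
rng = np.random.default_rng(0)
in_bytes = pd.Series(rng.integers(1_000, 50_000, size=100).astype(float))
out_bytes = in_bytes * 0.5 + rng.normal(0, 100, size=100)

cov = in_bytes.cov(out_bytes)    # sample covariance of the two series
corr = in_bytes.corr(out_bytes)  # Pearson correlation, near 1 for these series
print(cov, corr)
```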

Export data

Other outputs like LaTeX, SQL, clipboard, HDF5 and more are supported.
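For instance, with a toy DataFrame (only dependency-free exporters are shown here; to_latex() and to_hdf() may need extra packages such as jinja2 and PyTables):

```python
import pandas as pd

df = pd.DataFrame({"in_bytes": [100, 200], "out_bytes": [50, 75]})

csv_text = df.to_csv(index=False)          # plain CSV text
json_text = df.to_json(orient="records")   # a JSON list of records
html_text = df.to_html()                   # an HTML table
print(csv_text)
```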

Conclusions

The versatility of PCP allows anyone to use many currently available frameworks (numpy, pandas, R, scipy) to analyze
and display the collected performance data. There is still some work to be done to make this process a bit simpler with an
out-of-the-box PCP installation.

I was trying to understand some oddities going on with an X11 legacy application that showed bad artifacts in one environment while working flawlessly in another. Since Wireshark does not have any support for diffing two pcaps, I came up with the following steps:

Dump both working.pcap and nonworking.pcap into text files with full headers:
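One way to do this, assuming tshark from the wireshark package is available (the file names match the ones above):

tshark -V -r working.pcap > working.txt
tshark -V -r nonworking.pcap > nonworking.txt
diff -u working.txt nonworking.txt | less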

After initially setting up Performance Co-Pilot and Arduino, I wanted to improve the data being displayed. As latency is quite important to me, I wanted to display that as well. I did not have much time on my hands to code a new PMDA that collects that information, so I abused pmdashping(1) for this purpose. The steps are simple:

Now it is possible to abuse the shping.error metric to fetch that value:


$ pminfo -f shping.error
shping.error
inst [0 or "8.8.8.8"] value 52

The last step was to fetch this via PMWEBAPI(3). This did not work until I realized, thanks to Fche’s suggestion, that the issue was related to my initial context initialization. As a matter of fact there is a big difference between the following two:

/pmapi/context?hostname=STRING – Creates a PM_CONTEXT_HOST PMAPI context with the given host name and/or extended specification.

The man page of pmNewContext(3) explains this in more detail. Frank has added some more info to the PMWEBAPI(3) man page via the following commit, to make it a little bit more obvious. It’s still a pretty gross hack, but for the time being it’s enough for my needs.

Besides being an incredibly nice toolkit to work with, Performance Co-Pilot is extremely simple to integrate with any application. Amongst an extensive API, it allows exporting any metric via JSON using PMWEBAPI(3). I activated this feature on my Soekris firewall by installing PCP and running both the pmcd and the pmwebd services. Once pmwebd is active, querying any metric is quite simple:
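A sketch with curl (pmwebd listens on TCP port 44323 by default; the context id returned by the first call goes into the fetch URL, and the metric name here is just an example):

curl -s 'http://localhost:44323/pmapi/context?hostname=localhost'
curl -s 'http://localhost:44323/pmapi/CONTEXT-ID/_fetch?names=network.interface.in.bytes'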

Armed with an Arduino with an Atmel 328 onboard, an Ethernet shield and a 2x16 LCD, I wanted to display my ADSL bandwidth usage. Instead of having to write a network parser for the PCP protocol (or worse, SNMP), it is simple to use the exported JSON data for this. Here are the results:

I’ll eventually clean up the C code I used for this and publish it somewhere.

When capturing network traffic on an interface, it is usually pretty obvious which direction the packets are going. Let’s take a typical Linux machine that hosts some VMs over a linux bridge. The interfaces will look like this:

In the above example, we could assume that both 58 and 59 were outgoing packets, but we would be wrong. Although its size of 42 bytes suggests that it has not been padded to Ethernet’s minimal frame size, frame number 59 is not really coming from the “external network”. One hint is that ARP requests are sent with a one-second interval between each request, so it would be unlikely that the VM is the creator of the second packet too.
So where is 59 coming from? It turns out that with SR-IOV enabled, some cards’ onboard switch loops packets back.
Why is that a problem? Glad you asked.

When the Linux bridge sees packet 59, it records the MAC address 52:54:00:11:22:33 as coming from eth0, and not from the locally connected vnet0 tunnel. When packet 60 arrives, the bridge drops it because it believes the destination MAC address is on eth0.

Long story short: in order to troubleshoot these kinds of issues, I know of three ways to see the direction of packets:

tcpdump

With a fairly recent tcpdump/libpcap you can specify the -P in|out|inout option and capture traffic in a specific direction. In a situation like the one described here, it will be a bit cumbersome as you will need two separate tcpdump instances, but it works.
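For example, two captures could be run side by side like this (note that newer tcpdump releases spell this option -Q rather than -P; the interface name is just an example):

tcpdump -i eth0 -P in -w eth0-in.pcap
tcpdump -i eth0 -P out -w eth0-out.pcap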

netsniff-ng

netsniff-ng can do an incredible number of cool things. Amongst others, it shows the direction of packets by default:

Lately I’ve had the pleasure of having to read several different code bases in a fairly short amount of time, so I spent a little bit of time checking out the available tools to navigate source code more efficiently than launching ‘grep’ all over a code base. First things first, let’s make sure that on this system (Fedora 19) all the needed packages are installed:
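On Fedora 19 that would be something along these lines (the exact package names are my assumption):

yum install ctags cscope vim-enhanced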

Now we can see all the functions, macros and variables of a source file just by pressing F8:

Using Ctrl-], you will always jump to the definition of a symbol, which is pretty handy (use Ctrl-t to go backwards).

cscope

I also use cscope because ctags is not always precise and it does not allow you to see which functions call a given function, which is often quite handy. It is slower than ctags at generating the index, so it is a bit more painful for bigger codebases, but nothing too dramatic. Let’s start by creating the index for our codebase via:

cscope -R -b -q

Note: the above cscope command indexes all the files. You might want to make it smarter and index only the files you are interested in. I do that with a script that takes all the files with certain extensions and skips directories I am not interested in (for example, arch/ia64 of the Linux kernel).
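A minimal sketch of such a script for a C codebase (the glob and the skipped directory are placeholders):

find . -name '*.[ch]' -not -path './arch/ia64/*' > cscope.files
cscope -b -q -i cscope.files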

Once the index ‘cscope.out’ is created, we can use it within VIM. Let’s for example open VIM in the same directory as ‘cscope.out’ and look for “all the functions that call add_matched_proc()” via ‘:cs find c add_matched_proc’.

CCTree

The cctree plugin is not currently packaged in Fedora so we need to install it by hand:

Once installed, we need to load and parse the cscope DB. We do this with ‘:CCTreeLoadDB cscope.out’. Once this is done, we can ask ourselves questions like “what is the call graph of functions starting from main()?” We type “:CCTreeTraceForward main” and get something like the following:

It’s unfortunately very official now, our beloved Red Hat colleague and friend Ray Dassen has passed away. I still can’t believe it, as we had just spent a week in Brno together less than a month ago. I will always remember him. JHM++

Here is a little gem we (*) came up with some weeks ago while debugging some strace outputs.

Surprisingly enough (to me at least), I discovered that my ISP actually does support IPv6. You simply need to configure your PPP connection with the following:

Username: adsl@alice6.it

Password: IPV6@alice6

On my Debian-based Soekris firewall I use shorewall to manage the filtering rules. eth4 is the internal ADSL modem and eth0 is the internal network. In order to distribute IPv6 to my home network, I added the following to /etc/ppp/ipv6-up.d/dsl-provider (nb: it needs the ndisc6 package):