So, these things are great if you want one file to appear in more than one
place. At least once I thought that a hard link could allow two processes
running under different users to access and modify a single file even if that
file had a fairly strict access mode. I could not have been more wrong.

A directory contains a filename-to-inode mapping for each file, and a hard
link is just another filename for the same inode. The directory entry holds
neither the ownership information nor the access mode.

Yes, that information is stored in the inode itself and it is simply shared
between all the names of the file.
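A quick way to see this, sketched with made-up file names:

```shell
# create a file and a second name (hard link) for it
cd "$(mktemp -d)"
echo hello > original
ln original hardlink

# -i prints the inode number: both names share it,
# and the link count is now 2
ls -li original hardlink

# the mode lives in the shared inode, so changing it through
# one name is immediately visible through the other
chmod 600 original
ls -l hardlink
```

Changing ownership with chown behaves the same way.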

Hard links cannot cross the filesystem boundaries because they are simply
referencing an existing inode number and not the path.

You can’t create a hard link to a directory because it could introduce an
infinite loop while traversing the directory tree. You can still do this with
symbolic links, and the system utilities will handle it for you, because it is
always clear which file name is the canonical one.
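For example (directory names made up):

```shell
cd "$(mktemp -d)"
mkdir somedir

# hard-linking a directory is refused outright
ln somedir hard-alias || echo "refused"

# a symbolic link to a directory is fine, and the canonical
# name can always be recovered
ln -s somedir soft-alias
readlink soft-alias
```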

At some point I realized I had no idea why hard links would be useful;
however, as a nice blog post by Paul Cobbaut and its comments suggest, hard
links are used when renaming a file within a single file system: first a new
link to the same inode is created, then the old one is removed.
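The same two steps can be reproduced by hand (modern mv uses the atomic
rename() system call instead, but within a single filesystem the effect is the
same):

```shell
cd "$(mktemp -d)"
echo data > oldname

# step 1: give the inode its new name
ln oldname newname

# step 2: drop the old name; the file data itself is never copied
rm oldname
cat newname   # prints: data
```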

Good to know

They’re useful in the case of mostly-duplicated data, too. Backups are a good example of this. If you’ve got a folder things/ with thing1, thing2, thing3 in it, and you create a backup of it on your backup server, you’ll have:

backupserver:
  /backup-2013-02-23
    /things
      thing1
      thing2
      thing3

then you delete thing2 from your disk and back up again. In theory you'd then
get a new folder on the backup server containing thing1 and thing3, so the
backup server would hold two copies of each, which is wasteful of space
because they haven't changed. However, in practice, you get this:

  /backup-<later-date>
    /things
      thing1
      thing3

with thing1 and thing3 hard-linked to the copies in the first snapshot, so
there's actually only one copy of thing1 even though it appears in both backup
folders. This means that every backup folder looks like a full backup, but
only takes up the space of an incremental one.
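One way to produce such snapshots, sketched with a throwaway layout standing
in for the real paths: cp -al hard-links the whole previous snapshot, and
rsync then replaces only the files that actually changed:

```shell
cd "$(mktemp -d)"
mkdir -p things backups/backup-1/things
echo v1 > things/thing1
echo v1 > things/thing2
cp -a things/. backups/backup-1/things/

# next backup run: hard-link the previous snapshot, then let
# rsync overwrite only what changed since last time
rm things/thing2
cp -al backups/backup-1 backups/backup-2
rsync -a --delete things/ backups/backup-2/things/

# thing1 is unchanged: one inode, two names
ls -li backups/backup-1/things backups/backup-2/things
```

This is safe because rsync writes a changed file to a temporary name and
renames it into place, which breaks the hard link instead of modifying the
data shared with the older snapshot.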

You may have heard about stack overflow (no, not the web site),
but you may have never had a chance to experience what it really is.

In Linux you can control the stack size with "ulimit -s". By default it is 8 MB on an Ubuntu machine:

$ ulimit -s
8192

The program below causes a stack overflow. Note that the application does
nothing, yet it manages to fill its stack space completely.

int main(int argc, char **argv)
{
    char stack[8192 * 1024];
    return 0;
}

$ gcc -o stack stack.c
$ ./stack
Segmentation fault (core dumped)

Even though 8 MB is available to the program, various other things need to be
placed on the stack as well, such as function arguments and return addresses.
When a recursive function breaks and calls itself indefinitely, it eventually
uses up all the stack space and crashes in exactly the same way.
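A sketch of that failure mode: a function that recurses unconditionally, each
call adding another frame to the stack until it runs out:

```shell
cat > recurse.c <<'EOF'
/* each call consumes roughly a kilobyte of stack and never returns,
 * so with an 8 MB stack the program dies after a few thousand calls */
void f(void)
{
    char pad[1024];
    pad[0] = 0;     /* touch the frame so it is really allocated */
    f();
}

int main(int argc, char **argv)
{
    f();
    return 0;
}
EOF
gcc -O0 -o recurse recurse.c
./recurse   # Segmentation fault once the stack is exhausted
```

(-O0 matters here: with optimization enabled the compiler may turn the tail
call into a loop and the program would never crash.)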

I have a second network at home for my virtual machines, and DHCP is set up to
hand out classless static route information. I usually use a 3G connection,
but today I enabled WiFi on my Android phone running 2.2: it brought the WiFi
up, but no hosts could be reached. It turned out that the phone did not want
to set the default route.

It turns out that Android actually implements it the right way:

If the DHCP server returns both a Classless Static Routes option and a
Router option, the DHCP client MUST ignore the Router option.

Oh, so my network is broken! I added the default route to the classless static
routes (which immediately triggered a bug in NetworkManager, though not a
critical one: the gateway is still picked up from the router option), but now
my phone failed to get the DNS server.

After forcing DHCP option 6 with the IP address of my DNS server, the phone
finally connected to the outside world via WiFi.

So now my dnsmasq uci config started to look like this and it works for me:
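The exact addresses below are made up; the relevant parts are dhcp_option 121
(classless static routes, with 0.0.0.0/0 listed so the default route
survives) and dhcp_option 6 (the DNS server):

```
config dhcp 'lan'
	option interface 'lan'
	# option 121: classless static routes; the default route
	# 0.0.0.0/0 must be included, since RFC 3442 clients will
	# ignore the router option when this one is present
	list dhcp_option '121,192.168.2.0/24,192.168.1.2,0.0.0.0/0,192.168.1.1'
	# option 6: DNS server, forced explicitly
	list dhcp_option '6,192.168.1.1'
```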

Another thing that had been annoying me for a while is the command-not-found
package. It is extremely helpful for discovering which package contains the
application I want. However, there were quite a few times when I made a typo,
pressed Enter, noticed it right away, but had to wait for a second or so
before I got the "command not found" message and the prompt back. When the
disk is really busy, making a typo costs me another 30 seconds of disk
thrashing before command-not-found comes up with the friendly suggestion for
killal: Command 'killall' from package 'psmisc'.

Having command-not-found installed and available, but not kicking in on every
occasion, is preferable. The function bash runs when it can’t find a command
is command_not_found_handle, so we simply need to unset this function in
.bashrc and add an alias (in my case packages-providing after LP:486716, but
it can be anything) that executes the real command-not-found script:
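A sketch of the .bashrc part (the script path is the usual Ubuntu location):

```shell
# ~/.bashrc: bash calls command_not_found_handle when a command is
# missing; with the function gone, a typo fails instantly again
unset -f command_not_found_handle

# keep the package lookup available on demand under an explicit name
alias packages-providing='/usr/lib/command-not-found'
```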

When testing network issues with different cards it is a good idea to make sure
they actually have different chipsets.

Update: I found no difference between the NIC operating with the built-in
r8169 module and the version available from RealTek's web site. You will want
to use the vendor-provided module only if your card is not supported by the
driver shipped with the Linux kernel.

Update: See the end of the post; a patch is required for the DGE-528T to be
picked up by this module.

I had a complex VM network setup: a separate VLAN on the router for virtual
machines and a second network card in the server, to prevent connections from
hanging when the network topology changed on VM startup. Then I decided to
simplify the network and use one NIC only, in my case a D-Link DGE-528T.

This card is driven by a Realtek 8169 chip, and for some reason these like to
drop the network connection. I found quite a few topics on the poor
performance of the 8169 with the in-kernel driver, and the built-in r8168 chip
performs way worse under the in-kernel r8169 driver than under the dedicated
r8168 one.

DGE-528T

So I left this module running for about a week, and yesterday I could not make
the card work after a reboot. It turns out that while the updated r8169 module
is installed, it is not being used for the D-Link DGE-528T card.

1186:4c00 is actually a DGE-530T card, but it looks like some DGE-528T cards
have the 1186:4300 subsystem ID and some have 1186:4c00. The original driver
from D-Link uses PCI_ANY_ID for the subsystem fields, so it looks like these
are compatible.

I was so sure the updated module would fix the issues I was having with the
network card that it actually stopped misbehaving.

However, I decided to patch the module to make it work with my card too:
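The patch body is not reproduced here; a hedged sketch of the shape of the
change, assuming the usual r8169 device table layout (field values are
illustrative and kernel-version dependent): the in-tree table matches the
DGE-528T only through its 1186:4c00 subsystem, so cards reporting the
1186:4300 subsystem need their own entry.

```c
/* sketch: drivers/net/r8169.c (exact layout varies between kernels).
 * Fields are vendor, device, subvendor, subdevice, class, class_mask,
 * driver_data. */
static const struct pci_device_id rtl8169_pci_tbl[] = {
	/* ... existing entries ... */
	{ PCI_VENDOR_ID_DLINK, 0x4300,
	  PCI_VENDOR_ID_DLINK, 0x4c00, 0, 0, RTL_CFG_0 },	/* in-tree */
	{ PCI_VENDOR_ID_DLINK, 0x4300,
	  PCI_VENDOR_ID_DLINK, 0x4300, 0, 0, RTL_CFG_0 },	/* added */
	{ 0 }
};
```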

After this I restarted all the applications that were supposed to listen on
the loopback interface and verified the fix with netstat again.

First of all, you need to have a firewall configured on your servers and allow
only trusted incoming connections to trusted applications. This is what
prevented my opendkim installation from accepting the incoming requests from
the internet.

Second, you need to verify that localhost actually refers to the loopback
interface and does not resolve to your public one; you have a fully qualified
name for that purpose.
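A sketch of what a sane /etc/hosts looks like in this respect (addresses and
names made up):

```
# localhost must stay on the loopback interface
127.0.0.1	localhost
# the public address gets the fully qualified name
203.0.113.7	vps.example.com	vps
```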

I found that the control panel for the VPS I am using now generates the
hostname line correctly, but that may not have been the case a year ago when I
first got the VPS configured.

Recently I needed to test the behavior of a function that fetched some remote
resource. I wanted to control how it works and supply my own cached version
stored in a file.

While I originally thought unittest should support this with some sort of
method to get a testdata directory, it is actually quite easy to implement
yourself. You only need to create a folder (I called it "testdata") in the
"tests" directory, and then you can refer to it using a plain old reference to
__file__:

Disqus is a popular service that provides commenting functionality. All you
need to do to embed a discussion in your HTML page is add a bit of JavaScript.

They have awesome importers for various blog engines. This blog was originally
hosted on Blogspot; migrating to a static blog generated by Octopress required
me to look for alternative commenting facilities, and I decided to use Disqus
(is there anything else, really?). Now I have switched to WordPress, and I
wanted my comments back.

Disqus does provide the ability to export all the comments as an XML file. A
quick Google search told me it was possible to get the comments into WordPress
quite easily by converting the Disqus dump into WXR, but it is not really that
easy: WordPress will ignore a post if it is already there and will not import
its comments. There's a plugin that imports Disqus comments, but I wanted full
support for nested comments and little to no PHP coding.

I dumped the whole WordPress database locally and hacked up a script that
reads the comments.xml file and puts the necessary data into the database. The
script needs access to some sort of database in order to figure out the
post_id for each post and to generate the comment identifiers for correct
nesting.
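The script itself is not included here; a simplified sketch of the core
mapping step, using a toy XML layout (the real Disqus export is namespaced and
richer) and a made-up URL-to-post_id table standing in for the database
lookup:

```python
import xml.etree.ElementTree as ET

# toy stand-in for a Disqus comments.xml: threads carry the post URL,
# posts reference their thread and, for nesting, a parent comment
SAMPLE = """
<disqus>
  <thread id="t1"><link>http://example.com/hello-world/</link></thread>
  <post id="c1"><thread ref="t1"/><message>First!</message></post>
  <post id="c2" parent="c1"><thread ref="t1"/><message>Reply</message></post>
</disqus>
"""

# hypothetical mapping recovered from the WordPress database:
# post URL -> wp_posts.ID
URL_TO_POST_ID = {"http://example.com/hello-world/": 42}

def comments_for_wordpress(xml_text):
    """Turn exported comments into rows ready for wp_comments."""
    root = ET.fromstring(xml_text)
    threads = {t.get("id"): t.findtext("link")
               for t in root.findall("thread")}
    rows = []
    for post in root.findall("post"):
        thread_ref = post.find("thread").get("ref")
        rows.append({
            "comment_post_ID": URL_TO_POST_ID[threads[thread_ref]],
            "comment_content": post.findtext("message"),
            # parent id preserved so the nesting can be rebuilt
            "comment_parent": post.get("parent"),
        })
    return rows
```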