News items for tag perl - Koos van den Hout

I want applications to use and prefer IPv6 whenever possible, so I have a
/etc/resolv.conf with IPv6 addresses of the nameserver(s) listed
first. But I noticed queries from the spamassassin processes still coming in
over the legacy IP protocol. Even when listing them in order in
/etc/spamassassin/local.cf, spamassassin prefers IPv4. I want it to
prefer IPv6 without leaving out IPv4: I like the redundancy, but I want to
change the preference. Also, I only want to maintain the list of nameservers
in /etc/resolv.conf and not in other locations.

I wrote a simple test program to understand what the perl
Net::DNS::Resolver module is doing. With a minimal test like:
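A minimal sketch of such a test program, using nothing beyond stock Net::DNS::Resolver (the exact output of course depends on the local /etc/resolv.conf):

```perl
#!/usr/bin/perl
# Print the nameserver addresses Net::DNS::Resolver will use,
# in the order it will try them.
use strict;
use warnings;
use Net::DNS::Resolver;

my $resolver = Net::DNS::Resolver->new;
print "$_\n" foreach $resolver->nameservers;
```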

I will see the IPv6 resolver listed first. But now to convince spamassassin
to do the same. Browsing the Net::DNS::Resolver source shows the
RES_OPTIONS="inet6" environment option, but does not document it. This
option confuses spamassassin when starting:

export RES_OPTIONS="inet6"

root@gosper:/etc/default# service spamassassin restart
Restarting SpamAssassin Mail Filter Daemon: Bad arg length for NetAddr::IP::Util::mask4to6, length is 128, should be 32 at /usr/lib/x86_64-linux-gnu/perl5/5.24/NetAddr/IP/Lite.pm line 647.
Compilation failed in require at /usr/lib/x86_64-linux-gnu/perl5/5.24/NetAddr/IP.pm line 8.
BEGIN failed--compilation aborted at /usr/lib/x86_64-linux-gnu/perl5/5.24/NetAddr/IP.pm line 8.
Compilation failed in require at /usr/share/perl5/Mail/SpamAssassin/Util.pm line 70.
BEGIN failed--compilation aborted at /usr/share/perl5/Mail/SpamAssassin/Util.pm line 70.
Compilation failed in require at /usr/share/perl5/Mail/SpamAssassin/Conf.pm line 85.
BEGIN failed--compilation aborted at /usr/share/perl5/Mail/SpamAssassin/Conf.pm line 85.
Compilation failed in require at /usr/share/perl5/Mail/SpamAssassin.pm line 71.
BEGIN failed--compilation aborted at /usr/share/perl5/Mail/SpamAssassin.pm line 71.
Compilation failed in require at /usr/sbin/spamd line 240.
BEGIN failed--compilation aborted at /usr/sbin/spamd line 240.

So that was a bad idea and is not the answer. The resolv.conf manpage
shows that the option indeed does something quite different, which explains
why it was wrong:

inet6 Sets RES_USE_INET6 in _res.options. This has the
effect of trying an AAAA query before an A query inside
the gethostbyname(3) function, and of mapping IPv4
responses in IPv6 "tunneled form" if no AAAA records
are found but an A record set exists. Since glibc
2.25, this option is deprecated; applications should
use getaddrinfo(3), rather than gethostbyname(3).

So if I want perl programs to do what I want, I have to change every one
of them to set $resolver->prefer_v6(1);. There is no sane default
or a global "get into the 21st century" flag.
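Per program, that comes down to something like this sketch (prefer_v6 is available in recent Net::DNS versions):

```perl
use strict;
use warnings;
use Net::DNS::Resolver;

my $resolver = Net::DNS::Resolver->new;
$resolver->prefer_v6(1);    # contact the nameservers over IPv6 first
# ... use $resolver for queries as usual ...
```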

Changing /usr/share/perl5/Mail/SpamAssassin/DnsResolver.pm to
include $res->prefer_v6(1); does help, but will need to be
redone when updating spamassassin.

I recently noticed the network traffic statistics weren't updated correctly
for the LAN interface of my Draytek Vigor 130 modem. These statistics were
extracted using code that I originally started using at the computer science
systems group somewhere in the previous decade. It's all Perl Net::SNMP and
not very efficient. I don't know whether I wrote it myself or copied it from
somewhere else, but I do know a new bug was introduced.

To understand the code it is important to realize that interface index numbers
in SNMP are dynamic. Across a reboot a certain number can change. Interface
names are static, but those are never used directly in SNMP.

So to get from a static interface name to a dynamic interface index the
interfaces.2.1.2 subtree (ifDescr) has to be fetched from the device and
checked for the right names. To get the interface index from an SNMP object
identifier I used to use this bit of code:

# find the current interface indices for the wanted interfaces
foreach my $oid (oid_lex_sort(keys(%table))) {
    if (oid_base_match($ifTable_ifDesc, $oid)) {
        # printf("%s => %s\n", $oid, $table{$oid});
        if (defined $wantstuff{$table{$oid}}) {
            $wantstuff{$table{$oid}}{ifindex} = substr($oid, 1 + rindex($oid, '.'));
            # I am lazy. I fill a hash with the interface indices so I can
            # use it for lookups
            $findvlan{substr($oid, 1 + rindex($oid, '.'))} = $table{$oid};
            # printf "Found ifindex %d for %s\n", $wantstuff{$table{$oid}}{ifindex}, $table{$oid};
        }
    }
}

With that rindex-based extraction there are 4 instances of index 1,
which caused the very similar code looking for the ifInOctets, ifOutOctets
and other counters to overwrite the result for index 1 with those from
WAN1, WAN2 and LAN_PORT1.

So that code is now improved: no more rindex, but a well-defined use
of length:

# find the current interface indices for the wanted interfaces
foreach my $oid (oid_lex_sort(keys(%table))) {
    if (oid_base_match($ifTable_ifDesc, $oid)) {
        # printf("%s => %s\n", $oid, $table{$oid});
        if (defined $wantstuff{$table{$oid}}) {
            my $intindex = substr($oid, length($ifTable_ifDesc) + 1);
            # printf "Submatch found ifindex %d for %s\n", $intindex, $table{$oid};
            $wantstuff{$table{$oid}}{ifindex} = $intindex;
            # I am lazy. I fill a hash with the interface indices so I can
            # use it for lookups
            $findvlan{$intindex} = $table{$oid};
            # printf "Found ifindex %d for %s\n", $wantstuff{$table{$oid}}{ifindex}, $table{$oid};
        }
    }
}
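The length-based extraction can be isolated in a small helper (the name is hypothetical, for illustration): everything after the known base prefix is the index, and OIDs outside the base are rejected instead of yielding an accidental match.

```perl
use strict;
use warnings;

# Extract the interface index that follows a known OID base prefix.
# Returns undef when the OID is not under the base at all.
sub index_under_base {
    my ($base, $oid) = @_;
    return undef unless substr($oid, 0, length($base) + 1) eq "$base.";
    return substr($oid, length($base) + 1);
}

# ifDescr base in the interfaces subtree
my $ifDescr = '1.3.6.1.2.1.2.2.1.2';
print index_under_base($ifDescr, "$ifDescr.4"), "\n";   # prints 4
```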

On 8 and 9 February last week I attended the
SURF Security and Privacy conference.
SURFcert, the incident response team of SURF, had its own 'side event'
within this conference: an escape room. Since the members of SURFcert like
to visit escape rooms themselves, the idea was to build our own escape
room. A simple one, as teams of 2 or 3 people had to solve it within 15
minutes. The best scores were indeed just over 5 minutes, so it was doable.

The escape room clock

The theme of this escape room was the trip Snowden made: from the US to
Hong Kong to Moscow. Each location had a puzzle and, like Snowden, the only
thing you could take to the next location was knowledge. In this case a
4-digit code to open a lock. Someone else in the SURFcert team did most of
the hardware work and I decided to dive into some programming to support this
effort. The escape room needed a countdown clock that could only be stopped
by the right code. My idea was to use a barcode scanner to link the stop action
to scanning the barcode on an object.

So I installed a Raspberry Pi with a Raspbian desktop and found out how
to make my program start automatically when the user 'pi' is logged in at
startup. This is done by starting it from
~/.config/lxsession/LXDE-pi/autostart.
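The file is a list of @-prefixed commands that LXDE runs at session start; the program just gets an extra line (the script path here is hypothetical):

```
@lxpanel --profile LXDE-pi
@pcmanfm --desktop --profile LXDE-pi
@/home/pi/escaperoom-clock.pl
```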

The program I wrote had three inputs:

A reset switch connected to GPIO pin 11 and ground

A start button connected to GPIO pin 03 and ground

Entering the right barcode to stop the time. In the end this was the
barcode of a real Russian bottle of vodka, so my program needed vodka as
input.

For the barcodes I used a USB barcode scanner I had lying around.
It behaves like a USB keyboard, so scanning a barcode causes the code
to be entered as keystrokes with an enter key at the end.

But all programming I normally do is sequential. This was different: I
needed to write an event-based program. It has to react to timer events and
enter events, check the state of GPIO bits on timer events, and on certain
events change the global state (reset, running, stopped). The last time I
did any event-based programming was an IRC bot written in Perl 4.

So with a lot of google searches, copy-pasting bits of code, a lot of
searching for which input pins default to high and go low when connected to
ground, and a lot of trying, I wrote a program. It uses WxPerl for the
graphical interface and the event handling. I'm not saying it's a good
program, but it did the job.

Notable things:

The OnInit function sets up everything: a window with minimal decorations
that it tries to make full-screen, a static text box that shows the time
and starts at 15:00, a handler for timer events that is called 10 times per
second, and an input box with a handler for when the enter key is pressed.

The onTimer function looks at the global state, decides which inputs are
valid in that state and handles them.

The onenter function calculates a sha256 hash of the input line and checks
which inputs can change the global state. The hash makes sure that someone
who could look at the source still had no idea what the commands were to
control it all via the keyboard. And no keyboard was connected anyway. The
input for a shutdown is the barcode from one of the loyalty cards I carry
around.
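The core of that check can be sketched with the core Digest::SHA module. The stored hash below is the sha256 of the empty string, purely as a placeholder, never a real barcode:

```perl
use strict;
use warnings;
use Digest::SHA qw(sha256_hex);

# Only the hash of each accepted input lives in the source, so reading
# the code does not reveal the barcode or keyboard command itself.
my %action_for_hash = (
    # placeholder: sha256 of the empty string, not a real barcode
    'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855' => 'stop',
);

sub action_for_input {
    my ($line) = @_;
    return $action_for_hash{ sha256_hex($line) };   # undef when unknown
}
```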

After spending an evening fixing scripts on
The Virtual Bookcase to make
them run on PHP 7, and making them safer at the same time, I came to the
conclusion that I still don't like php.

My conclusion is that if I want to maintain sites I'd rather redo them in
perl. I noticed the last serious maintenance on the scripts of
The Virtual Bookcase was 9 years
ago (!). That was also when I had the habit of writing maintenance scripts
in perl and web code in php. The upside is that part of the page-generating
code is already available in perl.

But a rewrite is a task for another day. For now the site works cleanly in
PHP 7 (and 5) and I can go on to the next task for moving the homeserver.

I am currently working on a new version of one of the sites I manage,
rewriting it from php to perl. I noticed loading times were slow and gave
mod_perl a try.

The basic configuration of mod_perl is quite simple, but that alone did not
give me the big advantage in web server speed. That came when I added:

PerlModule Apache::DBI

to the apache2 config. The Apache::DBI module caches database
connections for supported drivers, which speeds up database-dependent
scripts. The module comes from the ubuntu package libapache-dbi-perl,
and Apache will throw really bad errors at you when a module you want
to load is not available.
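One catch, per the Apache::DBI documentation: it has to be loaded before DBI itself (or any code using DBI), so its connection caching can hook DBI->connect. A minimal fragment for the apache2 config:

```
# load Apache::DBI first so its connect-caching can take over
PerlModule Apache::DBI
PerlModule DBI
```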

This is now enabled for my homepage site too. The processing times of the
pages don't change much, but the startup of the perl interpreter, modules
and scripts is much faster so the waiting time is a lot less.


I introduced a MediaWiki
at work (science ict department) to use for internal documentation. One of the
things I wanted to try is pages in the wiki created or maintained from
other sources.

I created a special namespace for pages with information from other sources,
where normal users have no rights to edit pages. This is to make sure nobody
tries to edit something which is maintained by a script from another source.

I started with something simple: the list of printers. The windows
printserver is leading, so I want to fetch the list there and massage it to
generate a list of printers and comments. The weapon of choice is perl
and MediaWiki::Bot.
The output of smbclient -N -L printserver takes one regexp to find
printqueuenames and descriptions. For the overview of cups queues I can
parse the output of lpstat -a. With a bit more digging into IPP
it should also be possible to get a list of details of printers to link
cups queues and their windows counterparts.
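A sketch of the parsing side, assuming the usual smbclient share-list layout where printer queues show up as "name Printer comment" lines (the sample format, page name and wiki host are assumptions; the MediaWiki::Bot calls are left as a commented, untested sketch):

```perl
use strict;
use warnings;

# Parse "smbclient -N -L printserver" output: printer queues appear in
# the share list as "<name>  Printer  <comment>" lines.
sub parse_printers {
    my ($output) = @_;
    my %comment_for;
    for my $line (split /\n/, $output) {
        if ($line =~ /^\s+(\S+)\s+Printer\s+(.*?)\s*$/) {
            $comment_for{$1} = $2;
        }
    }
    return %comment_for;
}

# Turn the result into wiki table rows for the generated page
sub wiki_rows {
    my (%comment_for) = @_;
    return join '', map { "|-\n| $_ || $comment_for{$_}\n" }
                    sort keys %comment_for;
}

# Updating the page would then be something like:
#   my $bot = MediaWiki::Bot->new({ host => 'wiki.example.org' });
#   $bot->login({ username => $user, password => $pass });
#   $bot->edit({ page => 'Generated:Printers', text => $text,
#                summary => 'automatic printer list update' });
```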

I can run this script from crontab each day and the history tracking in
MediaWiki will start to help document when something changed. Another thing
which we can stop worrying about.

I have visions of the future of automatically linking zabbix (which has a
json interface) and mediawiki and maybe a further future with a good database
of stuff which is a source of entries in zabbix and the wiki. Double work
is unneeded, computers are much better at working with one canonical source
and importing that in a lot of places.

More than one visitor of my homepage saw an intricate XML parsing error
instead of the page they wanted to see. I never saw the problem myself, but
my best guess so far is that the twitter rss feed was malformed, because
that is the only XML parsing happening for the page. I fetch the twitter
feed automatically every 6 hours, but sometimes twitter is a bit overloaded
and probably returns an internal error page (the famous fail whale) instead
of a valid rss feed.

Solution: Fetch the file to a temporary file, run the parser on it and when
the parser does not fail, copy it to where the webserver reads it:
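A sketch of that approach, assuming XML::RSS as the parser and LWP::Simple for the fetch (the URL and file locations are placeholders):

```perl
use strict;
use warnings;
use XML::RSS;
use File::Copy qw(move);

# Replace $live with $tmp only when $tmp parses as valid RSS;
# XML::RSS dies on malformed input, so trap that with eval.
sub install_if_valid {
    my ($tmp, $live) = @_;
    my $rss = XML::RSS->new;
    eval { $rss->parsefile($tmp); 1 } or return 0;
    return move($tmp, $live) ? 1 : 0;
}

# The cron job would first fetch to the temporary location, e.g.:
#   use LWP::Simple qw(getstore is_success);
#   is_success(getstore($feed_url, $tmp)) or exit 1;
#   install_if_valid($tmp, '/var/www/feeds/twitter.rss') or exit 1;
```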

I noticed a few malformed characters in the RSS feed of my homepage that
weren't there in the original database entries and that showed up fine in
the web version. Again utf-8 problems, although all data (postgres -
script - xml - browser) should be utf-8. After lots of testing and
searching I finally found The Perl UTF-8 and utf8 Encoding Mess by Jeremy
Zawodny. He is right: it is a mess. And the post itself demonstrates it by
being filled with � characters.
So to make sure everything in the RSS generating process understands that
what comes out of PostgreSQL is valid utf-8 and should be imported into the
XML::RSS module as the same valid utf-8, I need to recode it to utf-8.
Uh.. ok. The bit of code:
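The kind of recode involved can be sketched with the core Encode module (the example string is hypothetical): the database hands back bytes that are valid UTF-8 but that perl still treats as opaque bytes, and decode turns them into a proper character string the rest of the pipeline agrees on.

```perl
use strict;
use warnings;
use Encode qw(decode);

# Hypothetical raw UTF-8 bytes as they come out of the database
my $bytes = "Caf\xc3\xa9";             # raw UTF-8 for "Café"
my $text  = decode('UTF-8', $bytes);   # now a proper character string
# length counts bytes before decoding and characters after it
print length($bytes), ' ', length($text), "\n";   # 5 4
```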

A new version of my homepage, rewritten in perl because PHP was
starting to irritate me. More database-driven in the background which allows
me to add things like the tags. And a minor change in the colour scheme
because someone remarked that the black-on-cyan was hard to read for people
above a certain age.

One of the little irritations at work was trying to find out what the
exact error on a printer was when the helpdesk ticket just says 'printer
problems'. Since HP laserjets will divulge everything via SNMP, I thought
the complete information must be available. It is, and I cobbled together
a perl script for our noc webserver. The public version is on the perl noc
stuff page.
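The SNMP side can be sketched with Net::SNMP and the standard Host Resources printer table: hrPrinterDetectedErrorState is an octet-string bitmask where bit 0 is the most significant bit of the first octet (per RFC 1759). The hostname and community below are placeholders, and the network fetch is left as a commented sketch:

```perl
use strict;
use warnings;

# Bit names for the first octet of hrPrinterDetectedErrorState,
# bit 0 (most significant) first, per RFC 1759.
my @error_bits = qw(lowPaper noPaper lowToner noToner
                    doorOpen jammed offline serviceRequested);

sub decode_error_state {
    my ($octets) = @_;
    return () unless length $octets;
    my $byte = ord(substr($octets, 0, 1));
    return map { ($byte & (0x80 >> $_)) ? $error_bits[$_] : () } 0 .. 7;
}

# Fetching the value would look like (untested sketch):
#   use Net::SNMP;
#   my ($session, $error) = Net::SNMP->session(
#       -hostname  => 'printer.example.org',
#       -community => 'public');
#   my $result = $session->get_request(
#       -varbindlist => ['1.3.6.1.2.1.25.3.5.1.2.1']);
```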