Alexandre (adulau) Dulaunoy's messy desk or blog

My blog is a kind of non-consistent space where I put (when I have the time) some thoughts, ideas or stuff I want to share with the potential reader (human or not). There is no single theme for my blog; it ranges from Computer Science to gardening, always with an important touch of freedom. The blog runs on Free Software with oddmuse.

Information Visualization Is Just A Starting Point

Information visualization is not an end in itself but a step towards improving our understanding of data. Following a small discussion in the train about the visualization of open data, I did a small experiment analysing the statistics about waste collection in my region. The result of this experiment is available along with some random notes. But the main question came from someone else who looked at the visualization and basically told me: "I don't get it". He is right: the experiment is just there to trigger more analysis (and sometimes more visualization) with the objective of improving our understanding. Initially, the source data is usually not analysed, just sitting there waiting to be understood. Coming back to the data about waste collection, the initial discussion about its understanding or interpretation wouldn't have been triggered if the first visualization step hadn't been done.

So in that scope, I tried a similar approach with a dataset I built from my cve-search tool. My idea was to look at the terms used in the descriptions of Common Vulnerabilities and Exposures (CVE) entries. I did a first CVE terms visualization experiment and then tweeted about it. This triggered various explanations, like why there is a predominance of some terms, as commented by Steve Christey.

It clearly showed that it is an iterative process, especially to better understand the data. It's also an interactive process in order to improve the visualization and the data source. Following good advice from Joshua J. Drake, I added a lemmatizer to keep only the root of each term and also excluded the standard English stopwords. With the visualization, we saw from some occurrences (e.g. unknown or unspecified) that the CVEs are based on incomplete information.
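As a rough illustration of that cleanup step (this is not the actual cve-search code; the stopword list and the suffix rule below are deliberately tiny, and a real setup would use a proper lemmatizer such as NLTK's WordNetLemmatizer):

```python
# Minimal sketch: drop common English stopwords and reduce terms to a
# naive root form before counting term frequencies.
STOPWORDS = {"a", "an", "and", "the", "in", "of", "to", "via", "is", "are"}

def normalize(term):
    term = term.lower().strip(".,;:()\"'")
    for suffix in ("ies", "es", "s"):  # very naive plural reduction
        if term.endswith(suffix) and len(term) > len(suffix) + 2:
            return term[: -len(suffix)] + ("y" if suffix == "ies" else "")
    return term

def significant_terms(description):
    return [normalize(t) for t in description.split()
            if normalize(t) not in STOPWORDS]

print(significant_terms("Multiple vulnerabilities in the parsers"))
# → ['multiple', 'vulnerability', 'parser']
```

Counting the output of such a function over all CVE descriptions is what feeds the term visualization.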

I'm quite sure this is not finished, and it's just the beginning of more work and experiments in visualization. I read various books about information visualization but the result is often very static: you don't really see the iterative process the authors followed to reach their visualization goals. Sometimes, you just see a result without the process and the tools used to make the visualization happen.

Software Vulnerability Management Is Just A Huge Approximation

An approximation is a representation of something that is not exact. To be extremely exact, vulnerability management is not even a mathematical approximation as we know it for the value of Pi. But where does this utterly huge approximation come from? The first origin is the very definition of "vulnerability management". If you look at various definitions, like the one from Wikipedia or from some information security standards, you get something like "a process of identifying → classifying → remediating → mitigating software vulnerabilities". Many information security vendors might tell you that this is an easy problem, but then you can ask yourself: if this is an easy problem, why are so many organizations still compromised through software vulnerabilities?

In my pragmatic eyes, it's very broad, so broad that a first reaction is to split the problem into parts that you can solve. Let's just look at the initial step: identifying software vulnerabilities.

To solve this problem, the first part is to discover, know and understand the software vulnerabilities. Everyone is discovering vulnerabilities every day (just look at how many bug reports go into the Linux kernel bug tracker), and very often when you report a bug, you don't even know if it is a software vulnerability. The worst part is that an organization (or an individual) doesn't exactly know what software it is running. If someone tells you that they have a "software vulnerability management" product able to detect all the software running on a system, it's a lie. If such software existed, you would have the perfect software able to solve the virus detection issue along with Turing's halting problem. Just look at a simple software appliance and the set of software required to run it.

Discovering vulnerabilities might be easy but it's difficult to be exhaustive. Even when a vulnerability is found, there is a market to limit its publication (like the zero-day vulnerability market). For any given software, there might be a large set of unknown vulnerabilities (I'm tempted to talk about Java but I think every software might fall into that category). Does this mean that you should give up? I don't think so. You must work on your vulnerability management, but don't blindly trust solutions that claim to solve such issues.

Finally, this is not a bashing post; it was an opportunity for me to talk about a side project I'm working on to ease the collection and classification of Common Vulnerabilities and Exposures (CVE). The project is called cve-search. It's not a complete vulnerability management solution, just a small tool to partially solve the identification and classification part.

“When the time comes to leave, just walk away quietly and don't make any fuss.” – Banksy

I'm against SOPA... So I'll explain how to make soap with olive oil

One more time, some lobbyists are trying to regulate the Internet with some of the stupidest laws or rules. SOPA (in the US) is again one of these attempts to break down the freedom of citizens worldwide to preserve some archaic business model. As I have a preference for concrete action leading to a direct social improvement, I'll explain how to make soap (it's better than SOPA and more useful; please note the clever inversion of the letters). My soap recipe is released under the public domain dedication (CC0).

Safety Disclaimer

Making soap is a chemical process that requires your fully operating brain, especially since you'll use sodium hydroxide, which is a corrosive substance. So respect the proportions and the process, and read the whole process multiple times before doing it. Wearing protective gloves and goggles is highly recommended. Avoid using kitchen instruments made of aluminium as it will be attacked by the sodium hydroxide.

Background of the chemical process

Making soap is one of the first chemical processes discovered by humanity. The process, called saponification, uses a base to hydrolyze the triglycerides contained in fats (vegetable or animal). This process generates a fatty acid salt along with glycerol (the greasy touch of the soap). Each fat has a specific saponification value (usually called SAP in saponification tables), expressed as the required quantity of base (usually sodium hydroxide) to saponify 1 gram of fat. The saponification value is reduced to keep the resulting soap a bit fatty (what is called the "excess fat"). I even find it convenient to keep a "safety" margin to ensure that the hydrolysis is complete and has used up the whole sodium hydroxide.

So that's the basis if you want to build your own soap; there are other rules to consider but for this recipe this is enough. In my case, I use olive oil as the fat: easy to find, and I have a preference for organic olive oil (to ensure that the oil producer takes care of the environment). But you can use non-organic olive oil too (it's usually cheaper).

Ingredients

1000 grams of olive oil

124 grams of pure sodium hydroxide / NaOH (olive oil has a SAP factor of 0.134 and we want 7% excess fat → run bc and type (1000*0.134)*0.930; general formula: (total weight of fat * SAP factor of the fat) * (0.900 to 0.960))

350 grams of tap water (usually between 31% and 35% of the total fat; in this recipe ~ 1000*0.350)
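The quantities above can be double-checked with a few lines (the factors are taken straight from the recipe):

```python
# NaOH quantity = (total fat weight * SAP factor of the fat) * excess-fat factor
fat_grams = 1000          # olive oil
sap_factor = 0.134        # SAP value of olive oil for NaOH
excess_fat = 0.930        # keep ~7% of the fat unsaponified

naoh_grams = fat_grams * sap_factor * excess_fat   # ~124.6, rounded down to 124 in the recipe for safety
water_grams = fat_grams * 0.350                    # ~35% of the total fat

print(round(naoh_grams, 1), round(water_grams))  # → 124.6 350
```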

Process

Put on your protective gloves and goggles.

Prepare the sodium hydroxide solution by pouring the sodium hydroxide into the water (!pour the sodium hydroxide into the water, not the reverse!).

Monitor the temperature of the prepared sodium hydroxide solution until it drops to around 46-47 degrees Celsius (it will start at around 80 degrees Celsius due to the exothermic reaction).

At the same time, warm the olive oil to 46-47 degrees Celsius.

When both are at the same temperature (around 46-47 degrees Celsius),

you can start to mix (using a mixer speeds up the process) the warmed olive oil, incorporating the prepared sodium hydroxide solution (!use a large pot to avoid splashes of the sodium hydroxide solution while mixing!).

When you see that the mixture is becoming consistent (especially when you can see a trace while removing the mixer), it means you have reached the critical point.

When you have a homogeneous consistency, you can pour the result into a plate.

Put a plastic film on the plate, touching the mixture (to prevent oxygen from being in contact with the prepared soap).

In the next hours, you'll see the "gelification" process where the soap becomes a gel (usually starting from the center).

After 24 hours, your soap becomes harder. (see above picture)

You can remove it from the plate and cut the shapes you want from your soap block.

Then the soap must dry for the next 4 weeks in a dry and clean place. (see above picture)

X.509 Certificate Revocation Reasons in 2011

I'm automatically fetching the certificate revocation lists (CRLs) of all known public CAs. As of today (17th December 2011), I compiled the reasons for certificate revocation. It's pretty interesting to see the revocation process within CAs, and the CRL is usually the only public information we have. As the reason is a non-critical CRL entry (section 5.3.1 in RFC 3280 / RFC 5280), the situation is even worse because the majority of certificate revocations come without any reason. In this blog entry, only certificate revocations with a reason entry set are considered.
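The tallying step can be sketched as follows; the reason-code names come from RFC 5280 section 5.3.1, while the sample entries below are purely hypothetical:

```python
from collections import Counter

# CRLReason codes as defined in RFC 5280, section 5.3.1.
REASONS = {
    0: "unspecified", 1: "keyCompromise", 2: "cACompromise",
    3: "affiliationChanged", 4: "superseded", 5: "cessationOfOperation",
    6: "certificateHold", 8: "removeFromCRL", 9: "privilegeWithdrawn",
    10: "aACompromise",
}

def tally_reasons(revocations):
    """Count revocation reasons over (serial, reason_code) pairs,
    keeping only the entries where a reason is actually set."""
    return Counter(REASONS[code] for _, code in revocations
                   if code is not None)

# Hypothetical sample of parsed CRL entries:
sample = [("01A3", 1), ("43AD", 10), ("0F00", None), ("BEEF", 1)]
print(tally_reasons(sample))
```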

So having a reason is already a good step for a CA towards being transparent about its operations. Now if we take a deeper look at the revocation reasons, you will see that they are not always enough to understand the context of the revocation and especially what has been really revoked.

The reason "Unspecified" should not be used, as recommended in the RFC: "however, the reason code CRL entry extension SHOULD be absent instead of using the unspecified (0) reasonCode value." But as you can see, it's still largely used. That's probably the behaviour of a piece of software widely used in PKI 1.

The reason "Certificate Hold" is still largely used but its use "is strongly deprecated for the Internet PKI." as mentioned in section 5.3.2 of RFC 5280.

On the security side, the reason "Key Compromise" is regularly used, showing the reality of compromised private keys. That reality is also shown by all the different malware (e.g. SpyEye or banker trojans) capturing private keys on infected machines.

What can we say about that one? That the certificate with the serial number 43ADFDBE62CB0820 has been revoked recently with the reason code 10 (aACompromise). I couldn't find a clear definition of that reason in the standard. If you have any ideas, let me know.

230 entries with reason CA Compromise (code 2)

With the recent incidents in different CAs (from Comodo to DigiNotar), everyone should be interested in reason code 2, used when a CA is compromised. In those cases, it's usually intermediate CAs, as the standard is not very clear about the revocation process of a self-signed/root CA. But that's again a matter of interpretation of the processes…

Here is a list of the entries found in CRLs with the reason "CA Compromise" (you'll see some matching publicly disclosed incidents, but for some others, questions remain open):

Some might be duplicates as CRLs can be published at multiple locations. In that scope, I generated a list of CRL URLs with an MD5 hash of their output to detect the different CRL URLs providing the same revocation list. http://www.foo.be/crl/crl-synonyms.txt
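The synonym detection can be sketched like this (the fetching is mocked with in-memory data and made-up URLs; a real run would download each CRL URL first):

```python
import hashlib

def crl_synonyms(crls):
    """Group the URLs that serve byte-identical revocation lists.
    crls: mapping of URL -> raw CRL bytes."""
    groups = {}
    for url, data in crls.items():
        digest = hashlib.md5(data).hexdigest()
        groups.setdefault(digest, []).append(url)
    # keep only the digests served by more than one URL
    return [sorted(urls) for urls in groups.values() if len(urls) > 1]

# Mocked fetch results (hypothetical URLs and payloads):
fetched = {
    "http://ca1.example/crl.pem": b"list-A",
    "http://ca2.example/crl.pem": b"list-A",   # same list, other URL
    "http://ca3.example/crl.pem": b"list-B",
}
print(crl_synonyms(fetched))
```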

The ones from DigiNotar. As you can see, the extended attributes with the Invalidity Date seem to be incorrect for DigiNotar, as the breach was discovered to be much earlier. As explained in RFC 5280 (section 5.3.2),

"The invalidity date is a non-critical CRL entry extension that provides the date on which it is known or suspected that the private key was compromised or that the certificate otherwise became invalid". I hope there are not any malicious software signed with those revoked keys…

The Challenge

What Did You Get During Hack.lu 2011?

From the hack.lu website, you got a text message including a message stream. During the conference, you got a t-shirt.

The horrible "Beer Scrunchie" subverted the hack.lu 2011 conference to hide some cryptographic materials. He especially abused the hack.lu 2011 t-shirt to transmit undercover activities. We still don't know to what extent "Beer Scrunchie" abused the t-shirt. Everything is possible, just like those trojan t-shirts discovered...

If you decode the message encoded in Base64, you'll see that the binary stream of data starts in the following way: "Salted__…" That's the behaviour of the OpenSSL salted encryption scheme, which prefixes the stream with "Salted__" to announce that the following 8 bytes of the encrypted stream are reserved for the salt. This gives the indication that the message has probably been encrypted with an OpenSSL tool or library. If you look carefully at the encryption schemes available in OpenSSL:
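A quick way to check for that header (the sample blob below is fabricated on the spot, purely to show the layout):

```python
import base64

def openssl_salt(b64_message):
    """Return the 8-byte salt if the decoded stream uses the OpenSSL
    salted format ("Salted__" magic + 8-byte salt), else None."""
    raw = base64.b64decode(b64_message)
    if raw.startswith(b"Salted__"):
        return raw[8:16]  # the salt itself
    return None

# Hypothetical encrypted blob, built here only for illustration:
blob = base64.b64encode(b"Salted__" + b"\x01" * 8 + b"ciphertext")
print(openssl_salt(blob))
```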

There are not that many algorithms written by Bruce Schneier in a default OpenSSL build, except Blowfish (bf-*). Cryptographers usually recommend using the "default" mode, and in this case, bf is Blowfish in CBC mode. So this is highly probable…

Where Is The Key?

As you didn't use the t-shirt until now, there is a good guess that the key is hidden somewhere. If you look carefully at the text on the back of the hack.lu 2011 t-shirt, you'll see many typographic errors. The interesting part is to compare the typographic errors against the original text as published by Phrack. Please note the typo in the URL (even if the URL works, that doesn't mean it's the correct one ;-).

The original text from Phrack (original.txt)

This is our world now... the world of the electron and the switch, the beauty of the baud.
We make use of a service already existing without paying for what could be dirt-cheap if it
wasn't run by profiteering gluttons, and you call us criminals. We explore... and you call
us criminals. We seek after knowledge... and you call us criminals. We exist without skin
color, without nationality, without religious bias... and you call us criminals. You build
atomic bombs, you wage wars, you murder, cheat, and lie to us and try to make us believe
it's for our own good, yet we're the criminals. Yes, I am a criminal.
My crime is that of curiosity. My crime is that of judging people by what they say and think,
not what they look like. My crime is that of outsmarting you, something that you will
never forgive me for.
I am a hacker, and this is my manifesto.
You may stop this individual, but you can't stop us all... after all, we're all alike.
The Conscience of a Hacker, The Mentor, January 8, 1986,
http://www.phrack.org/issues.html?issue=7&id=3#article

The text from the hack.lu 2011 t-shirt (modified.txt)

This is our world now... the world of the electron and the swich, the beauty of the baud,
We make use of a service already exeisting without paying for what could be dirt-cheep if it
was'nt run by profofiteering gluttons, and you call us cricriminal. We explore... and you call
us criminals. We seek after knowledge... and you call us criminals. We exist without skin
colo, without nationlity, without rrligious bias... and you call us crimnals. You build
atomic bombs, you wage wars, you murder, cheat, and lie to us and try to make us believe
it's for our own good, yet we're the criminals. yes, I am a criminal.
My crime is that of curiosity. my crime is that of judginfg people by what thy say and think,
not what they look like. my crime is that of outmarting you, something that you will
never forgive me for.
I am a hacker, and this is my manifasto.
you may stop this individul, but you can't stop us all... after all, we're all alike.
The Conscience of a Hacker, The Mentor, January 8, 1986,
http://www.phrack.org/issues.html?issue=7$id=3#article

So you can build a key from the differences, but how? That's the most difficult part (as there are many different ways to do it). As there is no natural way to generate a key, I decided to go for a long key that can be read easily from the original text. To rebuild the key from original to modified, you can use a word diff with your favorite GNU tools (e.g. wdiff). We just discarded the punctuation and didn't care about case sensitivity.
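A naive sketch of the key extraction (positional word comparison only, which works when the word counts line up; GNU wdiff copes better with insertions and deletions):

```python
import string

def words(text):
    # discard punctuation and case, as in the challenge
    table = str.maketrans("", "", string.punctuation)
    return [w.translate(table).lower() for w in text.split()]

def key_from_diff(original, modified):
    """Collect the original words at the positions where the modified
    text differs (naive pairwise comparison)."""
    return [o for o, m in zip(words(original), words(modified)) if o != m]

orig = "the beauty of the baud"
mod = "the beuty of the bud"
print("".join(key_from_diff(orig, mod)))  # → beautybaud
```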

And to decrypt the message, you'll need to use OpenSSL in the following way, with the guessed parameters:

openssl enc -d -a -bf -in encrypted.txt -out decrypted.txt

and the original decrypted message is:

I'm Beer Scrunchie and I'm the author or co-author of various block ciphers, pseudo-random number generators and stream ciphers.
In 2012, there will be two major events: the proclamation of a winner for the NIST hash function competition and probably the hack.lu 2012 infosec conference.
I hope that my Skein hash function will be the winner.
If you are reading this text and be the first to submit to tvtc@hack.lu, you just won a hack.lu ticket for next year. If I'm winning the NIST competition with my hashing function,
you'll get a second free ticket...
Bruce

I got one correct answer 5 days after the conference, showing that the difficulty of recovering the key was bound to the uncertainty of the key generation. Next year, it's possible that we'll make a multi-stage t-shirt challenge for hack.lu 2012… from something easier to something very difficult.

Information Security Is Not a Matter Of Compliance But a Matter of Some Regular and Boring Activities

Drawing conclusions from experience is not always a scientific approach, but a blog is a place to share experience. Today, I would like to share my past experience with information security and especially how difficult it is to reach some level of security via the compliance detour proposed by the industry or even society.

Compliance is a Different Objective Than Information Security

Many compliance mechanisms exist in information security to ensure, on paper, the security of a service, a company or a process. I won't list all of them but you might know PCI-DSS, TS 101 456, ISO/IEC 27001 and so on… Very often the core target of a company is to get the final validating document at the end of the auditing process.

Of course, many of those validation processes impose strong security requirements on the procedural aspects of information security management within the company. This is usually a great opportunity for the information security department to somehow increase its budget or its visibility. Everything is nice. But usually when the paperwork is finished and the company has got its golden certificate, the investment in information security is just put aside.

But concrete information security is composed of many little dirty jobs that no one really wants to do. Usually, in the compliance documents, those tasks are underestimated (e.g. a check-box at the end of a long list) or not even mentioned (e.g. discarded during the risk assessment because they seem insignificant). Those tasks are usually a core part of information security, not only for protecting but also for detecting misuse earlier.

I summarized the tasks in three large groups; it's not an exhaustive view, but it shows some of the core jobs to be performed in the context of protecting information systems:

But to discover those discrepancies, you need someone at the end. The answer here is not a machine reading your logs (I can already hear the SIEM vendors claiming this can be automated). It's a human with some knowledge (and some doubts), able to pick out something unusual that can lead to the detection of something serious.

Log analysis is tedious work that needs curious and competent people. It's difficult to describe in a compliance document. The analysis job can be boring and not really rewarded. That's why you sometimes see the idea of "outsourcing" log analysis, but can an outsourced analyst detect an accounting issue because he knows that some user is not working during that time shift?

IMHO, it's sometimes better to invest in people and promote the act of regular log analysis than to pursue an additional security certification without the real security activities associated with it.

Reducing the Attack Surface

The less software you have, the better it is for security. It sounds very obvious but that's a core concept. We pile more and more features into each piece of software used. I never saw a control in a security standard or certification that recommends having a policy to reduce software or remove old legacy systems. If you look carefully at "Systems Development Life Cycle" literature, it always shows the perfect world without getting rid of old crappy code.

Maintaining the Software and Hardware

Maintaining software and hardware could fall into the category of "reducing the attack surface" but it's another beast, often underestimated in many security compliance processes. A piece of software is like a living organism: you have to take care of it. You don't acquire a tiger and put it in your garden without taking care of it. Before maintaining, you obviously need to design systems with "flaw-handling in mind", as Marcus J. Ranum said, or Wietse Venema, or Saltzer and Schroeder in 1975. In today's world, we are still not going in that direction, so you have to maintain the software to keep up with the daily security vulnerabilities.

The main issue with a classical information system is its interactions with the other systems and its environment. If you (as a security engineer) recommend updating a piece of software in a specific infrastructure, you always hear the same song: "I can't update it", "It will be done with the yearly upgrade" (usually taking 4 years), "Do you know the impact of this update on my software?" (and obviously you didn't write their software), "It's done" (while checking, it still gives the old version number), "It's not connected so we don't need to patch" (looking at the proxy logs, you scare yourself with the volume of data exchanged) and … the classical "it's not managed by us" (while reading the product name in the title of the user who answers that).

Yes, upgrading software (and hardware) is a dirty job: you have to bother people and chase them every day. Even in information security, upgrading software is a pain and you usually break stuff.

All those dirty jobs are part of protecting information systems; we have to do them. Security certification is distracting a lot of professionals from those core activities. I know it's arduous and not rewarded, but we have to do those tasks if we want to make the field more difficult for the attackers.

You might ask why a picture with a radio on a piano… both can make the same "music" but are operated in a different way. Just like information security on a system or on paper is done in two different ways.

Ease Your Log Analysis With BGP Ranking and logs-ranking

Raphael Vinot and I worked on a network security ranking project called BGP Ranking to track malicious activities per Internet service provider (referenced by their ASN, Autonomous System Number). The project is free software and can be downloaded, forked or updated on GitHub. As BGP Ranking recently reached the beta stage, we now have a nice dataset about the ranking of each Internet service provider in the world. Every day, we are trying to find new ways to use the dataset to improve our life and remove the boring work when doing network forensics.

A very common task when doing network forensics is to analyse huge stacks of log files. Sometimes, you don't even know where to start as the volume is so large that you end up looking for some random patterns that might be suspicious. I wrote a small piece of software called logs-ranking to prefix each line of a log file (currently only W3C (common/combined) log files are supported) with the ASN and its BGP Ranking value. logs-ranking uses the whois interface of RIPE RIS to get the origin AS of an IP address and the CIRCL BGP Ranking whois interface to get the current ranking.

To use it, you just need to stream your log file into it and specify the log format (apache in this case).
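As an illustration only, here is roughly what logs-ranking does internally, with the whois lookups replaced by hard-coded tables (the real tool queries RIPE RIS and the CIRCL BGP Ranking whois interfaces; the ASNs and rank values below are made up):

```python
import re

# Mocked lookup tables; the real tool resolves these over whois.
IP_TO_ASN = {"203.0.113.7": "AS64500", "198.51.100.9": "AS64501"}
ASN_RANK = {"AS64500": 1.0021, "AS64501": 1.4937}

def rank_line(line):
    """Prefix a common/combined-format log line with 'ASN,rank,'."""
    m = re.match(r"(\d+\.\d+\.\d+\.\d+)", line)
    asn = IP_TO_ASN.get(m.group(1), "unknown") if m else "unknown"
    rank = ASN_RANK.get(asn, 0.0)
    return f"{asn},{rank}," + line

log = '198.51.100.9 - - [17/Dec/2011:10:00:00 +0100] "GET / HTTP/1.1" 200 512'
print(rank_line(log))
```

The ranking value ends up as the second comma-separated field, which is exactly what the sort command below keys on.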

So now, you'll be able to sort your logs with the most suspicious entries first (at least from the most suspicious Internet service provider):

sort -r -g -t"," -k2 www.foo.be-access.log-ranked

This can be used to discriminate infected clients in proxy logs that try to reach bulletproof hosters where malware C&Cs are hosted, or infected machines on the Internet trying to infect your latest web-based software… The ranking can be used for other purposes; it's just a matter of imagination.

Roberto Di Cosmo recently published a work called "Manifeste pour une Création Artistique Libre" ("Manifesto for Free Artistic Creation"); the work is not really a manifesto in the traditional sense but more a work about potential licensing schemes in the Internet age. My blog entry is not about the content of the work itself but more about the non-free license used by the author. On the linuxfr.org website many people (including myself) commented on how strange it is to publish a work about free works while the manifesto itself is not free (licensed under the restrictive CC-BY-NC-ND). The author replied to the questions explaining his rationale for choosing the non-free license with an additional "non printing" clause on top of the CC-BY-NC-ND.

I have a profound respect for Roberto's work in promoting and supporting the free software community, but I clearly disagree with the claim that philosophical works must not have any derivatives and cannot be free works. I also know that Richard Stallman disallows derivative works on his various writings. If you carefully check the history of philosophical works, there are a lot of essays from various philosophers that went through revisions due to external contributions (e.g. Ivan Illich has multiple works evolving over time due to interactions or discussions with people). It's true that publishing the evolution of a work was not a very common practice. But that was mainly due to the slowness of the publishing mechanisms and not to the works themselves.

The main argument used against freeing the works is usually the integrity of the author's work. A lot of works have been modified over time to reflect the current use of the language or to make a translation into another language. Does this affect the integrity of the author's work? I don't think so. Especially since for any free work (including free software), attribution is required in any case. So by default, the author (and the reader) would see the original attribution and the modifications over time (recently improved in the free software community by the extensive use of distributed version control systems like git).

Maybe it's now time to reconsider: free software goes far beyond the simple act of creating software; it also touches any act of thinking or creation.

Monitoring The Memory of Suspicious Processes

If you are operating many GNU/Linux boxes, it's not uncommon to have issues with some processes leaking memory. It's often the case for long-running processes handling large amounts of data, usually using small chunks of memory while not freeing them back to the operating system. You may have played with Python's "gc.garbage" or abused the Perl Scalar::Util::weaken function, but to reach that stage, you need to know which processes ate the memory.

To look for processes eating memory, you usually have a look at the running processes using ps, sar, top, htop… For a first look without installing any additional software, you can use ps with its sorting functionality:
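The exact ps invocation isn't shown here; assuming something like `ps -eo rss,pid,comm --sort=-rss`, a small parser over its output could look like this (the sample output below is fabricated):

```python
def top_memory(ps_output, n=3):
    """Parse 'RSS PID COMMAND' lines (as produced by
    'ps -eo rss,pid,comm') and return the n largest by RSS in kB."""
    rows = []
    for line in ps_output.strip().splitlines()[1:]:  # skip the header
        rss, pid, comm = line.split(None, 2)
        rows.append((int(rss), int(pid), comm))
    return sorted(rows, reverse=True)[:n]

# Fabricated ps output for illustration:
sample = """  RSS   PID COMMAND
204800  1234 mysqld
 51200  2345 apache2
  1024   999 cron
 98304  3456 python
"""
print(top_memory(sample))
```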

It's nice to have a sorted list by size but usually the common questions are:

Is that normal?

What's the evolution over time?

Did the value increase or decrease over time?

Which memory usage is evolving badly?

My first guess was to collect the values above in a file, add a timestamp in front and make a simple awk script to display the evolution and graph it. But before jumping into it, I checked whether Munin has a default plugin to do that per process. There is no default plugin… but I found one called multimemory that basically does that per process. To configure it, you just need to add it as a plugin with the processes you want to monitor.

You can connect to your Munin web page and you'll see the evolution of each monitored process name. After that, it's just a matter of digging in with "valgrind --leak-check=full" or your favorite profiling tool for Perl, Ruby or Python.

Often I'm Wrong But Not Always...

Prediction is very difficult, especially if it's about the future. – Niels Bohr

Usually at the beginning of the year, you see all those predictions about future technology or social behaviour in front of those technologies. In the information security field, you see plenty of security companies telling you that there will be many more attacks, or that those attacks will diversify, targeting your next mobile phone or your next-generation toaster connected to Facebook. Of course! More malware and security issues will pop up, especially if you increase the number of devices in the wild, the number of wild users, and especially those wild users hoping to get money fast. So I'll leave it up to the security companies to make press releases about their marketing predictions.

As we are at the beginning of a new numerical year, I was cleaning up my notes in an old Emacs folder (from 1994 until 2001). I discovered some interesting notes and some drawings, and I want to share a specific one with you.

In my various notes, I discovered an old recurring interest in Wiki-like technologies at that time. Some notes make references to Usenet articles (difficult to find back) and to c2.com articles about how a wiki is well (un)organized. Some notes were unreadable due to the lack of context for that period 2. There is even a mention of the use of a Wiki-like system in the enterprise, or of building a collaborative Wiki website for technical FAQs. There are some more technical notes about the implementation of software for a wiki-like FAQ website, including a kind of organization by vote. I let you find today's website doing exactly that…

Suddenly, in the notes, there is a kind of brainstorm discussion about the subject. The notes include some discussion between myself and other colleagues. And there is an interesting statement about Wiki-like technology from a colleague: it's not because you like a technology that other people will use it or embrace it. That's an interesting point, but the argument was used to avoid doing something or investing time in the Wiki-like approach. Yes, this is right, but the question is more about how you make stuff and how people would use it. My notes on that topic ended with the brainstorm discussion. A kind of shock to me…

What's the catch? Not doing or building something to test it out. You can talk eternally about whether an idea is good or bad. But the only way to know is to build the idea. I was already thinking like that, but I forgot that it happened to me… Taking notes is good, especially when you learn that you should pursue and transform your ideas into reality even amid the surrounding criticisms.

My conclusion to those old random notes would be something like this:

If you see something interesting and you have a strong conviction that it could succeed in one way or another, do or try something with it. (please note the emphasis on the do)