Lone Wolves - web, game and open source development (articles)
http://www.jejik.com/
The latest articles on the Lone Wolves blog
en-us
Copyright 2012, Stichting Lone Wolves
http://creativecommons.org/licenses/by-sa/2.5/
Sat, 17 Nov 2012 12:51:00 CET
PHP+Smarty
webmaster@jejik.com
http://www.rssboard.org/rss-specification

Injecting custom classes in Jejik/mt940
http://www.jejik.com/articles/2012/11/injecting_custom_classes_in_jejik_mt940
s.marechal@jejik.com (Sander Marechal)
<p>I have just released a major update to the <a href="https://github.com/sandermarechal/jejik-mt940">Jejik/mt940</a> library. The new 0.3 version allows you to easily extend and override the built-in classes with your own implementations. This makes it easy to integrate the MT940 library into your application using a database abstraction such as Active Record or an ORM like <a href="http://www.doctrine-project.org/projects/orm.html">Doctrine 2</a>. As usual, you can install it through <a href="http://packagist.org/packages/jejik/mt940">Composer</a>.</p>
<p>Using the new functionality is straightforward. The <tt>Jejik\MT940\Reader</tt> class now also acts as a factory for creating the various objects that make up a parsed MT940 document. You can override the classes it instantiates using the following methods on the <tt>Reader</tt> class:</p>
<ul>
<li>
<tt>setStatementClass($className)</tt> defaults to <tt>Jejik\MT940\Statement</tt>
</li>
<li>
<tt>setAccountClass($className)</tt> defaults to <tt>Jejik\MT940\Account</tt>
</li>
<li>
<tt>setContraAccountClass($className)</tt> defaults to <tt>Jejik\MT940\Account</tt>
</li>
<li>
<tt>setTransactionClass($className)</tt> defaults to <tt>Jejik\MT940\Transaction</tt>
</li>
<li>
<tt>setOpeningBalanceClass($className)</tt> defaults to <tt>Jejik\MT940\Balance</tt>
</li>
<li>
<tt>setClosingBalanceClass($className)</tt> defaults to <tt>Jejik\MT940\Balance</tt>
</li>
</ul>
<p>You can either specify the classname as a string, or provide a PHP callable that
returns an object. Your classes do not have to extend the built-in classes but
they must implement the proper interfaces.</p>
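The class-or-callable pattern behind these setters is easy to picture on its own. The sketch below is not the library's internal code, just an illustration of the idea: a setter accepts either a class name or a callable, and object creation always goes through one factory helper (the name `make_factory` is mine).

```php
<?php
// Illustration of the class-or-callable pattern: normalise either form
// into a callable, so the caller can always just invoke the factory.
function make_factory($classOrCallable): callable
{
    if (is_callable($classOrCallable)) {
        return $classOrCallable;
    }

    // A plain class name: wrap construction in a closure.
    return function (...$args) use ($classOrCallable) {
        return new $classOrCallable(...$args);
    };
}
```

The reader can then call the resulting factory with the documented parameters (an account number, a statement sequence number, and so on) without caring how it was configured.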
<p>The callable for the <tt>Statement</tt> class is passed an <tt>AccountInterface</tt> and the statement
sequence number as parameters. The callables for the <tt>Account</tt> and <tt>ContraAccount</tt>
classes are passed the account number as their only parameter. The other callables are not passed
any arguments.</p>
<p>If the callable for the <tt>Account</tt> class or <tt>Statement</tt> class returns <tt>null</tt> then that
statement will be skipped by the parser.</p>
<p>An example, integrating MT940 with your ORM:</p>
<pre>&lt;?php
use Jejik\MT940\AccountInterface;
use Jejik\MT940\Reader;

$db = new ORM(); // Whatever your flavour is...
$reader = new Reader();

$reader-&gt;setAccountClass(function ($accountNumber) use ($db) {
    $account = $db::factory('My\Account')-&gt;findBy(array(
        'number' =&gt; $accountNumber,
    ));

    return $account ?: new My\Account();
});

$reader-&gt;setStatementClass(function (AccountInterface $account, $number) use ($db) {
    $statement = $db::factory('My\Statement')-&gt;findBy(array(
        'account' =&gt; $account-&gt;getNumber(),
        'number' =&gt; $number,
    ));

    return $statement ?: new My\Statement();
});

$reader-&gt;setTransactionClass('My\Transaction')
       -&gt;setContraAccountClass('My\ContraAccount')
       -&gt;setOpeningBalanceClass('My\OpeningBalance')
       -&gt;setClosingBalanceClass('My\ClosingBalance');

foreach ($reader-&gt;getStatements(file_get_contents('mt940.txt')) as $statement) {
    $statement-&gt;save();
}</pre>
<p>Jejik/MT940 is licensed under the MIT license. Have fun with it!</p>
http://www.jejik.com/articles/2012/11/injecting_custom_classes_in_jejik_mt940
Sat, 17 Nov 2012 12:51:00 CET
Lone Wolves

A parser for MT940 bank statements
http://www.jejik.com/articles/2012/03/a_parser_for_mt940_bank_statements
s.marechal@jejik.com (Sander Marechal)
<p>I am working on a new project and I needed to parse some MT940 files. MT940 is a pretty common exchange format for bank statements. Most banks will allow you to export your bank statements in this format. I had a look around but I wasn't quite happy with the existing parsers, so I decided to implement one myself. This also gave me an opportunity to try out <a href="http://travis-ci.org">travis-ci</a> and <a href="http://packagist.org">composer/packagist</a>, neither of which I had used before.</p>
<p>So, here is <a href="https://github.com/sandermarechal/jejik-mt940">jejik/mt940 on github</a>. You can <a href="http://packagist.org/packages/jejik/mt940">install it using composer</a> and check the <a href="http://travis-ci.org/#!/sandermarechal/jejik-mt940">build status on travis-ci</a>. Using the library is very easy:</p>
<pre>&lt;?php
use Jejik\MT940\Reader;

$reader = new Reader();
$statements = $reader-&gt;getStatements(file_get_contents('mt940.txt'));

foreach ($statements as $statement) {
    echo $statement-&gt;getOpeningBalance()-&gt;getAmount() . &quot;\n&quot;;

    foreach ($statement-&gt;getTransactions() as $transaction) {
        echo $transaction-&gt;getAmount() . &quot;\n&quot;;
    }

    echo $statement-&gt;getClosingBalance()-&gt;getAmount() . &quot;\n&quot;;
}</pre>
<p>At the moment four banks are supported: ABN-AMRO, ING, Rabobank and Triodos Bank. I'd be happy to add support for your bank as well. Just send me a pull request on github with your parser. Make sure that you also add a unit test for it that parses a test document. You can redact personal information from the test document (e.g. use '123456789' for the account number, and so on).</p>
<p>I am also happy to implement a parser for you, if you prefer that. Just open an issue on github and I will contact you privately, or use the <a href="/contact.php">contact form</a> on this website. I will need an unredacted MT940 file from your bank. It needs to be unredacted because the MT940 isn't well defined and can be fickle. If you redact it, it is possible that the parser I write will work on the file you supplied but not on the real thing. Of course, I will redact the file for you when I add it to my unit tests.</p>
<p>Jejik/MT940 is licensed under the MIT license. Have fun with it!</p>
http://www.jejik.com/articles/2012/03/a_parser_for_mt940_bank_statements
Fri, 23 Mar 2012 19:27:00 CET
Lone Wolves

A PHP type hinting alternative
http://www.jejik.com/articles/2012/03/a_php_type_hinting_alternative
s.marechal@jejik.com (Sander Marechal)
<p>A couple of days ago <a href="http://nikic.github.com/aboutMe.html">Nikita Popov</a> gave a nice overview of the discussion about type hints for scalar types in PHP, in a post called <a href="http://nikic.github.com/2012/03/06/Scalar-type-hinting-is-harder-than-you-think.html">Scalar type hinting is harder than you think</a>. He came up with his own alternative called Strict Weak Type Hinting. Of the options in his article it seems by far the most sensible, but I think it can do with one improvement.</p>
<p>For starters, here is Nikita's Strict Weak Type Hinting. It does weak type hinting with strict validation:</p>
<pre>&lt;?php
function foo(int $i) {
    var_dump($i);
}

foo(1);       // int(1)
foo(1.0);     // float(1.0)
foo(&quot;1&quot;);     // string(1) &quot;1&quot;
foo(1.5);     // fatal error: int expected, float given
foo(array()); // fatal error: int expected, array given
foo(&quot;hi&quot;);    // fatal error: int expected, string given</pre>
<p>In this case you can still pass in strings or floats to a parameter hinted as <tt>int</tt> but only if they losslessly convert to an int. A very sensible thing to do, but I miss one thing: I hinted the function as an int, so I want an int. I would prefer that the Strict Weak Type Hint also cast the parameter after it passes validation. Here's what that would look like:</p>
<pre>&lt;?php
function foo(int $i) {
    var_dump($i);
}

foo(1);       // int(1)
foo(1.0);     // int(1)
foo(&quot;1&quot;);     // int(1)
foo(1.5);     // fatal error: int expected, float given
foo(array()); // fatal error: int expected, array given
foo(&quot;hi&quot;);    // fatal error: int expected, string given</pre>
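Nothing stops you from emulating this cast-after-validation behaviour in userland PHP today. Here is a rough sketch; the helper name `hint_int` is made up for this example, and its string handling is deliberately stricter than the proposal (only plain digit strings pass):

```php
<?php
// Userland emulation of cast-after-validation for int: accept a value only
// if it converts losslessly to an integer, then return the cast result.
// The name hint_int is hypothetical, not part of any proposal.
function hint_int($value): int
{
    if (is_int($value)) {
        return $value;
    }

    // Floats pass only when truncation loses nothing (1.0 yes, 1.5 no).
    if (is_float($value) && (float) (int) $value === $value) {
        return (int) $value;
    }

    // Strings pass only when they round-trip through int unchanged.
    if (is_string($value) && is_numeric($value) && (string) (int) $value === $value) {
        return (int) $value;
    }

    throw new InvalidArgumentException('int expected, ' . gettype($value) . ' given');
}
```

With this helper, `hint_int(1.0)` and `hint_int("1")` both yield `int(1)`, while `hint_int(1.5)` and `hint_int("hi")` throw, matching the table above.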
<p>Your thoughts?</p>
http://www.jejik.com/articles/2012/03/a_php_type_hinting_alternative
Sun, 11 Mar 2012 20:50:00 CET
Lone Wolves

Easily remove unused MySQL databases
http://www.jejik.com/articles/2011/09/easily_remove_unused_mysql_databases
s.marechal@jejik.com (Sander Marechal)
<p>I use <a href="https://github.com/sebastianbergmann/phpunit/">PHPUnit</a> and the <a href="https://github.com/sebastianbergmann/dbunit">DbUnit</a> extension for my unit tests. Because I use InnoDB tables with foreign keys I cannot use an SQLite database or temporary tables for my unit tests. So, I have set up a separate MySQL server to run all my unit tests against. My PHPUnit bootstrap script simply generates a random database name and imports the schema so that DbUnit can use it.</p>
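The bootstrap idea looks roughly like this. This is a sketch, not my actual script; all names, paths and credentials in it are illustrative.

```php
<?php
// Sketch of a PHPUnit bootstrap: create a randomly named database on the
// dedicated test server and load the schema into it. Names and credentials
// below are made up for illustration.
function random_db_name(): string
{
    return 'test_' . bin2hex(random_bytes(4)); // e.g. test_1a2b3c4d
}

function create_test_database(PDO $pdo, string $schemaFile): string
{
    $name = random_db_name();
    $pdo->exec("CREATE DATABASE `$name`");
    $pdo->exec("USE `$name`");
    $pdo->exec(file_get_contents($schemaFile)); // import the schema

    return $name;
}

// In the bootstrap file itself (illustrative host and credentials):
// $pdo = new PDO('mysql:host=testdb.example.com', 'phpunit', 'secret');
// define('TEST_DATABASE', create_test_database($pdo, __DIR__ . '/schema.sql'));
```

DbUnit can then be pointed at the database named by `TEST_DATABASE`, and the cronjob below sweeps up the leftovers.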
<p>The only downside is that after a while, you get a bunch of unused databases on the server. So, I have written a simple bash cronjob that deletes all databases from the server that have not been used for 30 days. This script uses the <tt>debian-sys-maint</tt> MySQL user that is automatically set up on all Debian systems for maintenance tasks.</p>
<p>By default it deletes all databases that have not been changed in 30 days, but it is trivial to adjust this: just edit the <tt>TEST</tt> variable in the script. If, for example, you want to delete all databases that have not been accessed in 14 days, change the test to <tt>"-atime -14"</tt>. This works as long as you haven't mounted your filesystem with the <tt>noatime</tt> option.</p>
<p>I installed this script into <tt>/etc/cron.weekly</tt>.</p>
<pre>#!/bin/bash

MYSQL=&quot;/usr/bin/mysql --defaults-file=/etc/mysql/debian.cnf&quot;
MYADMIN=&quot;/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf&quot;

# This `find` test is run on a database data directory. If any files are
# returned, then the database is kept.
TEST=&quot;-ctime -30&quot;

# The priority can be overridden, and adding &quot;-s&quot; also sends output to stderr
LOGGER=&quot;logger -p cron.notice -t purge-mysql-databases&quot;

# mysqladmin likes to read /root/.my.cnf. This is usually not what I want
# as many admins e.g. only store a password without a username there and
# so break my scripts.
export HOME=/etc/mysql/

## Fetch a particular option from mysql's invocation.
#
# Usage: void mysqld_get_param option
mysqld_get_param() {
    /usr/sbin/mysqld --print-defaults \
        | tr &quot; &quot; &quot;\n&quot; \
        | grep -- &quot;--$1&quot; \
        | tail -n 1 \
        | cut -d= -f2
}

#
# main()
#

DATADIR=`mysqld_get_param datadir`
DATABASES=`$MYSQL -e 'SHOW DATABASES;' | tail -n +2`

# Loop through all the databases and see which can be deleted
for DATABASE in $DATABASES; do
    # Do not touch MySQL's own databases
    if [ &quot;$DATABASE&quot; == &quot;mysql&quot; -o &quot;$DATABASE&quot; == &quot;information_schema&quot; ]; then
        continue
    fi

    FILES=`find &quot;$DATADIR/$DATABASE&quot; $TEST`

    if [ -z &quot;$FILES&quot; ]; then
        $MYADMIN --force drop &quot;$DATABASE&quot; &gt; /dev/null
        echo &quot;Dropped inactive database $DATABASE&quot; | $LOGGER
    fi
done</pre>
http://www.jejik.com/articles/2011/09/easily_remove_unused_mysql_databases
Tue, 20 Sep 2011 19:23:00 CEST
Lone Wolves

A YuiCompressorFilter for Phing
http://www.jejik.com/articles/2011/07/a_yuicompressorfilter_for_phing
s.marechal@jejik.com (Sander Marechal)
<p>I have been playing with <a href="http://phing.info">Phing</a> quite a lot lately. Phing is a build system that looks and acts a lot like <a href="http://ant.apache.org">Apache Ant</a>, except that it is written in PHP. That's a great thing for PHP developers like me, because it makes it much easier to extend the system with custom extensions.</p>
<p>I am going to write several useful extensions, the first of which is a <tt>YuiCompressorFilter</tt>. Phing already has support for a JavaScript minifier in the form of the <tt>JsMinTask</tt>, but the yui-compressor is more useful. Not only does it usually compress better than JsMin, it can also compress CSS files. And because my YuiCompressor extension is implemented as a filter instead of a task, you can do fancy things like minifying and concatenating files in a single step.</p>
<p>Here's an example task that concatenates and minifies your JavaScript files in a single step:</p>
<pre>&lt;target name=&quot;js-compress&quot;&gt;
    &lt;delete file=&quot;${project.basedir}/build/js/main.js&quot; /&gt;
    &lt;append destFile=&quot;${project.basedir}/build/js/main.js&quot;&gt;
        &lt;filterchain&gt;
            &lt;filterreader classname=&quot;path.to.filters.YuiCompressorFilter&quot;&gt;
                &lt;param name=&quot;type&quot; value=&quot;js&quot; /&gt;
                &lt;param name=&quot;preserve-semi&quot; value=&quot;true&quot; /&gt;
            &lt;/filterreader&gt;
        &lt;/filterchain&gt;
        &lt;filelist dir=&quot;src/js&quot; files=&quot;forms.js,validation.js,gallery.js&quot; /&gt;
    &lt;/append&gt;
&lt;/target&gt;</pre>
<p>The <tt>YuiCompressorFilter</tt> supports all options supported by the yui-compressor itself. Full source code and documentation is available on <a href="http://github.com/sandermarechal/phing-ext">github.com/sandermarechal/phing-ext</a>.</p>
http://www.jejik.com/articles/2011/07/a_yuicompressorfilter_for_phing
Sat, 30 Jul 2011 02:06:00 CEST
Lone Wolves

Resizing images with correct gamma using PHP and GD
http://www.jejik.com/articles/2010/07/resizing_images_with_correct_gamma_using_php_and_gd
s.marechal@jejik.com (Sander Marechal)
<p>A short while ago Ty W posted an <a href="http://stackoverflow.com/questions/3303639/php-gd2-how-to-maintain-alpha-channel-transparency-and-correct-gamma">interesting question on StackOverflow</a>. Apparently, most graphics software <a href="http://www.4p8.com/eric.brasseur/gamma.html">cannot scale images the right way</a>. Usually it's hard to notice the flaw but the linked article does a great job of explaining the problem.</p>
<p>PHP's GD library suffers from the same issue, but Ty discovered that the sample PHP program provided with the article did not work on partially transparent images. After a couple of hours of fiddling I managed to get a working solution.</p>
<p>Apparently, the imagegammacorrect() function in PHP deals badly with images that have an alpha channel. I suspect that it tries to apply the same calculation to the alpha channel that it applies to the red, green and blue channels. To work around this, my solution splits the alpha channel off from the original image. The alpha channel is resampled regularly while the red, green and blue channels are resampled using gamma correction.</p>
<p>Here's the code.</p>
<pre>&lt;?php
// Load image
$image = imagecreatefrompng($path_to_image);
$width = imagesx($image);
$height = imagesy($image);

// Create destination
$resized_image = imagecreatetruecolor($new_width, $new_height);
imagealphablending($resized_image, false); // Overwrite alpha
imagesavealpha($resized_image, true);

// Create a separate alpha channel
$alpha_image = imagecreatetruecolor($width, $height);
imagealphablending($alpha_image, false); // Overwrite alpha
imagesavealpha($alpha_image, true);

for ($x = 0; $x &lt; $width; $x++) {
    for ($y = 0; $y &lt; $height; $y++) {
        $alpha = (imagecolorat($image, $x, $y) &gt;&gt; 24) &amp; 0xFF;
        $color = imagecolorallocatealpha($alpha_image, 0, 0, 0, $alpha);
        imagesetpixel($alpha_image, $x, $y, $color);
    }
}

// Resize image to destination, using gamma correction
imagegammacorrect($image, 2.2, 1.0);
imagecopyresampled($resized_image, $image, 0, 0, 0, 0, $new_width, $new_height, $width, $height);
imagegammacorrect($resized_image, 1.0, 2.2);

// Resize alpha channel
$alpha_resized_image = imagecreatetruecolor($new_width, $new_height);
imagealphablending($alpha_resized_image, false);
imagesavealpha($alpha_resized_image, true);
imagecopyresampled($alpha_resized_image, $alpha_image, 0, 0, 0, 0, $new_width, $new_height, $width, $height);

// Copy alpha channel back to resized image
for ($x = 0; $x &lt; $new_width; $x++) {
    for ($y = 0; $y &lt; $new_height; $y++) {
        $alpha = (imagecolorat($alpha_resized_image, $x, $y) &gt;&gt; 24) &amp; 0xFF;
        $rgb = imagecolorat($resized_image, $x, $y);
        $r = ($rgb &gt;&gt; 16) &amp; 0xFF;
        $g = ($rgb &gt;&gt; 8) &amp; 0xFF;
        $b = $rgb &amp; 0xFF;
        $color = imagecolorallocatealpha($resized_image, $r, $g, $b, $alpha);
        imagesetpixel($resized_image, $x, $y, $color);
    }
}

// Write resized image
imagepng($resized_image, $path_to_resized_image);

// Clean-up
imagedestroy($image);
imagedestroy($resized_image);
imagedestroy($alpha_image);
imagedestroy($alpha_resized_image);</pre>
<p>I hope this is of help to other people as well!</p>
http://www.jejik.com/articles/2010/07/resizing_images_with_correct_gamma_using_php_and_gd
Fri, 23 Jul 2010 15:27:00 CEST
Lone Wolves

How to correctly create ODF documents using zip
http://www.jejik.com/articles/2010/03/how_to_correctly_create_odf_documents_using_zip
s.marechal@jejik.com (Sander Marechal)
<p>One of the great advantages of the OpenDocument format is that it is simply a zip file. You can unzip it with any archiver and take a look at the contents, which is a set of XML documents and associated data. Many people use this feature to create some nifty toolchains. Unzip, make some changes, zip it again and you have a new ODF document. Well&hellip; almost.</p>
<p>The <a href="http://docs.oasis-open.org/office/v1.1/OS/OpenDocument-v1.1-html/OpenDocument-v1.1.html#17.4.MIME%20Type%20Stream|outline">OpenDocument Format specification</a>, section 17.4 has one little extra restriction when it comes to zip containers: The file called &ldquo;mimetype&rdquo; must be at the beginning of the zip file, it must be uncompressed and it must be stored without any additional file attributes. Unfortunately many developers seem to forget this. It is the number one cause of failed documents at <a href="http://www.officeshots.org">Officeshots.org</a>. If the mimetype file is not correctly zipped then it is not possible to programmatically detect the mimetype of the ODF file. And if the mimetype check fails, Officeshots (and possibly other applications) will refuse the document. This problem is compounded because virtually no ODF validator checks the zip container. They only check the contents.</p>
<p>In this article I will show you how you can properly zip your ODF files, but before I do that I will show you the problem in detail.</p>
<h4>Detecting mimetypes</h4>
<p>Linux and other Unix-like operating systems do not rely on file extensions to determine the type of a file. Relying on file extensions can be a serious security problem, as you can see in the Windows world. It's simply too easy to change the extension and pretend that a file is of a different type than it really is. Instead, the Unix world looks at the contents of the file itself. This happens with a library called &ldquo;magic&rdquo;.</p>
<p>The magic library consists of a large set of rules, which it uses to figure out what type of file it is looking at. For example, it can look at a certain byte offset and see what value it contains. This is precisely the reason why the ODF specification says that you need to zip the mimetype first, without any file attributes. If you do that and open the ODF file in a hex editor, you will see something like this:</p>
<pre>Offset: Hexadecimal: ASCII:
00000000 - 50 4b 03 04 14 00 00 08 00 00 c1 b6 66 3b 5e c6 PK..............
00000010 - 32 0c 27 00 00 00 27 00 00 00 08 00 00 00 6d 69 2.'...'.......mi
00000020 - 6d 65 74 79 70 65 61 70 70 6c 69 63 61 74 69 6f metypeapplicatio
00000030 - 6e 2f 76 6e 64 2e 6f 61 73 69 73 2e 6f 70 65 6e n/vnd.oasis.open
00000040 - 64 6f 63 75 6d 65 6e 74 2e 74 65 78 74 50 4b 03 document.textPK.
...</pre>
<p>This is very easy to match for the magic library. Here is an explanation of the rules that magic uses to test if the file is an ODF file:</p>
<ol>
<li>Look at the beginning of the file. It should start with the letters PK and then bytes 03 and 04. This means it is a zip file.</li>
<li>Look at offset 30 ("1e" in hex). It should be the string "mimetype".</li>
<li>Look at offset 38 ("26" in hex), directly after the word "mimetype". It should be one of the ODF mimetypes.</li>
</ol>
<p>You can guess what happens when you don't zip the mimetype file first: the string "mimetype" won't be at the right offset. And if you accidentally zip it with extra file attributes, then the contents of the mimetype file will not start directly after it; there will be several bytes in between. This causes the magic library to detect it as a standard zip file, not as an ODF file. Here is how such a badly zipped ODF file might look. This file was zipped normally, without paying special attention to the mimetype file:</p>
<pre>Offset: Hexadecimal: ASCII:
00000000 - 50 4b 03 04 0a 00 00 00 00 00 25 01 6e 3c 00 00 PK..............
00000010 - 00 00 00 00 00 00 00 00 00 00 10 00 15 00 43 6f ..............Co
00000020 - 6e 66 69 67 75 72 61 74 69 6f 6e 73 32 2f 55 54 nfigurations2/UT
00000030 - 09 00 03 16 1b 9c 4b 47 1e 9c 4b 55 78 04 00 e8 ......KG..KUx...
00000040 - 03 e8 03 50 4b 03 04 0a 00 00 00 00 00 25 01 6e ...PK........%.n
...</pre>
<p>As you can see, it does not match the rules that the magic library has. Instead of checking your ODF file with a hex editor, you can also simply use the "file" command. For example:</p>
<pre>$ file --mime my-document.odt
my-document.odt: application/vnd.oasis.opendocument.text</pre>
<p>If that command results in "application/zip" or "application/octet-stream" then your ODF file is probably incorrectly zipped. Note that the magic library shipped with "file" up to version 5.03 does not contain all mimetypes for ODF files, but only the one for OpenDocument Text (odt) files. File 5.03 is the version most commonly shipped with Linux distributions today. I have since submitted a patch that includes all known ODF mimetypes. It was accepted and should be included in file version 5.04 and later.</p>
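You can also apply the three magic rules from above programmatically. The PHP sketch below does exactly that; the helper name `odf_mimetype` is mine, and it returns the mimetype string on success or null when the container is not zipped correctly:

```php
<?php
// Check an ODF file the way magic does: zip signature at offset 0, the
// string "mimetype" at offset 30, and the mimetype value directly after it.
// Returns the mimetype, or null if the zip container is not ODF-correct.
function odf_mimetype(string $path): ?string
{
    $header = file_get_contents($path, false, null, 0, 100);

    if ($header === false || strlen($header) < 38 || substr($header, 0, 4) !== "PK\x03\x04") {
        return null; // not a zip file at all
    }

    if (substr($header, 30, 8) !== 'mimetype') {
        return null; // the mimetype entry is not stored first
    }

    // In a zip local file header the compressed size sits at offset 18
    // (4 bytes, little-endian). For an uncompressed entry it equals the
    // length of the mimetype string itself.
    $length = unpack('V', substr($header, 18, 4))[1];
    if ($length < 1 || 38 + $length > strlen($header)) {
        return null;
    }

    return substr($header, 38, $length);
}
```

For a correctly zipped odt file this returns "application/vnd.oasis.opendocument.text"; for the badly zipped example above it returns null.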
<h4>How to zip an ODF file</h4>
<p>So, here is how you can zip an ODF file the right way. Suppose that I have an unzipped ODF file that looks like this:</p>
<pre>+ my-document/
+ Configurations2/
+ META-INF/
- manifest.xml
+ Thumbnails/
- thumbnail.png
- content.xml
- meta.xml
- mimetype
- settings.xml
- styles.xml</pre>
<p>Start by creating a new zip file that just contains the mimetype file:</p>
<pre>$ zip -0 -X ../my-document.odt mimetype</pre>
<p>The -0 parameter means that the file will not be compressed. The -X parameter means that no extra file attributes will be stored. Next you can add the rest of the files:</p>
<pre>$ zip -r ../my-document.odt * -x mimetype</pre>
<p>Be sure to exclude the mimetype file. Now if you look at it with a hex editor, you will see it has been zipped correctly:</p>
<pre>Offset: Hexadecimal: ASCII:
00000000 - 50 4b 03 04 14 00 00 08 00 00 c1 b6 66 3b 5e c6 PK..............
00000010 - 32 0c 27 00 00 00 27 00 00 00 08 00 00 00 6d 69 2.'...'.......mi
00000020 - 6d 65 74 79 70 65 61 70 70 6c 69 63 61 74 69 6f metypeapplicatio
00000030 - 6e 2f 76 6e 64 2e 6f 61 73 69 73 2e 6f 70 65 6e n/vnd.oasis.open
00000040 - 64 6f 63 75 6d 65 6e 74 2e 74 65 78 74 50 4b 03 document.textPK.
...</pre>
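If you generate ODF files from PHP, the two zip commands above can be mirrored with the ZipArchive extension: add the mimetype entry first and mark it as stored (uncompressed). This is a sketch assuming PHP's zip extension is available (setCompressionName needs PHP 7.0 or later); depending on the underlying zip library, extra file attributes may still be written, so verify the result with the file command as shown above.

```php
<?php
// Build an ODF container with the mimetype entry first and uncompressed.
// $sourceDir holds an unzipped document tree like the one shown above.
function zip_odf(string $sourceDir, string $target): void
{
    $zip = new ZipArchive();
    $zip->open($target, ZipArchive::CREATE | ZipArchive::OVERWRITE);

    // The mimetype entry must be added first and stored without compression.
    $zip->addFile("$sourceDir/mimetype", 'mimetype');
    $zip->setCompressionName('mimetype', ZipArchive::CM_STORE);

    // Then add everything else, skipping the already-added mimetype file.
    $files = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($sourceDir, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($files as $file) {
        $name = substr($file->getPathname(), strlen($sourceDir) + 1);
        if ($name !== 'mimetype') {
            $zip->addFile($file->getPathname(), $name);
        }
    }

    $zip->close();
}
```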
<p>Happy zipping everyone!</p>
http://www.jejik.com/articles/2010/03/how_to_correctly_create_odf_documents_using_zip
Sun, 14 Mar 2010 00:51:00 CET
Lone Wolves

New Officeshots feature: ODF Anonymiser
http://www.jejik.com/articles/2010/01/new_officeshots_feature_odf_anonymiser
s.marechal@jejik.com (Sander Marechal)
<p>I have just released a new feature for <a href="http://www.officeshots.org">Officeshots</a>: The <a href="http://www.officeshots.org/pages/anonymiser">ODF Anonymiser</a>.
The ODF Anonymiser tries to make your document completely anonymous
while maintaining its overall structure. All metadata is removed or
cleaned. All text in the document is replaced with gibberish text that
has approximately the same word length and word distribution. All images
are replaced with placeholder images. All unknown content is removed.</p>
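The text-replacement idea is simple to illustrate. This toy PHP sketch is not the iTools implementation, just the core trick: replace every word with random letters of the same length while leaving punctuation and whitespace intact (it is ASCII-oriented and ignores word distribution).

```php
<?php
// Toy "greeking": swap each word for random lowercase letters of the same
// length, preserving punctuation, digits and whitespace. ASCII-only sketch;
// the real ODF Greek tool is far more sophisticated.
function greek(string $text): string
{
    return preg_replace_callback('/[A-Za-z]+/', function ($m) {
        $word = '';
        for ($i = 0, $n = strlen($m[0]); $i < $n; $i++) {
            $word .= chr(random_int(ord('a'), ord('z')));
        }
        return $word;
    }, $text);
}
```

Because word lengths and layout are preserved, the garbled document line-wraps and paginates much like the original, which is exactly what you want when reproducing a rendering bug.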
<p>The result of the anonymiser is a document that has the same general
structure but with made-up contents. If your original document does not
work in a certain application, the anonymised version of the document
should fail in the same manner. By using the anonymiser you can test
your private documents without exposing the contents to our rendering
clients.</p>
<p>To use the Anonymiser, simply check the appropriate checkbox on the
Officeshots front page.</p>
<p>The ODF Anonymiser is written and maintained by the people who created
the <a href="http://www.hforge.org/itools">iTools Python libraries</a>. The Anonymiser is part of that library, where it is called ODF Greek. If you want to use the anonymiser yourself, just install iTools and use the <tt>iodf-greek.py</tt> script. Many thanks to them for their contribution.</p>
http://www.jejik.com/articles/2010/01/new_officeshots_feature_odf_anonymiser
Tue, 05 Jan 2010 16:55:00 CET
Lone Wolves

New Officeshots feature: ODF validators
http://www.jejik.com/articles/2009/12/new_officeshots_feature_odf_validators
s.marechal@jejik.com (Sander Marechal)
<p>I am happy to announce an exciting new feature for <a href="http://www.officeshots.org">Officeshots</a>: Integrated ODF validators.</p>
<p>Every ODF document that is uploaded is run through several different ODF validators. If the converted documents are also ODF documents (when you are testing ODF round trips) then those results are also passed through these ODF validators.</p>
<p>The results of the validators are made available on the request overview, the individual result pages and inside the galleries. Galleries now not only show all attached documents but also all results and a summary of the validator results. This way it becomes really easy to see which documents failed.</p>
<p>The following ODF validators have been integrated:</p>
<ul>
<li><a href="http://www.cyclone3.org/documentation/addons/extensions/ODFvalidator/">The Cyclone3 ODF validator</a></li>
<li><a href="http://www.probatron.org:8080/officeotron/officeotron.html">Office-o-tron</a> by Alex Brown</li>
<li><a href="http://odftoolkit.org/ODFValidator">The ODFToolkit validator</a> by Sun</li>
</ul>
<p>Below you can see an example result of a document that has been round-tripped, showing a summary of the validation results. One thing that shows immediately is that the various validators do not always agree with each other. Documents that are valid according to one validator may be invalid according to another. Some documents (such as the result from the old AbiWord versions) can even cause some validators to choke and die with errors.</p>
<img src="http://www.jejik.com/images/officeshots/officeshots-validators-600.jpg" /><br />
<small>(<a href="http://www.jejik.com/images/officeshots/officeshots-validators.jpg">large version</a>)</small>
<p>From that page on you can click through to the individual validator outputs so you can see the complete error message.</p>
<p>Have fun!</p>
http://www.jejik.com/articles/2009/12/new_officeshots_feature_odf_validators
Thu, 17 Dec 2009 15:10:00 CET
Lone Wolves

Book Review: Pro Linux System Administration
http://www.jejik.com/articles/2009/10/book_review_pro_linux_system_administration
s.marechal@jejik.com (Sander Marechal)
<p>&ldquo;By the end of this book, You&#8217;ll be well on your way to becoming a Linux expert&rdquo; is quite a bold claim for a book that is aimed at people who only have some familiarity with Windows and networking. &ldquo;Pro Linux System Administration&rdquo; by James Turnbull, Peter Lieverdink and Dennis Matotek aims to do precisely that and surprisingly, it largely succeeds. In its 1080 pages it explains how you can set up and configure multiple Linux servers to operate a small business network. Starting with basic Linux management and working up the stack through networking, e-mail and webservers you will end up with a pretty complete network that includes document management, groupware and disaster recovery.</p>
<img src="http://www.jejik.com/images/articles/pro-linux-sysadmin/pro-linux-sysadmin-256.png" class="right" />
<p>The only downside of the book is that it becomes terser as it goes along. Part 1 and half of part 2 are
quite thorough. They explain what you are doing, why you are doing it, differences between distributions and possible
gotchas. But as the book moves up the application stack these explanations become shorter and in some cases amount to little more than an installation walk-through. I think it would have been better to focus on fewer alternative
applications and dive deeper into those.</p>
<h4>Part 1: The Beginning</h4>
<p>The book starts with an introduction to Linux and free software in general and discusses various major
distributions. It then settles on Red Hat Enterprise and Ubuntu Server edition. The rest of the book focuses
on these two distributions. A good choice in my opinion since these two distributions represent two different
lineages of Linux. The Red Hat examples also work on CentOS and the Ubuntu examples also apply to Debian and its
many family members. There is also the distinction that Red Hat Enterprise ships with a GUI and many GUI configuration tools while Ubuntu Server is administered purely through the command line.</p>
<p>After you are shown how to install Linux on top of LVM you are taught the basics. How to work with files,
permissions, users and groups both from the GUI as from the command line. It then dives right into the guts of Linux
by showing you how the boot process goes, how run-levels and services work and how you can troubleshoot problems that you are likely to encounter.</p>
<p>Chapter 6 introduces networking and firewalls and that is where things start to get more complicated. The authors
have come up with a nice example network consisting of a Linux gateway, a main server, a wired and wireless network
with several clients and branch offices which will connect over VPN. Unfortunately networking is never an easy
subject and here the promise of only &ldquo;some experience required&rdquo; breaks down a bit. Still, it&#8217;s
explained pretty thoroughly, as are the next chapters on package and storage management.</p>
<p><center><img src="http://www.jejik.com/images/articles/pro-linux-sysadmin/pro-linux-sysadmin-network-512.png" style="border: 1px solid black;" /><br /><small>The example network used throughout the book.</small></center></p>
<p>I did notice a few technical errors and omissions in these chapters. For example, when explaining how to manage
networking with the `ip` and `ifup|ifdown` commands, the authors do not mention that ifup and ifdown do much
more on Ubuntu, like mounting network shares from fstab. Also the section about LVM on RAID fails to mention
that you need to blacklist your raw disks used in the RAID array in the LVM configuration. If you don&#8217;t
blacklist these then LVM may try to construct the volume groups from the raw disks instead of from the RAID.
Luckily such issues are few and far between.</p>
<h4>Part 2: Making It Work for You</h4>
<p>The second part of the book moves away from the details of system administration itself and onto providing
services and applications that your users can actually use. First up is more networking: DNS, DHCP, NTP and such.
Then e-mail using Postfix with Dovecot sitting on top to provide IMAP and POP3 and SpamAssassin to keep your
mailboxes clean. The chapter on setting up the AMP part of LAMP (Apache, MySQL and PHP) is well done but
skimpy on the details of setting up PHP. The half-page section on PHP just gives you a list of packages to
install and nothing more. No mention is made of configuration or of the possibility of running PHP through
fcgi, which makes it possible to run PHP applications under separate user privileges.</p>
<p>Chapter 12 describes file and print sharing using both Samba and more Linux-oriented services like NFS4
and CUPS. It also touches on document management using KnowledgeTree. I found several issues with this chapter.
First off, the information on NFS4 is skimpy, in contrast to the extensive explanation of Samba. It misses
important details on NFS4&#8217;s new pseudo file system, which needs a root directory and requires bind mounts if
you want to export directories outside the NFS4 export root. Another thing it misses is how to share printers
with Linux clients. The book only explains printer sharing over Samba to Windows clients and not directly with
CUPS to Linux clients. The section on installing KnowledgeTree could also be better. It shows you how to install it using its massive stack installer, which contains its own Apache, PHP and MySQL. I would have
preferred an explanation that shows how to set up KnowledgeTree on the LAMP stack from the previous chapter.
The same applies to installing Zimbra in chapter 15.</p>
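<p>For readers unfamiliar with that NFS4 pseudo file system: the arrangement looks roughly like this (the directories and network range below are hypothetical):</p>

```
# /etc/fstab: bind the directory you want to share into the NFS4 export root
/srv/music    /export/music    none    bind    0 0

# /etc/exports: /export is the NFS4 pseudo file system root (fsid=0),
# everything exported must live underneath it
/export         192.168.0.0/24(rw,fsid=0,no_subtree_check)
/export/music   192.168.0.0/24(rw,no_subtree_check)
```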
<p>The rest of the book is pretty well organized and thorough, with chapters on backup and recovery, resource monitoring, virtualization, LDAP and more networking fun with OpenVPN. This ties together the main office and all the branch offices in the example network from part 1 of the book. There is even a chapter on automated installs and
configuration management using Cobbler (or Kickstart/Preseed) and Puppet.</p>
<h4>Conclusion</h4>
<p>I really enjoyed &ldquo;Pro Linux System Administration&rdquo;. It's thorough, it covers a large number
of topics and it does not pin you to just one distribution. The credo of &ldquo;only some Windows and networking experience required&rdquo; is a bit over the top, but anyone reasonably experienced with Windows system administration should be able to use this book. It is also well suited to people who have some Linux experience but want to know more (like me). Some of the chapters are a bit shallow and feel more like an installation walk-through, but this is more than made up for by the depth of the rest of the book. All in all I rate it 8 out of 10 points.</p>
<h4>Book information</h4>
<table>
<tr><td style="width: 10em;"><strong>Title</strong></td><td>Pro Linux System Administration</td></tr>
<tr><td><strong>Authors</strong></td><td>James Turnbull, Peter Lieverdink and Dennis Matotek</td></tr>
<tr><td><strong>Publisher</strong></td><td>Apress</td></tr>
<tr><td><strong>ISBN</strong></td><td>978-1-4302-1912-5</td></tr>
<tr><td><strong>Published</strong></td><td>June 2009</td></tr>
<tr><td><strong>Pages</strong></td><td>1080</td></tr>
<tr><td><strong>CD Included</strong></td><td>No</td></tr>
<tr><td><strong>Price</strong></td><td>$ 49.99</td></tr>
<tr><td><strong>Overall rating</strong></td><td>8/10</td></tr>
</table>
<p><em>This article was originally posted at <a href="http://lxer.com/module/newswire/view/127036/">LXer Linux News</a>.</em></p>
http://www.jejik.com/articles/2009/10/book_review_pro_linux_system_administrationFri, 23 Oct 2009 09:49:00 CESTLone WolvesOpen Core: The worst of both worldshttp://www.jejik.com/articles/2009/10/open_core_the_worst_of_both_worlds
s.marechal@jejik.com (Sander Marechal)
<p>A lot has been written recently about so-called &ldquo;Open Core&rdquo; software ever since <a href="http://alampitt.typepad.com/lampitt_or_leave_it/2008/08/open-core-licen.html">Andrew Lampitt coined the term</a> back in August of 2008. Many analysts have been critical of it, such as Richard Hillesley from <a href="http://www.h-online.com/open/">The H Open</a> in his recent article &ldquo;<a href="http://www.h-online.com/open/Open-core-closed-heart--/features/114363/0">Open core, closed heart?</a>&rdquo;. Many are also very positive about it, such as <a href="http://blogs.the451group.com/opensource/">Matt Aslett from The 451 Group</a>. However, I think that most of them are missing the elephant in the room: Open core is not sustainable in the long term because it represents the worst of both worlds. Open core tries to find a middle ground between proprietary software and free software, but it reaps the benefits of neither and inherits the problems of both.</p>
<p>Let me show you by example. <a href="http://www.sugarcrm.com">SugarCRM</a> is one of the more popular open core software products available. The company offers the Community Edition for free under a GPLv3 license but also offers a Professional and Enterprise edition under a proprietary license. SugarCRM has been around since 2004 but it is already showing many signs of not being sustainable.</p>
<h4>Proprietary versus Free Software</h4>
<p>Proprietary and Free Software are developed and commercialised in totally different ways, each with distinct advantages and disadvantages. Feel free to skip this part if you are thoroughly familiar with it.</p>
<p>In the proprietary world you develop your software in-house and sell licenses and support contracts to your customers. You bear 100% of the development cost yourself but usually the license sales alone generate much more revenue than the service contracts. The real trick is getting people to buy or upgrade. Usually that means adding features to the product so that it becomes more attractive. This way new customers will choose your product over the competition and your existing customers will part with their cash for new licenses to get the updated version. This also means that proprietary software suffers from feature creep. Over time, more and more features get added to the product to stay ahead of the other proprietary vendors. This happens at the cost of quality. Developer resources are limited and with the pressure to add features, bugs go unfixed and corners are cut.</p>
<p>Over in Free Software land it is very different. The source code and the entire development process are completely open and, aside from the in-house developers, there is a community of external developers working on the product. This is why much free software is of such high quality. There are plenty of resources to fix bugs and to get it right. But making money from it is much harder. It&#8217;s mainly about support contracts, which generate less revenue than closed source license sales. Also, customers don&#8217;t need so much support thanks to the high quality of the software itself. There is also the risk that someone can fork the project. This is great for the software itself but not so great if you are a business that is trying to keep its customers.</p>
<h4>The &ldquo;middle road&rdquo;</h4>
<p>Let&#8217;s see how this works for open core and specifically SugarCRM. If you go to their website you can download the latest SugarCRM Community Edition for free under a GPLv3 license. You can also see that it lacks many features that the Professional and Enterprise versions do have, such as team management, advanced reporting and an Oracle back-end. What you will <em>not</em> see, however, is a source code repository, developers&#8217; mailinglist or forum. SugarCRM is developed completely in-house with no community involvement. They have a <a href="http://www.sugarforge.org/content/community/participate/contribute.php">patch submission form</a> but when that went down it took weeks before SugarCRM noticed it. That tells you not only how much SugarCRM cares about patches but also how few people use that form to submit one.</p>
<p>What simply happens is that SugarCRM develops a new version of their system and they throw the code &ldquo;out there&rdquo; as a teaser so that people will hopefully shell out for the Professional or Enterprise edition. SugarCRM generates revenue in the same way as a proprietary vendor: by selling licenses. That means it is on the same feature treadmill as proprietary software. It needs to keep adding features to stay ahead of the competition and to make people upgrade.</p>
<p>So far this &ldquo;Open Core&rdquo; vendor looks suspiciously like a proprietary vendor. But the problem only starts here. SugarCRM does have quite a developer community around it, but it is not working on the open core part. It&#8217;s busy building add-on modules and plugins, and offering customisation and support. One of the things this community does very well is creating add-on functionality that provides the features normally found in the Professional and Enterprise editions, such as <a href="http://www.sugarforge.org/projects/ce-teams">team management</a> and <a href="http://www.sugarforge.org/softwaremap/trove_list.php?form_cat=361">advanced reporting</a>.</p>
<p>SugarCRM has to compete feature-wise not only with the competition but with their own developer community as well. They have to add new features to the Professional and Enterprise editions faster than their own developer community can re-implement them as add-ons to the GPLv3 edition. If they don&#8217;t then their customers will switch to the free Community Edition, which is very likely since the Professional and Enterprise licenses need to be renewed every year and they are not cheap. It&#8217;s like the feature treadmill of proprietary products but in overdrive. And it takes a far larger toll on quality&hellip; and it shows.</p>
<h4>Buggy, buggier, buggiest</h4>
<p>A quick look through the SugarCRM <a href="http://www.sugarcrm.com/forums">forums</a> and <a href="https://www.sugarcrm.com/crm/support/bugs.html?tmpl=">bug tracker</a> confirms that this is exactly what is happening. New Sugar versions are appearing at a rapid pace with ever more features while the bugs pile up and never get fixed. <a href="http://www.sugarcrm.com/forums/showthread.php?t=38941&page=2#post135087">Users keep asking</a> for more quality control and an opportunity to fix bugs themselves by opening up the development process but this is not happening. And it&#8217;s not small bugs that go unfixed. There are bugs in the <a href="https://www.sugarcrm.com/crm/support/bugs.html?task=search&order_by=found_in_release%2CDESC&name=amount&bug_number=&priority=&status=&resolution=&found_in_release=&fixed_in_release=&product_category=Opportunities">currency formatting</a> that have existed since version 4.2 from 2006 and have still not been fixed in the latest version. This bug causes the financial forecast to be off by several orders of magnitude. That is a critical part of sales force automation.</p>
<p>As a PHP developer I can testify that the quality of the SugarCRM source code is low. I am reasonably familiar with the source code, having deployed it on several locations and having <a href="http://www.jejik.com/tag/sugarcrm">developed custom modules</a> for it. The source code is as bad as old PHP-Nuke versions, full of spaghetti code, hard-coded special cases, inconsistent design and lots of bugs.</p>
<p>SugarCRM is not just an exception here. Even open core projects that do have a more open development model, like <a href="http://www.alfresco.com/">Alfresco</a> suffer from <a href="http://iquaid.org/2009/08/01/alfresco-packaging-for-fedora/#comment-3265">quality problems</a>.</p>
<h4>The bottom line</h4>
<p>In the end open core software is driven by the same incentives as proprietary software. Therefore it suffers from the same problems: too much focus on features and too little on quality. That&#8217;s the downside of proprietary software. But it also inherits the problems of open source software. Because of the open source community edition, vendors have to worry about forks taking their customers (e.g. <a href="http://www.vtiger.com/">vtiger</a>). To top it off, they also need to compete against their own developer community, which will reimplement the closed enterprise features as add-ons for the open source edition. This magnifies the problems caused by the feature treadmill and leads to a rapid decline in quality.</p>
<p>Digg this article: <a href="http://digg.com/linux_unix/Open_Core_The_worst_of_both_worlds"><img src="http://digg.com/img/digg-it-tiny.gif" alt="This article on Digg"></a></p>
http://www.jejik.com/articles/2009/10/open_core_the_worst_of_both_worldsFri, 09 Oct 2009 15:28:00 CESTLone WolvesHelp translate Officeshots in your languagehttp://www.jejik.com/articles/2009/07/help_translate_officeshots_in_your_language
s.marechal@jejik.com (Sander Marechal)
<p>I have finished setting up the internationalisation and localisation frameworks for Officeshots. If you want, you can now help to translate Officeshots to your own language. Translating Officeshots can be done through <a href="http://lang.officeshots.org">our Pootle installation</a>.</p>
<p>At the moment there are almost no languages configured yet in Pootle. The reason is that the CakePHP framework on which Officeshots runs has a different locale structure than what Pootle expects. This means I need to add every language by hand.</p>
<p>If you want to start working on a new language, please post to <a href="http://lists.opendocsociety.org/mailman/listinfo/officeshots">the Officeshots mailinglist</a> and I will add the language to Pootle and to Officeshots. Also, translations made in Pootle are not automatically pushed to Subversion or to the running Officeshots instances (because of possible merge conflicts). If you need a quick sync or push of language files, please drop a line on the mailinglist as well.</p>
<p>Happy translating!</p>
http://www.jejik.com/articles/2009/07/help_translate_officeshots_in_your_languageThu, 16 Jul 2009 20:50:00 CESTLone WolvesScanning files with ClamAV from CakePHPhttp://www.jejik.com/articles/2009/07/scanning_files_with_clamav_from_cakephp
s.marechal@jejik.com (Sander Marechal)
<p>One of the requirements for the upcoming public release of <a href="http://www.officeshots.org">Officeshots.org</a> is that all uploaded files are run through a virus scanner before they are made available. Picking a virus scanner for this job was easy. <a href="http://clamav.net/">ClamAV</a> is open source, well supported, actively maintained and comes pre-packaged for Debian Lenny which we use for the Officeshots servers. Finding a PHP library to interact with ClamAV proved harder though. The <a href="http://www.clamav.net/download/third-party-tools/3rdparty-library">3rd party library page for ClamAV</a> points to two different libraries that provide PHP bindings for ClamAV but both appear to be dead and expunged from the internet. So, I created my own using the <a href="http://www.clamav.net/doc/latest/html/node26.html">clamd TCP API</a>, and because Officeshots is built using <a href="http://cakephp.org/">CakePHP</a> I implemented it as a Cake plugin.</p>
<p>You can <a href="http://www.jejik.com/files/clamd/clamd-0.1.tar.gz">download the clamd-0.1.tar.gz plugin</a> or check out the source from my Subversion repository with the following command:</p>
<pre>~$ svn checkout https://svn.jejik.com/cakephp/plugins/clamd/trunk clamd</pre>
<p>Or you can <a href="http://svn.jejik.com/viewvc.cgi/cakephp/plugins/clamd/trunk/">browse the repository online</a>. In the rest of this article I will show you how you can use this plugin.</p>
<h4>Install ClamAV (on Debian)</h4>
<p>Start off by installing ClamAV. On Debian and derivative distributions this is very simple.</p>
<pre>~# aptitude install clamav clamav-daemon clamav-freshclam</pre>
<p>Make sure that the ClamAV daemon is running and that freshclam regularly updates the virus definition database. On Debian Lenny this will be done automatically. Please refer to the <a href="http://www.clamav.net/doc/latest/html/node9.html">ClamAV installation guide</a> for installation on other Linux distributions or on Windows machines. After you have installed the ClamAV daemon you can test it with the clamdscan command.</p>
<pre>~$ clamdscan somefile.odt
/home/you/somefile.odt: OK
----------- SCAN SUMMARY -----------
Infected files: 0
Time: 0.011 sec (0 m 0 s)</pre>
<h4>Install the Clamd CakePHP plugin</h4>
<p>This is really easy. Extract the package and move the <tt>clamd</tt> directory to your <tt>plugins</tt> directory in your CakePHP application.</p>
<pre>~$ tar -zxvf clamd-0.1.tar.gz
~$ mv clamd your-cakephp/app/plugins/</pre>
<h4>Configuration</h4>
<p>Start by including the Clamd plugin.</p>
<pre>App::import('Core', 'clamd.Clamd');</pre>
<p>When you create the Clamd object you can pass a configuration array to
the constructor. The configuration consists of options that will be
passed to <a href="http://www.php.net/fsockopen">fsockopen()</a>. For example:</p>
<pre>$Clamd = new Clamd(array(
'host' =&gt; '127.0.0.1',
'port' =&gt; 3310,
'timeout' =&gt; 60
));</pre>
<p>You can also connect to a local Unix socket. Here is how you connect to the default socket on Debian Lenny:</p>
<pre>$Clamd = new Clamd(array(
'host' =&gt; 'unix:///var/run/clamav/clamd.ctl',
'port' =&gt; 0
));</pre>
<h4>Usage</h4>
<p>To test the Clamd connection you can use the ping() method.</p>
<pre>echo $Clamd-&gt;ping() ? 'Success!' : $Clamd-&gt;lastError();</pre>
<p>You can use the <tt>scan()</tt> method to scan a single file. The result will be
one of the Clamd constants <tt>Clamd::OK</tt>, <tt>Clamd::FOUND</tt> or <tt>Clamd::ERROR</tt>.</p>
<pre>if ($Clamd-&gt;scan('/path/to/file', $message) === Clamd::FOUND) {
echo &quot;The file is infected with '$message'!\n&quot;;
}</pre>
<p>You can also recursively scan an entire directory using the <tt>rscan()</tt> method. You will get back
an array containing the results of the scan. Note that this array only
contains files that returned <tt>Clamd::FOUND</tt> or <tt>Clamd::ERROR</tt>. Scanned
files which are clean will not be returned. A result looks like this:</p>
<pre>array(3) {
['file'] =&gt; '/full/path/to/file'
['status'] =&gt; self::FOUND | self::ERROR
['message'] =&gt; virus name | error message
}</pre>
<p>Also, by default Clamd stops scanning as soon as the first infection
is found. Pass TRUE as the second parameter to scan all files.
Example usage:</p>
<pre>$results = $Clamd-&gt;rscan('/path/to/directory', true);
foreach ($results as $result) {
if ($result['status'] === Clamd::FOUND) {
echo &quot;File '$result[file]' is infected with '$result[message]'!\n&quot;;
}
}</pre>
<h4>The Clamd shell</h4>
<p>The Clamd plugin also contains a simple interactive Cake Shell which you can
use to test Clamd and scan files interactively. You can start the shell from your app directory with:</p>
<pre>~$ cake clamd</pre>
<p>You can use the following commands in the interactive shell:</p>
<dl>
<dt>exit|quit|q</dt>
<dd>Quit the shell</dd>
<dt>connect &lt;host&gt; [&lt;port&gt; [&lt;timeout&gt;]]</dt>
<dd>Connect to a ClamAV daemon</dd>
<dt>ping</dt>
<dd>Send a PING command to the clamav daemon</dd>
<dt>scan &lt;file&gt;</dt>
<dd>Scan &lt;file&gt; for viruses</dd>
<dt>rscan &lt;directory&gt;</dt>
<dd>Recursively scan &lt;directory&gt; for viruses. Only infected files and errors are returned.</dd>
<dt>help</dt>
<dd>Show the help</dd>
</dl>
http://www.jejik.com/articles/2009/07/scanning_files_with_clamav_from_cakephpTue, 07 Jul 2009 01:03:00 CESTLone WolvesFixing OpenDocument MIME magic on Linuxhttp://www.jejik.com/articles/2009/06/fixing_opendocument_mime_magic_on_linux
s.marechal@jejik.com (Sander Marechal)
<p>When working on the beta of <a href="http://www.officeshots.org">Officeshots.org</a> I ran into an interesting problem with file type and MIME type detection of OpenDocument files. When a user uploads an ODF file to Officeshots I want to determine the MIME type myself using the <a href="http://www.php.net/manual/en/ref.fileinfo.php">PHP Fileinfo</a> extension. Windows users who do not have any ODF-supporting applications installed will report ODF files as application/zip, which is of no use to me. In addition, a malicious user could attempt to upload an executable file and report the MIME type as an ODF file.</p>
<p>On Linux, the PHP Fileinfo extension relies on the magic file that is provided by the <a href="http://packages.debian.org/lenny/file">file</a> package. The magic file contains a series of tests that can determine the file type and MIME type of a file by its contents. I found out that the magic file is incomplete for OpenDocument files. Below I will show you what is wrong with the magic file and how you can fix it.</p>
<p>If you don&#8217;t care about the technical explanation, you can <a href="#patch">skip to the fix</a> directly.</p>
<h4>The problem with magic</h4>
<p>First off, some tests. I ran these tests on Debian Lenny, but I have seen other distributions as well that have incomplete file magic support for OpenDocument Format. Here is what I get when I test an odt file using the file command.</p>
<pre>~$ file document.odt
document.odt: OpenDocument Text
~$ file --mime document.odt
document.odt: application/vnd.oasis.opendocument.text</pre>
<p>So far, so good. Both the file type description and the MIME type are right. But for any other type of OpenDocument file only the description is correct. The MIME type is not. Below I am testing an ods spreadsheet.</p>
<pre>~$ file spreadsheet.ods
spreadsheet.ods: OpenDocument Spreadsheet
~$ file --mime spreadsheet.ods
spreadsheet.ods: application/octet-stream</pre>
<p>The file type "OpenDocument Image Template" is even missing completely from the magic file. There is another problem with the magic file too. An OpenDocument file is basically a zip archive that contains several XML files. The <a href="http://docs.oasis-open.org/office/v1.1/OS/OpenDocument-v1.1.pdf">OpenDocument specification (pdf)</a> does not specify what version of zip to use. The magic file only searches for zip 2.0, which is what most ODF applications use, but not all. Some applications use version 1.0 instead and according to the ODF spec that is valid. Here is what happens when you try to detect an ODF file zipped with the zip 1.0 standard.</p>
<pre>~$ file document.odt
document.odt: Zip archive data, at least v1.0 to extract
~$ file --mime document.odt
document.odt: application/zip</pre>
<h4 id="patch">Fixing magic detection</h4>
<p>I have written a patch for the magic file that fixes all of the above problems. It removes the version test for the ODF zip container, adds the correct MIME type for all the different ODF file types and adds the missing OpenDocument Image Template. This patch is written for <tt>/usr/share/file/magic</tt> on Debian Lenny. If you want to patch your own Linux distribution then you may need to adapt it. You can <a href="http://code.officeshots.org/trac/officeshots/browser/trunk/server/patches/magic.patch">view the patch in our Officeshots Trac</a> or <a href="http://code.officeshots.org/officeshots/trunk/server/patches/magic.patch">download the patch directly from Subversion</a>.</p>
<p><strong>Update 2009-06-29:</strong> I have now also created <a href="http://www.jejik.com/files/examples/file-5.0.3-opendocument.patch">a patch</a> against the original upstream <a href="http://www.darwinsys.com/file/">file-5.0.3</a>.</p>
<p>First, make a backup of your original magic file. Then apply the patch to magic.</p>
<pre>~# cd /usr/share/file
/usr/share/file# cp magic magic.orig
/usr/share/file# patch &lt; ~/magic.patch
patching file magic</pre>
<p>After this you need to recompile the magic file. This will create magic.mgc which is the file that is actually used by the file command and the PHP Fileinfo extension.</p>
<pre>/usr/share/file# file -C magic</pre>
<p>Now your magic file will correctly identify all OpenDocument file types.</p>
<pre>~$ file --mime spreadsheet.ods
spreadsheet.ods: application/vnd.oasis.opendocument.spreadsheet</pre>
<p>And that&#8217;s all there is to it. Have fun with ODF!</p>
http://www.jejik.com/articles/2009/06/fixing_opendocument_mime_magic_on_linuxSun, 28 Jun 2009 15:12:00 CESTLone WolvesBook Review: Practical CakePHP Projectshttp://www.jejik.com/articles/2009/06/book_review_practical_cakephp_projects
s.marechal@jejik.com (Sander Marechal)
<p><a href="http://www.cakephp.org">CakePHP</a> has rapidly been gaining mindshare as a powerful and easy to use MVC framework for PHP. Mimicking Ruby on Rails, it allows developers to quickly prototype and build database driven websites and web applications. With increased popularity books usually follow. &ldquo;Practical CakePHP Projects&rdquo; by Kai Chan and John Omokore is one such book. It is aimed at advanced PHP developers who have some experience with CakePHP and builds on books like &ldquo;Beginning CakePHP&rdquo; (Apress, 2008). The book promises to show how to build practical, real-world web applications using the CakePHP framework.</p>
<p>Unfortunately &ldquo;Practical CakePHP Projects&rdquo; only partially succeeds in that. It is refreshing to see how
applications are built that are different from the proverbial &ldquo;blog&rdquo; or &ldquo;store&rdquo; example (though both
are used in the first chapters), but I find myself disagreeing often with how these applications are built. The chosen
solutions often seem to work against the framework instead of going with it.</p>
<h4>The projects</h4>
<p>After the blogging example--which seems to be the &ldquo;Hello World!&rdquo; of MVC frameworks--follow chapters that
show off e-commerce sites, Google and Yahoo maps clients and Twitter mashups. Interesting examples, but not optimally implemented.
The e-commerce site implements its own nested categories instead of using a TreeBehavior and exposes private controller methods as publicly
accessible actions. The forum example tries to shoehorn the Command Pattern onto the framework.</p>
<p>A more serious example is the dynamic
data fields in chapter 12, which allow SQL injection. Joshua Bloch said it very nicely in his
<a href="http://www.youtube.com/watch?v=aAb7hSCtvGw">TechTalk on API design</a>:</p>
<blockquote><p>Example programs should be exemplary. You should spend ten times as much time on example code as on production
code (17:05 in the video).</p></blockquote>
<p>A true statement for a very simple reason: Example code gets copied and put into production, no matter how big the disclaimers
are that you put next to them.</p>
<p>Reading the book leaves me with the feeling that the authors are trying to coax the framework into doing things it is not really suitable for. It is interesting to see how they achieve their goals but I do not have the feeling that I am a better CakePHP developer after studying their projects.</p>
<h4>Source code</h4>
<p>Accompanying the book is a downloadable archive that contains the source code for almost all the examples in the book.
A good idea, considering there are plenty of examples which are longer than usual for such books. Unfortunately, the examples do not work
out of the box. They all require a running (MySQL) database and the SQL schemas are not supplied with the examples.
They need to be copied manually from the book. A missed opportunity, especially when you consider that CakePHP can use SQLite
databases so that the applications could have run out of the box.</p>
<img src="http://www.jejik.com/images/articles/practical-cakephp-projects-256.png" class="right" />
<p>In addition to that, some of the examples contain bugs and errors
that need to be fixed before the examples will run at all. I am not talking about some edge case bugs here but clear and simple errors that pop up as soon as you try to run the projects. These errors should have easily been caught just by trying to run the examples.</p>
<h4>Conclusion</h4>
<p>Overall I expected more from this book. It has shown me a few interesting ways to (ab)use CakePHP to make it do what you
want, but I don't feel that it has helped me build better CakePHP applications. Combined with the quality of the
examples themselves, I can't give this book a higher rating than 6 out of 10.</p>
<h4>Book information</h4>
<table>
<tr><td style="width: 10em;"><strong>Title</strong></td><td>Practical CakePHP Projects</td></tr>
<tr><td><strong>Authors</strong></td><td>Kai Chan and John Omokore</td></tr>
<tr><td><strong>Publisher</strong></td><td>Apress</td></tr>
<tr><td><strong>ISBN</strong></td><td>978-1-4302-1579-3</td></tr>
<tr><td><strong>Published</strong></td><td>December 2008</td></tr>
<tr><td><strong>Pages</strong></td><td>379</td></tr>
<tr><td><strong>CD Included</strong></td><td>No</td></tr>
<tr><td><strong>Overall rating</strong></td><td>6/10</td></tr>
</table>
<p><em>This article was originally posted at <a href="http://lxer.com/module/newswire/view/122244/">LXer Linux News</a>.</em></p>
http://www.jejik.com/articles/2009/06/book_review_practical_cakephp_projectsThu, 25 Jun 2009 23:16:00 CESTLone WolvesOfficeshots.org available in closed betahttp://www.jejik.com/articles/2009/05/officeshots_org_available_in_closed_beta
s.marechal@jejik.com (Sander Marechal)
<p><a href="http://www.officeshots.org">Officeshots.org</a> has finally gone into beta this week. It took a lot more work (and time) than expected but we made it nonetheless. At the moment the beta is a closed beta, available to current contributors and members of the OpenDoc Society. But we hope to start with public, free availability within a month. Joining the OpenDoc Society is free for FOSS projects, so if you are interested in the beta, please <a href="http://www.opendocsociety.org/">join them</a>.</p>
<p>Below is the full press release.</p>
<h4>Officeshots.org available in closed beta</h4>
<p>&quot;Free webservice lets user compare office applications&quot;</p>
<p>'s Hertogenbosch/Den Haag, May 19 2009</p>
<p>The Netherlands in Open Connection and OpenDoc Society are happy to
announce the immediate availability of the beta of Officeshots.org, a
free webservice that allows users to compare the output quality of
office applications. The Officeshots project entails both an open source
service framework, and a free online service based on this framework.
The service is now in closed beta, exclusively available to members of the
international OpenDoc Society on <a href="http://www.officeshots.org">http://www.officeshots.org</a> (1). If you wish
to join the beta program you can become a member or sponsor of the OpenDoc
Society (2).</p>
<p>Officeshots.org was first announced on January 29th 2009, with the beta phase
of the Officeshots.org service scheduled to become active at the end of
February or early March. A delay in supporting rendering factories caused the
original schedule to slip but with support of NLnet, Open IT Netherlands and
Abicollab.net the project is finally ready to start the beta phase.</p>
<p>Project Lead Sander Marechal (The Lone Wolves Foundation): "At the start
of the beta phase, Officeshots.org is running a number of factories
implementing GO-OO, OpenOffice.org, Gnumeric and AbiWord. We expect to quickly
add support for a number of other factories such as Lotus Symphony, RedOffice
(Chinese), OfficeReader (an ODF viewer for Symbian smart phones) and the soon
to be released KOffice 2.0." Software vendors and user communities are
encouraged to add their office solution of choice to Officeshots.org, to make
its functionality available to a wider audience.</p>
<p>So far development work within the project has concentrated on making a
safe, distributed document rendering environment that allows buyers and
developers to test interoperability in many different applications on
many different platforms. During the closed beta the visual appearance
of the main Officeshots website will be enhanced, and a translation
framework will be added so that people can assist in translating Officeshots.org into their
native language. The beta phase is expected to last until the beginning
of June, after which the service will be available freely to anyone.</p>
<p>Officeshots will be put to the test significantly in the first ODF Plugfest
that will be held June 15/16th 2009 in The Royal Library in The Hague, where a
large number of ODF implementations (including well-known names such as
Microsoft, Google, IBM and Sun, but also upcoming players like ZCubes and
CelFrame Office) will test their interoperability on invitation by the Dutch
cabinet in the person of Dutch Minister of Foreign Trade Frank Heemskerk (3).</p>
<p>If you wish to be updated with the latest developments and get the
announcement for the public release, please consider joining our
mailinglist at the following URL:</p>
<p><a href="http://lists.opendocsociety.org/mailman/listinfo/officeshots">http://lists.opendocsociety.org/mailman/listinfo/officeshots</a></p>
<ol>
<li>If you want to log in with a digital certificate you can do so at <a href="https://www.officeshots.org">https://www.officeshots.org</a>.</li>
<li>Join at <a href="http://www.opendocsociety.org">http://www.opendocsociety.org</a>.</li>
<li><a href="http://www.odfworkshop.nl">http://www.odfworkshop.nl</a>.</li>
</ol>
http://www.jejik.com/articles/2009/05/officeshots_org_available_in_closed_betaThu, 21 May 2009 11:08:00 CESTLone WolvesOpen Source News from FOSDEM 2009 - Day 2http://www.jejik.com/articles/2009/03/open_source_news_from_fosdem_2009_-_day_2
s.marechal@jejik.com (Sander Marechal)
<p>In the weekend of 7 and 8 February, the 9th Free &amp; Open Source Software Developers' European Meeting
(FOSDEM) took place at the Université Libre de Bruxelles (ULB) in Brussels. Your editors Sander
Marechal and Hans Kwint attended the meeting to find out for you what's hot and new in the
Linux world and what might be coming your way in the near future. This is our report of the
second day covering the talks about Thunderbird 3, Debian release management, Ext4, Syslinux,
CalDAV and more. Coverage of the first day can be found in
<a href="http://www.jejik.com/articles/2009/02/open_source_news_from_fosdem_2009_-_day_1/">our previous article</a>.</p>
<center><a href="http://www.fosdem.org" style="border: none;">
<img src="http://www.jejik.com/images/articles/fosdem2009/fosdem2009.gif" alt="FOSDEM, the Free and Open Source Software Developers' European Meeting" />
</a></center>
<h4 id="toc">Table of Contents</h4>
<ul>
<li><a href="#puppet">Lightning talk: How the social networking site Hyves benefits from Puppet - Marlon de Boer</a></li>
<li><a href="#thunderbird">Thunderbird 3 - Ludovic Hirlimann and David Ascher</a></li>
<li><a href="#upstart">Upstart - Scott James Remnant</a></li>
<li><a href="#debian">Release management in Debian: Can we do better? - Frans Pop</a></li>
<li><a href="#freedroid">Lightning talk: Introducing FreeDroidRPG - Arthur Huillet</a></li>
<li><a href="#syslinux">Syslinux and the dynamic x86 boot process - H. Peter Anvin</a></li>
<li><a href="#sgx">Lightning talk: Games done good - Steven Goodwin</a></li>
<li><a href="#ext4">Ext4 - Theodore Ts'o</a></li>
<li><a href="#drupal">Automated web translation workflow for multilingual Drupal sites - Stany van Gelder</a></li>
<li><a href="#caldav">CalDAV, the open groupware protocol - Helge Heß</a></li>
<li><a href="#wrapup">Wrap up</a></li>
</ul><br />
<h4 id="puppet">Lightning talk: How the social networking site Hyves benefits from Puppet - Marlon de Boer</h4>
<p style="font-style: italic">By Hans Kwint <small>[<a href="#toc">back to top</a>]</small></p>
<p><a href="http://hyves.net/">Hyves.net</a> is one of the Netherlands' biggest social networking sites. However, not everybody may be familiar with Hyves, so to give you an idea what this means for the requirements on the servers, Marlon gave some figures: "200 million pageviews daily, reaching eighteen million pageviews per hour at busy moments". All these requests are served by 2500 servers located around the largest internet <a href="http://www.ams-ix.net/">exchange</a> in the world in Amsterdam. From time to time servers have to be replaced or new servers have to be added. The Hyves administration team is able to unpack a server, connect the wires and configure the new machine in seven minutes. Interestingly, these servers run Gentoo Linux. Great "Get the facts" titles came to my mind, such as "Read how Hyves manages 2500 servers and puts new servers online in seven minutes using <a href="http://www.gentoo.org/main/en/about.xml">Gentoo</a> Linux and <a href="http://reductivelabs.com/trac/puppet">Puppet</a>".</p>
<p>Back to Puppet: Puppet is a front-end for remote system administration tasks, written in Ruby. It also uses <a href="http://lxer.com/module/newswire/view/115938/index.html#augeas">Augeas</a>, the topic of the last talk of the previous day. Puppet uses a client-server architecture with SSL for authentication. It makes use of templates to enable quick configuration of new servers, and it also supports types and functions. Hyves now uses the PXE boot protocol in combination with Puppet to quickly configure new servers. Puppet handles, among other things, DNS, NTP, the firewall and package management. Of course, as a Gentoo user I was curious why Hyves chose Gentoo, which may not seem the most logical choice. One of the developers told me that someone else had asked exactly that question: they use it because of the ease of writing custom packages. "I write an ebuild within five minutes", he said, "something that cannot be done that quickly with Debian packages, for example".</p>
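<p>To give an idea of what this kind of configuration looks like, here is a minimal, hypothetical Puppet manifest. The resource names and template path are made up for illustration and this is not Hyves' actual configuration, but it shows the declarative style: you describe the desired state of a package, a config file and a service, and Puppet enforces it on every node:</p>

```puppet
# Hypothetical example: keep NTP installed, configured and running on a node
class ntp {
  package { 'ntp':
    ensure => installed,
  }

  file { '/etc/ntp.conf':
    content => template('ntp/ntp.conf.erb'),  # filled in per server from a template
    require => Package['ntp'],
  }

  service { 'ntp':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/ntp.conf'],  # restart the daemon when the config changes
  }
}
```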
<div class="sidebar right" style="width: 300px;"><img
src="http://www.jejik.com/images/articles/fosdem2009/fosdem2009_keysign.jpg"
style="margin-bottom: 5px;" /><small>A long line of people at the keysigning party</small></div>
<h4 id="thunderbird">Thunderbird 3 - Ludovic Hirlimann and David Ascher</h4>
<p style="font-style: italic">By Sander Marechal <small>[<a href="#toc">back to top</a>]</small></p>
<p>There was already a Q&amp;A session going on when I entered the Mozilla dev room for the
Thunderbird 3 talk. The room was packed to the brim and some people could not get in anymore. Many
points were discussed before the talk even began, the biggest one probably being Mozilla dropping
integration of the Lightning calendar from the 3.0 release, a major setback for people awaiting
Thunderbird 3.</p>
<p>The talk itself began with a short overview of the history of Thunderbird and Mozilla
Messaging. Mozilla itself has been very focused on Firefox (see also the Mozilla opening
keynote in the previous article) and Thunderbird had been pushed to the background a bit. The
spin-off of Mozilla Messaging is meant to correct that, because like the web, messaging has come under
serious threat lately.</p>
<p>Messaging as a whole (not just e-mail) is under threat from a variety of big, centralized and
closed systems, in particular Facebook. Ludovic estimated that at the current growth rate of
closed messaging networks like Facebook, soon more messages will travel over these closed networks
than over e-mail (spam excluded). Just because these messages travel over HTTP does not mean that
the messaging networks are open. Data ownership matters, as does decentralized innovation and
user-level hackability. Currently most innovation is happening in the closed centralized networks.
Mozilla Messaging is therefore not just about Thunderbird and e-mail but about messaging as a
whole.</p>
<p>For the immediate future the focus is on Thunderbird though, and Mozilla Messaging has quite a
big job ahead of it. There are thousands of open bug reports and feature requests going back
years that need to be addressed. Mozilla also wants to add new features in order to grow its
market the way that Firefox has, but that is more difficult than it seems. Thunderbird is pretty
feature-packed as it is, so in order to add new features, old ones need to be removed or pushed
into extensions to keep the Thunderbird base from becoming too bloated. Also, the
internals of Thunderbird are quite complex: changes that appear easy often turn out to
be very complex or even impossible, and (too) many of the APIs are still in C++. It has a code base
of well over 500,000 lines, only a handful of developers and 2-3 QA people to maintain
it.</p>
<p>The plan for Thunderbird 3 is to upgrade Gecko to version 1.9.1 (the same as Firefox 3.1),
upgrade the platform and make a lot of improvements to the overall user experience. The core of
Thunderbird is being reworked to take modern computing into account (cheap disk space and high
bandwidth) and do most of its work asynchronously. For example, when you delete a message it
immediately disappears, but the actual deletion from your inbox files is run at leisure in the
background. The UI is also getting an overhaul in order to provide faster access to the most used
functions and simplify account setup from a 7 page wizard to a single form with three fields. When
all this is done Thunderbird 3 will be released. Beta 2 is expected around the end of this week.</p>
<p>After that the team plans to make contributing easier with higher level (JavaScript) API, a new
indexing database called Gloda, HTML/CSS-based views which are easier to customize and a set of
extensions to experiment with even more drastic UI layout changes.</p>
<h4 id="upstart">Upstart - Scott James Remnant</h4>
<p style="font-style: italic">By Hans Kwint <small>[<a href="#toc">back to top</a>]</small></p>
<p>Before attending this presentation, I only knew Upstart as an alternative way of booting, used
by Ubuntu to speed up the boot process. However, Scott Remnant, the author of Upstart, who also works
at Canonical, says that Upstart is much more: "It's an API for processes to communicate." It has also
been used in Fedora of late, and it promises to improve the boot process by trying to
eliminate the time the computer spends doing nothing but sleep() and by avoiding race conditions.</p>
<p>The unique characteristic of Upstart is that it makes use of a simple grammar: while Q running, until P, while R start / stop S, else T, V also W, X and Y, Z or N. The capitals here represent boot processes which have or have not been started. As you may have guessed, these requirements can be combined as well. As a result, the intention is that runlevels become history. "Just tell <i>what</i> you want to start, or <i>when</i> it needs to be run", Scott explains. For example, the grammar can be used to tell a laptop what it should do, or stop doing, while it draws power from the battery instead of the mains.</p>
<p>Upstart also intends to make cron obsolete by supporting timed events, such as 'in 2 hours', '24 seconds after startup' or 'every 10 minutes while eth0 up'. These timed events allow for greater flexibility in running conditional commands, because the commands can be coupled to services that are or are not running.</p>
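<p>As a sketch of what this grammar looks like in practice: an Upstart job is a small file of event expressions rather than an init script. The job below is hypothetical and the exact stanza names vary between Upstart releases, but it shows the idea of describing a service by <i>when</i> it should run instead of by a runlevel:</p>

```
# /etc/event.d/statistics-collector (hypothetical job, illustrative syntax)
description "collect statistics while the network is up"

start on net-device-up IFACE=eth0    # run once eth0 comes up
stop on net-device-down IFACE=eth0   # and stop again when it goes away

respawn                              # restart the daemon if it dies
exec /usr/local/sbin/collect-stats
```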
<p>Scott also explained more about the road map: Upstart 0.5.2 will be available this month, and 0.10.0 is planned for June 2009 and will feature a new job syntax. After that, 1.00 will probably come sometime in September 2009; if it has not progressed far enough by then, it will be called 0.50.</p>
<h4 id="debian">Release management in Debian: Can we do better? - Frans Pop</h4>
<p style="font-style: italic">By Hans Kwint <small>[<a href="#toc">back to top</a>]</small></p>
<p>Frans Pop is not a Debian release engineer but works as the release engineer of the Debian
installer, even though he temporarily resigned from that post for a few months. In that function he is an important
part of the Debian release cycle, because if anything in Debian is broken, users will naturally
blame the installer. In the past there have been serious issues with Debian releases, leading
to broken features in the distribution. For example, when Etch was released, the Sarge
installer was broken for some time. This is a problem because some users have policies,
for example a decision to keep using old-stable for a year to a year and a half. Frans was not out to blame people personally, but he did make some suggestions on how release engineers can improve their working methods to prevent problems such as have happened in the past.</p>
<p>It can basically be summarized as follows: there has been too little public communication.
The communication between those involved with the release process happened mainly on IRC, so that
those not on IRC at the time did not know what had happened. Also, the team of release engineers lacks a thorough overview of which tasks are part of a release and how long those tasks take. For example, some of the release engineers sent out new release notices without being aware that these notices have to be translated before the release. Some less widely known parts of the release are sometimes a bit forgotten: key distribution, documentation of the upgrade path and the documentation on the Debian website. Frans pleaded for more open communication, a better understanding of how long certain tasks take and better planning of all these tasks.</p>
<div class="sidebar left" style="width: 300px;"><img
src="http://www.jejik.com/images/articles/fosdem2009/fosdem2009_freedroid.jpg"
style="margin-bottom: 5px;" /><small>Arthur Huillet showing FreeDroidRPG in action</small></div>
<h4 id="freedroid">Lightning talk: Introducing FreeDroidRPG - Arthur Huillet</h4>
<p style="font-style: italic">By Sander Marechal <small>[<a href="#toc">back to top</a>]</small></p>
<p>FreeDroidRPG is an isometric action/adventure game in the style of Diablo, starring Tux as the
protagonist against the evil robots running the MegaSys operating system, a poorly
secured operating system that nevertheless runs on the majority of robots. Arthur took the
audience on a quick tour of the features of the game, such as fighting, magic, inventory and
the ability to hack the OS of the droids so you can reprogram them and turn them to your side.</p>
<p>The game is purely single player and no multiplayer options are planned. Arthur and his team
want the focus to be on the storyline and character development. The base code is pretty much done, but
they are looking for people who can add characters, quests and levels in order to increase the
game's length, because at the moment it only takes about six hours to finish.</p>
<h4 id="syslinux">Syslinux and the dynamic x86 boot process - H. Peter Anvin</h4>
<p style="font-style: italic">By Hans Kwint <small>[<a href="#toc">back to top</a>]</small></p>
<p><a href="http://syslinux.zytor.com/wiki/index.php/The_Syslinux_Project">SysLinux</a> is a
lightweight dynamic bootloader which can be used to boot various operating systems in different
ways. Peter developed it because, when working late into the night, he became incensed at not being able to boot something. Recently, SysLinux gained gPXE support, and it can also boot over HTTP using Apache and CGI. Peter demonstrated this by booting his virtual computer in Brussels from an Apache server in California.</p>
<p>SysLinux has a modular design. It consists of the user interface, diagnostic tools, policies
and filesystem modules. The filesystem modules can boot binary formats. For example, recently
someone asked Peter to support the Microsoft SDI format. Peter said: "I didn't know that format,
but it turned out to be some ramdisk using a Windows kernel". SysLinux uses a system called
"shuffle": it is fed parameters defining which parts of a boot file belong at which place in the
binary boot image. Because of this system, adding new formats is quite easy. Supporting the
Microsoft SDI format only took 139 lines of code, most of which was error checking. SysLinux also
comes with policy modules. For example, these can say "Boot kernel X when on a 32 bit system, boot
kernel Y when on a 64 bit system, else boot kernel Z". These modules also enable some quite
sophisticated uses. One of them is probing the PCI bus when booting, mapping the devices found to
kernel modules, and building an initramfs with all the needed modules. The astonishing thing is that all of this can be done on the fly!</p>
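<p>Policy decisions of the 32/64-bit kind are typically expressed in the Syslinux configuration itself via a COM32 module. The fragment below is a sketch based on the ifcpu64.c32 module that ships with Syslinux; the label and file names are made up for illustration:</p>

```
# syslinux.cfg (illustrative fragment; label and file names are hypothetical)
DEFAULT pick-kernel

LABEL pick-kernel
  COM32 ifcpu64.c32
  # boot linux64 on a 64-bit CPU, otherwise fall back to linux32
  APPEND linux64 -- linux32

LABEL linux64
  KERNEL vmlinuz64
  APPEND initrd=initrd64.img

LABEL linux32
  KERNEL vmlinuz32
  APPEND initrd=initrd32.img
```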
<p>Syslinux is a work in progress. As a result of a historical error, much of the filesystem code has been written in assembly. Work is being done to rewrite these parts, among other things to make sure it is easy to add support for btrfs in the near future. Also, Peter intends SysLinux to gain a Lua interpreter in the future, to allow simple-to-write Lua scripts to be used instead of the current modules. Lua was chosen because it is small and clean. Another point of attention is EFI support. For all of these things, Syslinux needs feedback from users and distributions. Currently it also lacks newbie-friendly documentation.</p>
<h4 id="sgx">Lightning talk: Games done good - Steven Goodwin</h4>
<p style="font-style: italic">By Sander Marechal <small>[<a href="#toc">back to top</a>]</small></p>
<p>Steven Goodwin is the main developer of the SGX Engine, a 3D graphics engine for games and the
author of "Cross-Platform Game Programming" and "The Game Developers' Open Source Handbook". He
started off by describing a fundamental disconnect between the people who write libraries for
use in games and the game developers who actually write the games, which causes game developers to
re-use far fewer libraries than most non-game projects do. This is usually because the developers of
the library make far too many assumptions, and game developers are not going to try too hard
to adapt. They simply try out a library, and if it doesn't work they throw it out and try another
one. If none of them work, they write it from scratch, again.</p>
<p>Library developers need to find common ground with the game developers, but what common ground
is there? Steven gave some nice examples to show that there is far less common ground than
expected. For example, the size of an int is different across platforms. The standard library is
different across platforms. Even GCC is different across platforms with many options not being
available everywhere and to make matters worse, people don't even agree on the definition of an
"engine" or "object".</p>
<p>Libraries for games need to be built in a different way; everything needs to be abstracted and
no assumptions must be made. For example, it is nearly impossible to write an input toolkit for
games. What is a "click" when you're running on a Wii with four controllers connected? What is a
cursor on a touch screen? In the same vein, a graphics engine should not expect to find a
graphics card on a machine. Just think of a World of Warcraft server.</p>
<p>SGX has been built in a way that is truly cross platform. Everything is abstracted and
organized into loosely coupled modules ranging from memory to CRC, math, geometry, graphics,
physics, sound and more. It is used by various games (although Steven was not allowed to say which
ones) and should run everywhere, from your PC to your Wii, XBox, server and hand-held device.</p>
<div class="sidebar right" style="width: 300px;"><img
src="http://www.jejik.com/images/articles/fosdem2009/fosdem2009_ext4.jpg"
style="margin-bottom: 5px;" /><small>A large crowd gathered to watch Theodore Ts'o speak
on Ext4</small></div>
<h4 id="ext4">Ext4 - Theodore Ts'o</h4>
<p style="font-style: italic">By Sander Marechal <small>[<a href="#toc">back to top</a>]</small></p>
<p>Theodore started his presentation with an apology. He had prepared a very nice demonstration of
ext4 on his laptop but it had been stolen at the train station. Luckily he did have a backup of his
presentation, just not the demo.</p>
<p>He gave a quick overview about the ext3 filesystem and what is so good about it; ext3 is widely
used and is pretty much the de facto Linux filesystem. It also has a very diverse developer
community with contributors from all the major distributions. That is a bigger point than you would
at first assume, because until recently Red Hat did not officially support the XFS filesystem and it did not
have any of its own developers working on it. JFS is a great filesystem but the fact that pretty
much all contributors are IBM employees has likely contributed to JFS' lack of success. Big
distributions want someone who knows the ins and outs on their own team before they can support
something as important as a filesystem and ext3 developers are everywhere.</p>
<p>The ext3 filesystem has its fair share of problems which ext4 should fix. Currently, ext3 filesystems can only be up to
16 TB in size and there is a limit of 32,000 subdirectories per directory. The resolution of file timestamps
is only one second, and there are performance problems. Ext4 fixes all of these issues.</p>
<p>Ext4 is not a new filesystem, just like ext3 was not a new filesystem. Ext3 is just ext2 with a
number of new features added, such as journaling, and ext4 is simply ext3 with even more features,
such as extents. Google has even contributed a patch that you can use to mount ext2 filesystems as
ext4. The reason: Google is still using ext2 because it doesn't believe in journaling. When
something goes wrong on one of the machines at Google it is simply easier to wipe the system and
re-flash it from another node than it is to recover it. But they did want to make use of some of
the new features that ext4 adds.</p>
<p>Ext4 isn't all good news though, the new allocator that it uses is likely to expose old bugs in
software running on top of it. With ext3, applications were pretty much guaranteed that data was
actually written to disk about 5 seconds after a write call. This was never official but simply
resulted from the way ext3 was implemented. The new allocator used for ext4 means that this can
take between 30 seconds and 5 minutes or more if you are running in laptop mode. It exposes a lot
of applications that forget to call fsync() to force data to the disk but nevertheless assume that
it has been written. Two of the major culprits appear to be Gnome and KDE, which each write hundreds
of dotfiles to a user's home directory. A sudden crash of the machine means that all these files
will appear to have disappeared. Users think that the filesystem is to blame but in reality it is
the applications.</p>
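<p>The defensive pattern applications are supposed to follow is small. Here is a sketch in Python (the file and key names are made up) of writing a dotfile so that after a crash you get either the old contents or the new ones, never an empty file:</p>

```python
import os
import tempfile

def durable_write(path, data):
    """Write data to path and force it out of the page cache onto the disk."""
    # Write to a temporary file next to the target first...
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        f.write(data)
        f.flush()              # push the userspace buffer to the kernel
        os.fsync(f.fileno())   # ask the kernel to flush the data to the disk
    # ...then atomically replace the old file with the new one.
    os.rename(tmp, path)

directory = tempfile.mkdtemp()
config = os.path.join(directory, ".appconfig")
durable_write(config, "theme=dark\n")
```

<p>The rename only happens after fsync() returns, so a crash in between leaves the previous version of the file intact.</p>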
<p>The situation appears to be a bit tricky to solve because the last thing you want to do is call
fsync() too often because that would force your hard drive out of power saving mode. One of the
possible solutions under investigation is a sort of callback system whereby an application can be
notified when data has actually reached the platters of your hard drive.</p>
<h4 id="drupal">Automated web translation workflow for multilingual Drupal sites - Stany van Gelder</h4>
<p style="font-style: italic">By Hans Kwint <small>[<a href="#toc">back to top</a>]</small></p>
<p>As of Drupal 6, Connexion offers an automated web translation workflow <a
href="http://www.connexion.eu/en/wcms/automated-translation/">module</a> for Drupal. Of
course there are machine translators, but normally these are not good enough; a human translator
will still be necessary. However, much of the work that a human translator does can be automated.
The AWTW module is meant to do just that; this process is called Computer Aided Translation, or
simply CAT. The module is mainly aimed at automating repetitive tasks, but it also has to take into account
local differences, such as websites for different countries that have different contact persons. Stany
presented a demo of how these things can be filled in using the XML editor Déjavu. He also explained
how AWTW can save several days in the translation process. In the future this module will also be able to map internal links, making sure that links in one translation point to the same topic in that language.</p>
<div class="sidebar left" style="width: 300px;"><img
src="http://www.jejik.com/images/articles/fosdem2009/fosdem2009_caldav.jpg"
style="margin-bottom: 5px;" /><small>Helge Heß explaining the ins and outs of CalDAV</small></div>
<h4 id="caldav">CalDAV, the open groupware protocol - Helge Heß</h4>
<p style="font-style: italic">By Sander Marechal <small>[<a href="#toc">back to top</a>]</small></p>
<p>CalDAV is a relatively new standard that allows users to store and retrieve calendar events
on a central server. Helge noted that CalDAV is just a transport; the actual data formats used, iCal and vCard,
are much older. CalDAV is built on top of several well-known technologies such as
HTTP (REST-style), WebDAV and WebDAV ACL. CalDAV itself is relatively simple, but the
underpinnings of WebDAV and especially WebDAV ACL can make it quite complex. That is why a new
protocol called GroupDAV has emerged in the open source world [really, this talk should have been
titled GroupDAV and not CalDAV -- Sander].</p>
<p>GroupDAV is a subset of CalDAV, CardDAV and WebDAV. Helge recommended that anyone who tries to
implement CalDAV first implement proper GroupDAV support, because any GroupDAV client is capable of
talking to a CalDAV server; they are completely compatible. Full CalDAV clients simply have a couple of
extra functions, like REPORT, which make some types of queries easier to do. One of the more interesting design goals
behind GroupDAV is that it is completely compatible with Apache + mod_webdav. That means you
do not need a special server to store your groupware data.</p>
<p>Helge then went into a more technical explanation of the protocols and how you can implement
them, demonstrating various things with a simple command line client that showed the HTTP requests
and responses between the server and client. He finished with an overview of existing server and
client implementations. DaviCal (PHP) and CalendarServer (Python) are pretty complete CalDAV
server implementations. On the client side the choice is wide. Evolution, Mozilla, Funambol,
Mulberry, Chandler and even MS-Outlook (using the OpenConnector) are all able to speak
CalDAV.</p>
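<p>To give a flavour of what CalDAV looks like on the wire, here is a sketch of a simple calendar-query REPORT, the request type mentioned above; the host and calendar path are made up. The body is WebDAV-style XML asking for all VEVENT components in a collection:</p>

```
REPORT /calendars/users/alice/calendar/ HTTP/1.1
Host: cal.example.com
Depth: 1
Content-Type: application/xml; charset=utf-8

<?xml version="1.0" encoding="utf-8"?>
<C:calendar-query xmlns:D="DAV:"
                  xmlns:C="urn:ietf:params:xml:ns:caldav">
  <D:prop>
    <D:getetag/>
    <C:calendar-data/>
  </D:prop>
  <C:filter>
    <C:comp-filter name="VCALENDAR">
      <C:comp-filter name="VEVENT"/>
    </C:comp-filter>
  </C:filter>
</C:calendar-query>
```

<p>The server answers with a multi-status response containing the matching events as iCal data; a plain GroupDAV client would get by with just PROPFIND, GET and PUT on the same collection.</p>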
<h4 id="wrapup">Wrap up</h4>
<p style="font-style: italic">By Sander Marechal <small>[<a href="#toc">back to top</a>]</small></p>
<p>All in all I think FOSDEM 2009 has been a great success. The talks were great, the people
friendly and the atmosphere was buzzing. The LXer editors would like to
congratulate the FOSDEM 2009 staff on a job well done. We will certainly be there next year for
FOSDEM 2010.</p>
<p><em>This article was originally posted on <a href="http://lxer.com/module/newswire/view/116126/index.html">LXer Linux News</a>.</em></p>
http://www.jejik.com/articles/2009/03/open_source_news_from_fosdem_2009_-_day_2Tue, 03 Mar 2009 21:38:00 CETLone WolvesOpen Source News from FOSDEM 2009 - Day 1http://www.jejik.com/articles/2009/02/open_source_news_from_fosdem_2009_-_day_1
s.marechal@jejik.com (Sander Marechal)
<p>A week ago, the 9th Free &amp; Open Source Software Developers' European Meeting (FOSDEM) took place at the Université
Libre de Bruxelles (ULB) in Brussels. Your editors Sander Marechal and Hans Kwint attended the meeting to find
out for you what's hot and new in the Linux world and what might be coming your way in the near future.</p>
<p>Here is the blow-by-blow account of the first day, with talks about Mozilla's future, the role of Debian, two OSI
talks, reverse engineering and much, much more.</p>
<center><a href="http://www.fosdem.org" style="border: none;">
<img src="http://www.jejik.com/images/articles/fosdem2009/fosdem2009.gif" alt="FOSDEM, the Free and Open Source Software Developers' European Meeting" />
</a></center>
<h4 id="toc">Table of Contents</h4>
<ul>
<li><a href="#opening">Opening</a></li>
<li><a href="#sun">Long life, success and other problems - Simon Phipps (Sun)</a></li>
<li><a href="#mozilla">Free. Open. Future? - Mark Surman (Mozilla)</a></li>
<li><a href="#debian">The role of Debian in Free Software - Bdale Garbee (HP)</a></li>
<li><a href="#osi">OSI: Recent activities and future - Michael Tiemann</a></li>
<li><a href="#xrandr">XRandR 1.3 - Matthias Hopf</a></li>
<li><a href="#defenders">Lightning talk: Linux defenders a.k.a. Open Invention Network - Keith Bergelt</a></li>
<li><a href="#smallmail">Lightning talk: Private mail with SmallMail - Peter Roozemaal</a></li>
<li><a href="#osi-discuss">Public meeting of the Open Source Initiative - Michael Tiemann</a></li>
<li><a href="#reverse">Reverse engineering proprietary protocols - Rob Savoye</a></li>
<li><a href="#nouveau">Nouveau status update - Stéphane Marchesin</a></li>
<li><a href="#exherbo">10 cool things about Exherbo - Bryan Østergaard</a></li>
<li><a href="#lxde">Lightning talk: LXDE. Lighter, Faster, Less Resource Hungry - Mario Behling</a></li>
<li><a href="#camelot">Lightning talk: Camelot. Building desktop apps at warp speed - Erik Janssens</a></li>
<li><a href="#augeas">Augeas - Raphaël Pinson</a></li>
</ul><br />
<div class="sidebar left" style="width: 300px;"><img
src="http://www.jejik.com/images/articles/fosdem2009/fosdem2009-entrance.jpg"
style="margin-bottom: 5px;" /><small>The entrance of FOSDEM at the ULB</small></div>
<h4 id="opening">Opening</h4>
<p style="font-style: italic">By Sander Marechal <small>[<a href="#toc">back to top</a>]</small></p>
<p>Shortly after 10 AM FOSDEM was opened with a short recap of the beer event on the night before
and the infamous FOSDEM dance. Several new records were set this year during this event. Over 750
people showed up and the bill was close to 10,000 Euro, even though Google had sponsored some free
beer. They had designed a new way to make sure everyone could get beer and it supposedly worked
much better than last year.</p>
<h4 id="sun">Long life, success and other problems - Simon Phipps (Sun)</h4>
<p style="font-style: italic">By Sander Marechal <small>[<a href="#toc">back to top</a>]</small></p>
<p>Simon Phipps of Sun appeared on the stage before the scheduled opening keynote. He told the audience
that Sun has been struggling with some old code. The ONC-RPC code is over 29 years old and thereby
predates the GPL and BSD licenses. The code goes back to the days when less attention was paid to
licenses. As it turned out, the always diligent Debian people had discovered back in 2002 that
<a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=181493">the license was not DFSG
free</a>. Sun worked hard over the last six years to resolve the issue, but it is surprisingly
hard to contact all the contributors from 1982 and before in order to get their approval for a
change of license.</p>
<p>The process stagnated a bit over the years, until Debian threatened to take out the ONC-RPC
code, which in effect meant taking out glibc altogether. But Simon was happy to announce that
Sun had finally been able to contact all the authors and get approval for a license change. Since
Thursday February 5th the ONC-RPC has been relicensed and is now officially free.</p>
<div class="sidebar right" style="width: 300px;"><img
src="http://www.jejik.com/images/articles/fosdem2009/fosdem2009-mozilla.jpg"
style="margin-bottom: 5px;" /><small>Mark Surman about Mozilla</small></div>
<h4 id="mozilla">Free. Open. Future? - Mark Surman (Mozilla)</h4>
<p style="font-style: italic">By Sander Marechal <small>[<a href="#toc">back to top</a>]</small></p>
<p>Mark Surman's opening keynote was an interesting insight into the workings of Mozilla. The
official mission statement of Mozilla states &ldquo;To guard the open nature of the
internet&rdquo;, but what does this mean for Mozilla's future? How far can it go and what
comes after that?</p>
<p>The first question was easily answered: FOSS can go very far. Back in 2003 the web was in
danger. Microsoft's Internet Explorer was close to a 99.98% market share. Many websites were
designed for Internet Explorer only, and many sites were Flash-only. In 2008 the situation had much
improved: IE hit an all-time low of 68% market share and Firefox alone holds a share of over 22%.
Even better, there are now many other browsers out there that are being used by people and most of
these are Free Software. In addition most of the websites now support all browsers and the use of
flash for complete websites has decreased, although your editor thinks that Googlebot is probably
more responsible for that than the success of alternative browsers.</p>
<p>The main point is that users are taking back control over their web experience. Not just by
using alternative browsers but also by using all kinds of browser extensions to tune the web to
their needs.</p>
<p>So what is next? Mark predicts that in 2009 FOSS goes big on mobile and that there is an epic
battle ahead. The mobile web is in a far worse shape than the normal web was back in 2003 and a
lot needs to be done to open it up. The hardware needs to be opened up so people can put Free software
stacks on them. The web needs to be opened because much like in 2003 the mobile apps and sites are
targeted at specific devices and not at open standards. The cloud needs to be opened up because
mobile apps tend to be cloud apps. And to top it off, the pricing structure needs to be changed
and users need to be given more freedom (think contracts and SIM locks) so that they can do
whatever they want without price discrimination.</p>
<p>That's a tall order that Mozilla has given itself, but it needs to prevail on all those
points. Lose one and the mobile web will not be open.</p>
<div class="sidebar left" style="width: 300px;"><img
src="http://www.jejik.com/images/articles/fosdem2009/fosdem2009-debian.jpg"
style="margin-bottom: 5px;" /><small>Bdale showing the increase of Debian developers over
the years</small></div>
<h4 id="debian">The role of Debian in Free Software - Bdale Garbee (HP)</h4>
<p style="font-style: italic">By Sander Marechal <small>[<a href="#toc">back to top</a>]</small></p>
<p>Bdale Garbee is a former Debian Project Leader and currently works for HP as Linux CTO. His
talk was an interesting overview of the history of the Debian project and how it has grown over
the years, especially how it has grown around a common set of values. Other things such as the
Debian Manifesto and the DFSG are derived from those values. Most importantly he said that Debian is
not a Linux distribution but an association of people committed to building a distro. The distro
itself is just the product that comes out of it.</p>
<p>There are several reasons why Debian matters to the Free Software community as a whole. Debian
is very focused on freedom and it has an enormous number of packages in its main
repository. Many projects use these packages, not just derivatives of the Debian distribution.
It is also a very stable and functional development community. Some people see
Debian as a project full of internal conflicts and flamewars, but in reality this is not the case.
The noise is generated by a small but vocal group of people within the project. The vast but
silent majority just keeps on working.</p>
<p>Bdale said that Debian has contemplated using a Code of Conduct similar to what Ubuntu is
using, but it is very hard to integrate something like that into the current structure of Debian.
He did recommend that anyone who wants to start their own Free Software project use such a Code of
Conduct. The Debian Manifesto and Free Software Guidelines affect the project from a technical
point of view. A Code of Conduct does the same on a social level. A good Free Software project
needs both.</p>
<h4 id="osi">OSI: Recent activities and future - Michael Tiemann</h4>
<p style="font-style: italic">By Hans Kwint <small>[<a href="#toc">back to top</a>]</small></p>
<p>As I arrived late at the <a href="http://opensource.org/">OSI</a>-talk, I was only able to attend the last few minutes of the meeting.
However, the alarming note given there was quite interesting: The youth of tomorrow - roughly
referring to those younger than 18 - knows less and less about free software. Sure, when asked
about free software they name Linux, but they are not able to explain how Linux is different from
other software, apart from technical and cosmetic details.</p>
<p>Therefore, one of the main tasks of OSI will be educating children about open source and free
software. For example, in India nowadays teachers are being taught about FOSS, but in the US the
knowledge about FOSS is terrifyingly absent. The latter is also supported by the recent example of
<a href="http://linuxlock.blogspot.com/2008/12/linux-stop-holding-our-kids-back.html">Karen</a>,
who forbade her children to use Linux, saying "it's illegal".</p>
<p>The OSI is better known for its <a href="http://opensource.org/licenses">approval</a> of open source
licenses. One of the other things the OSI is working on is <a
href="http://opensource.org/licenses/category">categorizing</a> the available licenses. The
way the Creative Commons licenses are <a
href="http://creativecommons.org/about/licenses/">categorized</a> in a grid may serve as
an example here. Another task that the OSI will take on is teaching corporations why license
proliferation is bad.</p>
<div class="sidebar right" style="width: 300px;"><img
src="http://www.jejik.com/images/articles/fosdem2009/fosdem2009-xrandr.jpg"
style="margin-bottom: 5px;" /><small>Fancy effects with transformations</small></div>
<h4 id="xrandr">XRandR 1.3 - Matthias Hopf</h4>
<p style="font-style: italic">By Sander Marechal <small>[<a href="#toc">back to top</a>]</small></p>
<p>Matthias started his talk with an overview of the things that XRandR 1.2 currently lacks. For
example, querying the state of a display currently involves probing it which causes the screen to
go temporarily black. XRandR 1.2 also lacks the ability to do desktop panning (moving the view when
your mouse cursor hits the edge of the screen) and has trouble with display types. Currently it
has to parse the name of the display (like LVDS) and figure out the display type from that.</p>
<p>XRandR 1.3 will solve all of the above problems and add a few other nice features. It adds
mandatory property sets to all displays so much more information about displays can be provided by
the driver in a uniform fashion. Panning is fully supported, although there is still a problem with
dual-head setups. This problem is not technical but a matter of behaviour. What should the display
do when you hit the edge of the screen? Move the view of one display? Both? What happens when
the displays are of a different size? And what happens when the displays are transformed?</p>
<p>Transformations are the other new feature that XRandR 1.3 adds. They make it easy to rotate,
flip and scale the display image. It also adds the ability to do keystone correction, which means
deforming the image by moving its corners. You can use this, for example, to correct the image
of a projector that projects at an angle. I asked Matthias how this works in combination with
compositing, which can also perform these kinds of transformations. Matthias answered that it
works transparently, as it should, but when compositing is available it is better to use that
instead of the XRandR transformations, to prevent slowdowns due to the extra framebuffer copies
that need to be stored.</p>
<h4 id="defenders">Lightning talk: Linux defenders a.k.a. Open Invention Network - Keith Bergelt</h4>
<p style="font-style: italic">By Hans Kwint <small>[<a href="#toc">back to top</a>]</small></p>
<p>One of the threats to FOSS is software patents, and especially patent trolls, which are a
hindrance to putting open source into use. Therefore, some patent <a
href="http://www.openinventionnetwork.com/about_members.php">heavyweights</a> founded the
<a href="http://www.openinventionnetwork.com/">Open Invention Network</a> at the end of 2005. As one might expect, they are not against software patents, says
Keith Bergelt, and in fact they try to stay away from the religious fight about software patents
as much as possible. They manage a pool of patents meant to defend Linux if necessary. However,
they can do more for FOSS. They also provide one-on-one assistance to free software
developers or users who are being threatened by those exerting patent rights against
them. For example, they can talk to patent trolls to convince them to stay away from FOSS
and choose another victim. Those being threatened are encouraged to report the attack
to <a href="http://openinventionnetwork.com/linux911.php">LinuxDefenders 911</a>.</p>
<p>Lately, they have been trying to work with the community to search for prior art for existing
patents which might threaten FOSS. If prior art is found, they can take it to the USPTO and try
to invalidate that particular patent. They also hope community members will write
defensive publications of their innovations, which might serve as prior art
later on. These publications will be put into a database, and if necessary
<a href="http://linuxdefenders.org/">LinuxDefenders</a> will make sure the USPTO receives the right documents.</p>
<p>At this moment, Linux Defenders focuses mainly on patents around mobile platforms, because
that's currently a hot issue.</p>
<div class="sidebar left" style="width: 300px;"><img
src="http://www.jejik.com/images/articles/fosdem2009/fosdem2009-signs.jpg"
style="margin-bottom: 5px;" /><small>So much to see, so little time</small></div>
<h4 id="smallmail">Lightning talk: Private mail with SmallMail - Peter Roozemaal</h4>
<p style="font-style: italic">By Hans Kwint <small>[<a href="#toc">back to top</a>]</small></p>
<p>Because of the <a
href="http://en.wikipedia.org/wiki/Telecommunications_data_retention#Data_retention_in_the_European_Union">data
retention directive</a> passed by the EU in 2006, a group of privacy-concerned Dutch
citizens formed the counterpart of Big Brother and named it <a
href="http://smallsister.org/">Small Sister</a>. The project's aim is to protect the
privacy of ordinary citizens. Lately, they started a new private e-mail system called
Smallmail, intended to keep your e-mail secure. There is already PGP, but Peter Roozemaal
stresses that it is not enough. <a href="http://podcast.smallsister.org/downloads/">Smallmail</a> supports strong privacy: the headers
are also encrypted. One of the other goals is to hide even the sheer <i>existence</i> of communication, which is why a
server doesn't even keep track of <i>when</i> an e-mail is received. Smallmail is designed in such
a way that anyone can run it as a server without too much hassle. One perceived problem is the
distribution of the very long mail addresses, which include the crypto keys. Small Sister hopes to
tackle this problem by using vCards: you download the vCard of the person you are
mailing before starting the actual secure communication.</p>
<h4 id="osi-discuss">Public meeting of the Open Source Initiative</h4>
<p style="font-style: italic">By Sander Marechal <small>[<a href="#toc">back to top</a>]</small></p>
<p>I walked in halfway through the public meeting because I first attended the XRandR
presentation. It was structured very differently from any of the other talks I attended:
a very open and lively discussion about the various focus areas that the OSI tries to cover.
Many people were contributing and discussing ideas, which was fun to watch.</p>
<p>Unfortunately the end of the meeting left me with a bitter taste. The last point of
discussion was how the Free Software Foundation and the Open Source Initiative can cooperate better.
Instead of an open discussion like the previous topic, this mainly ended up in finger pointing by
the OSI speakers. In their opinion the OSI has made every effort to cooperate but the FSF refuses
to do so. Not a good way to end an otherwise nice discussion.</p>
<div class="sidebar right" style="width: 300px;"><img
src="http://www.jejik.com/images/articles/fosdem2009/fosdem2009-books.jpg"
style="margin-bottom: 5px;" /><small>It was always busy at O'Reillys book stand</small></div>
<h4 id="reverse">Reverse engineering proprietary protocols - Rob Savoye</h4>
<p style="font-style: italic">By Hans Kwint <small>[<a href="#toc">back to top</a>]</small></p>
<p>Lately, a group of reverse engineers has been working to make a free version of Adobe®'s Flash®
available. This version is currently known as Gnash. In April '08, Gnash <a
href="http://www.linux.com/feature/130842">became part</a> of a broader project called <a
href="http://www.openmedianow.org/">OpenMedia Now Foundation</a>.</p>
<p>One of the most tedious parts of making Gnash is reverse engineering Adobe®'s proprietary
Real Time Messaging Protocol. Another option would be disassembling the Flash® binaries, but
that's not legal, says Rob Savoye. Recently Adobe® made the specification of RTMP available,
but using that specification to make your own implementation is forbidden by the EULA. It's also
quite useless, because it merely specifies what had already been reverse engineered.</p>
<p>The rest of Rob's talk focused mainly on the technical aspects of reverse engineering. The tools
used to reverse engineer network protocols like RTMP are the old-fashioned tcpdump and ngrep, and Rob
has also started using Wireshark for its nice GUI. To display data, the GNU core utility od (octal
dump) is used, but the debugger GDB can also come in quite handy. For hex editing the best tools
are ghex2 and Emacs; Vim is less well suited, though according to a member of the audience that
also depends on the modules used.</p>
<p>"Once you have these tools, you start looking at the hex code until it makes sense, which
can take several weeks" according to Rob. He sees it as the perfect task for a rainy day. "Another
important ingredient is a relaxing and quiet environment", something Rob definitely has, since he
lives up in the hills of Colorado.</p>
<p>Once you have the right tools, reverse engineering starts with identifying and
stripping the ASCII characters, and then trying to understand the header. Because Rob deals with
network protocols most of the time, a header is almost always present. After you have found the
length of the header and understand it, you can try to change bytes and see what happens.
A lot of code doesn't work and has to be thrown away, and this process repeats itself
endlessly until you can make sense of the hex code.</p>
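<p>The first step Rob describes, separating the embedded ASCII strings from the opaque bytes, is easy to sketch. Below is a small generic illustration in Python (not Gnash's actual tooling): a hexdump formatter in the style of od, plus a strings(1)-style extractor, run on a made-up packet.</p>

```python
def hexdump(data: bytes, width: int = 16) -> str:
    """Render bytes as offset + hex + printable-ASCII columns, od/hexdump style."""
    lines = []
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hexpart = " ".join(f"{b:02x}" for b in chunk)
        # Printable ASCII is shown as-is; everything else becomes a dot
        asciipart = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{offset:08x}  {hexpart:<{width * 3}} {asciipart}")
    return "\n".join(lines)

def ascii_strings(data: bytes, minlen: int = 4) -> list:
    """Extract runs of printable ASCII, like the strings(1) utility."""
    runs, current = [], []
    for b in data:
        if 32 <= b < 127:
            current.append(chr(b))
        else:
            if len(current) >= minlen:
                runs.append("".join(current))
            current = []
    if len(current) >= minlen:
        runs.append("".join(current))
    return runs

# A made-up packet: a few opaque header bytes followed by an ASCII command name
packet = bytes([0x03, 0x00, 0x00, 0x01]) + b"connect" + bytes([0x00, 0x02])
print(hexdump(packet))
print(ascii_strings(packet))
```

<p>Spotting a string like &ldquo;connect&rdquo; in the dump gives you a first anchor; everything before it is a candidate header whose bytes you can then start changing.</p>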
<p>Questions from the audience included whether it is the same as reverse engineering hardware to make
drivers (which Rob deems harder) and reverse engineering file formats (which Rob sees as
different). Inevitably, people also wanted to know how to debug encrypted protocols, but Rob can't
talk about that because of the DMCA and his wish to return to the United States. Moreover, it's not
something he's currently working on, though friends of his plan to do so.</p>
<h4 id="nouveau">Nouveau status update - Stéphane Marchesin</h4>
<p style="font-style: italic">By Sander Marechal <small>[<a href="#toc">back to top</a>]</small></p>
<p>Next up in the X.org dev room was the status update of Nouveau, the project that aims to create
Free Software 2D and 3D drivers for Nvidia cards. It was an entertaining talk with various
interesting anecdotes spliced in. For example, did you know that Nvidia cards are pretty much
OpenGL at the hardware level? Most of the variable ranges and constants that OpenGL provides map
to the hardware in a 1:1 fashion.</p>
<p>The news on the 2D front is positive: it mostly works today which is impressive since Nouveau
is targeting 10 years worth of Nvidia hardware. That means even old cards (such as nv04 cards)
will work on new desktops.</p>
<p>The 3D side is not that advanced yet. Pretty much all cards now have some form of Gallium-based
driver, but in itself that doesn't say a whole lot. Much work is needed to get 3D working
properly. The nv30 and nv40 chipsets are mostly there, but nv04, nv20 and nv50 need much work. From
there on the drivers will be moved into the Gallium mainline, which in turn will be incorporated
into the Mesa mainline.</p>
<p>Video decoding has also made progress, thanks to the Google Summer of Code project that
resulted in hardware independent decoding at the Gallium level. This should be supported on most
newer cards with nv30 chipsets or better although at the moment it just does H.264 decoding.
Nvidia cards have come with fixed video decoding pipelines since nv17 but the differences between
all the cards are very big so it may not be worth the development effort of implementing this just
for cards older than nv30.</p>
<p>The Nouveau team plans to use TTM and GEM for memory management and to stabilize the
kernel API. Once that is done they hope for a 2009 release, which would mean that the old nv driver
no longer has to be used.</p>
<div class="sidebar left" style="width: 300px;"><img
src="http://www.jejik.com/images/articles/fosdem2009/fosdem2009-crowd.jpg"
style="margin-bottom: 5px;" /><small>A busy crowd during the keynote speeches</small></div>
<h4 id="exherbo">10 cool things about Exherbo - Bryan Østergaard</h4>
<p style="font-style: italic">By Hans Kwint <small>[<a href="#toc">back to top</a>]</small></p>
<p><a href="http://www.exherbo.org/">Exherbo</a> is a new source-based Linux distribution, created because Bryan Østergaard wanted to do things
that couldn't be done in Gentoo. The name Exherbo refers to 'from source', because it's a source-based
distro and because the developers started almost from nothing. It's quite different from
other distributions because at this time it doesn't center on end users, and it also doesn't want to
attract too many developers. Bryan says: "If there are like 100 developers or so, they will only
end up wanting different things which cannot be done at the same time". Therefore, it's better
to have fewer developers who all want basically the same things.</p>
<p>One of the most important features of Exherbo is better repository handling, which is done
using Paludis. Exherbo also has virtual repositories like 'unwritten' and 'unavailable'. Bryan
showed us a list of about twenty repositories his system is currently able to use. Exherbo also has
mechanisms to provide sane metadata for packages.</p>
<p>Another interesting possibility Exherbo offers is user authorization by the package manager. It
almost treats users and groups as packages, and can show which packages would use a certain user. One
of the advantages of this new package management is that it will be easier for unprivileged users
to do the things they want to do without having to use su.</p>
<p>Exherbo has been rough around the edges in the recent past. However, Bryan says: "I have been using it for the
last few months as my main system and it didn't break". As a Gentoo user, your editor would say
this is definitely a distribution to keep track of.</p>
<h4 id="lxde">Lightning talk: LXDE. Lighter, Faster, Less Resource Hungry - Mario Behling</h4>
<p style="font-style: italic">By Sander Marechal <small>[<a href="#toc">back to top</a>]</small></p>
<p>Lightning talks are very short (10-15 minutes) presentations about a certain subject, and the
LXDE presentation was certainly lightning fast. They crammed an hour's worth of LXDE overview into
their 15 minute presentation, had time to introduce the community and even left some spare minutes
for questions to boot.</p>
<p>The most striking thing about LXDE isn't the project itself but the community. Where many
projects lament the lack of Asian members the LXDE community is truly international. They have a
lot of members from Taiwan, China, Japan and many other Asian countries as well as a strong
representation in western countries.</p>
<h4 id="camelot">Lightning talk: Camelot. Building desktop apps at warp speed - Erik Janssens</h4>
<p style="font-style: italic">By Sander Marechal <small>[<a href="#toc">back to top</a>]</small></p>
<p>Camelot is a framework based on PyQt, Elixir and SQLAlchemy that was inspired by the
Django framework. Only instead of web applications, you use it to build desktop applications. Erik
gave a nice overview of how simple it is to create desktop applications with Camelot. It really is
fast.</p>
<p>The only downside I could see is the structure. By far the strongest point of frameworks like
Django, Ruby on Rails and CakePHP is that they are MVC frameworks which makes changing and
maintaining the code easier. Camelot however forgot to copy this bit from Django and instead mixes
model, view and controller logic into the same classes. A missed opportunity in my opinion, but
Camelot is still a nice framework if you need to get your GUI application up and running in no
time.</p>
<h4 id="augeas">Augeas - Raphaël Pinson</h4>
<p style="font-style: italic">By Hans Kwint <small>[<a href="#toc">back to top</a>]</small></p>
<p>At the Fedora room, Raphaël Pinson gave a talk about <a href="http://augeas.net/">Augeas</a>. The fact that Raphaël is actually
an Ubuntu developer shows that Augeas is not only used on Fedora. Basically, what Augeas hopes to solve
is one of the biggest Linux annoyances: how to manage all those different configuration files
when you don't know (or want to know!) the syntax of each of those files.</p>
<p>A while ago I read Tuomov's "<a
href="http://modeemi.fi/~tuomov/b/archives/2007/01/20/T11_58_29/">Thoughts on
configuration files and databases"</a> and I was quite happy to see Augeas is going in the
direction suggested there.</p>
<p>What Augeas basically is, is a configuration Application Programming Interface that deals with
text configuration files. Currently, a user who knows how to edit /etc/sudoers might not
know how to manage vsftpd.conf, and by developing a uniform API to all those files Augeas
hopes to solve this problem. It does this by using a bidirectional language, and the
pieces of source code that give this API its bidirectional character are called
'lenses'. This was all a bit abstract, but luckily Raphaël went on to explain.</p>
<p>"What we basically want is to map our text configuration files to a tree which is easy to
understand and edit for the end user, and to be able to map that tree back to the text configuration
files" he explained. All of this should be done automatically, and if done correctly the end user
only has to edit the tree, without having to worry about the syntax of the configuration files
themselves. Raphaël then made this all more concrete by showing some examples of
these lenses and of configuration file filters, which make heavy use of regular expressions. One of
the interesting things he told us is that there is theoretical work on lenses being done in parallel
with the development of Augeas. The theory can almost immediately be used in Augeas, and
examples from Augeas in turn feed back into the theoretical work. Raphaël quickly showed which
mathematical properties a lens should have, which looked like statements taken from order theory
or functional programming. Luckily, I was not the only one who was a bit lost: "It took me about a day to
understand these two rules too" Raphaël confessed.</p>
<p>By this time it became clear to me that one lens corresponds to one configuration file, let's
say vsftpd.conf. What Augeas then does is use that lens to map the configuration file to a tree
which can be edited, and use the <i>same</i> lens to map the tree back to the configuration file.
The fact that the lens works in these two opposite directions is what makes it bidirectional, and
it is also what makes writing a lens quite hard. If, for example, your lens concatenates two strings
but it is not clear how those strings should be split again when going the opposite way, your lens
cannot be correct. Currently, one of the programs using the Augeas API is <a
href="http://reductivelabs.com/trac/puppet">Puppet</a>, about which I will talk again
later. As a last note, Raphaël suggests that people who are interested take the <a
href="http://augeas.net/tour.html">Quick Tour</a> on the website.</p>
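<p>The round-trip idea behind lenses can be illustrated with a toy sketch. The plain Python below (this is not the actual Augeas lens language) parses a key=value file into a small tree, lets you edit a value, and renders the tree back while leaving comments and blank lines untouched, which is exactly the round trip a real lens must guarantee.</p>

```python
# Toy illustration of the bidirectional mapping idea behind Augeas lenses
def parse(text):
    """Map a key=value config file to a simple tree (here, a list of nodes)."""
    tree = []
    for line in text.splitlines():
        stripped = line.strip()
        if "=" in stripped and not stripped.startswith("#"):
            key, _, value = stripped.partition("=")
            tree.append(("entry", key.strip(), value.strip()))
        else:
            # Comments and blank lines are kept verbatim so the
            # reverse mapping can reproduce them exactly
            tree.append(("raw", line))
    return tree

def render(tree):
    """Map the tree back to its text representation."""
    lines = []
    for node in tree:
        if node[0] == "entry":
            _, key, value = node
            lines.append(f"{key} = {value}")
        else:
            lines.append(node[1])
    return "\n".join(lines)

def set_value(tree, key, value):
    """Edit the tree instead of the text, as an Augeas user would."""
    return [("entry", key, value) if node[0] == "entry" and node[1] == key else node
            for node in tree]

conf = "# ftp settings\nanonymous_enable = NO"
tree = set_value(parse(conf), "anonymous_enable", "YES")
print(render(tree))  # the comment survives; only the value changed
```

<p>A real lens must guarantee that this round trip is lossless for every file it accepts, which is where the mathematical properties Raphaël showed come in.</p>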
http://www.jejik.com/articles/2009/02/open_source_news_from_fosdem_2009_-_day_1Mon, 16 Feb 2009 23:12:00 CETLone WolvesOfficeshots.org announcementhttp://www.jejik.com/articles/2009/01/officeshots_org_announcement
s.marechal@jejik.com (Sander Marechal)
<p>Yesterday the <a href="http://www.opendocsociety.org/">OpenDoc Society</a>, the <a href="http://www.noiv.nl/noiv">NOiV (Netherlands in Open Connection)</a> and the <a href="http://www.nlnet.nl/">NLNet Foundation</a> announced <a href="http://www.officeshots.org">Officeshots.org</a>, a new webservice where you can upload ODF documents and compare their rendering and output in different office suite applications. We here at Lone Wolves are happy to announce that we are the lead architects of this new webservice.</p>
<p>Over the coming days I will announce a couple of things regarding Officeshots.org on this website, like how it works, where to get the code and how to contribute. The plan is to start a closed beta by the end of February and go public by the end of March, but if we want to make this deadline then we need contributors. In the upcoming days I will explain exactly what we need, but if you want to help you can already <a href="http://lists.opendocsociety.org/mailman/listinfo/officeshots">join the officeshots.org mailing list</a>.</p>
<p>Here is the full press release from the OpenDoc Society and NoiV:</p>
<pre>Free webservice lets user compare office applications
'Office users can finally see what others are seeing'
~ Maarssen/Amsterdam, The Netherlands, January 30th 2009
The Dutch government program "Netherlands in Open Connection" and
OpenDoc Society have announced they are collaborating on an online
document factory to compare office suite applications. The free
webservice Officeshots.org should be available by the end of February
2009. Users will be able to online compare the output quality of a large
number of office suites as well as web-based productivity applications.
The collaboration was announced during a well visited ODF conference in
Maarssen, The Netherlands. The project is financially supported by a
grant from the Netherlands based not-for-profit investor NLNet Foundation.
"Thanks to the adoption of open standards like the Open Document Format,
the number of productivity applications is increasing rapidly. In a
mature market a user should be able to compare the various suppliers
transparently." says Bert Bakker, president of the OpenDoc Society.
"Officeshots.org will ensure that you do not need to blindly trust a
supplier when he claims to support a certain document format. Seeing is
believing."
"We want to make the differences between the various applications
visible and measurable, which will stimulate suppliers to make quality
improvements" said Ineke Schop, program manager at Netherlands in Open
Connection. "Because a user can simply upload a document and see the
output of the various applications they get a powerful tool to make
quality differences measurable. The service also helps designers to
compare the rendering of document templates and letterheads in different
office suites. "This helps governments choose the right application and
supports the ambitions of the Dutch cabinet to standardise on ODF and
PDF for document exchange."
Under the "Netherlands in Open Connection" action plan, the Dutch
administration accepts and uses the Open Document Format as of April
last year. Other government bodies in the Netherlands do so since
January 2009. The program is a joint initiative of the Dutch government,
led by the minister for Foreign Trade Heemskerk and the State Secretary
for the Interior and Kingdom Relations Bijleveld-Schouten.
The tool will be multilingual from the start. The web service will
launch as a closed beta for members of the OpenDoc Society at the end of
February, followed by a public launch planned one month later.</pre>
http://www.jejik.com/articles/2009/01/officeshots_org_announcementSat, 31 Jan 2009 13:38:00 CETLone WolvesEnhance you SugarCRM edit views with filtershttp://www.jejik.com/articles/2008/12/enhance_you_sugarcrm_edit_views_with_filters
s.marechal@jejik.com (Sander Marechal)
<p>The standard edit views for your custom modules are workable, but with a few additions and modifications you can make them a lot more powerful. In my last article &ldquo;<a href="http://www.jejik.com/articles/2008/12/add_grandparent_fields_to_your_sugarcrm_modules/">Add grandparent fields to your SugarCRM modules</a>&rdquo; I showed you how you can relate your modules to grandparent modules and even great-grandparent modules by following the chain of one-to-many relations upwards. This works very well to enhance your list views and detail views.</p>
<p>In this article we are going to use those grandparent fields on the edit view and use them to add filters to the other relate fields on your custom module. I will be building upon the invoicing module I created in the last article. It contains an Invoices module which relates to Accounts, and an InvoiceLines module which relates to Invoices. Suppose you want to edit an invoice line and move that line to another invoice. I will show you how to add functionality to the edit view so that, when you click the &ldquo;Search&rdquo; button on the Invoice field, the popup will only show you the invoices for the same account. This makes it <em>much</em> easier to find the right invoice, or any other related module.</p>
<p>Building on the package from the previous article means that I have set up SugarCRM and my package according to my older articles &ldquo;<a href="http://www.jejik.com/articles/2008/12/keeping_sugarcrm_under_subversion_control/">Keeping SugarCRM under Subversion control</a>&rdquo; and &ldquo;<a href="http://www.jejik.com/articles/2008/12/build_custom_sugarcrm_modules_in_subversion/">Build custom SugarCRM modules in Subversion</a>&rdquo;. That means I will be working inside the installable zip package that is generated by the SugarCRM module builder. During this article we will also make a few small changes to core files of SugarCRM, so make sure you have SugarCRM itself under version control as well, and that it is separate from your package.</p>
<p>You can download <a href="/files/sugarcrm/invoicing-grandparents.zip">the invoicing package</a> that I built in my previous article as a starting point, or you can <a href="/files/sugarcrm/invoicing-filters.zip">download the finished invoicing package</a> with the filters already added to the InvoiceLines module. Note that you still need to make the appropriate changes to the EditView.tpl core file as explained in this article.</p>
<h4>Add the grandparent fields to the edit view</h4>
<p>Let&#8217;s start off by adding the grandparent account fields to our invoice line edit view. Adding them is simple, but I want them to be displayed in a certain way which needs a change to a core file. This change is only very small and other changes will need to be made to that same file anyway, so I find it a useful addition. I want the field to be displayed with the parameter &ldquo;disabled&rdquo; so that the browser will display it nicely grayed out. Start off by adding the field to the editviewdefs.php file.</p>
<pre>$viewdefs['inv_InvoiceLines'] = array (
'EditView' =&gt; array (
...
'panels' =&gt; array (
'default' =&gt; array (
...
array (
array (
'name' =&gt; 'account_name',
'displayParams' =&gt; array (
'hideButtons' =&gt; true,
'readOnly' =&gt; true,
'disabled' =&gt; true,
),
),
array (
'name' =&gt; 'amount',
),
),
array (
array (
'name' =&gt; 'description',
'label' =&gt; 'LBL_DESCRIPTION',
),
),
),
),
),
);</pre>
<p>The &ldquo;hideButtons&rdquo; parameter makes sure there are no Select and Clear buttons on the field. The &ldquo;readOnly&rdquo; parameter marks the field as read-only for Sugar. The &ldquo;disabled&rdquo; parameter is the new parameter we will be adding below. You do that by adding an if statement to <tt>include/SugarFields/Fields/Relate/EditView.tpl</tt>, in the line that renders the text field.</p>
<pre>&lt;input type=&quot;text&quot; name=&quot;{{sugarvar key='name'}}&quot; ... {{if $displayParams.disabled}}disabled=&quot;disabled&quot;{{/if}}&gt;</pre>
<p>The field now displays nicely, but we need to make sure that it changes when the invoice field is updated. After all, when the invoice line is assigned to a different invoice, the grandparent field may change. SugarCRM has taken care of this for us: you can add the &ldquo;field_to_name_array&rdquo; display parameter to the invoice name field. Its contents are an array that tells SugarCRM which form fields need to change to which values. Normally Sugar fills in the invoice_id and invoice_name fields. We override it to also fill the account_name and account_id fields. Change the editviewdefs as shown below.</p>
<pre>array (
'name' =&gt; 'inv_invoices_inv_invoicelines_name',
'displayParams' =&gt; array (
'field_to_name_array' =&gt; array (
'id' =&gt; 'inv_invoicev_invoices_ida',
'name' =&gt; 'inv_invoices_inv_invoicelines_name',
'account_id' =&gt; 'account_id',
'account_name' =&gt; 'account_name',
),
),
),</pre>
<p>If you now deploy the module you will see that when you change the invoice name field, the account name field changes accordingly. But there is a bug: when you press the Clear button on the invoice name, the account name is not cleared with it. This is a bug in SugarCRM, but until it is fixed upstream we will need to patch it ourselves. Open up the <tt>EditView.tpl</tt> for the relate field again and take a look at the Clear button. Its onclick attribute is specified as follows.</p>
<pre>onclick=&quot;this.form.{{sugarvar key='name'}}.value = ''; this.form.{{sugarvar key='id_name'}}.value = '';&quot;</pre>
<p>Replace this onclick event with the following code. This code cycles through all the fields in the field_to_name_array and clears each of them correctly. It should all be on one line, but I have wrapped it here for readability. The high number of literal tags is necessary because the template will be parsed <em>twice</em> by Smarty.</p>
<pre>onclick='popup_request_data = {{$displayParams.popupData}};
for (var field in popup_request_data.field_to_name_array) {{literal}}{literal}{{/literal}{{/literal}}
document.forms[popup_request_data.form_name][popup_request_data.field_to_name_array[field]].value = &quot;&quot;;
{{literal}}{literal}}{/literal}{{/literal}}'</pre>
<p>Rebuild and install the package. You should now have fully working grandparent fields on your edit view.</p>
<h4>Adding filters to search popups</h4>
<p>Now that we have the correct value of the grandparent fields on our edit view, we can use it to fill the initial filter of the search popups. The popups in SugarCRM already support initial filters, but they are not implemented in the edit view. They simply default to an empty string.</p>
<p>Another small change to the relate field EditView.tpl lets us specify the initial filter fields from within the editviewdefs.php file. Start off by adding the &ldquo;initialFilter&rdquo; display parameter to the edit viewdefs. We want the invoice search popup filtered based on the account. So, when you click the &ldquo;Search&rdquo; button on the invoice field, it will by default only show invoices from the same account as the currently selected invoice.</p>
<pre>'displayParams' =&gt; array (
'field_to_name_array' =&gt; array (
'id' =&gt; 'inv_invoicev_invoices_ida',
'name' =&gt; 'inv_invoices_inv_invoicelines_name',
'account_id' =&gt; 'account_id',
'account_name' =&gt; 'account_name',
),
'initialFilter' =&gt; array (
'account_name' =&gt; 'account_name_advanced',
),
),</pre>
<p>Each array key of the initialFilter is the edit form field whose value will be used. The corresponding array value is the search form field it will be mapped to. You can find these values by looking at the source of the search form.</p>
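<p>For example, if the account input in the popup&#8217;s advanced search form looks something like this in the page source (illustrative markup, not copied verbatim from SugarCRM):</p>
<pre>&lt;input type=&quot;text&quot; name=&quot;account_name_advanced&quot; ...&gt;</pre>
<p>then &ldquo;account_name_advanced&rdquo; is the value to put on the right-hand side of the initialFilter mapping.</p>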
<p>Now edit the relate field EditView.tpl to add support for this initialFilter parameter. Take a look at the onclick event of the &ldquo;Search&rdquo; button. It calls the function open_popup() with the fourth parameter set to an empty string. This is the initial filter parameter. Change the template so that the fourth parameter becomes the following:</p>
<pre>&quot;&quot;{{foreach from=$displayParams.initialFilter key=filter_field item=filter_target}} + &quot;&amp;amp;{{$filter_target}}=&quot; + this.form[&quot;{{$filter_field}}&quot;].value{{/foreach}}</pre>
<p>Note that there is no typo in the above code. It is an empty string immediately followed by a Smarty foreach loop. If no initialFilter is specified, the foreach loop produces nothing and the string remains empty. While I am editing this part of the code I always take the opportunity to change the previous two parameters as well. Those are the size of the popup. The default 600 by 400 is too small for my liking, so I change it to 900 by 500.</p>
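<p>With both tweaks applied, the relevant part of the Search button&#8217;s onclick ends up looking roughly like this. This is only a sketch: the leading and trailing arguments of open_popup() (shown as &ldquo;...&rdquo;) should stay exactly as your version of EditView.tpl already has them; only the width, the height and the fourth (filter) argument change.</p>
<pre>onclick='open_popup(..., 900, 500,
&quot;&quot;{{foreach from=$displayParams.initialFilter key=filter_field item=filter_target}} + &quot;&amp;amp;{{$filter_target}}=&quot; + this.form[&quot;{{$filter_field}}&quot;].value{{/foreach}},
...);'</pre>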
<p>Now build and install the package again. If you edit an invoice line and click &ldquo;Search&rdquo; on the invoice field, the popup will only show search results for the related account, as shown below.</p>
<img src="/images/articles/sugar-filters/search-filter.png" />
<p>Lastly, don&#8217;t forget to commit the changes you made to the relate field EditView.tpl. You can <a href="/files/sugarcrm/invoicing-filters.zip">download the finished invoicing package</a> with the filters already added to the InvoiceLines module. Note that you still need to make the appropriate changes to the EditView.tpl core file as explained in this article. Have fun!</p>
http://www.jejik.com/articles/2008/12/enhance_you_sugarcrm_edit_views_with_filters
Mon, 22 Dec 2008 16:50:00 CET
Lone Wolves