tag:blogger.com,1999:blog-96275762015-04-19T20:23:29.970+02:00unicolet.orgNot an ordinary blog.Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.comBlogger112125tag:blogger.com,1999:blog-9627576.post-13271312888879312952015-04-19T16:51:00.000+02:002015-04-19T20:23:29.987+02:00Detect missed executions with OpenNMSEveryone knows that <a href="http://www.opennms.org/">OpenNMS</a> is a powerful monitoring solution, but not everyone knows that since version 1.10 circa it <a href="http://www.opennms.org/wiki/Drools_Correlation_Engine">embeds the Drools</a> rule processing engine. Drools programs can then be used to extend the event handling logic in new and powerful ways.<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-6WOyX5YPNpI/VTPAodqgSAI/AAAAAAAAApg/WG-FqZSNATQ/s1600/opennms_drools.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://3.bp.blogspot.com/-6WOyX5YPNpI/VTPAodqgSAI/AAAAAAAAApg/WG-FqZSNATQ/s1600/opennms_drools.png" height="68" width="400" /></a></div><br />The following example shows how OpenNMS can be extended to detect missed executions for recurring activities like backups or scheduled jobs.<br /><br /><a name='more'></a>The core functionality is implemented in the following Drools program (commented below):<br /><br /><script src="https://gist.github.com/1068dda960d9e549279d.js"></script> <noscript><pre><code><br />File: RepeatingActivity.drl<br />---------------------------<br /><br />package org.opennms.netmgt.correlation.drools;<br /><br />import java.util.Date;<br />import org.opennms.netmgt.correlation.drools.DroolsCorrelationEngine;<br />import org.opennms.netmgt.xml.event.Event;<br />import org.opennms.netmgt.xml.event.Parms;<br />import org.opennms.netmgt.xml.event.Parm;<br />import org.opennms.netmgt.xml.event.Value;<br />import org.opennms.netmgt.model.events.EventBuilder;<br />global org.opennms.netmgt.correlation.drools.DroolsCorrelationEngine engine;<br />global org.opennms.netmgt.correlation.drools.NodeService nodeService;<br />import java.text.ParseException;<br />import java.text.SimpleDateFormat;<br />import java.util.Calendar;<br />import java.util.Date;<br />import java.util.Iterator;<br />global java.lang.Integer REPEATING_INTERVAL; // in hours<br /><br />declare Execution<br /> nodeid : Long<br /> uei : String<br /> tag : String<br /> expireTimerId : Integer<br />end<br /><br />/*<br /> * Initial execution event for a node - send the initial translated event to generate notification<br /> */<br />rule &quot;initial backup received&quot;<br /> when<br /> $e : Event( $uei : uei, $nodeid : nodeid )<br /> eval( &quot;every&quot;.equals($e.getParm(&quot;every&quot;).getValue().getContent()) ) // filter out internally generated &#39;missed&#39; events<br /> then<br /> Execution execution = new Execution();<br /> execution.setNodeid( $nodeid );<br /> execution.setUei( $uei );<br /> execution.setTag( getTag($e) );<br /> execution.setExpireTimerId( engine.setTimer( getInterval($e, REPEATING_INTERVAL) ) );<br /> insert( execution );<br /> // the event can be retracted, or the subsequent backup completed rule will fire<br /> retract( $e );<br /> println( &quot;Initial backup tag=&quot;+getTag($e)+&quot; event &quot; + $uei + &quot; for node &quot; + $nodeid );<br />end<br /><br />/*<br /> * Subsequent backup completed<br /> */<br />rule 
&quot;subsequent backup completed&quot;<br /> when<br /> $e : Event( $uei : uei, $nodeid : nodeid )<br /> $execution : Execution( nodeid == $nodeid, uei == $uei, $expireTimerId : expireTimerId )<br /> eval( $execution.getTag().equals( getTag($e) ) )<br /> eval( &quot;every&quot;.equals($e.getParm(&quot;every&quot;).getValue().getContent()) ) // filter out internally generated &#39;missed&#39; events<br /> then<br /> retract( $e );<br /> engine.cancelTimer($expireTimerId);<br /> $execution.setExpireTimerId( engine.setTimer( getInterval($e, REPEATING_INTERVAL) ) );<br /> update( $execution );<br /> println( &quot;Subsequent execution event &quot; + $uei + &quot; for node &quot; + $nodeid +&quot; supressed.&quot; );<br />end<br /><br />/*<br /> * Expiration timer expires: warn user that another backup event was not received in the expected interval<br /> */<br />rule &quot;timer expired&quot;<br /> when<br /> $execution : Execution( $tag: tag, $nodeid : nodeid, $expireTimerId : expireTimerId, $uei : uei)<br /> $expire : TimerExpired( id == $expireTimerId )<br /> then<br /> sendExecutionMissedEvent(engine, $nodeid, $uei, $tag );<br /> retract( $execution );<br /> retract( $expire );<br /> println( &quot;Backup execution expiration for &quot; + $uei + &quot; for node &quot; + $nodeid +&quot;[&quot;+$tag+&quot;].&quot; );<br />end<br /><br />/*<br /> * Utility to send a (failed) execution event.<br /> */<br />function void sendExecutionMissedEvent( DroolsCorrelationEngine engine, Long nodeId, String uei, String tag ) {<br /> EventBuilder bldr = new EventBuilder(uei.replaceAll(&quot;Normal&quot;,&quot;Warning&quot;), &quot;Drools&quot;); // clone current event<br /> bldr.setNodeid(nodeId.intValue());<br /> bldr.addParam(&quot;correlationEngineName&quot;, &quot;Drools&quot;);<br /> bldr.addParam(&quot;correlationRuleSetName&quot;, engine.getName());<br /> bldr.addParam(&quot;correlationComments&quot;, &quot;RepeatingBackupRules&quot;);<br /> bldr.addParam(&quot;tag&quot;, tag);<br /> bldr.addParam(&quot;every&quot;, &quot;missed&quot;); // this will be used to discriminate between normal failures (-&gt;&quot;every&quot;) and missed executions (-&gt;&quot;missed&quot;)<br /> engine.sendEvent(bldr.getEvent());<br />}<br /><br />function String getTag(Event e) {<br /> String tag=null;<br /><br /> Parm p=e.getParm(&quot;backupset&quot;);<br /> if(p!=null) {<br /> tag=p.getValue().getContent();<br /> }<br /> p=e.getParm(&quot;job&quot;);<br /> if(p!=null) {<br /> tag=p.getValue().getContent();<br /> }<br /><br /> return tag;<br />}<br /><br />function int getInterval(Event e, Integer defaultInterval) {<br /> Integer interval = defaultInterval; // default, in hours<br /><br /> Parm p=e.getParm(&quot;interval&quot;);<br /> if(p!=null) {<br /> try {<br /> interval=new Integer(p.getValue().getContent());<br /> } catch(Exception exc) {<br /> println(&quot;Error parsing interval value=&quot;+p.getValue().getContent()+&quot; to integer. 
Using default REPEATING_INTERVAL&quot;);<br /> }<br /> }<br /><br /> return interval.intValue() * 60 * 60 * 1000; // hours -&gt; milliseconds<br /> //return 60 * 1000; // 1 minute (for debugging)<br />}<br /><br /><br />/*<br /> * println utility<br /> */<br />function void println(Object msg) {<br /> System.err.println(new Date() + &quot; RepeatingBackups : &quot; + msg);<br />}<br /></code></pre></noscript><br />First we need to define (at least) two UEIs: one for&nbsp;<b>uei.&lt;yournamespace&gt;/job/recurring/Warning</b> and one for&nbsp;<b>uei.&lt;yournamespace&gt;/job/recurring/Normal</b>. The events must be configured so that a Normal event clears any previous Warning. At the moment I feed these events into OpenNMS using syslog, but I am planning to replace syslog with my <a href="https://github.com/unicolet/opennms-sendevent-webhook">sendevent web-hook</a>.<br />Each event carries three additional params (visible in the screenshot above):<br /><br /><ol><li><b>job</b> or <b>backupset</b>: carries the job or backupset name, because one host can execute multiple jobs. It must be used in the event reduction key so that warnings are resolved correctly</li><li><b>every</b>: the value 'every' means it is an externally submitted event, while 'missed' is used with events generated internally by expired Drools timers (missed executions). The <b>every</b> param can be used as a varbind filter to implement different notifications for 'regular' failures and missed-execution failures</li><li><b>interval</b>: positive integer value indicating the repeating interval in hours (24 for daily jobs, 1 for hourly jobs, and so on)</li></ol><div>Note that with this setup a successful execution will also clear any missed execution alarm.</div><div><br /></div><div>As for the Drools program, the key part is the definition of the <b>Execution</b> fact: Execution carries the data necessary to identify the node and job, plus the timer set to the interval value of the event.</div><div><br /></div><div>The first two rules define the handling of the initial and subsequent events, while the third handles the expiration of an interval. The code should be self-explanatory; ask in the comments if you need help.</div>Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-14603800230815462012015-04-18T14:36:00.000+02:002015-04-18T14:36:09.375+02:009 months with WIFIWEB<a href="http://www.wifiweb.it/">WIFIWEB</a> is a local WDSL internet provider. 
Since I moved last year I have been a customer on their <a href="http://www.wifiweb.it/privati/wifi/widsl-max10/">WDSL Max 10 profile</a>.<br /><br />This is the pingdom report for the last 9 months:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-De0Enj3Xt5s/VTJNBJBnsuI/AAAAAAAAAow/YamjN52XOfU/s1600/Schermata%2B2015-04-18%2Balle%2B14.21.44.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-De0Enj3Xt5s/VTJNBJBnsuI/AAAAAAAAAow/YamjN52XOfU/s1600/Schermata%2B2015-04-18%2Balle%2B14.21.44.png" height="252" width="640" /></a></div>Applications sensitive to latency and micro-interruptions (like Remote Desktop) would from time to time drop the connection.<br /><br />Bandwidth-wise results varied over the period but, except for one time when I had to call to fix a performance issue, the experience was pretty smooth with a download speed consistently in a 6~8 Mb/s window.<br />The 1Mb upload speed was always achieved.<br /><br />Call quality using free VOIP softphones (sflphone or linphone) was generally bad, but I don't know whether the fault lies with the software or with the connection.<br /><br /><b><i>Verdict: recommended.</i></b>Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-26066113354270303442015-04-14T16:11:00.002+02:002015-04-14T16:11:46.414+02:00RUNDECK job maintenance<div class="separator" style="clear: both; text-align: center;"><a href="http://rundeck.org/images/rundeck-logotype-512.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://rundeck.org/images/rundeck-logotype-512.png" height="46" width="320" /></a></div>Learn more <a href="http://rundeck.org/#LearnMore">about Rundeck</a>.<br /><br />Now that I have a fair number of jobs scheduled by Rundeck, how do I periodically prune the job execution history and keep only the last, say, 30 executions for each job?<br /><br /><a name='more'></a>Rundeck currently has no such feature and the following <a href="https://github.com/rundeck/rundeck/issues/357">RFE</a> has been opened to track its progress.<br /><br />In the meanwhile on my Rundeck setups I use the following script, which, so far (fingers crossed), has not caused any problems:<br /><br /><script src="https://gist.github.com/af648a97163ce6b44645.js"></script> <noscript><pre><code><br />File: rd-clean.sh<br />-----------------<br /><br />#!/bin/bash<br /># note: bash (not plain sh) is required for the arrays used below<br /><br /># setup ~/.pgpass to allow passwordless connection to postgres<br />
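# a ~/.pgpass line has the form host:port:database:user:password, for example<br /># (hypothetical values): YOURDBHOST:5432:rundeck:rundeck:secret<br />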
# keep last 30 executions for each job<br />KEEP=30<br /><br /><br />cd /var/lib/rundeck/logs/rundeck<br /><br />JOBS=`find . -maxdepth 3 -path &quot;*/job/*&quot; -type d`<br /><br />for j in $JOBS ; do<br /> echo &quot;Processing job $j&quot;<br /> ids=`find $j -iname &quot;*.rdlog&quot; | sed -e &quot;s/.*\/\([0-9]*\)\.rdlog/\1/&quot; | sort -n -r`<br /> declare -a JOBIDS=($ids)<br /><br /> if [ ${#JOBIDS[@]} -gt $KEEP ]; then<br /> for job in ${JOBIDS[@]:$KEEP};do<br /> echo &quot; * Deleting job: $job&quot;<br /> echo &quot; rm -rf $j/logs/$job.*&quot;<br /> rm -rf $j/logs/$job.*<br /> echo &quot; psql -h YOURDBHOST -U rundeck rundeck -c &#39;delete from execution where id=$job&#39;&quot;<br /> psql -h YOURDBHOST -U rundeck rundeck -c &quot;delete from execution where id=$job&quot;<br /> echo &quot; psql -h YOURDBHOST -U rundeck rundeck -c &#39;delete from base_report where jc_exec_id=${job}::text&#39;&quot;<br /> psql -h YOURDBHOST -U rundeck rundeck -c &quot;delete from base_report where jc_exec_id=${job}::text&quot;<br /> done<br /> fi<br />done<br /></code></pre></noscript><br /><h3>Requirements</h3>Rundeck with postgres backend, psql client configured for passwordless authentication with <i>~/.pgpass</i>.<br />Substitute&nbsp;YOURDBHOST with the name of the postgres database host.<br /><br /><br />Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-18486388834530963122015-04-02T17:18:00.002+02:002015-04-02T19:36:39.796+02:00OpenNMS performance: tune Jrobin RRD file strategy<div>One of the nice aspects of <a href="http://www.opennms.org/">OpenNMS</a> is that, out of the box, it will collect a <b>lot</b> of data from most snmp-enabled resources. The downside is that such collection is I/O heavy (iops, not throughput).</div><div><br /></div><div>Even on moderate installations with hundreds of nodes collection is enough to swamp even the fastest disk subsystem (except for those with controllers backed by large write caches). A symptom is that I/O wait will be quite high on the opennms box itself.</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-9frAt72iRd0/VR1ZfNQWiyI/AAAAAAAAAn8/BElTu9_40q4/s1600/image.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-9frAt72iRd0/VR1ZfNQWiyI/AAAAAAAAAn8/BElTu9_40q4/s1600/image.png" height="259" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">I/O wait before and after switching the jrobin backend from FILE to MNIO</td></tr></tbody></table><div><a name='more'></a>The graph above shows the I/O wait on a RAID 10 array with 4 15K drives, storing RRD data from approximately 300 nodes, sampled at the usual 5m interval.</div><div><br /></div><div>The I/O wait is, or rather was, constantly at 30% (before I applied some <a href="http://unicolet.blogspot.it/2015/03/opennms-15-warm-your-postgres-cache.html">postgres tuning</a> it was at 70%).</div><div><br /></div><div>As you can see from the graph, I/O wait fell sharply after 7.30 when I applied a simple change to <b><a href="http://www.opennms.org/wiki/Jrobin">Jrobin</a></b>, which is the OpenNMS subsystem responsible for writing and reading RRDs.</div><div><br /></div><div>The change involves using an alternative I/O strategy called MNIO instead of FILE, which is the default. 
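A minimal sketch of the edit follows (the property name is an assumption based on the stock <i>rrd-configuration.properties</i> linked below; verify it against your OpenNMS version):</div><div><br /></div><pre># etc/rrd-configuration.properties (assumed property name; check your version)<br /># switch the JRobin backend factory from the default FILE to MNIO<br />org.opennms.rrd.jrobin.factory=MNIO<br /></pre><div><br /></div><div>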
It requires editing just a <a href="https://github.com/OpenNMS/opennms/blob/33f35f10581d6435d238baca9bc1f6630d21cdd9/opennms-base-assembly/src/main/filtered/etc/rrd-configuration.properties#L186">properties file</a>; a restart is required.</div><div><br /></div><div>The box has been running with the new setting for several days now without errors and with excellent performance. On the mailing list someone reported years of running successfully with MNIO.</div><div><br /></div><div>HTH.</div>Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-32566252427506871912015-03-26T11:24:00.000+01:002015-03-26T11:24:08.754+01:00OpenNMS 15: warm your postgres cacheOpenNMS 15 puts a much higher load on the database than previous versions.<br />Besides <a href="http://unicolet.blogspot.it/2012/03/opennms-and-postgresql-91-tuning.html">tuning</a> postgres and the OS, and perhaps splitting the app and the db onto different boxes, one aspect that I found to really make a difference is having a warm postgres cache.<br /><br /><a name='more'></a><i>Additional tip:</i> if you haven't already, put postgres on XFS. There is a reason that RH7 switched to XFS as the default fs, and it is performance. You will also find that most postgres people recommend XFS instead of ext3/4.<br /><br />If you followed the <a href="http://unicolet.blogspot.it/2012/03/opennms-and-postgresql-91-tuning.html">instructions</a> on my previous post you should have a <i>v_database_cache</i> view in the opennms database. Soon after installing OpenNMS 15 I found that the events relation was not cached at all (less than 2% of it was cached after one day).<br /><br />This is probably due to various reasons, most likely queries have been improved to use indices instead of scanning the tables, but the UI performance suffers (it takes 1-2 seconds to display the node pages)[1].<br /><br />To warm the database cache and improve general performance and responsiveness of the UI run this command as the postgres user:<br /><pre>psql -A opennms -c "select * from events; " &gt; /dev/null<br /></pre>If you have a large events table consider adding a filter (i.e. only events from the last week), for example:<br /><br />
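<pre># a hedged variant of the warm-up with a time filter; the eventtime column<br /># name is an assumption, so check it against your schema first<br />psql -A opennms -c "select * from events where eventtime &gt; now() - interval '7 days';" &gt; /dev/null<br /></pre>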
Now check the database cache: <i>percent_of_relation</i>&nbsp;should show a larger value for the events relation. In my case it was 100% (shared_buffers=1GB, events table is ~180MB) and I found the UI to be much, much snappier.<br /><br /><pre>opennms=# select * from v_database_cache ;<br /> relname | buffered | buffers_percent | percent_of_relation <br />-------------------------------+----------+-----------------+---------------------<br /><b> events | 181 MB | 17.7 | 100.0</b><br /> notifications | 44 MB | 4.3 | 84.6<br /> outages | 16 MB | 1.6 | 100.0<br /> events_ipaddr_idx | 4128 kB | 0.4 | 40.9<br /> bridgemaclink | 4704 kB | 0.4 | 100.7<br /> events_nodeid_idx | 4008 kB | 0.4 | 51.9<br /> events_nodeid_display_ackuser | 4480 kB | 0.4 | 42.9<br /> assets | 2848 kB | 0.3 | 101.1<br /> snmpinterface | 1576 kB | 0.2 | 100.0<br /> bridgemaclink_pk_idx2 | 2296 kB | 0.2 | 100.0<br /></pre><br />Thanks for reading.<br /><br />[1] yes, I am running on somewhat aged hardware (Proliant DL580G5, RAID10 on 10K drives, 8GB RAM, 2 × XEON).Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-58146165831613187312015-02-06T11:48:00.002+01:002015-02-06T15:00:10.072+01:00Auto-upload Elasticsearch template mapping with Apache Camel<a href="http://camel.apache.org/images/camel-box-small.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://camel.apache.org/images/camel-box-small.png" /></a>When feeding data into Elasticsearch, one important step is to configure the correct template for the index/type so that, for instance, numeric fields are stored as numbers to ensure that they can be sorted by and/or compared correctly.<br /><div><br /></div><div>The Elasticsearch Logstash plugin has a handy <a href="http://logstash.net/docs/1.4.2/outputs/elasticsearch#template">option</a> just for this purpose. If you are not using Logstash you have to do it yourself, either through configuration mgmt, startup scripts or simply manually launching the appropriate curl command.</div><div><br /></div><div>If you have followed my <a href="http://unicolet.blogspot.it/search/label/camel">previous post</a> on using Apache Camel to feed sql data into <a href="http://unicolet.blogspot.it/search/label/elasticsearch">Elasticsearch</a> then it might seem natural to attempt to use Camel also for the purpose of uploading the template mapping.</div><div><a name='more'></a>How hard can it be? Turns out it's pretty simple, so let me present you with the solution right away and leave the nitty-gritty details for later:</div><div><br /></div><script src="https://gist.github.com/42608c4cded7fe7d4252.js"></script> <noscript><pre><code><br />File: camel_elastisearch_mapping.xml<br />------------------------------------<br /><br />&lt;!--<br /> starts first, then stops. 
All other routes start after this one has completed<br /> Same as: curl -XPUT http://localhost:9200/_template/opennms -d @elmapping.json<br />--&gt;<br />&lt;route id=&quot;elastisearchTemplateMapping&quot; autoStartup=&quot;true&quot; startupOrder=&quot;1&quot;&gt;<br /> &lt;from uri=&quot;timer://runOnce?repeatCount=1&amp;amp;delay=0&quot;/&gt;<br /> &lt;setHeader headerName=&quot;CamelHttpMethod&quot;&gt;<br /> &lt;constant&gt;PUT&lt;/constant&gt;<br /> &lt;/setHeader&gt;<br /> &lt;setHeader headerName=&quot;CamelContentType&quot;&gt;<br /> &lt;constant&gt;application/x-www-form-urlencoded&lt;/constant&gt;<br /> &lt;/setHeader&gt;<br /> &lt;setBody&gt;<br /> &lt;groovy&gt;new File(&#39;elmapping.json&#39;).text&lt;/groovy&gt;<br /> &lt;/setBody&gt;<br /> &lt;to uri=&quot;http:127.0.0.1:9200/_template/opennms&quot;/&gt;<br /> &lt;log message=&quot;${body}&quot;/&gt;<br />&lt;/route&gt;<br /></code></pre></noscript><br /><div>This route will run exactly once at Camel startup, fetch the file&nbsp;<b>elmapping.json</b> and PUT it into Elasticsearch. A sprinkle of groovy makes populating the body of the request a piece of cake (the route requires the <a href="http://mvnrepository.com/artifact/org.apache.camel/camel-groovy">camel-groovy</a>&nbsp;and&nbsp;camel-script components).</div><br />I have then added an <i>initialDelay</i> to the other routes to allow enough time for Elasticsearch to process and acknowledge the mapping.<br /><br />Happy hacking!Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-51825855647213045862015-01-13T15:42:00.003+01:002015-01-17T18:02:38.668+01:00Camel-Elasticsearch: create timestamped indicesOne nice feature of the logstash-elasticsearch integration is that, by default, logstash will use <a href="http://logstash.net/docs/1.4.2/outputs/elasticsearch#index">timestamped indices</a> when feeding data to elasticsearch.<br /><br />This means that yesterday's data is in a separate index from today's data and from each other day's data, simplifying index management. For instance, suppose you only want to keep the last 30 days:<br /><br /><a href="https://github.com/imperialwicket/elasticsearch-logstash-index-mgmt/blob/master/elasticsearch-remove-old-indices.sh">elasticsearch-remove-old-indices.sh</a> -i 30<br /><br />The Apache Camel <a href="http://camel.apache.org/elasticsearch.html">Elasticsearch</a> component provides no such feature out of the box, but luckily it is quite easy to implement (when you know what to do.&nbsp;<i>/grin</i> ).<br /><br /><a name='more'></a>As a matter of fact, it is enough to define the proper header on the message and the elasticsearch component will then use that header as the index name. Unfortunately this is not documented anywhere, but it can be understood by looking at the <a href="https://git-wip-us.apache.org/repos/asf?p=camel.git;a=blob_plain;f=components/camel-elasticsearch/src/main/java/org/apache/camel/component/elasticsearch/ElasticsearchProducer.java;hb=camel-2.14.x">source</a>. Once again: use the source, Luke.
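<br /><br />The relevant logic boils down to something like the following (a paraphrased sketch, not the verbatim Camel source):<br /><br /><pre>// paraphrased sketch of the 2.14.x ElasticsearchProducer logic<br />String indexName = message.getHeader(ElasticsearchConfiguration.PARAM_INDEX_NAME, String.class);<br />// when the header is absent, the producer falls back to the indexName configured on the endpoint URI<br /></pre>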
So, let's suppose the route is something as simple as:<br /><br /><pre> &lt;route autoStartup="true" id="processMirthMessages-route"&gt;<br /> &lt;from uri="sql:{{sql.selectMessage}}?consumer.delay=5000&amp;consumer.onConsume={{sql.markMessage}}"/&gt;<br /> &lt;to uri="elasticsearch://mirth?operation=INDEX&amp;indexType=mmsg"/&gt;<br /> &lt;/route&gt;<br /></pre><br />Then all that is needed is to define a content enricher bean as follows:<br /><br /><pre> &lt;route autoStartup="true" id="processMirthMessages-route"&gt;<br /> &lt;from uri="sql:{{sql.selectMessage}}?consumer.delay=5000&amp;consumer.onConsume={{sql.markMessage}}"/&gt;<br /> &lt;bean method="process" ref="eSheaders"/&gt;<br /> &lt;to uri="elasticsearch://mirth?operation=INDEX&amp;indexType=mmsg"/&gt;<br /> &lt;/route&gt;</pre><br />The bean is also pretty simple (imports omitted for brevity): <br /><br /><pre>public class ESHeaders {<br /> public void process(Exchange exchange) {<br /> Message in = exchange.getIn();<br /> // lowercase yyyy: uppercase YYYY means week-based year in SimpleDateFormat<br /> // and produces wrong index names around the new year<br /> DateFormat df=new SimpleDateFormat("yyyy.MM.dd");<br /> in.setHeader(ElasticsearchConfiguration.PARAM_INDEX_NAME, "mirth2-"+df.format(new Date()));<br /> }<br />}<br /></pre><br /><b>Update: get timestamp index name from the message itself.</b><br /><br />If the data to be indexed contains, as it should, a&nbsp;@timestamp&nbsp;field then the content enricher bean can be improved to use it as follows:<br /><br /><pre><br />public void process(Exchange exchange) {<br /> Message in = exchange.getIn();<br /> String indexName=null;<br /> DateFormat df=new SimpleDateFormat("yyyy.MM.dd");<br /> try {<br /> Map body = (Map) in.getBody();<br /> if(body.containsKey("@timestamp")) {<br /> logger.trace("Computing indexName from @timestamp: "+body.get("@timestamp"));<br /> indexName = "mirth2-"+df.format((Date) body.get("@timestamp"));<br /> } else {<br /> indexName = "mirth2-"+df.format(new Date());<br /> }<br /> } catch(Exception e) {<br /> logger.error("Cannot compute index name, falling back to default");<br /> indexName = "mirth2-"+df.format(new Date());<br /> }<br /> in.setHeader(ElasticsearchConfiguration.PARAM_INDEX_NAME, indexName);<br /> }<br /></pre>Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-66609529431780484582014-10-30T22:18:00.000+01:002014-10-30T22:18:13.675+01:00Extending a LVM logical volume with SaltStack<a href="http://pbs.twimg.com/profile_images/3350581600/67d1f0a30ca3d838b40340fd94790d26_normal.jpeg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://pbs.twimg.com/profile_images/3350581600/67d1f0a30ca3d838b40340fd94790d26_normal.jpeg" /></a>How do you, at once, extend an LVM logical volume on a fleet of identical linux (<a href="http://www.centos.org/">Centos</a>) servers using <a href="http://www.saltstack.com/">SaltStack</a>? Here's how and, thanks to Salt, it only took 5m.<br /><br /><a name='more'></a>Somebody comes into my office in a hurry: we need to extend the XYZ logical volume on all servers or the new app deployment will choke them! I am pretty sure it will not happen, but this would make for a rather uninteresting post, so I set myself to automate the whole thing.<br /><br />The environment is VMWare, so someone had to manually add an 8GB disk to all servers. 
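A quick, read-only sanity check that every minion actually sees the new disk (same target pattern as the commands below):<br /><pre>salt 'wftr[2-9].example.it' cmd.run 'cat /proc/partitions'<br /></pre>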
After that is done, from the salt master I comfortably type:<br /><br /><pre>salt 'wftr[2-9].example.it' cmd.run 'parted -s /dev/sda -- mklabel msdos '<br />salt 'wftr[2-9].example.it' cmd.run 'parted -s /dev/sda -- mkpart primary 0 -0 '<br />salt 'wftr[2-9].example.it' cmd.run 'sfdisk -l /dev/sda | grep sda1'<br />salt 'wftr[2-9].example.it' cmd.run 'parted -s /dev/sda -- set 1 lvm on; pvcreate /dev/sda1'<br />salt 'wftr[2-9].example.it' cmd.run 'vgextend VolGroup00 /dev/sda1'<br />salt 'wftr[2-9].example.it' cmd.run 'lvextend -l +100%FREE -r /dev/VolGroup00/LogVol00'<br /></pre><br />Note that I am not actually extending *all* servers but, to make it more interesting, only those whose name ends in a number between 2 and 9.<br /><br />The first command writes an empty partition table on the disk, the second creates a primary partition that fills the whole disk. The third command displays the partition table and should be inspected for errors.<br />The fourth changes the type of partition 1 (the single partition just created) to LVM and creates a physical volume. With the fifth command the pv is added to a volume group, and with the sixth the new space on the vg is allocated to the logical volume named <i>LogVol00</i>.<br />The <i>-r</i> option to <i>lvextend</i> tells LVM to also extend the filesystem (ext3) in the same step.<br /><br />Check out my <a href="http://unicolet.blogspot.it/search/label/saltstack">other SaltStack-related posts</a>.Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-44441035859929812602014-09-12T10:18:00.000+02:002015-04-10T22:24:50.756+02:00Indexing Apache access logs with ELK (Elasticsearch+Logstash+Kibana)Who said that grepping Apache logs has to be boring?<br /><div><br /></div><div><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-TevQjxdj-zw/VBKq7O7T9wI/AAAAAAAAAjk/gy16GLD6Rpg/s1600/elk.png" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-TevQjxdj-zw/VBKq7O7T9wI/AAAAAAAAAjk/gy16GLD6Rpg/s1600/elk.png" height="187" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Sample of dashboard that can be created with ELK. Pretty impressive, huh?</td></tr></tbody></table>The truth is that, as Enterprise applications move to the browser too, Apache access logs are a gold mine, no matter what your role is: developer, support or sysadmin.</div><div>If you are not mining them you are most likely missing out on a ton of information and, probably, making the wrong decisions.</div><div><br /></div><div><a href="http://www.elasticsearch.org/">ELK</a> (Elasticsearch, Logstash, Kibana) is a terrific, Open Source stack for visually analyzing Apache (or nginx) logs (but also any other timestamped data).<br /><br /><a name='more'></a>Provided you have Java installed, its setup is rather easy, so I am not going too much into the details beyond the quick sketch below. 
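For orientation, the stack can be brought up along these lines (hypothetical tarball versions from the same era as the 1.4.2 release referenced in this post; adjust to your downloads):<br /><pre># Elasticsearch: unpack and start as a daemon<br />tar xzf elasticsearch-1.3.2.tar.gz &amp;&amp; elasticsearch-1.3.2/bin/elasticsearch -d<br /># Logstash: unpack and run against the configuration shown at the end of this post<br />tar xzf logstash-1.4.2.tar.gz &amp;&amp; logstash-1.4.2/bin/logstash -f logstash.conf<br /># Kibana 3 ships with logstash 1.4.x and can be served with: bin/logstash web<br /></pre>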
I will instead focus on a couple of points that are not easily found documented online.<br /><br />My setup is as follows: the Apache host which serves a moderately used Intranet application sends the <i>access_log</i> to another host for near-line processing via syslog.<br />Relaying <i>access_log</i> through syslog is activated as follows in <i>httpd.conf</i>:<br /><pre>LogFormat "%h %l %u \"%r\" %&gt;s %b \"%{Referer}i\" \"%{User-Agent}i\" %D" extendedcombined<br />CustomLog "| /bin/logger -t httpd -p local0.notice" extendedcombined<br /></pre><br /><div class="p1">This&nbsp;<i>extendedcombined</i>&nbsp;format is basically the standard combined format plus %D, the time taken to serve the request&nbsp;(in microseconds, reference <a href="http://httpd.apache.org/docs/2.2/mod/mod_log_config.html">here</a>). We will use this field (which I will refer to as <i>responsetime</i> from now on) to plot a nice histogram in Kibana.</div><div class="p1"><br /></div><div class="p1">I will leave configuring syslog, syslog-ng or rsyslog out and skip ahead to the point where logs are now stored on another server in a custom directory, say <i>/var/log/access_logs</i>.<br /><br />This server will host the complete ELK stack and we will use Logstash to read, parse and feed the logs to Elasticsearch, and Kibana (a single page web app) for browsing them.<br /><br /><h3>Raise file descriptor limits for Elasticsearch</h3><div>Elasticsearch uses a lot of file descriptors and will quickly run out of them, unless the default limit is raised or removed.<br />Adding:</div><pre>ulimit -n 65536</pre>to the ES startup script will raise the limit to a value that should be very hard to reach.<br /><br /><h3>Configure logstash grok pattern</h3><div>The following is a grok pattern for the log format above:</div><pre>EXTENDEDAPACHELOG %{SYSLOGTIMESTAMP:timestamp} %{GREEDYDATA:source} %{IPORHOST:clientip} %{USER:ident} %{USER:auth} "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-) "%{GREEDYDATA:referer}" "%{GREEDYDATA:agent}" %{NUMBER:responsetime}</pre><div>Admittedly getting the grok pattern right is the hardest part of the job. The <a href="https://grokdebug.herokuapp.com/">grok debugger app</a> is an indispensable tool if you need to figure out a custom pattern. <br /><br /></div><h3>Configure Elasticsearch template</h3>Without a properly configured mapping template ES will not index the data fed by logstash in any useful way, especially if you plan on using the histogram panel to plot responsetime values or sort the table panel by bytes or responsetime&nbsp;(hint: those two fields must be stored as numbers in ES).<br />Also you probably don't want certain fields like request to be analyzed (broken into tokens and then indexed) at all.<br />The following mapping fixes all that and can be used with the log format described in this post, or can be quickly adapted; a quick way to check that the template was actually applied is sketched below. 
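Once Logstash is running (its output section, shown at the end of this post, points at the template file), the upload can be verified with something like this; note that the template name defaults to <i>logstash</i> in the plugin, so check your configuration if you changed it:<br /><pre>curl http://localhost:9200/_template/logstash?pretty<br /></pre>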
Even if you don't end up using all the fields defined in the template, Elasticsearch is not going to complain.<br /><br /><br /></div></div><script src="https://gist.github.com/b42ff8a8f57e6f938652.js"></script> <noscript><pre><code><br />File: el_template.json<br />----------------------<br /><br />{<br /> &quot;template&quot;:&quot;logstash-*&quot;,<br /> &quot;settings&quot;:{<br /> &quot;index.refresh_interval&quot;:&quot;5s&quot;<br /> },<br /> &quot;mappings&quot;:{<br /> &quot;_default_&quot;:{<br /> &quot;dynamic_templates&quot;:[<br /> {<br /> &quot;string_fields&quot;:{<br /> &quot;match_mapping_type&quot;:&quot;string&quot;,<br /> &quot;match&quot;:&quot;*&quot;,<br /> &quot;mapping&quot;:{<br /> &quot;index&quot;:&quot;analyzed&quot;,<br /> &quot;omit_norms&quot;:true,<br /> &quot;type&quot;:&quot;string&quot;,<br /> &quot;fields&quot;:{<br /> &quot;raw&quot;:{<br /> &quot;index&quot;:&quot;not_analyzed&quot;,<br /> &quot;ignore_above&quot;:256,<br /> &quot;type&quot;:&quot;string&quot;<br /> }<br /> }<br /> }<br /> }<br /> }<br /> ],<br /> &quot;properties&quot;:{<br /> &quot;geoip&quot;:{<br /> &quot;dynamic&quot;:true,<br /> &quot;path&quot;:&quot;full&quot;,<br /> &quot;properties&quot;:{<br /> &quot;location&quot;:{<br /> &quot;type&quot;:&quot;geo_point&quot;<br /> }<br /> },<br /> &quot;type&quot;:&quot;object&quot;<br /> },<br /> &quot;@version&quot;:{<br /> &quot;index&quot;:&quot;not_analyzed&quot;,<br /> &quot;type&quot;:&quot;string&quot;<br /> },<br /> &quot;referer&quot;:{<br /> &quot;index&quot;:&quot;not_analyzed&quot;,<br /> &quot;type&quot;:&quot;string&quot;<br /> },<br /> &quot;request&quot;:{<br /> &quot;index&quot;:&quot;not_analyzed&quot;,<br /> &quot;type&quot;:&quot;string&quot;<br /> },<br /> &quot;responsetime&quot;:{<br /> &quot;type&quot;:&quot;long&quot;<br /> },<br /> &quot;bytes&quot;:{<br /> &quot;type&quot;:&quot;long&quot;<br /> }<br /> },<br /> &quot;_all&quot;:{<br /> &quot;enabled&quot;:true<br /> }<br /> }<br /> },<br /> &quot;aliases&quot;:{<br /><br /> }<br />}<br /><br /></code></pre></noscript><br /><h3>Logstash configuration file</h3>Finally you can use the following configuration file as a starting point:<br /><script src="https://gist.github.com/0468aff837aacc6f9a91.js"></script> <noscript><pre><code><br />File: logstash.conf<br />-------------------<br /><br />input {<br /> file {<br /> type =&gt; &quot;accesslog&quot;<br /> path =&gt; [ &quot;/opt/http_logs/access_log&quot; ]<br /> }<br />}<br /><br />filter {<br /> grok {<br /> match =&gt; { &quot;message&quot; =&gt; &quot;%{EXTENDEDAPACHELOG}&quot; }<br /> }<br /> date {<br /> match =&gt; [ &quot;timestamp&quot; , &quot;MMM dd HH:mm:ss&quot;, &quot;MMM d HH:mm:ss&quot; ]<br /> }<br />}<br /><br />output {<br /> elasticsearch {<br /> host =&gt; localhost<br /> template =&gt; &quot;/opt/monitoring/logstash-1.4.2/el_template.json&quot;<br /> template_overwrite =&gt; true<br /> }<br /> #stdout { codec =&gt; rubydebug }<br />}<br /><br /></code></pre></noscript>Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-58473778482865655502014-08-27T11:37:00.001+02:002014-08-27T11:43:03.364+02:00Extract TABLE data from a large postgres SQL dump (with postgis)What do you do when postgres refuses to import a dump because it <a href="http://serverfault.com/questions/48438/cant-restore-postgresql-database-backup">contains invalid byte</a> sequences?<br /><div><br /></div><div>Solution: feed the sql script to iconv then import 
it as usual.</div><div><br /></div><div>That's easier said than done, especially if your database contains postgis data, which must be restored through a custom postgres dump (instructions <a href="http://postgis.net/docs/postgis_installation.html#hard_upgrade">here</a>).</div><div><br /></div><div>I recently experienced this issue on a relatively small table in a large-ish database. Since hand-editing the SQL dump is cumbersome (it is over 500MB in size), the most elegant alternative was to do it with a script.</div><div><br /></div><div>The following is an awk script which will extract the COPY instructions for a given table from a postgres SQL dump:</div><div><br /></div><script src="https://gist.github.com/625dbd6d5c99ee551922.js"></script> <noscript><pre><code><br />File: copy_extract.awk<br />----------------------<br /><br />BEGIN {start=0}<br />/^COPY &quot;/ { if(index($0,TBL)!=0) { start=1; } } # start at the COPY block for our table<br />// {if(start==1) print $0;} # print every line while inside the block<br />/\\\./ {start=0;} # stop after the terminating \.<br /><br /></code></pre></noscript><br /><div><br /></div>Usage:<br /><pre>awk -f copy_extract.awk -v TBL=TABLENAME pgdump/database_dump.sql</pre><br />One liner: <br /><pre>awk -f copy_extract.awk -v TBL=TEST pgdump/db.sql | iconv -f latin1 -t utf8 | psql db</pre>Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-78934881869155365622014-06-11T15:19:00.002+02:002014-06-11T15:19:52.682+02:00Ehcache: deploy multiple versions of a Grails app (fix javax.management.InstanceAlreadyExistsException)When a Grails application makes use of the <a href="http://grails.org/plugin/cache-ehcache">Ehcache cache</a> plugin in its default configuration it can be impossible to perform <a href="http://unicolet.blogspot.it/2011/06/howto-parallel-deployment-on-tomcat-7.html">deploys of multiple versions</a> of the app, even though the container might support it.<br /><div>The same plugin (in its default configuration) also breaks deploying multiple different Grails apps on the same container.</div><div><br /></div><div>The problem is in the way the plugin generates the name for the cache (which will then be used to register the cache jmx bean): the name is by default set to <i>grails-cache-ehcache</i>. When a second application or another version of the same application is deployed, registration will fail because the name already exists. The exception message is the following (indented for clarity):</div><div><br /></div><pre>org.springframework.beans.factory.BeanCreationException:</pre><pre>Error creating bean with name 'ehCacheManagementService':</pre><pre>Invocation of init method failed;</pre><pre>nested exception is net.sf.ehcache.CacheException:</pre><pre>javax.management.InstanceAlreadyExistsException:<br />net.sf.ehcache:type=CacheManager,name=grails-cache-ehcache</pre><div><br /></div><div>The (undocumented) solution is easy to implement. 
Edit the Config.groovy file and add the following configuration bit:</div><div><br /></div><div><pre>grails.cache.config = {<br />&nbsp; provider {<br />&nbsp; &nbsp; name "ehcache-&lt;yourappname&gt;-"+(new Date().format("yyyyMMddHHmmss"))<br />&nbsp; }<br />}<br /></pre><div><br /></div><div>If you are using the ehcache.xml file instead it might be more difficult to randomize the cache name, but it could be done during the build.<br /><br />Tested on Grails 2.1.5 and Tomcat 7.</div></div>Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-5430766289218991682014-02-07T00:51:00.001+01:002014-02-07T00:51:22.326+01:00Create an OpenLayers map programmaticallySometimes it is useful to abstract away the repetitive layer creation code with a configuration-based approach.<br /><br />For example consider this very simple map taken from the OpenLayers <a href="http://dev.openlayers.org/releases/OpenLayers-2.13.1/examples/bing.html">examples</a>:<br /><br /><script src="https://gist.github.com/unicolet/8854173.js"></script> How could we avoid repeatedly invoking the layer constructor and instead provide a framework that allows us to instantiate any layer with just configuration? The solution is quite simple.<br /><br /><a name='more'></a>First of all we need to define our configuration structure, which is straightforward to do in JavaScript:<br /><br /><script src="https://gist.github.com/unicolet/8854292.js"></script> Note that the constructor, which we call provider, has not yet been dereferenced to the actual function. This is on purpose because in this way we can load the configuration independently of OpenLayers. This can be useful especially when trying to optimize page loading time. <br /><br />Now all we need is some basic glue to make it all work: basically, a builder function and a for loop over the configuration array.<br /><br /><script src="https://gist.github.com/unicolet/8854552.js"></script>Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-57995026568852932752013-12-31T10:28:00.000+01:002014-01-03T13:28:51.097+01:00Book review: Sproutcore Web Application Development<b>TL;DR:</b>&nbsp;Sproutcore is a <i>huge</i> framework, and this book will save you a lot of time (and headaches). Buy it.<br /><b>Disclaimer</b>: this is a review of a free copy that Packt kindly sent me.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://goo.gl/5UZxiP" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="SproutCore Web Application Development cover" border="0" src="http://dgdsbygo8mp3h.cloudfront.net/sites/default/files/imagecache/productview_larger/7706OS.jpg" title="SproutCore Web Application Development" /></a></div><br /><b>Win a free copy of this book, scroll down to know how to participate!</b><br /><br />If only I had this book 3 years ago!<br /><br />3 years ago I started developing a Sproutcore app as a learning experience and, in all honesty, the path has been rough. Sproutcore is a <b>massive</b>&nbsp;framework with lots of features: some are well documented, some mentioned casually in the guides, some others...well you don't know they are there until you start reading the code or chat with one of the more knowledgeable devs in IRC.<br /><a name='more'></a>Documentation has always been an issue for Sproutcore but now, thankfully, there is a book. 
If I had had this book 3 years ago I would have gotten up to speed much quicker and the code base would have been (much, in fact) cleaner (and smaller!).<br /><br /><a href="http://goo.gl/5UZxiP">Sproutcore Web Application Development</a> covers all the most important areas of Sproutcore: installation, concepts and getting started, key-value-observing, the model, view and the controller/states objects, styling, domain objects modeling and using a data source to fetch them from a server, and, last but not least, deployment.<br />Sometimes Tyler will highlight a specific feature when it can be generally useful. For instance in Chapter 2 there is a whole section dedicated to <i>mixins</i> and how they can improve code sharing and reduce duplication when classical object inheritance cannot be used.<br />In other cases he provides guidance on avoiding common pitfalls, as in the chapter on <i>statecharts</i>.<br /><br />Code fragments are always clear, concise and generally well-written. I have not tried following through an example but I am quite sure it would work on the first or second try. Tyler is a rather gifted coder and I suggest you check out his github account for more Sproutcore resources.<br /><br />One thing I would like to suggest is that in the future the source for example apps be made available through github or another public repository for easier checkout. As of now the code for the <i>Contacts</i> app is only available from the <a href="http://goo.gl/5UZxiP">PacktPub</a> website as a ZIP archive.<br /><br />The book does not cover: <a href="http://docs.sproutcore.com/symbols/SC.routes.html">routes</a>, <a href="http://docs.sproutcore.com/symbols/SC.Page.html#doc=SC&amp;method=.outlet&amp;src=false">outlets</a>, <a href="http://guides.sproutcore.com/theming_app.html">theming</a>&nbsp;and some lesser used views or, to avoid unnecessary duplication, topics already documented in the <a href="http://guides.sproutcore.com/">guides</a>.<br /><br />Straight from the source: the author is the current project lead, the reviewers are or were prominent members of the community.<br /><br />Absolutely recommended for beginners and heartily recommended even for medium-advanced users (as I would define myself).<br /><br /><h3 style="text-align: center;">Win a free ecopy of the book!</h3><br />Readers would be pleased to know that&nbsp;<a href="http://www.packtpub.com/">Packt Publishing</a>&nbsp;graciously offered to organize a Giveaway of the <a href="http://goo.gl/5UZxiP">SproutCore Web Application Development</a> book and three lucky winners stand a chance to win a free ecopy. 
Keep reading to find out how you can be one of the Lucky Winners.<br /><br /><i>Book Overview:</i><br />• Use SproutCore’s object model to organize code into classes, subclasses, and mixins<br />• Observe and bind properties across the code for efficient updates and error-free consistency<br />• Structure code and separate responsibilities using client-side MVC<br />• Define and build the user interface of extremely complex applications using SproutCore’s view library<br />• Interact with remote data sources and model and store data in the client for immediate use<br />• Connect an application together without messy, bug-prone controller code using SproutCore’s statechart library<br />• Combine all of these skills in a repeatable process to create production-ready software<br />• Test and deploy SproutCore applications<br /><br /><i>How to Enter?</i><br /><br />All you need to do is head on over to the book page (<a href="http://goo.gl/5UZxiP">SproutCore Web Application Development</a>) and look through the product description of the book and drop a line via the comments below this post to let us know what interests you the most about this book. It’s that simple.<br /><br />Winners from the U.S. and Europe can either choose a physical copy of the book or the eBook. Users from other locales are limited to the eBook only.<br /><br /><i>Deadline</i><br /><br />The contest will close on Jan, 7 2014 PT. Winners will be contacted by email, so be sure to use your real disqus account when you comment!Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-88210505485877664102013-10-12T19:44:00.001+02:002013-10-12T19:46:08.244+02:00Easy animations with Sproutcore 1.10The release of <a href="http://blog.sproutcore.com/sproutcore-1-10-0-release/">Sproutcore 1.10</a>&nbsp;marks an important step in the life of this very popular framework. Lots of new features make developing applications on the Sproutcore framework even easier and fun.<br /><br />One improvement that I am sure will catch your eye (pun intended) is&nbsp;<a href="http://blog.sproutcore.com/dispatches-from-the-edge-automatic-transitions-and-sc-view-optimizations/">view animations</a>. Coding view animations was rather easy also on previous versions, but with 1.10 animations are now first class citizens bolted into the core rendering subsystem.<br />For an example of what is available out of the box see this&nbsp;<a href="http://showcase.sproutcore.com/#demos/Transition%20Plugins">demo</a>.<br /><br />So how would you use this goodness in an actual Sproutcore application? And how much code would it take?<br /><br />As an example I have put together a very basic Sproutcore app (<a href="https://github.com/unicolet/sc10">source</a>, <a href="https://sc10-c9-unicolet.c9.io/pony/">demo</a>) which has two states: an authentication form and a main screen. Logging in transitions the app from the login form to the main screen and logging out returns the app to the login screen. Pretty simple.<br /><br /><a name='more'></a><br /><br />By default the transition is immediate: the HTML elements are removed from the DOM and those representing the next state are appended in their place. 
Instead we want the transition to be animated with the elements of each screen sliding in and out with an effect similar to those on OS X and iOS.<br /><br />Thanks to the scaffolding introduced in 1.10 it just so happens that very little coding is required.<br /><br />First we specify the animations that we want on each view with <a href="https://github.com/unicolet/sc10/blob/master/pony/apps/pony/resources/main_page.js#L21">4 lines of code</a> like the following:<br /><br /><script src="https://gist.github.com/6950020.js"></script> <noscript><pre><code><br />File: views.js<br />--------------<br /><br />// define the transition duration globally<br />Pony.transitionSpeed=0.5;<br /><br />// [more code here] //<br /><br />// Automatic transitions, courtesy of SC 1.10<br />transitionIn: SC.View.SLIDE_IN,<br />transitionInOptions: { direction: &#39;down&#39;, duration: Pony.transitionSpeed, delay: Pony.transitionSpeed },<br /><br />transitionOut: SC.View.SLIDE_OUT,<br />transitionOutOptions: { direction: &#39;up&#39;, duration: Pony.transitionSpeed, delay: Pony.transitionSpeed },<br /></code></pre></noscript><br />At this point views are animated on append, but not on removal (except for the toolbar since it has a high zIndex). The animation on remove is not visible simply because the new view is appended over the one being animated and therefore hiding it.<br /><br />We need to delay the append step for the time necessary to the animation to complete, which we do with the&nbsp;<a href="https://github.com/unicolet/sc10/blob/master/pony/apps/pony/states/statechart.js#L8">following code</a>:<br /><br /><script src="https://gist.github.com/6950049.js"></script> <noscript><pre><code><br />File: state.js<br />--------------<br /><br />this.invokeLater(function(){<br /> Pony.getPath(&#39;mainPage.mainPane&#39;).append();<br />}, Pony.transitionSpeed*1000);<br /></code></pre></noscript><br />In total: 4 lines of code for each view we want to animate plus 3 for each state, grandtotal: 16+6=22 lines of code! (without indentation for readability it would have been just 18).<br /><br />Not bad, huh?Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-30355703995083677162013-08-29T10:26:00.003+02:002013-08-29T10:26:27.541+02:00Manage Windows printer event log settings from command line (i.e. 
GPO scripts)<a href="http://res2.windows.microsoft.com/resbox/en/windows/main/eb4f0171-7cb7-428a-afcc-d93a6b84525c_33.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://res2.windows.microsoft.com/resbox/en/windows/main/eb4f0171-7cb7-428a-afcc-d93a6b84525c_33.png" /></a>Just a quick note to self that to enable/disable/query event log registration from the command line on Windows releases newer than XP and Server 2003 you can use the <a href="http://technet.microsoft.com/en-us/library/cc732848.aspx">wevtutil</a> tool.<br /><br />For example, to enable logging of print requests on Windows 7 for auditing purposes:<br /><br /><pre>wevtutil sl Microsoft-Windows-PrintService/Operational /e:true<br /></pre><br />The equivalent for the above on Windows XP is the following:<br /><br /><pre>reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Print\Providers /v EventLog /t REG_DWORD /d 7 /f<br />net stop spooler<br />net start spooler<br /></pre>Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-82134046829588419832013-07-21T17:25:00.002+02:002013-07-21T17:35:09.966+02:00Developing Sproutcore apps on c9.io<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-KdL6Xt3NGHE/Uev6LASQk0I/AAAAAAAAAX8/p4hEk2RjT0s/s1600/Schermata+2013-07-21+a+11.50.18.png" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="164" src="http://3.bp.blogspot.com/-KdL6Xt3NGHE/Uev6LASQk0I/AAAAAAAAAX8/p4hEk2RjT0s/s200/Schermata+2013-07-21+a+11.50.18.png" width="200" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The Cloud9 online IDE running on Chrome/Mac.</td></tr></tbody></table>Today I was looking for a way to quickly edit a <a href="https://github.com/sproutcore/sproutcore/pull/1013">pull request</a> to the <a href="http://sproutcore.com/">Sproutcore</a> project without booting up my laptop, but using the MacMini in the living room instead, which is always on, being our main entertainment system.<br /><br />Turns out it's quite easy if you do not mind signing in to another online service: the&nbsp;<a href="https://c9.io/">Cloud9 online IDE</a>.<br /><br /><a name='more'></a>Cloud9 can sign you in with your GitHub credentials, so I signed in, selected <a href="https://github.com/unicolet/sproutcore">my sproutcore fork</a> from the project list, cloned it and, presto, I was ready to do some programming.<br /><br />In addition to loading a complete web-based IDE,&nbsp;Cloud9&nbsp;also boots up a virtual machine from RedHat's PAAS <a href="https://www.openshift.com/">OpenShift</a> to host a very cool web-based command line. The CLI can be used to run git commands but can also be used to run (almost) any command you like.<br /><br />The virtual machines for free accounts are automatically suspended when not used for more than 15 days, which means that the next time you sign in you will still find your environment, but you will also have to wait a little while the VM wakes up again. Paid accounts' VMs are never suspended and are always running.<br /><br />Anyways, going back to my pull request, I quickly added a unit test for the new behavior. 
I also wanted to try and run the unit test, but as you might know, to run any Sproutcore app in development you first have to launch their Ruby-based server which takes care of assembling the app, delivering it to the browser and also proxying requests to any remote backends.<br /><br />To run the server, which is called <i>sc-server</i> btw, you first have to install the ruby gem called <a href="http://rubygems.org/gems/sproutcore">sproutcore</a>.<br />So on the command line I typed the following commands:<br /><br /><pre class="brush: bash"># ruby -v<br />ruby 1.9.3p448 (2013-06-27) [x86_64-linux] &nbsp;<br /></pre><br />to check the ruby version. Sproutcore requires 1.9.x, so we are good to go. Then simply:<br /><br /><pre class="brush: bash">gem install sproutcore<br />sc-server&nbsp;--host $IP --port $PORT --allow-from-ips='*.*.*.*'<br /></pre><br />to install the gem and its dependencies and then run the server.<br />The IP and PORT environment variables are set for you by C9 especially for the purpose of running network daemons (like a database, a node server, etc) that can be accessed externally.<br /><br />After sc-server has started, to access the Sproutcore server you simply have to point your browser to:<br /><br />http://<b>workspace</b>.<b>username</b>.c9.io/<br /><br />which in my case was:<br /><br />http://sproutcore.unicolet.c9.io/<br /><br />Happy hacking!Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.comtag:blogger.com,1999:blog-9627576.post-20767932649297117002013-07-11T22:44:00.005+02:002013-10-03T17:04:29.675+02:00Book Review: Instant OpenNMS Starter<div class="separator" style="clear: both; text-align: center;"><a href="http://dgdsbygo8mp3h.cloudfront.net/sites/default/files/imagecache/productview_larger/5763OT.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://dgdsbygo8mp3h.cloudfront.net/sites/default/files/imagecache/productview_larger/5763OT.jpg" /></a></div><div><i>Disclaimer: Packt kindly sent me a free copy for review.</i><br /><br /><i>TL;DR: Rating 4/5. 
Recommended for beginners and intermediate users.</i></div><br /><div dir="ltr">The <a href="http://link.packtpub.com/rQO8Ro">book</a> itself is short, but packed with information. A fast reader with some experience with OpenNMS should be able to finish it in 4 to 6 hours. Beginners will probably want to follow the pointers to the online documentation, check the configuration files and possibly experiment, so they should allocate more time.</div><br /><div dir="ltr">Before publication the book was reviewed by Jeff Gehlbach. Anyone who has been involved with OpenNMS for some time knows him, as he is one of the many brilliant minds working for the OpenNMS company, the commercial entity which develops and supports OpenNMS. Surely his involvement serves as a kind of seal of quality for the book. I for one was surprised by the clarity with which even the most complex aspects of OpenNMS were presented in such a short text.</div><br /><div dir="ltr"><a href="http://link.packtpub.com/rQO8Ro">Instant OpenNMS Starter</a> is divided into three main parts: installation, quick start and an advanced section that the book calls ‘the top 5 features’. 
The final section is a reference of sites and humans with more information on OpenNMS.</div><br /><div dir="ltr">The author has been careful to link to the relevant sections of the online wiki when he felt the wiki content was adequate, without leaving the book devoid of practical information. For instance, in the installation section he describes a more secure way of installing OpenNMS than the one in the online user guide, and he does so by citing only the extra steps and leaving the online documentation to cover the rest.</div><br /><div dir="ltr">The quick start section is useful for those in a hurry to just have something monitored with OpenNMS and needing a pointer on what all those links in the web UI do.</div><br /><div dir="ltr">The advanced section is where you will probably spend most of your time, as it describes the most interesting features of OpenNMS, which are:</div><ol><li>service assurance through polling</li><li>data collection through collectors</li><li>thresholds and notifications</li><li>events, alarms and automations</li><li>reports</li></ol><br /><div dir="ltr">IMHO one glaring omission in this list is the <a href="http://www.opennms.org/w/images/c/ca/ProvisioningUsersGuide.pdf">Provisioning system</a>, which was introduced with OpenNMS 1.8 and is a key feature because it covers a critical aspect: how nodes are added into OpenNMS for monitoring. 
I reread the book twice hoping that perhaps it was a mistake on my part, but I could not find a single reference to it.</div><br /><div dir="ltr">The book covers each of the five areas with enough depth to give a dedicated beginner useful pointers and background on how to implement the most advanced features of OpenNMS. The author again intelligently uses links to the online wiki to extend the text.<br />Only the section on reports felt a little thin. In defense of the author one could say that the reports area is so complex that it would have quickly grown out of hand for this kind of book. Perhaps in a second edition he should consider expanding it to at least mention the possibility of creating Jasper reports from collected data.</div><br /><div dir="ltr"><a href="http://link.packtpub.com/rQO8Ro">Instant OpenNMS Starter</a> is clearly aimed at, and I recommend it for, people starting with OpenNMS, evaluating it, or who might have inherited a working installation and now have to maintain it. 
Users seeking to master one of the 5 areas listed above should certainly consider buying it when the online bits and pieces feel insufficient or too sparse.</div><div dir="ltr"><br />By the title it should come as no surprise that advanced users are not likely to find any new or useful information at all, but, again given the price and the short text, it could still be used as a kind of self-check.<br /><br /><b>Update Oct/2/2013</b><br /><br />There was a brief exchange of emails on the opennms-discuss mailing list with the author; I think it gives useful context to some of the items in my review. I reproduce it here in full (<a href="http://sourceforge.net/mailarchive/message.php?msg_id=31443933">link</a>):<br /><br /><i>Regarding Provisiond, I agree that it needs to be there. When I wrote the book I had to follow very specific guidelines from the publisher. In the top 5 features section I had to decide how to organize it. It was either going to be Capsd or the new and improved Provisiond as one of the 5. When I wrote it, Capsd was still enabled by default and I thought it was easier to get started with. If I would redo it now, I would change the section to Provisiond with a simple mention of how it evolved. In fact, I am preparing this very section now and will make it available on my site. Would be nice to have a revised edition though, I'll check with the publisher. Regarding reports I think it would be nice to have a similar short book of its own on the subject, going through OpenNMS' default reporting capabilities, more advanced custom JasperReports and maybe some modern ajax report dashboards built on top of the nice RESTful API (something I've been wanting to explore). 
Just thoughts...</i><br /><br /></div>Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-42032543594832236722013-05-20T12:13:00.001+02:002013-05-20T12:13:46.365+02:00Monitoring Oracle tablespace quota with OpenNMS<h3>Going beyond the normal application availability check</h3>One interesting use of the&nbsp;<a href="http://www.opennms.org/wiki/JDBC_Data_Collection_Tutorial">OpenNMS</a> JDBC poller is for extracting data from the Oracle administrative database tables, for example tracking tablespace quota usage to detect quota exhaustion and sudden usage peaks, and to graph usage over time.<br /><br /><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; text-align: right;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-S3ksQw40gnc/UZn2icwf5oI/AAAAAAAAAVc/pcQjvte4mVA/s1600/quota.png" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="112" src="http://2.bp.blogspot.com/-S3ksQw40gnc/UZn2icwf5oI/AAAAAAAAAVc/pcQjvte4mVA/s320/quota.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Graph of quota usage for user [redacted] on tablespace DAT.<br />Notice the cleaning operation running at 3.30 AM</td></tr></tbody></table>Tablespace quotas are a feature of the Oracle database that allows the DBA to set a limit on the amount of storage that any given user can consume on a specific tablespace. This lets the DBA share tablespaces across users while still keeping each user within predefined usage boundaries. When users exhaust their quota they can no longer store data, but they can still delete it, thus allowing self-recovery.<br /><br /><a name='more'></a><br /><br />Configuring OpenNMS to monitor quota usage is rather simple. First of all make sure that you have the <a href="http://www.oracle.com/technetwork/database/features/jdbc/index-091264.html">Oracle JDBC driver</a> in <i>$OPENNMS_HOME/lib</i>. If not, download and copy the jar file into that directory. There is no need to restart OpenNMS now, as we will restart it later anyway.<br /><br />Now cd into $OPENNMS_HOME/etc, make a backup copy of the configuration files (not necessary if you already use version control)&nbsp;and then make the changes described below. The code is available at this <a href="https://gist.github.com/unicolet/5600678">location</a>. Note that in these files I report only the relevant fragments. It should be straightforward to merge these fragments in the right context of your files.<br /><br />Things that you will have to change and adapt to your environment:<br /><ol><li><b>user</b> and <b>password</b> of the Oracle user used to connect to the Oracle database and query the <b>dba_ts_quotas</b> table. 
Your DBA should also take care of granting the appropriate rights to this user (a minimal sketch is at the end of this post)</li><li><b>hostname</b> and <b>service name</b> for the JDBC URL. This is repeated in two different places, so make sure to change them all. If you don't know the right server and service names consult with your DBA. It might get tricky especially with Oracle RAC configurations. Usually I first try with &nbsp;<b>OPENNMS_JDBC_HOSTNAME</b>, then, if it fails, I fall back to specifying a hostname. <u><b>Caution</b></u>: if you specify a hostname, say <i>srvora1</i>, AND assign this service to another host, say <i>srvora2</i>, you will NOT be monitoring quotas on <i>srvora2</i>, but rather on <i>srvora1</i>!</li><li>since 1.10 the <b>datacollection-config.xml</b> fragment can be modularized in the <i>datacollection</i> directory as described <a href="http://www.opennms.org/wiki/Data_Collection_Configuration_How-To#Modular_Configuration">here</a></li></ol><br /><script src="https://gist.github.com/5600678.js"></script> <noscript><pre><br />File: capsd-configuration.xml<br />-----------------------------<br /><br />&lt;protocol-plugin protocol=&quot;OracleMonitoring&quot; class-name=&quot;org.opennms.netmgt.capsd.plugins.JDBCPlugin&quot; scan=&quot;on&quot;&gt;<br /> &lt;property key=&quot;driver&quot; value=&quot;oracle.jdbc.driver.OracleDriver&quot;/&gt;<br /> &lt;property key=&quot;user&quot; value=&quot;opennms&quot;/&gt;<br /> &lt;property key=&quot;password&quot; value=&quot;opennms&quot;/&gt;<br /> &lt;property key=&quot;url&quot; value=&quot;jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oracle)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=orcl)))&quot;/&gt;<br /> &lt;property key=&quot;retry&quot; value=&quot;1&quot;/&gt;<br />&lt;/protocol-plugin&gt;<br /><br /><br />File: collectd-configuration.xml<br />--------------------------------<br /><br />&lt;!-- add this under the default package (example1) --&gt;<br /> &lt;service name=&quot;OracleMonitoring&quot; interval=&quot;600000&quot; user-defined=&quot;false&quot; status=&quot;on&quot;&gt;<br /> &lt;parameter key=&quot;collection&quot; value=&quot;OracleMonitoring&quot;/&gt;<br /> &lt;parameter key=&quot;thresholding-enabled&quot; value=&quot;true&quot;/&gt;<br /> &lt;parameter key=&quot;driver&quot; value=&quot;oracle.jdbc.driver.OracleDriver&quot;/&gt;<br /> &lt;parameter key=&quot;user&quot; value=&quot;opennms&quot;/&gt;<br /> &lt;parameter key=&quot;password&quot; value=&quot;opennms&quot;/&gt;<br /> &lt;parameter key=&quot;url&quot; value=&quot;jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oracle)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=orcl)))&quot;/&gt;<br /> &lt;/service&gt;<br /> <br />&lt;!-- do not forget this at the bottom of the file --&gt;<br />&lt;collector service=&quot;OracleMonitoring&quot; class-name=&quot;org.opennms.netmgt.collectd.JdbcCollector&quot;/&gt;<br /><br />File: datacollection-config.xml<br />-------------------------------<br /><br />&lt;resourceType name=&quot;oracleQuota&quot; label=&quot;Oracle Quota&quot;<br /> resourceLabel=&quot;Account ${UserName} (index:${index})&quot;&gt;<br /> &lt;persistenceSelectorStrategy<br /> class=&quot;org.opennms.netmgt.collectd.PersistAllSelectorStrategy&quot;/&gt;<br /> &lt;storageStrategy<br /> class=&quot;org.opennms.netmgt.dao.support.IndexStorageStrategy&quot;/&gt;<br />&lt;/resourceType&gt;<br /><br /><br />File: jdbc-datacollection-config.xml<br />------------------------------------<br /><br />&lt;?xml version=&quot;1.0&quot;?&gt;<br
/>&lt;jdbc-datacollection-config rrdRepository=&quot;/opt/opennms/share/rrd/snmp/&quot; xmlns=&quot;http://xmlns.opennms.org/xsd/config/jdbc-datacollection&quot;&gt;<br />&lt;!-- mysql data collection removed for brevity --&gt;<br /> &lt;jdbc-collection name=&quot;OracleMonitoring&quot;&gt;<br /> &lt;rrd step=&quot;300&quot;&gt;<br /> &lt;rra&gt;RRA:AVERAGE:0.5:1:2016&lt;/rra&gt;<br /> &lt;rra&gt;RRA:AVERAGE:0.5:12:1488&lt;/rra&gt;<br /> &lt;rra&gt;RRA:AVERAGE:0.5:288:366&lt;/rra&gt;<br /> &lt;rra&gt;RRA:MAX:0.5:288:366&lt;/rra&gt;<br /> &lt;rra&gt;RRA:MIN:0.5:288:366&lt;/rra&gt;<br /> &lt;/rrd&gt;<br /> &lt;queries&gt;<br /> &lt;query name=&quot;oracleQuota&quot; ifType=&quot;ignore&quot; instance-column=&quot;USERNAME&quot; resourceType=&quot;oracleQuota&quot;&gt;<br /> &lt;statement&gt;<br /> &lt;queryString&gt;<br />select<br /> username||&#39;.&#39;||tablespace_name as username,<br /> bytes,<br /> max_bytes<br />from<br /> dba_ts_quotas<br />where<br /> max_bytes &gt; 0<br /> &lt;/queryString&gt;<br /> &lt;/statement&gt;<br /> &lt;columns&gt;<br /> &lt;column name=&quot;USERNAME&quot; alias=&quot;UserName&quot; data-source-name=&quot;UserName&quot; type=&quot;string&quot;/&gt;<br /> &lt;column name=&quot;BYTES&quot; alias=&quot;BytesUsed&quot; data-source-name=&quot;BytesUsed&quot; type=&quot;gauge&quot;/&gt;<br /> &lt;column name=&quot;MAX_BYTES&quot; alias=&quot;MaxBytes&quot; data-source-name=&quot;MaxBytes&quot; type=&quot;gauge&quot;/&gt;<br /> &lt;/columns&gt;<br /> &lt;/query&gt;<br /> &lt;/queries&gt;<br /> &lt;/jdbc-collection&gt;<br /><br />&lt;/jdbc-datacollection-config&gt;<br /><br /><br />File: snmp-graph.properties<br />---------------------------<br /><br /># do not forget to add this graph to the report list (reports property) at the beginning of the file<br /><br />report.oracleQuota.name=Oracle Tablespace Quotas<br />report.oracleQuota.columns=BytesUsed,MaxBytes<br />report.oracleQuota.propertiesValues=UserName<br />report.oracleQuota.type=oracleQuota<br />report.oracleQuota.command=--title=&quot;Oracle Tablespace Quota {UserName}&quot; \<br /> --vertical-label=&quot;Usage&quot; \<br /> DEF:a={rrd1}:BytesUsed:AVERAGE \<br /> DEF:b={rrd2}:MaxBytes:AVERAGE \<br /> AREA:b#00ddcc:&quot; max&quot; \<br /> AREA:a#0000cc:&quot; used&quot; \<br /> GPRINT:a:AVERAGE:&quot;Avg: %8.2lf %s&quot; \<br /> GPRINT:b:MIN:&quot;Min: %8.2lf %s&quot; \<br /> GPRINT:b:MAX:&quot;Max: %8.2lf %s\\n&quot; <br /><br /></pre></noscript><br /><br />After the required changes are in place restart OpenNMS and then assign the OracleMonitoring service to the right node. Please note that due to the way the JDBC URL is constructed it is not possible to assign this service to more than one database instance, unless they all have the same <b>service name</b>.<br />To monitor database instances with different service names one must duplicate the whole configuration, with perhaps the exception of graphs.<br /><br />The graphs will be shown only for Oracle users that have quotas set (i.e. MaxBytes &gt; 0) on at least one tablespace. Users without quotas will not be shown in the graphs list for the host.<br /><br /><h3>Thresholds and notifications</h3>To enable notifications we must first establish thresholds. For that see the last two code fragments in the gist above. I didn't want to create custom UEIs, so I didn't specify any in the UEI fields.<br /><br />Note that I have specified the&nbsp;<i>UserName</i> in the datasource label field. This allows us to show useful information in the notification message, such as the tablespace and user that triggered or rearmed the threshold, by inserting the&nbsp;<b>%parm[label]%</b> tag in the subject and/or message body.
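<br /><br />Finally, a word on item 1 of the list above: what the monitoring account needs on the Oracle side. The following is only a hedged sketch using the example credentials from this post (opennms/opennms); your DBA may well prefer a different account or different privileges:<br /><br /><pre class="brush: bash">sqlplus / as sysdba &lt;&lt;'EOF'<br />-- create the account OpenNMS connects with (example name and password, change them)<br />CREATE USER opennms IDENTIFIED BY opennms;<br />GRANT CREATE SESSION TO opennms;<br />-- let it read the quota view that the collection queries above rely on<br />GRANT SELECT ON sys.dba_ts_quotas TO opennms;<br />EOF<br /></pre>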
<br /><br /><br />Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-47136681167266408962013-03-26T17:19:00.001+01:002013-11-27T08:36:36.979+01:00A not so short guide to ZFS on Linux<b>Updated Oct 16 2013: shadow copies, memory settings and links for further learning.</b><br /><b>Updated Nov 15 2013: shadow copies example, samba tuning.</b><br /><br />Unless you've been living under a rock you will by now have heard many stories about how awesome <a href="http://en.wikipedia.org/wiki/ZFS">ZFS</a> is and the many ways it can help with <a href="http://sysadvent.blogspot.it/2012/12/day-7-bacon-preservation-with-zfs.html">saving your bacon</a>.<br /><br />The downside is that ZFS is not available (natively) for Linux because the <a href="http://en.wikipedia.org/wiki/Common_Development_and_Distribution_License">CDDL</a> license under which it is released is incompatible with the GPL. Assuming you are not interested in converting to one of the many Illumos distributions or FreeBSD, this guide might serve you as a starting point if you are attracted by ZFS features but are reluctant to try it out on production systems.<br /><br />Basically in this post I note down both the thought process and the actual commands for implementing a fileserver for a small office. The fileserver will run as a virtual machine in a large ESXi host and use ZFS as the filesystem for shared data.<br /><br /><a name='more'></a><h3>Scenario</h3><div>A small office consisting of under 10 Windows clients. The reasons for choosing ZFS over other filesystems (already tried and tested on Linux) are:</div><div><ol><li>snapshots: we want to take daily snapshots so that users can easily and autonomously recover previous or deleted versions of documents/files</li><li>compression: well...to save space and perhaps improve i/o. A quick test with lz4 showed a compression ratio between 2.3X and 1.27X with no performance loss</li></ol><div>Deduplication will not be activated because it is heavy, the data that will sit on the disks is highly varied, and IMHO it is simply not stable enough yet for first-tier storage.<br /><br /></div><h3>The plan</h3></div><div>Create a new VM on the ESXi host, to which we need to assign 4 CPUs and 8GB of RAM (as ZFS is CPU and RAM hungry). 
The VM will use a small disk for the OS and then a number of larger disks for the ZFS pool.</div><div><br /></div><div><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; text-align: right;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-_nuXCOKlHlA/UVHK7-d9QKI/AAAAAAAAAUM/Up59ym07y_o/s1600/the_a_team_group_1_1024.jpg" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="200" src="http://1.bp.blogspot.com/-_nuXCOKlHlA/UVHK7-d9QKI/AAAAAAAAAUM/Up59ym07y_o/s200/the_a_team_group_1_1024.jpg" width="173" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">This one knows about plans</td></tr></tbody></table>The OS is CentOS x64; ZFS will be built from source rpms from <a href="http://zfsonlinux.org/">zfsonlinux.org</a>.</div><div><br /><h4>Plan B</h4></div><div>Very important: as an emergency recovery plan we also want to make sure that we can boot the VM from an Illumos live CD, mount the zfs pool and access the data. After lurking on the <a href="https://groups.google.com/a/zfsonlinux.org/forum/#!forum/zfs-discuss">zfsonlinux Google Group</a> for a while I can tell you that mounting your pool on an Illumos derivative to fix errors or just to regain access to data is a suggestion I have seen far too often to ignore.<br /><br /><h3>1. OS and ZFS installation</h3></div><div>Install a base CentOS server. I started with a small 8GB disk for the root, boot and swap volumes.<br /><br /><b>Tip n.1:</b>&nbsp;do not use the default VMWare paravirtual SCSI adapter because it is not supported by Illumos/Solaris: opt for <i>LSI Logic Parallel</i> instead.<br /><br /><b>Tip n.2:</b> Since ZFS recommends referencing <b>disks by id</b> we will have to edit the VM and set an advanced option to enable that feature (VMWare does not support it by default). See the image below for directions or <a href="http://www.novell.com/support/kb/doc.php?id=7002966">follow this link</a>.<br /><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-pHH-z1-gnIc/UVG5TLnqF0I/AAAAAAAAAT8/HpWPocz82J0/s1600/zfs_disk_by_id.png" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="163" src="http://4.bp.blogspot.com/-pHH-z1-gnIc/UVG5TLnqF0I/AAAAAAAAAT8/HpWPocz82J0/s320/zfs_disk_by_id.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Enable disk id support in VMWare before creating zpools</td></tr></tbody></table><br />During the install I resized the swap partition to 2GB instead of the default 4GB (half the RAM).</div><div>After the installer is done remember to disable SELinux (edit <i>/etc/sysconfig/selinux</i>&nbsp;and set <i>SELINUX</i> to <i>disabled</i>). I also usually run <i>yum -y update &amp;&amp; reboot</i> to bring the system up to date and then move on with configuration.</div><div><br /></div><div>When the system comes back online install the VMware tools, then proceed with setting up ZFS.</div><div>Since we need to compile ZFS there are a number of packages to install. 
The following command should install all required dependencies in two shots:</div><div><br /></div><pre class="brush: bash">yum -y groupinstall "Development Tools"<br />yum -y install wget zlib-devel e2fsprogs-devel libuuid-devel libblkid-devel bc lsscsi mdadm parted mailx</pre><div><br /></div><div>The ZOL documentation only mentions the "Development Tools" dependency, but I found out that the others mentioned in the second command are required further into the build process. Mailx is not exactly a dependency for ZFS, but we will need it later to send periodic email reports on the ZFS pool status.</div><div><br /></div><div>To build ZFS we need to download the source rpms first. At the time of this writing 0.6.0 is the latest stable version. Remember to update the version numbers/urls!</div><div><br /></div><pre class="brush: bash">cd /root<br />mkdir zfs<br />cd zfs<br />wget http://archive.zfsonlinux.org/downloads/zfsonlinux/spl/spl-0.6.0-rc14.src.rpm<br />wget http://archive.zfsonlinux.org/downloads/zfsonlinux/spl/spl-modules-0.6.0-rc14.src.rpm<br />wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-0.6.0-rc14.src.rpm<br />wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-modules-0.6.0-rc14.src.rpm</pre><div><br /></div><div>We can now build the binary packages with the following commands, YMMV:</div><div><br /></div><pre class="brush: bash">rpmbuild --rebuild spl-0.6.0-rc14.src.rpm<br />rpm -ivh /root/rpmbuild/RPMS/x86_64/spl-0.6.0-rc14.el6.x86_64.rpm<br />rpmbuild --rebuild spl-modules-0.6.0-rc14.src.rpm<br />rpm -ivh /root/rpmbuild/RPMS/x86_64/spl-modules-*<br />rpmbuild --rebuild zfs-modules-0.6.0-rc14.src.rpm<br />rpmbuild --rebuild zfs-0.6.0-rc14.src.rpm<br />rpm -ivh /root/rpmbuild/RPMS/x86_64/zfs-*</pre><div><br /></div><div>If everything went well the zfs kernel module and utilities should now be installed. Let's try loading the kernel module:</div><pre class="brush: bash">modprobe zfs</pre><div>if it loads correctly we should also be able to run the zpool/zfs commands, like:</div><pre class="brush: bash">zpool list</pre><div><br /></div><div>which should report no available pools.</div><h4>Have ZFS load on boot</h4><div>As it is, ZFS will not load automatically on boot, which means that your data will not be available; the following script takes care of loading the ZFS module. Pools and filesystems will be automatically detected by the kernel module and mounted.</div><div><br /></div><pre class="brush: bash">cat &gt; /etc/sysconfig/modules/zfs.modules &lt;&lt;EOF<br />#!/bin/bash<br />if [ ! -e /sys/module/zfs ] ; then<br /> modprobe zfs;<br />fi<br />EOF<br />chmod +x /etc/sysconfig/modules/zfs.modules<br /></pre><div><br />Restart the server and verify that ZFS is correctly loaded with:<br /><pre class="brush: bash">lsmod | grep zfs</pre><br />If it's there then we can start creating our first pool.<br /><br /><h3>2. Creating a pool</h3></div><div>To create a pool we first need to add a disk to the VM. I chose to hot-add a thin-provisioned 100GB drive. 
To activate the drive we need to issue a rescan command to the SCSI controller:</div><div><br /></div><pre class="brush: bash">echo "- - -" &gt; /sys/class/scsi_host/host0/scan</pre><div><br /></div><div>the new disk should now be ready for use; inspect <i>dmesg</i> or list <i>/dev/disk/by-id</i> to confirm.<br />Supposing the disk id is <b>scsi-36000c2978b3f413efb817a086ccfd31b</b> the new pool can be created with the following command:<br /><br /><pre class="brush: bash">zpool create officedata1 scsi-36000c2978b3f413efb817a086ccfd31b</pre><br />when the command returns the pool will be automatically mounted under <b>/officedata1</b>, ready for use.<br />Let's now create the filesystem which we will share with Samba later on:<br /><pre class="brush: bash">zfs create -o casesensitivity=mixed -o compression=lz4 officedata1/data1</pre><br />I have enabled compression and also mixed case sensitivity support, which is needed to correctly support Windows clients. Important: casesensitivity can only be set at filesystem creation time and cannot be changed later.<br /><br />Install samba and configure it to share /officedata1/data1 as usual, then copy some data onto the share. You should be able to review compression stats with the following command:</div><pre class="brush: bash">[root@server ~]# zfs get all officedata1/data1 | grep compressratio<br />officedata1/data1 compressratio 1.28x -<br />officedata1/data1 refcompressratio 1.27x -<br /></pre><br />You can see that in my case compression yields a reasonable 27% saving in disk space. Not bad.<br /><br /><h3>3. Maintenance</h3><div>ZFS gurus suggest that zfs pools be periodically scrubbed to detect data corruption before it's too late.</div><div>Scrubbing can be performed with a cron job like the following:</div><div><br /></div><pre>&nbsp; 0 &nbsp;5 &nbsp;* &nbsp;* &nbsp;0 root /sbin/zpool scrub officedata1 &gt; /dev/null 2&gt;&amp;1</pre><div><br /></div><div>Note: the command returns immediately, and the scrub process continues in the background.<br />While we are at it we also want to receive a <a href="https://gist.github.com/unicolet/5098053">monthly report</a> on the pool status.<br /><br /><h3>4. When all else fails</h3></div><div>When for whatever reason you cannot mount your ZFS pool(s) on Linux and all else fails, just before recovering from backups one would often try with an Illumos build. In order to be ready for this situation download an unstable build of <a href="http://omnios.omniti.com/">OmniOS</a>, reboot your virtual machine from it and then make sure that you can access your zpools (import and export them when you're done). At this point I really hope you followed tip n.1 and used an LSI Logic adapter instead of the VMWare Paravirtual which is offered to you by default.</div><div><br /></div><div>This might sound paranoid, but booting and importing your pools from Solaris/Illumos is an often heard last-resort suggestion on the ZOL mailing list to regain access to otherwise lost data.</div><div><br /></div><h3>5. Daily (and possibly more frequent) snapshots</h3><div>ZFS without snapshots is just not cool. To enable automated snapshotting we will use the zfs-auto-snapshot script which can be found <a href="https://github.com/zfsonlinux/zfs-auto-snapshot">here</a>. 
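The machinery underneath is nothing exotic: snapshots are plain zfs commands, and it is worth trying them by hand once before automating anything. A quick sketch using the pool created earlier (the snapshot name and the recovered file name are made up for illustration):<br /><br /><pre class="brush: bash"># take a snapshot by hand and list what exists<br />zfs snapshot officedata1/data1@before-cleanup<br />zfs list -t snapshot<br /># every snapshot is browsable read-only under the hidden .zfs directory,<br /># so recovering a file (hypothetical name) is a simple copy<br />cp /officedata1/data1/.zfs/snapshot/before-cleanup/report.doc /officedata1/data1/</pre><br />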
To automate this, install/copy the script on your system and make it executable.<br />To start taking daily snapshots with a seven-day retention policy place a script in /etc/cron.daily with the following content (adjust the path to zfs-auto-snapshot.sh):<br /><br /><pre class="brush: bash">#!/bin/sh<br />exec /root/bin/zfs-auto-snapshot.sh --quiet --syslog --label=daily --keep=7 //<br /><br /></pre><div>Users should be able to access the snapshots directly through samba by manually typing the (normally hidden) path:<br /><pre>\\server\data1\.zfs\snapshot</pre>in Windows Explorer and autonomously retrieve previous file versions without bothering the sysadmin.<br /><br /><b>Update 16/10/2013:</b> Samba can be told to access ZFS snapshots and expose them as Shadow Copies. Microsoft clients (XP requires <a href="http://www.microsoft.com/en-us/download/details.aspx?id=16220">an add-on</a>, newer OSes support them natively) can then browse previous file versions directly from the properties tab of each file. Samba must be configured to use the <a href="http://www.samba.org/samba/docs/man/manpages/vfs_shadow_copy2.8.html">vfs_shadow_copy2 module</a>. This <a href="https://github.com/zfsonlinux/zfs/issues/626#issuecomment-9423892">comment</a> explains how.<br /><br /></div><div><h3>6. Memory issues</h3></div></div><div>ZFS does not use the Linux VM for caching, but instead implements its own on top of the SPL (the Solaris Porting Layer, the other kernel module that must be installed with ZFS). This explains the issues that some users experience with metadata-heavy operations, like rsync.</div><div>Depending on the number of files and their size you might never hit memory issues; however, to err on the safe side I applied the following configuration to my systems (both settings are sketched concretely just before the resources section below):</div><div><br /></div><div>set <b>vm.min_free_kbytes</b> to 512MB (on a 36GB RAM system)</div><div><b>limit ZFS ARC</b> usage by imposing a lower limit than the default (1/2 of physical RAM). This <a href="http://www.solaris-cookbook.com/linux/debian-ubuntu-centos-zfs-on-linux-zfs-limit-arc-cache/">link</a> provides pretty good instructions on how to do it.<br /><b>Last resort</b>: if all else fails schedule<br /><br /><pre>echo 1 &gt; /proc/sys/vm/drop_caches<br /></pre>to run regularly from crontab.<br /><br /><h3>7. Samba performance</h3>You might find that Samba performs poorly under ZFS on Linux, especially while browsing directories. Throughput is generally good, but browsing directories (even small trees) can occasionally stall Windows Explorer.<br />The following settings improve Samba (and ZFS) performance in general:<br /><br /><pre>zfs set xattr=sa tank/fish<br />zfs set atime=off tank/fish<br /></pre>The first one (<a href="http://www.nerdblog.com/2013/10/zfs-xattr-tuning-on-linux.html">source</a>) tells ZFS to store extended attributes in the inodes instead of a hidden folder which, rather surprisingly, is the default! The performance improvement from it should be immediately visible! The second one disables atime, which you should always do on any filesystem.<br /><br />Also apply the following modifications to&nbsp;<i>smb.conf</i>&nbsp;:<br /><br /><pre>socket options = IPTOS_LOWDELAY TCP_NODELAY<br />max xmit = 65536<br /></pre>For those interested,&nbsp;<a href="https://github.com/zfsonlinux/zfs/issues/1773">issue 1773</a>&nbsp;on github tracks Samba/ZoL performance problems.<br /><br /></div>
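<div>For reference, here is what the two memory settings from section 6 look like spelled out. This is only a sketch with example values: the ARC cap is in bytes and must be sized to your own RAM, and it takes effect after the zfs module is reloaded or the machine is rebooted.</div><div><br /></div><pre class="brush: bash"># /etc/modprobe.d/zfs.conf: cap the ARC below the 1/2-of-RAM default (4GiB shown, example value)<br />options zfs zfs_arc_max=4294967296<br /><br /># /etc/sysctl.conf: the 512MB free-memory floor mentioned in section 6<br />vm.min_free_kbytes=524288</pre><div><br /></div>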
<h3>8. Additional resources</h3><div>Links to resources that provide in-depth explanations of ZFS:</div><div><a href="https://pthree.org/category/zfs/">https://pthree.org/category/zfs/</a></div><div><a href="http://www.solaris-cookbook.com/linux/debian-ubuntu-centos-zfs-on-linux-zfs-limit-arc-cache/">http://www.solaris-cookbook.com/linux/debian-ubuntu-centos-zfs-on-linux-zfs-limit-arc-cache/</a></div><div><br /></div><div>Others:</div><div><a href="http://www.matisse.net/bitcalc/">http://www.matisse.net/bitcalc/</a>&nbsp;(to facilitate bit/bytes/kbytes conversions)</div>Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-31514188414691115552013-02-23T11:57:00.004+01:002013-02-24T10:17:14.227+01:00Development is fun again with nodejsBeing a longtime Java developer, back from when servlets were cool and Struts was making MVC popular among web devs, I always try to find new and more productive ways to deliver software within the Java ecosystem.<br /><br />Recently I turned to Grails and delivered several projects with it. When developing with Grails you can use the power and the expressiveness of Groovy to write compact, elegant, fluent and readable code.<br />The downside is that Grails is huge:<br /><br /><ol><a href="http://dgdsbygo8mp3h.cloudfront.net/sites/default/files/imagecache/productview_larger/7188OS_Node%20Cookbookcov.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="200" src="http://dgdsbygo8mp3h.cloudfront.net/sites/default/files/imagecache/productview_larger/7188OS_Node%20Cookbookcov.jpg" width="164" /></a><li>even the simplest Grails apps will weigh in the 40MB range</li><li>compiling, testing and building take a long time (minutes, actually) even in recent versions</li><li>it is (fairly) complex, but that I can understand because it does so much</li><li>it will consume a huge chunk of your app server memory when deployed</li></ol><div><div>I have been using Grails as the backend for&nbsp;<a href="http://unicolet.github.com/mappu/" onclick="_gaq.push(['_trackEvent', 'Link', 'Click', 'Mappu Home']);">Mappu</a>&nbsp;too, initially just because I wanted to bootstrap the project quickly and Grails is simply perfect for that. But as time passed I started to find Grails too heavy for a simple REST API. I am currently running the demo on the smallest Rackspace server and it's constantly swapping. It's not slow, but it could be better.</div></div><div><br /><a name='more'></a><br /></div><div>Then last year <a href="http://www.packtpub.com/">Packt Publishing</a> gave away a free ebook to all those who claimed it. I rushed and got the <a href="http://www.packtpub.com/node-to-guide-in-the-art-of-asynchronous-server-side-javascript-cookbook/book">Node Cookbook</a>. The book is well written and starts from the basics of node. Javascript knowledge is required. I read the book in small bites and finished it last month.</div><div><br /></div><div>Node, when combined with Express, is basically another MVC framework, only leaner than any I've seen in Java (well, maybe Struts in the early days was lean too). 
Also Node+Express gives you very low level access to the HTTP layer, which feels quite weird for Java devs as most Java frameworks put numerous layers of abstraction between the HTTP protocol and the application.</div><div><br /></div><div>Initially I resisted the idea of using Javascript on the server side and for some time I thought that maybe, instead of totally switching sides, I could just move from the Grails to the Scala camp, but then I saw the size of the average Scala/Play app and decided that it is not for me. Not anymore anyway.</div><div><br /></div><div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-V1gGYAraMZA/USiPllqO8TI/AAAAAAAAATo/VPagudTXueI/s1600/resizedimage4043-nodejs-logo.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-V1gGYAraMZA/USiPllqO8TI/AAAAAAAAATo/VPagudTXueI/s1600/resizedimage4043-nodejs-logo.png" /></a></div>So all this just to tell you that this week I started rewriting the Mappu API in Node+Express. My Javascript skills are in pretty good shape and it was easier than I thought.</div><div>The major differences I stumbled upon were:</div><div><ol><li>POSTs are handled differently&nbsp;than GETs&nbsp;by Express. Surprisingly (well, for me) POST params are stored in the <b>req.body</b> property, whereas GET query params end up in <b>req.query</b> (and route params in <b>req.params</b>). It took a couple of hours before I figured this out and I felt pretty dumb afterwards</li><li>there is no standard for almost anything: in Java the ecosystem has grown and stabilized (actually it shrunk) so picking, say, an&nbsp;authentication/authorization framework is a straightforward process. Node.js is quite the bazaar as there are hundreds of modules to choose from (all on github, btw) for pretty much anything. So now, when I look at my <a href="https://gist.github.com/unicolet/5014907" onclick="_gaq.push(['_trackEvent', 'Link', 'Click', 'Gist 5014907']);">package.json </a>file I think (and worry): how many of these modules are actually still going to be maintained in the next 6 months?</li><li>because of 2 I had to actually code an <a href="https://gist.github.com/unicolet/5014973" onclick="_gaq.push(['_trackEvent', 'Link', 'Click', 'Gist 5014973']);">authorization middleware</a>. I think I had not done that since 2004, but doing it in Javascript is a breeze and a couple of hours later it was done, complete with Mocha tests</li><li>asynchronous&nbsp;callbacks felt weird at first, but I got over it pretty quickly</li></ol><div>The (big) advantages over Grails are:</div></div><div><ol><li>speed: running all (integration+unit) tests takes 141 ms, and I have a lot more of them in Node+Express! In Grails it took minutes. I dare not do the math on wasted time.</li><li>flexibility: Javascript lets you <strike>mess with</strike>&nbsp;extend pretty much anything and this is a powerful feature. Forget about interfaces, abstract classes, private functions, protected members and all that. Of course with great power comes great responsibility so you and I have to be careful not to abuse this power and craft spaghetti code</li><li>low level: you can work around pretty much anything, for instance gracefully handle database errors by retrying a couple of seconds later. You can do the same in Grails too but it is harder because of the Hibernate ORM layer</li><li>it is ready for the cloud. 
Seriously, believe the hype. I will explain another time</li></ol></div><div>Time to go now, let me know what you think in the comments.</div><div><br /></div><div><br /></div>Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-28633872280380317312013-02-01T13:58:00.000+01:002013-02-24T10:17:32.117+01:003 new features that I wish were in OpenNMS 2.0 As a long time OpenNMS user I've often been impressed with its extensibility and the completeness of its feature set. There is support for lots of data collection techniques: from the old school snmp exec extensions, to the http poller, from the JDBC poller to the XML poller and <a href="http://www.opennms.org/wiki/Features_List">many others</a>&nbsp;that I probably forgot to mention.<br /><br />Supporting new probes is therefore just a matter of <i>how,&nbsp;</i>not&nbsp;<i>if</i>&nbsp;, it can be done. And with new monitoring tools popping up every day this is clearly good as it allows OpenNMS to keep up with the competition.<br />So the present looks bright, but what about the future? With OpenNMS 2.0 not yet on the radar I thought I could put together a list of features I would love to have. What do you think of them?<br /><br /><a name='more'></a><br /><h3>1. Receiving metrics over the Graphite/Carbon protocol</h3><div><a href="http://graphite.wikidot.com/local--files/screen-shots/graphite_fullscreen_800.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="122" src="http://graphite.wikidot.com/local--files/screen-shots/graphite_fullscreen_800.png" width="200" /></a><a href="http://graphite.wikidot.com/">Graphite</a> is the new kid on the monitoring block and its primary concern is with receiving, storing and graphing data. It does not rely on any existing protocol for data collection, but instead invented its own.<br /><br /></div><div>The <a href="https://github.com/graphite-project/carbon">protocol</a> is dead simple: text lines sent over a tcp socket with the following format:</div><pre>metric-path value timestamp</pre><div>A client can be as simple as this shell one-liner:</div><pre>echo -e "local.meaning.of.life 42 `date +%s`\n\n" | nc graphitehost 2003</pre><div>Enabling the collection of metrics over this protocol would enable OpenNMS to:</div><div><ol><li>use any of the numerous&nbsp;<a href="https://collectd.org/">collectd</a>&nbsp;(<b>not</b> OpenNMS collectd) plugins (like tailing a log file and counting instances of certain text patterns)</li><li>let applications push custom metrics into OpenNMS through libraries available for most programming languages (see the sketch after this list)</li><li>reduce XML configuration overhead</li><li>leverage it as an extensible platform for integration with other systems</li><li>position itself as a Graphite (partial?) replacement (in conjunction with item 2) because Graphite does not do thresholding or alerting</li></ol></div><div>Notes: OpenNMS should find a way to correlate the metric to the node by looking at the first component in the metric-path and matching it against the node name or the node id, and it should handle the creation of new metrics on the fly without any configuration.<br /><br /></div>
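<div>To make item 2 concrete, here is a hedged sketch of what pushing a custom metric from a monitored node could look like, following the hypothetical first-component-is-the-node-name convention from the notes above. The target host <i>opennmshost</i>, the port and the metric path are all made-up example values, and netcat flag behavior varies between versions:</div><pre># push the 1-minute load average as &lt;nodename&gt;.system.load1<br />NODE=$(hostname -s)<br />LOAD=$(cut -d' ' -f1 /proc/loadavg)<br />echo "${NODE}.system.load1 ${LOAD} $(date +%s)" | nc opennmshost 2003</pre><div><br /></div>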
<div><h3>2. Expose collected metrics over JSON</h3>RRD png graphs are fine, but to be cool and hang out with the new kids you have to render stuff in the browser and do it like <a href="http://square.github.com/cubism/">this</a>:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-yHgNKcdgh68/UQpK3Kp6npI/AAAAAAAAATE/lHW7LcejIpY/s1600/Screenshot+from+2013-01-31+11:43:12.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="80" src="http://2.bp.blogspot.com/-yHgNKcdgh68/UQpK3Kp6npI/AAAAAAAAATE/lHW7LcejIpY/s320/Screenshot+from+2013-01-31+11:43:12.png" width="320" /></a></div><br />Graphs like that require that metrics be accessible through JSON, and recent versions of RRD <a href="http://monkeyswithbuttons.wordpress.com/2012/03/30/json-from-rrd/">already support</a> exporting to this format (see the sketch at the end of this post).<br />Checking this item off would also open a new world of possibilities as people could write graphs and dashboards in javascript/html, which nowadays seem to be way more popular than <a href="http://www.opennms.org/wiki/SNMP_Reports_How-To#Add_your_graph_definitions">properties files</a>&nbsp;(and I can see why).</div><div><br /><h3>3. Provide a pub/sub event bus over an open message protocol (amqp, openwire, jms, etc)</h3></div><div><div class="separator" style="clear: both; text-align: center;"></div>OpenNMS already uses an event bus internally and it would be super cool if events could also be broadcast (and received) over an open protocol like AMQP.<br />Broadcasting would be especially useful for:<br /><ol><li>handling of notifications outside of OpenNMS with systems like PagerDuty, or simply with scripts that will send notifications to different recipients/endpoints depending on the affected service/node (this has always been a pain point with the current OpenNMS notification system)</li><li>complex event processing with tools like <a href="http://esper.codehaus.org/">Esper</a></li><li>relaying events into other systems for trouble ticketing, performance analysis, correlation</li><li>automated event handling: restarting hung services, killing runaway processes, relocating instances, etc</li></ol></div><div>As for receiving, I can't see any big driver for implementation yet as OpenNMS already has send-event.pl and I guess we could live with it if someone just made a Java client. Of course the adoption of AMQP or a similar protocol would remove the necessity for this client entirely.</div>
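<div>As a footnote to item 2: the RRD side of the JSON story already exists today. A hedged sketch of what an export could look like, assuming a reasonably recent rrdtool with JSON output support; the RRD file path and datasource name below are illustrative only, since OpenNMS RRD layouts vary by installation:</div><pre># dump the last hour of a response-time datasource as JSON (paths/DS names are examples)<br />rrdtool xport --json --start -1h --end now \<br />  DEF:rt=/opt/opennms/share/rrd/response/192.168.1.1/icmp.rrd:icmp:AVERAGE \<br />  XPORT:rt:"icmp response time"</pre>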
Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-55381446308535136532013-01-20T10:39:00.001+01:002013-02-24T10:17:47.202+01:00Triggering OpenNMS notifications when patterns occur in a log fileA common problem with <a href="http://www.opennms.org/">OpenNMS</a> is how to monitor a log file and trigger alerts when certain conditions are met. Let me clarify with an example: you have this mission critical app that sometimes experiences internal errors. The application keeps running and still responds to requests, but the error will slow down the system and/or delay further processing. Monitoring the process and/or network polling will obviously not be able to detect the issue and the only way is to tail the application log file and look for certain messages.<br /><br />The problem can usually be solved simply by forwarding the log file to OpenNMS through syslog, but what about logs generated by applications that don't speak syslog, or cases where you don't want to configure syslog forwarding?<br /><br /><a name='more'></a><br /><br /><a href="http://www.collectd.org/">Collectd</a>'s <a href="https://collectd.org/wiki/index.php/Plugin:Tail">Tail</a> plugin comes to the rescue. Collectd is an interesting monitoring agent which basically can be integrated with anything, even though I think it is primarily used together with <a href="http://graphite.wikidot.com/">Graphite</a>.<br />Since Collectd does not natively speak any of the protocols supported by OpenNMS, integration has to be done through some sort of scripting.<br /><br /><h3>Solution Overview</h3>I installed Collectd (5.2, custom built rpm, thanks <a href="https://github.com/jordansissel/fpm">fpm</a>!) on the host running the application and configured collectd to tail the log file and look for lines matching certain patterns. Whenever a line matches, a counter is incremented, and if the value exceeds a threshold an external notification script is invoked. In my case I want to be notified of every single occurrence so the threshold condition is: <i>value != 0</i><br />The notification script then forks out a call to OpenNMS' own&nbsp;<a href="http://www.opennms.org/wiki/Send-event.pl">send-event.pl</a>. In OpenNMS I have configured a notification connected to the event UEI which sends out alerts to our support personnel.<br /><div><br /></div>Shown below are the Collectd configuration file and the notification script. 
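Before deploying them it is worth verifying the OpenNMS side in isolation by firing the event by hand; this is the same invocation the notification script below ends up making (the UEI, IP and hostnames are the example values used throughout this post):<br /><br /><pre># fire the custom UEI once, manually, from the application host<br />/opt/collectd/bin/send-event.pl uei.my.org/collectd/scp/HiError \<br />    -i 192.168.123.123 opennms.my.org</pre><br />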
<i>send-event.pl</i> can be simply copied over from the OpenNMS host.<br /><br /><script src="https://gist.github.com/4556268.js"></script> <noscript><pre><br />File: collectd.conf<br />-------------------<br /><br />Interval 10<br /><br />LoadPlugin logfile<br />#LoadPlugin write_graphite<br />LoadPlugin csv<br />LoadPlugin threshold<br />LoadPlugin exec<br />LoadPlugin tail<br /><br />&lt;Plugin &quot;logfile&quot;&gt;<br /> LogLevel &quot;debug&quot;<br /> File &quot;stdout&quot;<br /> Timestamp true<br />&lt;/Plugin&gt;<br /><br />&lt;Plugin exec&gt;<br /> NotificationExec &quot;me&quot; &quot;/opt/collectd/bin/notif.pl&quot;<br />&lt;/Plugin&gt;<br /><br />&lt;Plugin &quot;csv&quot;&gt;<br /> DataDir &quot;/tmp&quot;<br /> StoreRates true<br />&lt;/Plugin&gt;<br /><br />&lt;Plugin &quot;tail&quot;&gt;<br /> &lt;File &quot;/tmp/scp.log&quot;&gt;<br /> Instance &quot;scp&quot;<br /> &lt;Match&gt;<br /> Regex &quot;ERROR&quot;<br /> DSType &quot;CounterInc&quot;<br /> Type &quot;counter&quot;<br /> Instance &quot;hi_error&quot;<br /> &lt;/Match&gt;<br /> &lt;/File&gt;<br />&lt;/Plugin&gt;<br /><br /># Load required matches:<br />#LoadPlugin match_empty_counter<br />#LoadPlugin match_hashed<br />LoadPlugin match_regex<br />LoadPlugin match_value<br />#LoadPlugin match_timediff<br /><br /># Load required targets:<br />LoadPlugin target_notification<br />#LoadPlugin target_replace<br />#LoadPlugin target_scale<br />#LoadPlugin target_set<br />#LoadPlugin target_v5upgrade<br /><br />PostCacheChain &quot;SelectHiErrors&quot;<br />&lt;Chain &quot;SelectHiErrors&quot;&gt;<br /> &lt;Rule &quot;selecthi&quot;&gt;<br /> &lt;Match &quot;regex&quot;&gt;<br /> TypeInstance &quot;^hi_error$&quot;<br /> &lt;/Match&gt;<br /> &lt;Target &quot;jump&quot;&gt;<br /> Chain &quot;CheckHiErrors&quot;<br /> &lt;/Target&gt;<br /> &lt;/Rule&gt;<br /> &lt;Target &quot;write&quot;&gt;<br /> &lt;/Target&gt;<br />&lt;/Chain&gt;<br /><br />&lt;Chain &quot;CheckHiErrors&quot;&gt;<br /> &lt;Rule &quot;checkhivalue&quot;&gt;<br /> &lt;Match &quot;value&quot;&gt;<br /> Min 0<br /> Max 0<br /> Invert true<br /> &lt;/Match&gt;<br /> &lt;Target &quot;notification&quot;&gt;<br /> Message &quot;%{type_instance}&quot;<br /> Severity &quot;WARNING&quot;<br /> &lt;/Target&gt;<br /> &lt;/Rule&gt;<br />&lt;/Chain&gt;<br /><br /><br />File: notif.pl<br />--------------<br /><br />#!/usr/bin/perl<br />use Sys::Syslog;<br />use Sys::Syslog qw(:standard :macros);<br /><br />openlog(&#39;collectd_notif&#39;, &quot;ndelay,pid&quot;, LOG_USER);<br /><br />while(&lt;&gt;) {<br /> chomp;<br /> ($key, $val) = split(&quot;\:&quot;, $_);<br /><br /> if ($key =~ /TypeInstance/ &amp;&amp; $val =~ /hi_error/) {<br /> my @args = (&quot;/opt/collectd/bin/send-event.pl&quot;, &quot;uei.my.org/collectd/scp/HiError&quot;, &quot;-i&quot;, &quot;192.168.123.123&quot;, &quot;opennms.my.org&quot;);<br /> system(@args) == 0 or syslog(LOG_ERROR|LOG_USER, &quot;Error sending UEI uei.my.org/collectd/scp/HiError&quot;);<br /> }<br />}<br /><br /></pre></noscript><br /><h3>Notes</h3>To accept events from other hosts eventd has to be configured to listen on all ip addresses (by default it binds only to 127.0.0.1). 
Since listening on all addresses can pose a security risk, iptables should be used to restrict access, as sketched above.<br /><br />The configuration file in the example above instructs Collectd to use standard output for logging and to write values out to a csv file in /tmp: I left both in so that those unfamiliar with Collectd can run it in the foreground and figure out what is going on, but you should disable them in production.Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-47626136491039482802012-12-31T14:58:00.003+01:002013-02-24T10:17:58.391+01:00The sorry state of ATI drivers in Linux<h3>Subtitle: broken by impress.js</h3>I've been using Linux as the primary OS on all my laptops for a loooong time and until recently I always chose computers fitted with nvidia graphics over any other brand because their drivers probably have the best quality and performance.<br /><div><br /></div><div>My last (and current) laptop, an HP ProBook, came with ATI graphics, but since in recent years ATI seemed to have caught up, I decided to give it a try.</div><div>For over two years the laptop performed extremely well with Ubuntu 10.04 LTS as everything worked right out of the box with no customization whatsoever. Suspend/resume worked, performance was great and boot time pretty good too (for a conventional hard drive at 7200rpm, at least). The problems came with the upgrade to Ubuntu 12.04.</div><div><br /><a name='more'></a><br /></div><div>The first one was an overheating issue: the laptop (especially the bottom, but it could be felt on the top too) would simply become so hot that I would feel&nbsp;uncomfortable&nbsp;resting the palm of my hand on it. Luckily that was <a href="http://www.techytalk.info/linux-kernel-2-6-38-2-6-39-power-regression-workaround/">fixed</a> (but it took several weeks of googling) by adding this boot option:</div><div><br /></div><pre>pcie_aspm=force</pre><br />After that I started getting random lockups on shutdown. The issue is far too common with ATI cards, if you just take the time to&nbsp;<a href="https://www.google.it/search?q=ubuntu+firegl_sig_notifier">google it</a>.
I was never able to really fix it and decided that for a while I would just put up with it as, after all, it did not occur frequently.<br /><br />Then, this month, I started putting together a presentation for an upcoming talk and decided that I would write it with <a href="https://github.com/bartaz/impress.js">impress.js</a>&nbsp;(btw, I believe that HTML will rule the world AND I just couldn't wait to ditch yet another Office application).<br /><br /><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-cYvQvhlOsvQ/UOFpwoXiljI/AAAAAAAAApg/pfWX-E64iiQ/s1600/Screenshot+from+2012-12-29+20:24:05.png" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="174" src="http://4.bp.blogspot.com/-cYvQvhlOsvQ/UOFpwoXiljI/AAAAAAAAASk/pfWX-E64iiQ/s1600/Screenshot+from+2012-12-29+20:24:05.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">rendering artifacts on Chrome with ATI drivers: note the cut-off<br />letters and the broken rotation effect on the bottom right.</td></tr></tbody></table>As you might know, impress.js pushes browsers to the limits by making heavy use of animation, transitions and all sorts of cool HTML5 and CSS3 features. Unfortunately impress.js broke Google Chrome.<br />I had already experienced rendering artifacts with Google Chrome on other <a href="http://unicolet.github.com/mappu/">apps</a>&nbsp;but thought that perhaps it was a problem with a specific Chrome version. Until I <a href="https://www.google.it/search?q=chrome+rendering+issues">googled it</a> and found <a href="http://code.google.com/p/chromium/issues/detail?id=135341">this</a>.<br /><br />The solution is, in theory, pretty simple: upgrade your ATI drivers. Which I did, just to find out that my card (a Radeon HD 43xx) is no longer supported and driver updates stop just short of fixing this issue. Annoying, huh? Luckily Firefox still works, so I could keep working on my presentation nonetheless.<br /><br />Today I decided to fix the issue once and for all and played with Chrome options to see if I could find a lucky charm to make Chrome behave. The option that (almost) fixed Chrome is the following:<br /><br /><pre>--blacklist-accelerated-compositing</pre><br /><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-krteLUTnzWY/UOFrLLpNsaI/AAAAAAAAAS0/SVB_9WPSiOk/s1600/Screenshot+from+2012-12-29+20:23:21.png" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="191" src="http://1.bp.blogspot.com/-krteLUTnzWY/UOFrLLpNsaI/AAAAAAAAAS0/SVB_9WPSiOk/s1600/Screenshot+from+2012-12-29+20:23:21.png" width="200" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">broken 3D composition:&nbsp;the camera<br />should have rotated by almost 90° to<br />show the slide</td></tr></tbody></table>Unfortunately this option breaks the rendering of 3D compositions, see the picture on the left.<br /><br />I was baffled. And annoyed.
And decided to try my last card: the open source <a href="https://help.ubuntu.com/community/RadeonDriver">radeon</a> drivers.&nbsp;One more reboot later I was running the radeon drivers and hesitantly opened Google Chrome.<br /><br />3D performance is as bad as it gets (60 fps on glxgears versus 2000 with firegl!) but at least I could browse the web and view my presentation with Chrome again.<br /><br />I played through the presentation, looking carefully for artifacts, and when I reached the 3D-transformed slide...it was still broken.<br /><br /><h3>Conclusions</h3>I will be keeping the radeon drivers, just to avoid the hard lockups on shutdown. Chrome rendering has improved, even if it's not perfect.<br /><br />Performance-wise, the radeon drivers still allow me to run Unity 3D, but I am thinking about switching to XFCE. One thing is certain: in the future I will not buy computers with AMD/ATI cards anymore.<br /><br /><br />Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-27536051986907115752012-12-22T10:33:00.001+01:002013-02-24T10:18:17.926+01:00Merry Christmas!It's that time of the year again and, since it appears that the Maya spared us, I want to share with you a couple of gists that I came up with recently that could be generally useful. Btw, there are lots of other gists on my <a href="https://gist.github.com/unicolet">gist.github.com</a> profile, check them out.<br /><br />If you find these scripts useful, star them on github, drop me a comment or just share them. Once again, Merry Christmas everyone!<br /><br />The first one is for Java people and is a HttpServletRequestWrapper that supports:<br /><ol><li>injection of the principal: for those cases when you use trust authentication and you are rolling your own SSO solution and/or you need to integrate with an existing SSO solution (I used it for <a href="http://www.jasig.org/cas">CAS</a>)</li><li>reading of the InputStream multiple times. We all know that in a POST the request input stream can only be read once, so this will definitely help you if you need to access the post body or a request parameter in a Filter and make sure the downstream servlets/filters still work</li></ol><br /><a name='more'></a><br /><br /><script src="https://gist.github.com/4327522.js"></script> <noscript><pre><br />File: AwesomeRequestWrapper.java<br />--------------------------------<br /><br />package java.is.awesome; // joking<br /><br />import org.apache.commons.httpclient.URIException;<br />import org.apache.commons.httpclient.util.URIUtil;<br />import org.apache.commons.io.IOUtils;<br />import org.apache.log4j.Logger;<br />import javax.servlet.ServletInputStream;<br />import javax.servlet.http.*;<br />import java.io.*;<br />import java.security.Principal;<br />import java.util.HashMap;<br />import java.util.Map;<br /><br />/**<br /> * AwesomeRequestWrapper adds two important features:<br /> * <br /> * 1. allows setting the user principal on the request so that the application can trust the CAS user id<br /> * 2.
caches post data and parses it offline so that the input stream can be read multiple times<br /> */<br />public class AwesomeRequestWrapper extends HttpServletRequestWrapper {<br /> Logger logger;<br /> Principal awesomePrincipal = null;<br /> Map&lt;String, String&gt; parameters = null;<br /> byte[] postData = null;<br /><br /> public AwesomeRequestWrapper(HttpServletRequest request) {<br /> super(request);<br /> logger = Logger.getLogger(this.getClass());<br /> parameters = new HashMap&lt;String, String&gt;();<br /> }<br /><br /> public Principal getUserPrincipal() {<br /> if (awesomePrincipal == null) {<br /> return super.getUserPrincipal();<br /> }<br /> return awesomePrincipal;<br /> }<br /><br /> public String getRemoteUser() {<br /> if (awesomePrincipal == null) {<br /> return super.getRemoteUser();<br /> }<br /> return awesomePrincipal.getName();<br /> }<br /><br /> void setUserPrincipal(Principal p) {<br /> awesomePrincipal = p;<br /> }<br /><br /> @Override<br /> public String getParameter(String name) {<br /> String value = super.getParameter(name);<br /> if (value == null) {<br /> value = parameters.get(name);<br /> }<br /> return value;<br /> }<br /><br /> @Override<br /> public ServletInputStream getInputStream() throws IOException {<br /> logger.trace(&quot;called getInputStream&quot;);<br /> if (postData == null) {<br /> postData = IOUtils.toByteArray(super.getInputStream());<br /> parameters = getQueryMap(new String(postData));<br /> logger.trace(&quot;post data read, parsed and cached: &quot; + new String(postData));<br /> }<br /> return new BAServletInputStream(new ByteArrayInputStream(postData));<br /> }<br /><br /> private class BAServletInputStream extends ServletInputStream {<br /> InputStream is;<br /><br /> BAServletInputStream(InputStream is) {<br /> this.is = is;<br /> }<br /><br /> // readLine() is inherited from ServletInputStream and builds on read()<br /> @Override<br /> public int read() throws IOException {<br /> return is.read();<br /> }<br /><br /> @Override<br /> public int read(byte[] bytes) throws IOException {<br /> return is.read(bytes);<br /> }<br /><br /> @Override<br /> public int read(byte[] bytes, int i, int i1) throws IOException {<br /> return is.read(bytes, i, i1);<br /> }<br /><br /> @Override<br /> public long skip(long l) throws IOException {<br /> return is.skip(l);<br /> }<br /> }<br /><br /> /* this could actually be improved, there should be a method that does the same in Spring */<br /> public Map&lt;String, String&gt; getQueryMap(String query) {<br /> String[] params = query.split(&quot;&amp;&quot;);<br /> Map&lt;String, String&gt; map = new HashMap&lt;String, String&gt;();<br /> for (String param : params) {<br /> try {<br /> String name = URIUtil.decode(param.split(&quot;=&quot;)[0]);<br /> String value = URIUtil.decode(param.split(&quot;=&quot;)[1]);<br /> map.put(name, value);<br /> } catch (Exception e) {<br /> logger.error(&quot;Cannot decode request parameter: &quot; + e.getMessage());<br /> }<br /> }<br /> return map;<br /> }<br />}<br /><br /></pre></noscript>
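<br />As a usage sketch, here is how the wrapper might be wired into a Filter chain. <i>SsoFilter</i> and its <i>lookupPrincipal</i> helper are hypothetical (only the wrapper itself comes from the gist), and since <i>setUserPrincipal</i> is package-private the filter has to live in the same package as the wrapper:<br /><br /><pre>import java.io.IOException;<br />import java.security.Principal;<br />import javax.servlet.*;<br />import javax.servlet.http.HttpServletRequest;<br /><br />// hypothetical filter: must share AwesomeRequestWrapper's package,<br />// because setUserPrincipal is package-private<br />public class SsoFilter implements Filter {<br />    public void init(FilterConfig cfg) {}<br />    public void destroy() {}<br /><br />    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)<br />            throws IOException, ServletException {<br />        AwesomeRequestWrapper wrapped = new AwesomeRequestWrapper((HttpServletRequest) req);<br /><br />        // inject the principal established by your SSO logic<br />        Principal p = lookupPrincipal(wrapped);<br />        if (p != null) {<br />            wrapped.setUserPrincipal(p);<br />        }<br /><br />        // safe to peek at the body here: the wrapper caches the post data,<br />        // so downstream servlets/filters can still read the input stream<br />        wrapped.getInputStream();<br /><br />        chain.doFilter(wrapped, res);<br />    }<br /><br />    // stub: plug in your SSO integration here (e.g. validate a CAS ticket)<br />    private Principal lookupPrincipal(HttpServletRequest req) {<br />        return null;<br />    }<br />}</pre><br />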
The second <strike>gift</strike> gist is for Windows admins and is written in vbs (I even do VBS when it is necessary, now you get my twitter <a href="https://twitter.com/AFactotum">handle</a>, don't you?). It is a login script that can be used in a Windows Domain to recreate Desktop links on each user logon. The configuration for each link is stored in the script as a dictionary of dictionaries and link-to-user assignment is done by adding the user to an AD group. The source is heavily commented and should be easy enough to understand for anyone who's ever programmed, even if not in vbs.<br /><br /><br /><script src="https://gist.github.com/4352412.js"></script> <noscript><pre><br />File: desktoplinksmanager.vbs<br />-----------------------------<br /><br />Set FSO = CreateObject(&quot;scripting.filesystemobject&quot;)<br />Set objShell = WScript.CreateObject(&quot;WScript.Shell&quot;)<br />strDesktop = objShell.SpecialFolders(&quot;Desktop&quot;)<br /><br />&#39; Could you believe this is a comment? I know, vbs sucks<br />&#39;<br />&#39; The following is a dictionary of dictionaries.<br />&#39; The first dictionary is keyed by group name (case sensitive!)<br />&#39; while the second holds the attributes for the link to be created.<br />&#39; With the example cfg below the script will create a link on the desktop<br />&#39; of all users in the app_NOTEPAD group.<br />&#39;<br />&#39; To add a new link duplicate lines 22-28 and customize to taste<br />&#39; Remember: options are all required (or improve this script so that<br />&#39; some can actually be left empty)<br />&#39;<br />&#39; After that create a new group with the same name as the key<br />&#39; you specified at line 28<br />&#39;<br />Set shortcuts = CreateObject(&quot;Scripting.Dictionary&quot;) <br /><br />Set app = CreateObject(&quot;Scripting.Dictionary&quot;) <br />app.Add &quot;nome&quot;, &quot;NOTEPAD.lnk&quot;<br />app.Add &quot;arguments&quot;, &quot;&quot;<br />app.Add &quot;targetpath&quot;, &quot;%windir%\system32\notepad.exe&quot;<br />app.Add &quot;workingdirectory&quot;, &quot;%windir%\system32&quot;<br />app.Add &quot;icon&quot;, &quot;%windir%\system32\notepad.exe&quot;<br />shortcuts.Add &quot;app_NOTEPAD&quot;, app<br /><br />Set oMember = GetObject(&quot;WinNT://&quot;+objShell.ExpandEnvironmentStrings(&quot;%UserDomain%&quot;)+&quot;/&quot; + objShell.ExpandEnvironmentStrings(&quot;%UserName%&quot;))<br /><br />For Each oGroup in oMember.Groups<br /> wscript.echo oGroup.Name<br /> If shortcuts.Exists(oGroup.Name) Then<br /> wscript.echo &quot; Found shortcut: &quot; &amp; oGroup.Name<br /> Set scut = shortcuts.Item(oGroup.Name)<br /> If FSO.FileExists(strDesktop &amp; &quot;\&quot; &amp; scut.Item(&quot;nome&quot;)) = True Then<br /> FSO.GetFile( strDesktop &amp; &quot;\&quot; &amp; scut.Item(&quot;nome&quot;) ).Delete(True)<br /> End If<br /> Set oMyShortcut = objShell.CreateShortcut( strDesktop &amp; &quot;\&quot; &amp; scut.Item(&quot;nome&quot;) )<br /> oMyShortcut.WindowStyle = 3 &#39; 3=Maximized 7=Minimized 4=Normal <br /> oMyShortcut.IconLocation = scut.Item(&quot;icon&quot;)<br /> oMyShortcut.Arguments = scut.Item(&quot;arguments&quot;)<br /> oMyShortcut.TargetPath = scut.Item(&quot;targetpath&quot;)<br /> oMyShortcut.WorkingDirectory = scut.Item(&quot;workingdirectory&quot;)<br /> oMyShortCut.Save<br /> End If<br />Next<br /></pre></noscript>Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0tag:blogger.com,1999:blog-9627576.post-44814200701646408592012-11-16T00:28:00.002+01:002013-02-24T10:18:42.583+01:00Testing OpenLayers with Selenium (Patch)To improve the quality of both my sleep <i>and</i> Mappu development I have started using Selenium IDE for automated testing of Mappu's UI.<br /><br />It was kind of hard at the beginning, since the UI is based on
Sproutcore, which has the annoying habit of changing the control ids with every page load, but after a while I was able to get it rolling quite nicely. Then I hit a major roadblock: the OpenLayers-based map control wouldn't react to clicks, mousedown, mouseup, fireEvent or anything else I threw at it.<br /><a name='more'></a><br />At first I thought I was sending the events to the wrong element (and I was, at the beginning), but after a while I realized there was something else going on.<br /><br />So I forked OL and started hacking away. To make a long story short, I debugged the OpenLayers code until I figured it out. The details are in this cold and lonely thread on the <a href="http://lists.osgeo.org/pipermail/openlayers-users/2012-November/026791.html">openlayers-users</a> mailing list.<br />Honestly I don't know why Selenium IDE is sending that event with negative coords (which does not occur during normal user interaction), but it seems like the <a href="https://github.com/unicolet/openlayers/compare/selenium">fix</a> I came up with is mostly harmless, as all the tests still pass.<br /><br />So if you want to use Selenium and OpenLayers and you're willing to put up with the risk of monkey patching OpenLayers, you can just add this fragment of code in your page <i>after</i> OpenLayers has loaded and <i>before</i> using any OpenLayers object:<br /><br /><pre style="brush: javascript;">OpenLayers.Handler.Click.prototype.mousedown=function(evt) {<br />    // ignore the spurious mousedown with non-positive coords that Selenium IDE sends<br />    if(evt.xy &amp;&amp; (evt.xy.x &lt;= 0.0 &amp;&amp; evt.xy.y &lt;= 0.0)) {<br />        return true;<br />    }<br />    this.down = this.getEventInfo(evt);<br />    this.last = this.getEventInfo(evt);<br />    return true;<br />};<br /></pre><br />Tested with OpenLayers 2.11 and 2.12. Use at your own risk.Umberto Nicolettihttps://plus.google.com/108409966984627342730noreply@blogger.com0