Humphrey Sheil's blog, covering software engineering design and technology (JEE, .NET, intelligent searching, artificial intelligence) and the SCEA exam from Oracle.<br /><br /><h3>Moving to humphreysheil.com (2015-06-05)</h3>After 11 (crikey!) years on Blogger, I'm moving to <a href="http://humphreysheil.com/">a new home</a>. Hopefully this refresh will encourage more blog posting..<br /><br /><h3>Named Entity Recognition - short tutorial and sample business application (2014-10-06)</h3>A latent theme is emerging quite quickly in mainstream business computing - the inclusion of Machine Learning to solve thorny problems in very specific problem domains. For me, Machine Learning is the use of <i>any</i> technique where system performance improves over time by the system either being trained or learning.<br /><br />In this short article, I will quickly demonstrate how an off-the-shelf Machine Learning package can be used to add significant value to vanilla Java code for language parsing, recognition and entity extraction. In this example, adopting an advanced, yet easy to use, Natural Language Parser (NLP) combined with Named Entity Recognition (NER) provides a deeper, more semantic and more <i>extensible</i> understanding of natural text commonly encountered in a business application than any non-Machine Learning approach could hope to deliver.<br /><br />Machine Learning is one of the oldest branches of Computer Science. 
From <a href="http://en.wikipedia.org/wiki/Perceptron">Rosenblatt's perceptron in 1957</a> (and even earlier), Machine Learning has grown up alongside other subdisciplines such as language design, compiler theory, databases and networking - the nuts and bolts that drive the web and most business systems today. But by and large, Machine Learning is not <i>straightforward</i> or <i>clear-cut</i> enough for a lot of developers, and until recently its application to business systems was seen as not strictly necessary. For example, we know that investment banks have put significant effort into applying neural networks to market prediction and portfolio risk management, and the efforts of Google and Facebook with deep learning (the third generation of neural networks) <a href="http://www.theregister.co.uk/2013/06/19/google_machine_learning/">have been widely reported</a> in the last three years, particularly for image and speech recognition. But mainstream business systems do not display the same adoption levels.<br /><br /><b>Aside</b>: <i>accuracy</i> is important in business / real-world applications - the picture below shows why you <i>now</i> have Siri / Google Now on your iOS or Android device. 
Until 2009 - 2010, accuracy had flat-lined for almost a decade, but the application of the next generation of artificial neural networks drove the error rates down to a usable level for millions of users (graph drawn from Yoshua Bengio's ML&nbsp;<a href="http://www.iro.umontreal.ca/~bengioy/talks/KDD2014-tutorial.pdf">tutorial at KDD this year</a>).<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-_ssUSGWoI9k/VDKqE7WAXYI/AAAAAAAAAqo/9-I98P_UORg/s1600/Screen%2BShot%2B2014-09-16%2Bat%2B11.16.51.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-_ssUSGWoI9k/VDKqE7WAXYI/AAAAAAAAAqo/9-I98P_UORg/s1600/Screen%2BShot%2B2014-09-16%2Bat%2B11.16.51.png" height="241" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Dramatic reduction in error rate on Switchboard data set post introduction of deep learning techniques.</td></tr></tbody></table><br /><br />Luckily you don't need to build a deep neural net just to apply Machine Learning to your project! Instead, let's look at a task that many applications can and should handle better - mining unstructured text data to extract meaning and inference.<br /><br />Natural language parsing is tricky. There are any number of seemingly easy sentences which demonstrate how much context we subconsciously process when we read. For example, what if someone comments on an invoice: "<i>Partial invoice (€100,000, so roughly 40%) for the consignment C27655 we shipped on 15th August to London from the Make Believe Town depot. INV2345 is for the balance.. Customer contact (Sigourney) says they will pay this on the usual credit terms (30 days).</i>".<br /><br />Extracting tokens of interest from an arbitrary String is pretty easy. 
Just use a StringTokenizer with space (" ") as the separator character and you're good to go.. But code like this has a high maintenance overhead, needs a lot of work to extend, and is fundamentally only as good as the time you invest in it. Think about <a href="http://tartarus.org/~martin/PorterStemmer/index.html">stemming</a>, checking for ',', '.' and ';' characters as token separators, and a whole slew of additional plumbing code heaves into view.<br /><h3>How can Machine Learning help?</h3>Natural Language Parsing (NLP) is a mature branch of Machine Learning. There are many NLP implementations available; the one I will use here is the CoreNLP / NER framework from the language research group at Stanford University. CoreNLP is underpinned by a robust theoretical framework, has a good API and reasonable documentation. It is slow to load, though, so make sure you use a Factory + Singleton pattern combo in your code to load it once and share it - the pipeline has been thread-safe since ~2012. An online demo of a 7-class (it recognises seven different types of entity) trained model is available at <a href="http://nlp.stanford.edu:8080/ner/process">http://nlp.stanford.edu:8080/ner/process</a>, where you can submit your own text and see how well the classifier / tagger does. 
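To make the maintenance burden of the hand-rolled route concrete, here is a quick sketch - my own illustration, not code from this article; the class name <span style="font-family: Courier New, Courier, monospace;">NaiveExtractor</span> and the invoice / consignment reference pattern are assumptions - that pulls reference numbers like C27655 and INV2345 out of free text with a StringTokenizer:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

// Hypothetical sketch of the naive, non-ML approach: split on spaces,
// strip trailing punctuation by hand, and pattern-match references.
public class NaiveExtractor {

  public static List<String> extractReferences(String text) {
    List<String> refs = new ArrayList<>();
    StringTokenizer st = new StringTokenizer(text, " ");
    while (st.hasMoreTokens()) {
      // Manual clean-up of trailing separators - exactly the plumbing
      // that accumulates as soon as real-world text arrives.
      String token = st.nextToken().replaceAll("[.,;)]+$", "");
      // Assumed reference shapes: C + digits, INV + digits.
      if (token.matches("(INV|C)\\d+")) {
        refs.add(token);
      }
    }
    return refs;
  }

  public static void main(String[] args) {
    System.out.println(extractReferences(
        "Partial invoice for the consignment C27655 we shipped. INV2345 is for the balance.."));
  }
}
```

Every new token shape (dates, money amounts, people's names) means another regex and another clean-up rule - which is exactly the treadmill that a trained NER model gets you off.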
Here's a screenshot of the default model on our sample sentence:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-a9FAeigxN-s/VDKvywBNVmI/AAAAAAAAArA/LbjS0lmTV7k/s1600/Screen%2BShot%2B2014-10-06%2Bat%2B16.05.15.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-a9FAeigxN-s/VDKvywBNVmI/AAAAAAAAArA/LbjS0lmTV7k/s1600/Screen%2BShot%2B2014-10-06%2Bat%2B16.05.15.png" height="287" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Output from a trained model without the use of a supplementing dictionary / gazette.</td></tr></tbody></table><br />You will note that "Make Believe Town" is classified (incorrectly in this case) as an ORGANIZATION. OK, so let's give this "out of the box" model a bit more knowledge about the geography our company uses to improve its accuracy. <i>Note</i>: I would have preferred to use the <a href="http://nlp.stanford.edu/software/crf-faq.shtml#gazette">gazette feature</a> in Stanford NER (I felt it was a more elegant solution), but as the documentation states, gazette entries are just another feature the classifier is free to override - and we need deterministic matching here.<br /><br />So let's create a simple tab-delimited text file as follows:<br /><br />Make Believe Town<span class="Apple-tab-span" style="white-space: pre;"> </span>LOCATION<br /><br />(make sure you don't have any blank lines in this file - RegexNER <i>really</i> doesn't like them!)<br /><br />Save this one line of text into a file named locations.txt and place it in a location available to your classloader at runtime. I have also assumed that you have installed the Stanford NLP models and required jar files into the same location.<br /><br />Now re-run the model, but this time asking CoreNLP to add the regexner to the pipeline.. 
You can do this by running the code below and changing the value of the useRegexner boolean flag to examine the accuracy with and without our small dictionary.<br /><br />Hey presto! Our default 7-class model now has a better understanding of our unique geography, adding more value to this data mining tool for our company (check out the output below vs the screenshot from the default model above)..<br /><br /><h4>Code</h4><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">package phoenix;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">import java.util.ArrayList;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">import java.util.List;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">import java.util.Properties;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">import org.junit.Test;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">import org.slf4j.Logger;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">import org.slf4j.LoggerFactory;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">import edu.stanford.nlp.ling.CoreLabel;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">import edu.stanford.nlp.ling.CoreAnnotations.NamedEntityTagAnnotation;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;</span><br /><span 
style="font-family: Courier New, Courier, monospace; font-size: x-small;">import edu.stanford.nlp.ling.CoreAnnotations.TextAnnotation;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">import edu.stanford.nlp.pipeline.Annotation;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">import edu.stanford.nlp.pipeline.StanfordCoreNLP;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">import edu.stanford.nlp.util.CoreMap;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">/**</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp;* Some simple unit tests for the CoreNLP NER (http://nlp.stanford.edu/software/CRF-NER.shtml) short</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp;* article.</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp;*&nbsp;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp;* @author hsheil</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp;*</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp;*/</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">public class ArticleNlpRunner {</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; 
font-size: x-small;">&nbsp; private static final Logger LOG = LoggerFactory.getLogger(ArticleNlpRunner.class);</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; @Test</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; public void basic() {</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; LOG.debug("Starting Stanford NLP");</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; // creates a StanfordCoreNLP object with tokenisation, sentence splitting, POS tagging, lemmatisation and NER</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; Properties props = new Properties();</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; boolean useRegexner = true;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; if (useRegexner) {</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; props.put("annotators", "tokenize, ssplit, pos, lemma, ner, regexner");</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; props.put("regexner.mapping", "locations.txt");</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; } else {</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; props.put("annotators", "tokenize, ssplit, pos, lemma, ner");</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; 
&nbsp; }</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; StanfordCoreNLP pipeline = new StanfordCoreNLP(props);</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; // We're interested in NER for these things (jt-&gt;loc-&gt;sal)</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; String[] tests =</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; {</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; "Partial invoice (€100,000, so roughly 40%) for the consignment C27655 we shipped on 15th August to London from the Make Believe Town depot. INV2345 is for the balance.. Customer contact (Sigourney) says they will pay this on the usual credit terms (30 days)."</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; };</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; List&lt;EmbeddedToken&gt; tokens = new ArrayList&lt;&gt;();</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; for (String s : tests) {</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; // run all Annotators on the passed-in text</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; Annotation document = new 
Annotation(s);</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; pipeline.annotate(document);</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; // these are all the sentences in this document</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; // a CoreMap is essentially a Map that uses class objects as keys and has values with</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; // custom types</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; List&lt;CoreMap&gt; sentences = document.get(SentencesAnnotation.class);</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; StringBuilder sb = new StringBuilder();</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp;&nbsp;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; //I don't know why I can't get this code out of the box from StanfordNLP, multi-token entities</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; //are far more interesting and useful..</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; //TODO make this code simpler..</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; for (CoreMap sentence : sentences) {</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; // traversing the 
words in the current sentence, "O" is a sensible default to initialise</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; // tokens to since we're not interested in unclassified / unknown things..</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; String prevNeToken = "O";</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; String currNeToken = "O";</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; boolean newToken = true;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; for (CoreLabel token : sentence.get(TokensAnnotation.class)) {</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; currNeToken = token.get(NamedEntityTagAnnotation.class);</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; String word = token.get(TextAnnotation.class);</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; // Strip out "O"s completely, makes code below easier to understand</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; if (currNeToken.equals("O")) {</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; // LOG.debug("Skipping '{}' classified as {}", word, currNeToken);</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; if (!prevNeToken.equals("O") &amp;&amp; (sb.length() &gt; 0)) 
{</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; handleEntity(prevNeToken, sb, tokens);</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; newToken = true;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; }</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; continue;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; }</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; if (newToken) {</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; prevNeToken = currNeToken;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; newToken = false;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; sb.append(word);</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; continue;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; }</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; if (currNeToken.equals(prevNeToken)) {</span><br /><span 
style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; sb.append(" " + word);</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; } else {</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; // We're done with the current entity - print it out and reset</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; // TODO save this token into an appropriate ADT to return for useful processing..</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; handleEntity(prevNeToken, sb, tokens);</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; newToken = true;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; }</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; prevNeToken = currNeToken;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; &nbsp; }</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; }</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp;&nbsp;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; //TODO - do some cool stuff with these tokens!</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; &nbsp; LOG.debug("We extracted {} tokens of interest from the input text", 
tokens.size());</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; }</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; }</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; private void handleEntity(String inKey, StringBuilder inSb, List&lt;EmbeddedToken&gt; inTokens) {</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; LOG.debug("'{}' is a {}", inSb, inKey);</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; inTokens.add(new EmbeddedToken(inKey, inSb.toString()));</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; inSb.setLength(0);</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; }</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">}</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">class EmbeddedToken {</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; private String name;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; private String value;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; public String getName() {</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: 
x-small;">&nbsp; &nbsp; return name;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; }</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; public String getValue() {</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; return value;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; }</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; public EmbeddedToken(String name, String value) {</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; super();</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; this.name = name;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; &nbsp; this.value = value;</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">&nbsp; }</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">}</span><br /><span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span><br /><h4><span style="font-family: inherit; font-size: x-small;">Output</span></h4><div><div class="p1"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">16:01:15.465 [main] DEBUG phoenix.ArticleNlpRunner - Starting Stanford NLP</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">Adding annotator tokenize</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: 
x-small;">TokenizerAnnotator: No tokenizer type provided. Defaulting to PTBTokenizer.</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">Adding annotator ssplit</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">edu.stanford.nlp.pipeline.AnnotatorImplementations:</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">Adding annotator pos</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [1.5 sec].</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">Adding annotator lemma</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">Adding annotator ner</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [6.6 sec].</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [3.1 sec].</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... 
done [8.6 sec].</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">sutime.binder.1.</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">Initializing JollyDayHoliday for sutime with classpath:edu/stanford/nlp/models/sutime/jollyday/Holidays_sutime.xml</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">Reading TokensRegex rules from edu/stanford/nlp/models/sutime/defs.sutime.txt</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">Reading TokensRegex rules from edu/stanford/nlp/models/sutime/english.sutime.txt</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">Oct 06, 2014 4:01:37 PM edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor appendRules</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">INFO: Ignoring inactive rule: null</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">Oct 06, 2014 4:01:37 PM edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor appendRules</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">INFO: Ignoring inactive rule: temporal-composite-8:ranges</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">Reading TokensRegex rules from edu/stanford/nlp/models/sutime/english.holidays.sutime.txt</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">Adding annotator regexner</span></div><div class="p2"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">TokensRegexNERAnnotator regexner: Read 1 unique entries out of 1 from locations.txt, 0 TokensRegex 
patterns.</span></div><div class="p1"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">16:01:38.077 [main] DEBUG phoenix.ArticleNlpRunner - '$ 100,000' is a MONEY</span></div><div class="p1"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">16:01:38.080 [main] DEBUG phoenix.ArticleNlpRunner - '40 %' is a PERCENT</span></div><div class="p1"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">16:01:38.080 [main] DEBUG phoenix.ArticleNlpRunner - '15th August' is a DATE</span></div><div class="p1"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">16:01:38.080 [main] DEBUG phoenix.ArticleNlpRunner - 'London' is a LOCATION</span></div><div class="p1"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">16:01:38.080 [main] DEBUG phoenix.ArticleNlpRunner - 'Make Believe Town' is a LOCATION</span></div><div class="p1"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">16:01:38.080 [main] DEBUG phoenix.ArticleNlpRunner - 'Sigourney' is a PERSON</span></div><div class="p1"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">16:01:38.081 [main] DEBUG phoenix.ArticleNlpRunner - '30 days' is a DURATION</span></div><div class="p1"><span style="font-family: Courier New, Courier, monospace; font-size: x-small;">16:01:38.081 [main] DEBUG phoenix.ArticleNlpRunner - We extracted 7 tokens of interest from the input text</span></div></div><br />There are some caveats, though - your dictionary needs to be carefully selected so that it does not overwrite the better "natural" performance of Stanford NER using its <a href="http://nlp.stanford.edu/~manning/papers/gibbscrf3.pdf">Conditional Random Field (CRF)-inspired logic augmented with Gibbs Sampling</a>. 
For example, if you have a customer company called Make Believe Town Limited (unlikely, but not impossible), then Stanford NER will misclassify <span style="font-family: Courier New, Courier, monospace;"><organization>Make Believe Town Limited</organization></span> as <span style="font-family: Courier New, Courier, monospace;"><location>Make Believe Town</location></span>. However, with careful dictionary population and a good understanding of the target raw text corpus, this is still a very fruitful approach.<br /><br /><h3>Summary</h3><br />In summary, a robust natural language parser with integrated Named Entity Recognition, like the Stanford NLP libraries used here, provides a strong base to build from for business applications needing more powerful text analysis, particularly in conjunction with approaches like gazettes that allow the overlay of business terms to improve the accuracy of the vanilla model.<br /><br />Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com2tag:blogger.com,1999:blog-9518639.post-3675557797914205912012-09-30T12:39:00.000+01:002012-09-30T12:40:15.726+01:00SCEA study guide errataThanks to&nbsp;Kristiyan Marinov who sent in this comment calling out three typos in the <a href="http://www.amazon.co.uk/Certified-Enterprise-Architect-Edition-ebook/dp/B00371V81Y/ref=sr_1_1?ie=UTF8&amp;qid=1349005189&amp;sr=8-1">book</a>. I figure this is interesting enough to other readers of the book currently working through the exam to re-publish the comment in full below along with my reply..<br /><br /><h3>Original comment</h3>Hi,<br /><br /><br />I recently read your book and find it an interesting and helpful read.
I found a couple of mistakes during some of the test-yourself questions though.<br />Since I didn't manage to get a hold of you or Mark Cade any other way, I'll be posting my notes in this comment.<br /><br />Typo 1: On page 81, question 6. The Answer says D but the explanation below it explains why C is the correct answer (which it is indeed).<br /><br />Typo 2: On page 95, question 1. The Answer says C and D but the correct answers are B and C, as is given in the explanation.<br /><br />Typo 3: On page 148, question 5. The Answer says B, C and F while the actual answers are B, D and F.<br /><br />Thanks for the otherwise great book!<br /><br />Kristiyan<br /><br /><h3>My reply</h3><br /><br />Hi Kristiyan<br /><br />Thanks for the comments. These are all typos that need to go into the errata for the book. I'll reach out to the publishers to see when the next run is and also create a blog post to call out these three for other readers. Thanks!<br /><br />Humphrey<br /><br /><br />Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com2tag:blogger.com,1999:blog-9518639.post-66081932225334485612012-07-08T20:59:00.002+01:002012-07-08T20:59:16.850+01:00Fixing the Google Analytics API (v3) examples<br />I'm currently working on consuming data from the <a href="https://developers.google.com/analytics/">Google Analytics API</a> for one of the data sources we're using at <a href="http://www.eysys.com/">Eysys</a>.<br /><br />So far, so normal. But what <i>was</i> strange was the poor state of the Google Analytics API documentation.
I don't think I've ever seen one of their APIs documented so poorly - missing source code, typos in the example source code provided, and a rambling tone to the docs that points off to different areas (a lot of this is to do with the OAuth 2.0 hoops you have to jump through before even starting to pull down the data you want to analyse).<br /><br />I also couldn't believe how many dependencies the API has - I ended up with 29 jar files in my Eclipse project's lib folder! Surely this could all be a lot leaner, meaner and easier - it's just an API returning JSON data at the end of the day..<br /><br />Anyway, if you're interested in getting up and running with the API examples, here's what to fix.<br /><br />First of all, there is actually missing source code in the distro itself (or at least the one I used - google-api-services-analytics-v3-rev10-1.7.2-beta.zip), so you need to get <span style="font-family: 'Courier New', Courier, monospace;">LocalServerReceiver</span>, <span style="font-family: 'Courier New', Courier, monospace;">OAuth2Native </span>and <span style="font-family: 'Courier New', Courier, monospace;">VerificationCodeReceiver </span><span style="font-family: inherit;">directly&nbsp;</span>from the <a href="http://code.google.com/p/google-api-java-client/source/browse/shared/shared-sample-cmdline/src/main/java/com/google/api/services/samples/shared/cmdline/oauth2/">Google code repo</a>.<br /><br /><span style="font-family: 'Courier New', Courier, monospace;">LocalServerReceiver</span> uses an old version of Jetty, so we need to migrate it to use the latest Jetty (I used 8.1.4.v20120524), which now has an org.eclipse.* package structure.
So we need to update the imports as follows:<br /><br /><span style="font-family: 'Courier New', Courier, monospace;">import org.eclipse.jetty.server.Connector;</span><br /><span style="font-family: 'Courier New', Courier, monospace;">import org.eclipse.jetty.server.Request;</span><br /><span style="font-family: 'Courier New', Courier, monospace;">import org.eclipse.jetty.server.Server;</span><br /><span style="font-family: 'Courier New', Courier, monospace;">import org.eclipse.jetty.server.handler.AbstractHandler;</span><br /><br />Refreshing Jetty will also necessitate upgrading the <span style="font-family: 'Courier New', Courier, monospace;">handle(..)</span> method to fit in with the new signature, as follows:<br /><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>@Override</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>public void handle(String target, Request arg1,</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>HttpServletRequest request, HttpServletResponse response)</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>throws IOException, ServletException {</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>if (!CALLBACK_PATH.equals(target)) {</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>return;</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" 
style="white-space: pre;"> </span>}</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>writeLandingHtml(response);</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>response.flushBuffer();</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>((Request) request).setHandled(true);</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>String error = request.getParameter("error");</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>if (error != null) {</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>System.out.println("Authorization failed. 
Error=" + error);</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>System.out.println("Quitting.");</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>System.exit(1);</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>}</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>code = request.getParameter("code");</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>synchronized (LocalServerReceiver.this) {</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>LocalServerReceiver.this.notify();</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>}</span><br /><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><span class="Apple-tab-span" style="white-space: pre;"> </span>}</span><br /><br /><br />There's also a small mod to be made in getRedirectUri(), change this line:<br /><br /><span style="font-family: 'Courier New', Courier, monospace;"><span class="Apple-tab-span" style="white-space: pre;"> </span>server.addHandler(new CallbackHandler());</span><br /><br />to this instead:<br /><br /><span style="font-family: 'Courier New', Courier, monospace;"><span class="Apple-tab-span" style="white-space: pre;"> </span>server.setHandler(new CallbackHandler());</span><br /><br /><br />The logic of this class also seems 
pretty flawed to me - generating a random port for the redirect URI that OAuth calls back to at the end of authentication every time it's run, which by definition you won't be able to put into the <a href="https://code.google.com/apis/console/">APIs console</a>. So I commented out the getUnusedPort() method and simply hard-coded one.<br /><br />And after these mods, hey presto it works! :-)<br />Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com1Twickenham, UK51.4390736 -0.363411931.0072686 -40.7930994 71.8708786 40.0662756tag:blogger.com,1999:blog-9518639.post-3678845323395485612012-05-11T14:56:00.000+01:002012-05-11T14:59:49.993+01:00Writing a new book about software development!<br />Following a respectable interval of time after the launch of the official <a href="http://humphreysheil.blogspot.co.uk/2010_01_01_archive.html">Enterprise Architect study guide</a> (which was absolutely necessary to allow painful memories of the writing and editing process to fade :-) ), I've teamed up with my editor from that book -&nbsp;<a href="http://www.informit.com/authors/bio.aspx?a=EAACC036-1F81-41E8-8D1A-132DDED0F07C">Greg Doench</a> &nbsp;from <a href="http://www.pearson.com/">Pearson</a> on a new book about software development. 
I can't believe that Greg is signing up for round two with me, and am grateful to have him on board again!<br /><br />The central premise of this new book stems from an observation that I have seen time and time again - a lot of smart people in business that I work with just don't get software at any kind of meaningful level - the coders who program it, their unique culture, the actual process of designing and writing software, and most importantly - why things (inevitably) go wrong and how to fix them when they do (and for "wrong", you can substitute any value of "late", "over-budget" or "doesn't do what it should" - or all three - whichever floats your boat).<br /><br />This disconnect would be ok if it weren't for the fact that these same smart business people almost always end up in a position where software projects are a key part of what they need to achieve - they become customers, or key stakeholders. If they rise high enough, they become actual budget holders - then it gets interesting!&nbsp;Simply put, it's rapidly becoming a career-limiting move in business to say that "I'm not technical". And motivated business people who want to become conversant with their software projects are finding a gap in readable, digestible content that helps them to bridge their gap in understanding. That's where this book comes in.<br /><br />The book structure itself is pretty new - although the chapters are designed to be read together (though not in a regimented order), Greg is encouraging me to write the chapters so that they will also read "standalone".
There's a strong chance then that individual chapters will be available well before the book is scheduled to complete in Q2 of 2013.<br /><br />What the book is:<br /><br />* A guide to software development for people who are not technical by background and want to learn<br />* A map to navigate a software project by - regardless of programming language used or target application<br />* A guide that should stand the test of time - it's not about buzzwords, it's about the core building blocks that make up software projects<br /><br />What the book isn't:<br /><br />* An idiot's guide to software development - you will be stretched intellectually by the content we're planning to put into the book<br />* A technology-specific guide - I'll be consciously writing the book with a view to covering all technologies, and concrete examples will be provided across the spectrum of commonly-used programming languages<br /><div><br /></div><div>Time to start writing!</div><div><br /></div>Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com0tag:blogger.com,1999:blog-9518639.post-39536752172063519112012-03-21T15:34:00.000+00:002012-03-21T15:34:23.591+00:00Awesome Java developers wanted!We're hiring! If you've got a hankering to work for a startup operating in stealth mode based in Cardiff, Wales, using the latest Java frameworks and techniques to build the coolest ecommerce platform around, read on!<br /><br /><br />ABOUT THE OPPORTUNITY<br /><br />Once in a while, a chance comes up to be part of something special. 
We are looking for excellent developers to join our team and help build the next-generation global ecommerce platform incorporating big data mining and analysis, machine learning, cloud computing (EC2 and App Engine) and the latest advances in online commerce.<br /><br />WHO WE ARE LOOKING FOR<br /><br />* Our platform is primarily Java-based, so strong Java and OOD skills are an absolute must<br /><br />* A willingness to pro-actively research and use new libraries and projects as needed to add new platform capabilities to complete our roadmap<br /><br />* Experience with Linux and MySQL is also advantageous<br /><br />* You will have a solid grounding in how web-based server-side applications and databases work<br /><br />* Be comfortable working in a rapid iteration development cycle moving from prototype to production while engineering to a high level of quality, using leading automated testing techniques<br /><br />* Enjoy / understand the importance of working in all layers of the platform architecture - UI, business logic and persistence<br /><br />* Understand how to describe and design a system in terms of data structures and algorithms, in order to participate effectively in core design workshops<br /><br />* Be totally committed to writing the most efficient, scalable and robust code possible, and to continuously improve your ability in this area<br /><br />* Prior experience with Bayesian techniques and artificial neural networks is beneficial, but is not strictly necessary<br /><br />* Prior experience with Hadoop and HBase is beneficial, but is not strictly necessary<br /><br />* Most important of all.. where you don't know something, be happy and ready to roll your sleeves up and learn it!<br /><br /><br />ABOUT US<br /><br />Our culture is to work hard using the latest and most relevant technologies and to have lots of fun while doing it! We believe passionately in building and delivering truly game-changing software to our customers.
Our ideal candidates are self-starting, good communicators, love coding and work well in a team.<br /><br />For more information and to submit a CV, please email careers@eysys.com.<br /><br />To all recruitment agencies: eysys does not accept agency CVs. Please do not forward CVs to our jobs alias, eysys employees or any other company location. eysys is not responsible for any fees related to unsolicited CVs.<br /><div><br /></div>Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com0tag:blogger.com,1999:blog-9518639.post-62601272919815428462011-08-07T22:59:00.001+01:002011-08-07T23:01:29.414+01:00NoSQL / NewSQL / SQL - future-proofing your persistence architecture (part one)Although it's been a few years in the making, the noise / buzz around <a href="http://en.wikipedia.org/wiki/NoSQL">NoSQL</a> has now reached fever pitch. Or to be more precise, the promise of something better / faster / cheaper / more scalable than standard RDBMSs has sucked in a lot of people (plus getting to use <a href="http://en.wikipedia.org/wiki/MapReduce">MapReduce</a> in an application even if it's not needed is a temptation very hard to resist..). And pretty recently, the persistence hydra has grown another head - NewSQL. NewSQL adherents <a href="http://highscalability.com/blog/2010/6/28/voltdb-decapitates-six-sql-urban-myths-and-delivers-internet.html">essentially believe</a>&nbsp;that NoSQL is a design pig and that a better approach is to fix relational databases. In turn, NewSQL claims have been open to counter-claim on the constraints inherent in the NewSQL approach.
It's all <a href="http://highscalability.com/blog/2011/7/25/is-nosql-a-premature-optimization-thats-worse-than-death-or.html">very fascinating</a>&nbsp;(props for working Lady Gaga into a technical article as well..).<br /><br />As it turns out, traditional RDBMSs are sometimes slow for valid reasons, and while you can certainly speed things up by relaxing constraints or optimising heavily for a specific use case, that's not a panacea or global solution to the problem of a generic, fast way to store and access structured data. On the other hand, the assertion that Oracle, MySQL and SQL Server have become fat and inefficient because of backwards compatibility requirements definitely strikes a chord with me personally.<br /><br />The sheer variety of NoSQL <a href="http://nosql-database.org/">candidates</a> (this web page lists ~122!) is evidence that the space is still immature. I don't have a problem with that (every technology goes through the same cycle), but it does raise one nasty problem: what happens if the candidate you back now in 2012 has disappeared by 2015?<br /><br />The current NoSQL marketplace demands a defensive architecture approach - it's reasonable to expect that over the next three years some promising current candidates will lose momentum and support, others will merge and still others will be bought up by a commercial RDBMS vendor, and become quite costly to license.<br /><br />What we need is a good, implementation-independent abstraction layer to model the reading and writing from and to a NoSQL store.
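To sketch what such a layer could look like - the interface and class names below are purely illustrative, and deliberately minimal:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Sketch of an implementation-independent persistence contract. Application
 * code depends only on DocumentStore; swapping the NoSQL implementation
 * underneath means writing exactly one new class, not trawling every layer.
 */
interface DocumentStore {
    void put(String collection, String key, String jsonDocument);
    Optional<String> get(String collection, String key);
    void delete(String collection, String key);
}

/** An in-memory stand-in - a real module would wrap your chosen NoSQL store. */
class InMemoryDocumentStore implements DocumentStore {
    private final Map<String, Map<String, String>> data = new ConcurrentHashMap<>();

    @Override
    public void put(String collection, String key, String jsonDocument) {
        data.computeIfAbsent(collection, c -> new ConcurrentHashMap<>()).put(key, jsonDocument);
    }

    @Override
    public Optional<String> get(String collection, String key) {
        Map<String, String> coll = data.getOrDefault(collection, Collections.emptyMap());
        return Optional.ofNullable(coll.get(key));
    }

    @Override
    public void delete(String collection, String key) {
        Map<String, String> coll = data.get(collection);
        if (coll != null) {
            coll.remove(key);
        }
    }
}

public class StoreDemo {
    public static void main(String[] args) {
        DocumentStore store = new InMemoryDocumentStore();
        store.put("holidays", "h-1", "{\"resort\":\"Malaga\"}");
        System.out.println(store.get("holidays", "h-1").orElse("missing"));
    }
}
```

The point is not the three methods themselves but the seam: application code holds a DocumentStore reference, and the concrete implementation is the only thing that changes when your NoSQL squeeze is replaced.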
No hard coding of specific implementation details into multiple layers of your application - instead segregate that reading and writing code into a layer that is written with change in mind - we're talking about pluggable modules, sensible use of interfaces and design patterns to make the replacement of your current NoSQL squeeze as low-pain as possible <i>if and when that replacement is ever needed</i>.<br /><br />If the future shows that the current trade-offs made in the NoSQL space (roughly summed up as - a weaker take on A(tomicity), C(onsistency), I(solation) or D(urability), plus your own favourite blend of <a href="http://www.julianbrowne.com/article/viewer/brewers-cap-theorem">Brewer's CAP theorem</a>) are rendered unnecessary by software and hardware advances (as is very likely to be the case), then the API should ideally insulate our application code from this change.<br /><br />There are interesting moves afoot that demonstrate that the community is actively thinking about this, specifically the very recent <a href="http://www.arnnet.com.au/article/395469/couchbase_sqlite_launch_unified_nosql_query_language">announcement</a> of UnQL (the NoSQL equivalent to SQL - i.e. a unified NoSQL Query Language). That's good, but UnQL is young enough to shrivel and die just like any of the NoSQL implementations themselves.
Also, we know that what has inspired UnQL - SQL - is itself fragmented / with vendor-specific extensions like T-SQL from Microsoft and PL/SQL from Oracle.<br /><br />So then, in part one of this two-parter, I've worked to justify what's coming in part two - a minimal set of Java classes and interfaces to provide a concrete implementation of the abstract ideas discussed above.<br /><div><br /></div>Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com0tag:blogger.com,1999:blog-9518639.post-90827008458923983612011-07-31T19:27:00.000+01:002011-07-31T19:27:05.504+01:00New Google Analytics location report doesn't like Connacht so much..The new UI for Google Analytics has a distinctly Cromwellian vibe to it, as the screenshot below shows. Is this just my GA account, or does everyone else see Galway and Sligo a bit more surrounded by the Atlantic than normal?<br /><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-80THavNcWnQ/TjWeA-v2rbI/AAAAAAAAABY/BgfsjcqbWqg/s1600/Untitled.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://2.bp.blogspot.com/-80THavNcWnQ/TjWeA-v2rbI/AAAAAAAAABY/BgfsjcqbWqg/s320/Untitled.png" width="315" /></a></div><div><br /></div>Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com0tag:blogger.com,1999:blog-9518639.post-8115272985955161162011-07-06T14:24:00.001+01:002011-07-08T14:39:23.450+01:00Umbraco on Azure - take 1.5 (not 2)!Back in August of last year I wrote a step-by-step article on how to get Umbraco running on Windows Azure (the Microsoft cloud computing platform). 
It got a lot of hits from people looking to do just exactly that.<br /><br />There were a few loose ends in that piece, notably using VM-local rather than shared storage (which rules out Umbraco clustering) and using the .NET 3.5 runtime rather than .NET 4.0 (4.0 was a recent addition to Azure in Aug 2010 and it just didn't work out of the box - missing sections in the machine.config).<br /><br />So a follow-up article has been on my to-do list for a while now to tie up these loose ends, and then I found this - <a href="http://waacceleratorumbraco.codeplex.com/">the Umbraco Accelerator for Windows Azure</a>.<br /><br />I have no idea how good / bad it is, but it's a great idea and well worth a road test if you're looking to use Umbraco in production with Azure.<br /><br />It seems to be still active since its initial release in Oct 2010, with a point release put out there in mid-June.<br /><br />It also appears to be part of a wider plan to standardise how ASP.NET applications can be moved to Azure (<a href="http://azureaccelerators.codeplex.com/">the Windows Azure Accelerators project</a>), again a good thing IMHO.<br /><br />Let me know how it works for you.<br /><br /><br /><br /><br /><div><br /></div>Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com0tag:blogger.com,1999:blog-9518639.post-20494940476538648052011-05-14T02:38:00.000+01:002011-05-14T02:38:24.447+01:00Google IO 2011 Day Two recapOh the perils of making predictions when there is still a conference keynote to go!<br /><br />It turns out that Chrome OS and the associated hardware&nbsp;<a href="http://humphreysheil.blogspot.com/2011/05/google-io-2011-io2011-day-one-recap.html">haven't been read the last rites after all</a>. Rather, v1.0 is almost ready for primetime (scheduled for release in mid-June - about a month away). You have to imagine over time though that Google will want one code base for phones, tablets and chromebooks.
At the very least, they will want to make it as easy as possible for developers to write their applications once and have them "just work" on devices with radically different screen sizes and input methods, something that Android developers today are already doing. Nonetheless, a very brave play, especially in targeting the enterprise space, where significant replacement costs exist. If it pays off, it will be huge.<br /><br />Moving on from Chrome, two of the sessions I attended yesterday were really interesting - Full Text Search and Smart App Design.<br /><br />Full Text Search is Google's take on Lucene / Solr and is integrated into the App Engine Datastore as well, so it will be compelling for developers who just want to start indexing and scoring documents quickly. The "fully automatic" mode of operation with the Datastore should also be a timesaver.<br /><br />Smart App Design covered material of a completely different color. I had already read about the <a href="http://code.google.com/apis/predict/">Prediction API</a> in the blogosphere but I hadn't realised exactly what it did until this session. Essentially, Google offers the discerning developer the ability to add machine learning techniques to their application by leveraging a cloud-based service.<br /><br />At first glance, I had <i>thought</i> that the API gave access to the same model that Google uses to predict search terms, and I guess that is one use case. But Google has done much more than that - they have effectively white-labelled their machine learning technology and made it available to non-Google developers <i>to use with their own data</i>, i.e. learn what's important for their application / business.<br /><br />As with all machine-learning techniques, the nub of the matter remains the correct selection and efficient representation of the key attributes in the training set, and that is quite simply a problem that requires deep domain knowledge.
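To illustrate what "representation of the key attributes" means in practice - this is a generic toy example, nothing specific to the Prediction API's own formats, and the categories are hypothetical - a categorical attribute has to become numbers before most learners can use it, for instance via one-hot encoding:

```java
import java.util.Arrays;
import java.util.List;

/**
 * Toy illustration of attribute representation: a categorical feature
 * (e.g. board basis on a holiday booking) one-hot encoded into the kind
 * of numeric vector most learners expect. Categories are hypothetical.
 */
public class OneHotEncoder {

    private static final List<String> BOARD_BASIS =
            Arrays.asList("self-catering", "half-board", "full-board", "all-inclusive");

    /** 1.0 in the matching slot, 0.0 elsewhere; all zeros if the value is unseen. */
    public static double[] encode(String value) {
        double[] vector = new double[BOARD_BASIS.size()];
        int i = BOARD_BASIS.indexOf(value);
        if (i >= 0) {
            vector[i] = 1.0;
        }
        return vector;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(encode("half-board"))); // [0.0, 1.0, 0.0, 0.0]
    }
}
```

Deciding *which* attributes to encode at all - and how to handle the values the encoding has never seen - is exactly the part that needs the domain expert, not the API.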
<a href="http://googleenterprise.blogspot.com/2011/05/build-smarter-apps-with-improved-google.html">One announcement yesterday</a> was quite interesting however, in that Google are now allowing good model authors to sell their models to others. So if I come up with a model that predicts shopping basket behavior on leisure travel websites and a tour operator used that to bump their online conversion rate by 33%, then that model has a <b>lot</b> of value and it's a win-win situation for the model author and the model user.<br /><br />So an API with a lot of promise. But also with two potential flies in the ointment, one commercial and one cultural:<br /><br />(a) Commercial - Google are trying to charge for use of the API from day one - this will stymie adoption in the earliest stage<br /><br />(b) Cultural - an endemic problem with a lot of machine learning techniques is their black box nature. As someone who spent a fair bit of time working with artificial neural networks at university, I know that quite often a machine learning approach will yield the correct answer but the researcher can't exactly explain why! That's not a Google-specific weakness, but what is Google-specific is that the modules you access via the Prediction API (the man behind the curtain if you will) are not made open at all, so can a company really invest time in building, training and using models that they don't really understand and can never hope to? Only time will tell.<br /><br />So to recap then, Google IO was definitely worth attending this year - and not just for the hardware gifts! The main items on my research list post the event are:<br /><br />1. <a href="http://www.google.com/events/io/2011/sessions/writing-web-apps-in-go.html">Google Go running on App Engine</a><br /><br />2. <a href="http://www.google.com/events/io/2011/sessions/smart-app-design.html">The Prediction API</a><br /><br />3.
<a href="http://www.google.com/events/io/2011/sessions/full-text-search.html">Full Text Search enhancements / module for App Engine</a><br /><br />4. <a href="http://www.google.com/events/io/2011/sessions/map-your-business-inside-and-out.html">Adding my own hooks and content into Google Maps and Street View</a> to greatly enhance what the end user sees when they access Maps from my site<br /><br />5. <a href="http://code.google.com/apis/fusiontables/">Fusion tables</a> + <a href="http://code.google.com/apis/chart/">Charting </a>- a good / cheap way to rapidly slice and dice data and provide good interactive widgets to visualize same to end users.Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com0tag:blogger.com,1999:blog-9518639.post-79169456989620927642011-05-11T05:36:00.001+01:002011-05-11T05:41:15.274+01:00Google IO 2011 (#io2011) - day one recapThe <a href="http://code.google.com/">official Google code site</a> &nbsp;has the lowdown on all of the announcements that came thick and fast today (some 11 major items last time I checked and plenty of API revs and upgrades) and I won't replay them all here.<br /><br />Specific announcements that interested me today:<br /><br />Google Go is about to become an officially supported language on App Engine, alongside Python and Java (it's currently in "Trusted Tester" mode).<br /><br />Rhetorical question: what value does a complete end-to-end technology stack with no overhanging IPR issues or blockers have to Google as a potential insurance policy in case the Oracle lawsuit does not go in their favor / be settled reasonably? 
Two things I heard today convinced me that there is now serious engineering investment going into Go (as opposed to a small, talented team cranking things out as they work down the list):<br /><br />(a) The afore-mentioned App Engine support (this won't have been trivial to implement - Go is, after all, the first compiled language to run on App Engine)<br /><br />(b) The info that a "comprehensive" Go library for ultimately all of the Google APIs is in development and will be with us "soon".<br /><br />Go is a very nice language to write in, and the App Engine support announced today addresses one of the major gaps I identified when I took a <a href="http://humphreysheil.blogspot.com/2009/11/google-go-overview-for-jee-and-net.html">look at Go</a> when it was first released in Nov 2009.<br /><br />Three final comments on day one:<br /><br />1. Press articles I read in March / April this year about the +1 button being a make or break deal for Google to compete with Facebook seem overblown. The +1 button has merited just one session so far and apart from that you wouldn't even know Google had it. Either that or the memo didn't make it to the IO organisers in time.<br /><br />2. It's instructive to watch Google see the mistake that companies like Sun Microsystems made and impressive to watch how they studiously avoid it. It's not enough to develop great code / software / hardware - you have to have people **<i>using</i>** it. Google's continued push into content ensures that usage. Google is not just the place you go to find content on the web, it's also where you consume that content (first YouTube, but now books, movies and music too). I'm glad Google don't have a social network offering in their portfolio of services - they would be simply too powerful if they did.<br /><br />3.
Google IO seems to be **<i>all</i>** about Android so far - it's absolutely everywhere you look and consumed the entire keynote this morning (Ice Cream in Q4 that unifies tablet and phone, Futures (Android @ Home), open accessories etc.). Barring some crazy and unforeseen announcement tomorrow, I'd say Chrome OS has been given the last rites internally. But then again, who knows what day two will bring?Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com0tag:blogger.com,1999:blog-9518639.post-72079534500337829992011-03-06T20:10:00.000+00:002011-03-06T20:10:00.605+00:00A vision for big data in leisure travel ecommerce[<i>This is an article for people working in leisure travel technology / ecommerce online conversion who visit this blog, although many of the take-home points are transferable to other industry verticals.</i>]<br /><br />Data is big, and <a href="http://en.wikipedia.org/wiki/Big_data">getting bigger</a>. The more we track and log, the more storage is needed to warehouse it, and the more CPU horsepower is needed to mine it to answer questions posed by the business. As an aside, everyone is facing this issue and it's sink or swim, with the swimmers sure to get a competitive advantage over the sinkers. In this article, I'll examine the main data feeds that matter in leisure travel, and propose an architecture to collect, manage and mine them for business benefit. The end goal is to propose a <i>vision, </i>explaining why and how to collect data to better inform and drive business decisions that improve ecommerce performance.<br /><br />But why now - hasn't this always been an issue? Yes, but now more than ever, leisure travel is poised on the cusp of another big game-changer. Companies like Google and Microsoft are clearly already <a href="http://www.google.com/press/ita/">focusing more on travel as a segment</a>, and their data gathering and mining capabilities are considerable. 
But tour operators and online travel agencies (OTAs) have a significant competitive advantage over pure play technology companies as we'll see a little later.<br /><br /><span class="Apple-style-span" style="font-size: large;">Important data sources in leisure travel ecommerce</span><br /><br />First, let's examine the primary data sources that affect leisure travel ecommerce. There are some obvious entries in the table that follows, and some less so.<br /><br /><br /><table border="1"><tbody><tr> <td>ID</td><td>Name</td> <td>Internal / External</td> <td>Controllable</td> <td>Purpose / Comment</td> </tr><tr> <td>1</td><td>Availability (internal)</td> <td>Internal</td> <td>Yes</td> <td>Stock (internal, at-risk / committed inventory) available to sell, down to room type / meal plan / cabin and fare class</td> </tr><tr> <td>2</td><td>Pricing (internal)</td> <td>Internal</td> <td>Yes</td> <td>Pricing for internal stock. Entire teams stay focused on this source, ensuring it is (a) competitive, and (b) profitable</td> </tr><tr> <td>3</td><td>Availability (external)</td> <td>External</td> <td>Yes</td> <td>Stock available to sell contracted through third parties (usually not committed stock), down to room type / meal plan / cabin and fare class. Usually used to plug gaps in internal stock (resort coverage, star rating, price band etc.). Sources include GDSs, bed banks, car rental companies etc.</td> </tr><tr> <td>4</td><td>Pricing (external)</td> <td>External</td> <td>Yes</td> <td>Pricing for third party stock.&nbsp;</td> </tr><tr> <td>5</td><td>Rich content (internal)</td> <td>Internal</td> <td>Yes</td> <td>Provide compelling, unique, accurate text, images and video to convince the consumer to buy</td> </tr><tr> <td>6</td><td>Rich content (external)</td> <td>External</td> <td>Usually</td> <td>Provide compelling, unique, accurate text, images and video to convince the consumer to buy. 
Needs to be differentiated, otherwise your search engine ranking score will suffer due to duplicate content penalties.</td> </tr><tr> <td>7</td><td>Attributes</td> <td>Both</td> <td>Yes</td> <td>Attributes (aka facets) are becoming increasingly important - star rating, price bands, family-friendly (has a creche, rooms are adjoining), "has a", "is a", "is close to" - attributes provide consumers with a more intelligent and targeted search capability</td> </tr><tr> <td>8</td><td>User generated content</td> <td>External</td> <td>No</td> <td><a href="http://www.tripadvisor.co.uk/">Tripadvisor</a> is the poster child here, but user generated content (UGC) can be in-house too - it must, however, be perceived as <i>unbiased</i> by the consumer, otherwise it becomes a negative.</td> </tr><tr> <td>9</td><td>Meta data</td> <td>Both</td> <td>Yes</td> <td>Every business tags its own data - timestamps, version numbers, # revisions, author, approver, when last yielded. The more meta data you have, the better - it often helps to tie disparate data sources together and enriches the overall data pool</td> </tr><tr> <td>10</td><td>Search, cost, book funnel</td> <td>Internal</td> <td>Yes</td> <td>Traditionally the core of any ecommerce strategy - measures the complete search, cost and book journey. Needs to be fully instrumented to collect data so that A/B and multivariate testing can be used to fine-tune performance over time. <a href="http://humphreysheil.blogspot.com/2011/01/ecommerce-online-conversion-simple.html">Google Analytics does this very, very well</a>.</td> </tr><tr> <td>11</td><td>Offline (shop) interactions</td> <td>Internal</td> <td>Yes</td> <td>Few businesses try to tie shop activity back to online activity, but for a bricks and mortar plus clicks business, this is an opportunity missed</td> </tr><tr> <td>12</td><td>Online advertising (SEO)</td> <td>Internal</td> <td>Partially</td> <td>SEO can be thought of as PPC you don't pay for! 
Critical to making cost of acquisition online as efficient as possible. Only partially controllable due to businesses being at the mercy of search engine scoring (which both Google and Microsoft (Bing) keep as a black box algorithm)</td> </tr><tr> <td>13</td><td>Online advertising (PPC)</td> <td>Internal</td> <td>Yes</td> <td>Where Google makes its money! PPC has pride of place in every well-constructed ecommerce campaign, but the cost and effectiveness should be continuously monitored, challenged and tuned. CSV exports out of AdWords provide a good way to do this</td> </tr><tr> <td>14</td><td>Personalisation</td> <td>Internal</td> <td>Yes</td> <td>Personalisation - both anonymous and known - is a great way to learn <i>what</i> kind of holiday / vacation people want to buy from you and <i>how</i> they want to find and buy it. Just don't try to build personalisation before you have (10) working well - personalisation needs a really solid foundation to work well.</td> </tr><tr> <td>15</td><td>Social media</td> <td>External</td> <td>No</td> <td>The rising star that no-one really knows how to handle. The Facebook API contains a lot of potential for travel ecommerce&nbsp;</td> </tr><tr> <td>16</td><td>Offline / traditional advertising</td> <td>External</td> <td>Yes</td> <td>The efficacy (or not) of ad spend must extend to traditional / offline as well as the more easily measurable online variant, otherwise you don't know where all of your marketing £s / $s / €s are going</td> </tr><tr> <td>17</td><td>Post-booking interactions</td> <td>Internal</td> <td>Yes</td> <td>Not traditionally seen as an ecommerce data source, but savvy businesses are now looking at post-booking amendments, cancellation rates etc. 
to identify patterns that can feed back into the search experience</td> </tr><tr> <td>18</td><td>Customer Relationship Management (CRM)</td> <td>Internal</td> <td>Yes</td> <td>Both pre and post travel - it's key to have a good view of what the customer experiences <i>on holiday </i>and feed that back into what holidays are sold going forward. Is that picture of the pool misleading - change it! If the service is great, promote it more!</td> </tr></tbody></table><br /><span class="Apple-style-span" style="font-size: x-small;">Table 1. A proposed taxonomy on data sources that impact and influence leisure travel ecommerce.</span><br /><br />Two important characteristics of data are whether you control it or not (and hence can change it if you need to) and whether it is sourced from an internal system or an external system (and thus how trustworthy / accurate the data is and whether it is unique to you or if other business entities can see it too). We have added these two characteristics to the table above for clarity.<br /><br />What should be obvious to the reader is that a holistic picture of ecommerce performance requires multiple data sources, some of which traditionally would not be seen as impacting the effectiveness of a leisure travel ecommerce system. Gone are the days of simply looking at the web logs to see how effective (or leaky) the conversion funnel is! In fact, there are probably some sources that I've inadvertently omitted, and indeed as new systems come on stream, new sources will be added to this table / taxonomy.<br /><br />Finally, it's interesting from a <i>barrier to entry</i> perspective to note that only the well-placed tour operator or OTA actually has the wherewithal and access to collate data from all of the sources noted in the table. Other new entrants simply do not have access to many of the sources listed. 
<b>The data itself is now a valuable commodity (and is increasing in value), and an asset that leisure travel businesses would do well to guard jealously</b>.<br /><br /><span class="Apple-style-span" style="font-size: large;">What we need - Systems and Data working together</span><br /><br />At present, I contend that the average tour operator / OTA is collecting some, but not all, of the data sources identified, and that no tour operator or OTA has yet constructed a system that provides a holistic, joined-up view of the data back to the business function to inform decision-making activities. Why not? Because it's not easy to do! The IT estate behind these data sources is fragmented (core res system, yielding system, multiple content management systems, external systems, separate booking repositories / agency management systems, Google Analytics, Google AdWords, Excel spreadsheets), often owned by different companies and wasn't designed to provide the kind of view that is now needed. Ominously, new entrants into the space do not have a lot of the legacy baggage that incumbents do, meaning their <i>velocity of implementation and ongoing change</i> creates a hard-to-ignore imperative for all sellers of leisure travel to innovate quickly and learn from their data, or be left behind.<br /><br />The technical challenge is four-fold:<br /><br />1. <i>Collection and storage</i> - gather and store as much data as possible for each data source in the table, with that data being as clean and structured as possible (and in the real world, every data set will have some noise to it)<br /><br />2. <i>Build a holistic, joined-up data set</i> - identify ways to link the data sources together - version number, unique keys, foreign keys, link backs, tagging etc. The more your data sources are joined up, the more holistic a view of the business you are building (and can provide back to the business). 
Conversely, disconnected data sets (data islands) are of much less value to the business and introduce the risk of an incomplete / inaccurate view of what's really happening now being used to influence what's going to happen next<br /><br />3. <i>Answering the questions</i> - provide a mechanism to answer questions over this corpus of data in near real-time to allow the business to modify its behaviour and focus to maximise profits, yield and margin<br /><br />4. <i>Suggesting the questions</i> - once the above three points have been implemented to a mature and repeatable level, the final logical step is for the data function to actually suggest areas of improvement and further exploration based on emergent patterns in the data, using techniques such as artificial neural networks and <a href="http://en.wikipedia.org/wiki/Self-organizing_map">self-organising map</a> (SOM) analysis<br /><br /><br /><span class="Apple-style-span" style="font-size: large;">Putting it all together - a suggested framework</span><br /><br />There are many ways to construct a view over the data sources identified in the previous section. And in fact, multiple views are encouraged depending on the goal of the business. Here however, a hybrid of time and business function is selected in order to define a reasonable framework to hold the data. This framework is depicted in the following diagram.<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://lh5.googleusercontent.com/-IFdfP0daRcI/TXPmNFVZdZI/AAAAAAAAABU/Y9ehYVYbzds/s1600/big-data.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="544" src="https://lh5.googleusercontent.com/-IFdfP0daRcI/TXPmNFVZdZI/AAAAAAAAABU/Y9ehYVYbzds/s640/big-data.png" width="640" /></a></div><br /><br /><span class="Apple-style-span" style="font-size: x-small;">Figure 1. 
High-level schematic of the big data system for leisure travel ecommerce.</span><br /><br /><span class="Apple-style-span" style="font-size: large;">A concrete implementation of the framework</span><br /><br />The question naturally arises - how would this system be constructed, not just initially but also maintained and extended going forward?<br /><br />Some natural candidates already exist, chief among them <a href="http://cassandra.apache.org/">Cassandra</a> and <a href="http://hadoop.apache.org/">Hadoop</a>. In the author's opinion, a hybrid architecture combining Cassandra's data storage (with its innate simplicity and high availability) and Hadoop's MapReduce framework offers the best blend of performance, scalability, availability / resilience, querying and extensibility. A separate follow-on instalment to this article is warranted to provide a detailed technical treatise on the underpinnings of the system outlined here.<br /><br /><span class="Apple-style-span" style="font-size: large;">Conclusion</span><br /><br />The dominant data sources that impact the effectiveness of a leisure travel ecommerce strategy are identified, named and classified. Developing this classification further, a model is used to create a framework to house the data sources and a concrete implementation is suggested.<br /><br />About the author: <a href="http://humphreysheil.blogspot.com/2003/12/what-i-do.html">Humphrey</a> is the Chief Technology Officer for <a href="http://www.comtec-group.com/">Comtec Group</a>, a company that specializes in leisure travel technology.<br /><div><br /></div>Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com1tag:blogger.com,1999:blog-9518639.post-34936820241944801052011-03-02T20:44:00.000+00:002011-03-02T20:44:30.450+00:00JDK 7 preview and JEE 7 planningWe got two interesting developments in Java land this week:<br /><br />1. 
Oracle released the <a href="http://blogs.sun.com/mr/entry/jdk7_preview">developer preview of the Java 7 Development Kit (JDK)</a><br /><br />2. Oracle have started <a href="http://www.theregister.co.uk/2011/03/02/java_floats_java_ee_cloud_roadmap/">talking publicly about what JEE 7</a> (and beyond - JEE 8) will look like in Q3 2012 and Q4 2013.<br /><br />(1) has been a long time coming and it's good to see the log jam moving. Simply shipping JDK 7 is good in its own right but it also means that the team will move on to working on JDK 8, which contains some key language features omitted from JDK 7 so that the team could JGIOTFD (Just Get It Out The (reader exercise to complete the acronym)).<br /><br />(2) looks to be Oracle really making the JEE stack cloud-based / cloud-friendly <i>by default</i> rather than a technology stack that merely facilitates cloud computing. This dynamic should see Oracle formalising exactly what constitutes "JEE in the cloud" via a JSR and thus wresting that intellectual responsibility back from Google's App Engine platform, which is pretty much the de facto standard for "JEE in the cloud" at present.<br /><br />Looking beyond JEE 7, JEE 8 looks to be embracing Big Data / NoSQL systems like Hadoop and Cassandra, although we can expect to have seen significant consolidation in this space by 2013, making the integration and platform support task easier to accomplish.<br /><br />All in all, two nice moves, and good news for the Java ecosystem / economy. 
You might or might not like Oracle, but they are getting stuff out the door in a way that Sun kind of forgot how to do.Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com0tag:blogger.com,1999:blog-9518639.post-56148690229915820122011-02-22T08:38:00.000+00:002011-02-22T08:38:19.815+00:00Oracle Certified Enterprise Architect - JEE 6 refresh updateThe JEE 6 SCEA exam / certification<br /><br />Following on from my earlier post requesting input into my next post, here's an often-requested update: what's happening with the <a href="http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=326&amp;p_org_id=44&amp;lang=US">Oracle Certified Master, Java EE 5 Enterprise Architect exam / certification</a> update to JEE 6 standard?<br /><br />In a nutshell, here's where it is (covering each of the three parts in turn):<br /><br />Parts two and three of the exam (the practical elements) will remain very similar to how they operate today - these elements test your ability to design and document (part two) a solution to a well-defined business problem using the JEE platform and then challenge you (part three) to self-critique and justify key design decisions taken, especially on how non-functional requirements will be adequately satisfied. Parts two and three are pretty much independent of the current JEE revision, because the candidate is given a good degree of latitude in how they use JEE to solve the problem. Were you to use J2EE 1.4 features, let's say, then the examiner is going to question the logic of that decision closely, but that's about it. 
Writing Ruby code and then having it compile to Java bytecodes at runtime using JRuby is also not recommended (don't laugh, someone did ask..)!<br /><br />Part one of the exam (the multiple-choice exam) **will** change for JEE 6 - it has to because part one is more tightly coupled to a specific JEE revision - currently JEE 5 (with ~5% of J2EE 1.4 content).<br /><br />The last time we revised part one, ~ten architects got together in Broomfield, CO for a week to design and critique the corpus of questions used. After that, Sun Microsystems (as they were then), brought in some external testing folks to benchmark the exam and to critique the overall marking strategy we intended to employ. That was an intense week and overall a fairly involved process, because you want to write difficult, tricky questions that will challenge an architect but at the same time, be fair. Part one of the architect exam is also not allowed to test your ability to memorize APIs or specifications - that is the primary task for the lower certifications. You very quickly find that a lot of difficult / tricky questions in JEE revolve around the APIs and specifications!<br /><br />I think with the benefit of hindsight we erred on the side of fairness over toughness. I think we'll look to toughen up the questions for JEE 6.<br /><br />I don't expect Oracle to reconvene the team of architects to do this refresh - the last refresh of the exam was a major refresh whereas we would consider this refresh to be more minor. Therefore the time taken to update should be shorter. Once the part one refresh is scheduled in, I'll post again on this topic. 
For now, the JEE 5 architect exam remains the most current and up to date architect exam you can take.<br /><div><br /></div>Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com8tag:blogger.com,1999:blog-9518639.post-88594940462639612452011-01-23T20:49:00.000+00:002011-01-23T20:49:50.398+00:00What would you like to read about next?I've been pondering what next to write about and thought - why not ask the readers?&nbsp;So here's your chance!<br /><br />Turns out that most people visiting / watching this blog fall into four camps (in no order of priority):<br /><br />* Want to know more about Enterprise Java architecture / software architecture in general<br /><br />*&nbsp;Want to know more about&nbsp;the Oracle Certified Enterprise Architect exam for the Java platform (I'm a co-author of the <a href="http://humphreysheil.blogspot.com/2010_01_01_archive.html">study guide for this exam</a> as well as a co-lead assessor)<br /><br />* Want to know more about .NET (especially <a href="http://humphreysheil.blogspot.com/2010_08_01_archive.html">running Umbraco on Windows Azure</a> and / or MVC 3)<br /><br />* Want to know more about <a href="http://humphreysheil.blogspot.com/2011_01_01_archive.html">ecommerce tracking</a> (measuring, then improving online conversions)<br /><br />At least, that's what the web tracking software gods say! 
There's a great mix of visitors too from all corners of the globe, but the next post will be in English, I'm afraid.<br /><br />So if there's a specific topic relating to the categories above that you'd really like to see covered, drop me a note at <a href="mailto:hsheilblog@gmail.com">hsheilblog@gmail.com</a> and I'll do my best to cover it - and may the best suggestion win!Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com0tag:blogger.com,1999:blog-9518639.post-79896379384313594042011-01-09T21:27:00.001+00:002011-01-09T21:29:16.459+00:00Ecommerce: online conversion - simple model and toolsetReaders of this blog can wax lyrical on how to build a great&nbsp;B2C ecommerce site - either in JEE or .NET. First we get the technology stack right, then frameworks using that technology stack, comprehensive functional and technical specs, testing plans, coding standards + reviews with daily scrum meetings, hardware / cloud estimation and then load / penetration testing - this is bread and butter to the software architect.<br /><br />What a lot of software architects don't understand (or underestimate) is what needs to happen to their site <i>after</i> it goes live. After the go-live of a B2C ecommerce site, a whole other team (which is fairly non-technical) takes it over. This team is really exercised by and focused on three core goals:<br /><br />1. Get qualified visitors to the site as cost-effectively as possible<br /><br />2. Enable those visitors to find the product they want quickly and easily<br /><br />3. Convert the visitor into a customer - convince them to buy on your site<br /><br />These goals are completely measurable in monetary terms, and hence you will find senior management taking a serious interest in them as well.<br /><br />I work in leisure travel, and there are some very specific nuances to achieving these goals in my industry sector (every industry sector will have their own nuances). 
But there is also a generic model to be found and some very useful (and free!) tools that you can use to put the model in place.<br /><br />Turns out the model is pretty simple. Essentially it consists of three components:<br /><br />1. <b>Analytics</b> - where we measure what's happening on our target site - how is the user interacting with the site and can we infer what they do and don't like based on measuring and studying those interactions<br /><br />2. <b>Hypothesis testing (aka A/B and / or multivariate testing)</b> - Analytics will give us lots of data to generate ideas on how to improve interactions, therefore we need a mechanism to test out hypotheses in a semi-automated way (if I change X, I bet the conversion rate will increase by Y%)<br /><br />3. <b>Efficient prospect capture</b> - we want the best native SEO score possible on all of the search engines and when we spend money on ad campaigns, we want the best return for that investment.<br /><br />So that's the high-level model - it's pretty simple.<br /><br />Many companies (and especially Google), make an awful lot of money around online ecommerce. And that's where the "free!" I noted above comes in. It makes sense for Google to give away the tools enabling Analytics (1) and Hypothesis testing (2) for free, as they make so much revenue on selling ad campaigns in Efficient Prospect Capture (3). Unkind souls might claim that if you spend any kind of money with Google AdWords at all, then you're not really getting (1) or (2) for free, but you won't find a nefarious cheap shot like that on this blog.<br /><br />Let's look at how we can implement the model then:<br /><br />1. Analytics - use&nbsp;<a href="http://www.google.com/analytics/">Google Analytics</a>. 
<a href="http://www.advanced-web-metrics.com/blog/">Brian Clifton's book</a> is an excellent treatise on the application, and the <a href="http://www.google.com/support/conversionuniversity/bin/request.py?hl=en-uk&amp;contact_type=indexSplash&amp;rd=1">online training videos</a> are of a high standard as well. It's well worth having a couple of developers on your team get Analytics certified to understand what the tool can do - it really is very powerful<br /><br />2. Hypothesis (A/B, multivariate) testing - use&nbsp;<a href="http://www.google.com/websiteoptimizer">Google Website Optimizer</a>. There's less information about this tool, I guess because it's a bit simpler than Analytics, but <a href="http://www.google.com/support/websiteoptimizer/">a good overview is available</a>. Being able to change content and see the impact on the fly is a key part of the model - that's why we use a CMS like Umbraco!<br /><br />3. Efficient prospect capture - SEO, SEO and more SEO. <a href="http://www.artofseobook.com/">The Art of SEO</a> is a great read. My opinion here is that as long as you're doing a great job on your own SEO, you should begrudge a search engine every penny. By using tagging in conjunction with Google Analytics (make sure you associate your AdWords account with your Analytics account to get all this done for you automagically), you can continually check that your ROI on ad campaigns is worth the spend, and stop buying terms that don't make money.<br /><br />And that's pretty much it. A three-component generic model for online ecommerce, followed by the simplest (with zero cost) way to implement that model for your B2C site. I intimated that each industry sector has its own quirks and foibles above and beyond this base model, and I'll focus on the leisure travel industry in more detail in a future post or two. 
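The hypothesis-testing component of the model is easy to sanity-check with a back-of-the-envelope calculation before any tooling gets involved. Here's a minimal sketch of a plain two-proportion z-test for an A/B conversion experiment - the class name and the visitor / booking numbers are purely illustrative, and this is not how Website Optimizer computes its results internally:

```java
// Hypothetical sketch: compare conversion rates of a control page (A)
// against a variant page (B) using a two-proportion z-test.
public class AbTestSketch {

    // Returns the z-score for the difference in conversion rates.
    static double zScore(int convA, int visitsA, int convB, int visitsB) {
        double pA = (double) convA / visitsA;
        double pB = (double) convB / visitsB;
        // Pooled conversion rate under the null hypothesis (no difference)
        double pPool = (double) (convA + convB) / (visitsA + visitsB);
        double se = Math.sqrt(pPool * (1 - pPool) * (1.0 / visitsA + 1.0 / visitsB));
        return (pB - pA) / se;
    }

    public static void main(String[] args) {
        // Illustrative numbers: control converts 300/10,000, variant 360/10,000
        double z = zScore(300, 10_000, 360, 10_000);
        System.out.printf("z = %.2f -> %s%n", z,
                Math.abs(z) > 1.96 ? "significant at the 95% level" : "not significant yet");
    }
}
```

A |z| above 1.96 means the uplift is significant at the 95% level; below that, keep the test running (or call it a draw) rather than declaring a winner.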
For now, enjoy!<br /><div><br /></div>Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com0tag:blogger.com,1999:blog-9518639.post-1103132236876927112010-12-28T17:33:00.003+00:002012-03-21T16:21:18.842+00:00What I do<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-HJ98vyQGlls/TWGnZZztIAI/AAAAAAAAABQ/k7BG6td8j20/s1600/WTM+2007+photos+026.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="http://4.bp.blogspot.com/-HJ98vyQGlls/TWGnZZztIAI/AAAAAAAAABQ/k7BG6td8j20/s200/WTM+2007+photos+026.jpg" width="200" /></a></div><br /><br />(I can be contacted by <a href="mailto:hsheilblog@gmail.com">email</a>&nbsp;or <a href="http://twitter.com/#!/humphreysheil">twitter</a>).<br /><br />I design and build software solutions that address business needs in the simplest possible way. I'm comfortable operating at the nexus of technology and commerce - bridging the gap between the software / hardware teams and the business drivers and key stakeholders, right up to board level.<br /><br />Currently I'm the&nbsp;Chief Technology Officer for <a href="http://www.eysys.com/">Eysys</a>&nbsp;(<a href="http://www.eysys.com/careers.html">we're hiring</a> by the way!).&nbsp;At Eysys we're using big data combined with machine learning to build a next generation ecommerce platform, with baked-in intelligence to optimise conversion and make efficient use of marketing spend.<br /><br />In a previous life, I was the Head of Data Engineering and Infrastructure at the Thomas Cook Online Travel Agency, using Master Data Management and big data analysis to drive platform conversion and performance.<br /><br />Previous to that (sheesh!), I was the Chief Technology Officer for <a href="http://www.comtec-group.com/">Comtec Group</a> - building end to end systems for clients in the leisure travel industry, primarily in the UK and US. 
I led the definition and construction of our travel suite, from fast loading of inventory (e.g. Hotel, Air, Transfers etc.) through GDS selection, with a particular focus on ecommerce. In the ecommerce world we helped our customers to measure and increase online conversion rates, optimize PPC spend, increase SEO scores and overall consumer engagement. We leveraged analytics, A/B and multivariate testing and personalization techniques, to name just a few tools and techniques in the kit bag.<br /><br />Before Comtec I worked for a <a href="http://www.advancedcomputersoftware.com/abs/">financial services company</a> as a software architect and before that again I worked as a consultant for a well-known <a href="http://www.sapient.com/">business and IT consulting company</a>.<br /><br />In 2000, I became an external examiner and subject matter expert for the <a href="http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=326">Java Enterprise Architect</a> accreditation from Sun Microsystems - now Oracle. I have presented at JavaOne and written numerous articles on many different aspects of software engineering. In 2010, I co-authored the <a href="http://www.amazon.com/Certified-Enterprise-Architect-Study-Guide/dp/0131482033/ref=sr_1_1?ie=UTF8&amp;s=books&amp;qid=1293563898&amp;sr=8-1">definitive official study guide to the SCEA exam</a> itself.<br /><br />I am deeply rooted in Computer Science - I have a particular interest in distributed systems and hold a B.Sc (1998 - First Class Honours) and M.Sc (2002) in Computer Science from <a href="http://www.ucd.ie/">University College Dublin</a>.&nbsp;<a href="http://portal.acm.org/citation.cfm?id=957352&amp;dl=ACM&amp;coll=portal">My M.Sc. 
thesis</a> focused on building a high-throughput grid-like compute engine using Java and Artificial Neural Networks to solve a well-known bioinformatics problem (protein secondary structure prediction).<br /><div><br /></div>Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com1tag:blogger.com,1999:blog-9518639.post-3201279130368769062010-12-03T00:06:00.000+00:002010-12-03T00:06:14.081+00:00Early Xmas Cloud presents from Microsoft, Google..Just about 48 hours apart, Microsoft and Google have released significant updates for their Azure and App Engine cloud offerings just in time for Christmas.<br /><br />The <a href="http://googleappengine.blogspot.com/2010/12/happy-holidays-from-app-engine-team-140.html">1.4.0 App Engine SDK</a> addresses some long-criticised weaknesses, in particular not being able to keep an instance ready to rock and roll at all times plus the ability to execute long-running requests (&gt; ten seconds). The ole App Engine has been getting a bit of a kicking recently in the blogosphere so this is a timely release (assuming the unplanned outages have been sorted out in parallel with this). 
There's nothing in the release notes about a more SQL-like persistence store like <a href="http://www.microsoft.com/en-us/sqlazure/default.aspx">SQL Azure</a>, so you still need to wrap your head around Google's Datastore and the pros and cons it gives you.<br /><br />The <a href="http://blogs.msdn.com/b/windowsazure/archive/2010/11/29/just-released-windows-azure-sdk-1-3-and-the-new-windows-azure-management-portal.aspx">1.3 Azure SDK</a>&nbsp;also addresses some weaknesses in Azure, in particular now allowing developers to actually RDP onto their Azure boxen in the cloud, a really big improvement on the current state of affairs (basically you get a headless box with non-straightforward access to log files via the Windows Azure Diagnostics service).<br /><br />It's interesting how these SDK releases are solidifying the differences between these two cloud offerings - Google are zeroing in on providing a PaaS model, where you have to code in a supported programming language (currently either Java or Python -&nbsp;wonder when they <a href="http://golang.org/doc/devel/roadmap.html">will support Google Go</a>?)&nbsp;against a locked-down set of APIs, whereas Microsoft are moving more towards an IaaS model where you do what you like cos it's more or less your box. Both approaches have their strengths and weaknesses, and the overall ecosystem is stronger for having both.Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com0tag:blogger.com,1999:blog-9518639.post-17565810505049901752010-09-27T10:39:00.002+01:002010-09-27T10:41:38.727+01:00The curious case of Oracle, the JDK and plan B (aka the prune juice plan)<div><br /><br />Mark Reinhold (Chief Architect of the Java Platform Group at Oracle) posted a Plan A and B approach (just like a classic A/B ecommerce conversion test eh?!) for the JDK roadmap in advance of the annual Java love fest that is JavaOne in San Francisco last week. 
For me, this was the biggest item I was looking for - the time gap between JDK 6 and 7 has been ridiculous.<br /><br />From his <a href="http://blogs.sun.com/mr/entry/rethinking_jdk7">"Re-thinking JDK 7"</a> post, the options proposed are:<br /><br />&lt;snip&gt;<br /><br />Plan A:<span class="Apple-tab-span" style="white-space: pre;"> </span>JDK 7 (as currently defined)<span class="Apple-tab-span" style="white-space: pre;"> </span>Mid 2012<br /><br />Plan B:<span class="Apple-tab-span" style="white-space: pre;"> </span>JDK 7 (minus Lambda, Jigsaw, and part of Coin)<span class="Apple-tab-span" style="white-space: pre;"> </span>Mid 2011<br /><span class="Apple-tab-span" style="white-space: pre;"> </span>JDK 8 (Lambda, Jigsaw, the rest of Coin, ++)<span class="Apple-tab-span" style="white-space: pre;"> </span>Late 2012<br /><br />&lt;/snip&gt;<br /><br />I am <b><i>firmly&nbsp;</i></b>in favour of the option eventually selected - option B. It's clear that the JDK has a huge feature log jam. Selecting option B is like giving the JDK release schedule a big dose of prune juice - you know something's gonna start moving.<br /><br />So to understand what Plan B means for you as a Java architect, I suggest that it can be broken down into these four steps.<br /><br />1. Read the negative comment to a further post by Mark announcing the decision - this comment represents why you would be unhappy with Plan B. I reproduce it here for the lazy reader (not you, the other guy):<br /><br />"<i>Hi Mark,</i><br /><i><br /></i><br /><i>To me, "JDK 7 minus Lambda, Jigsaw and part of Coin" doesn't sound much like "Getting Java moving again" :-(</i><br /><i><br /></i><br /><i>This schedule is very disappointing.</i><br /><i><br /></i><br /><i>Posted by Cedric on September 08, 2010 at 10:06 AM PDT</i>"<br /><br />2. Read the response to the negative comment to understand what Plan B entails. 
Again, reproduced here:<br /><br />"<i>JDK</i><i> 7 - (Lambda + Jigsaw + part of Coin) = Most of Coin + </i>NIO<i>.2 (</i>JSR<i> 203) +</i><br /><i>InvokeDynamic (JSR 292) + "JSR 166y" (fork/join, etc.) + most everything else</i><br /><i>on the current feature list (http://openjdk.java.net/projects/jdk7/features/) +</i><br /><i>possibly a few additional features TBD.</i><br /><i><br /></i><br /><i>Posted by Mark Reinhold on September 08, 2010 at 10:26 AM PDT</i>"<br /><br />The TBD bit is a tad ambiguous - let's ignore it by assuming nothing major is going to get in now, given the sheer volume of regression and platform testing needed before a JDK hits gold / GA status.<br /><br />3. So now you know Project Coin is the biggie for JDK 7 - therefore you need to download the presentation from this year's <a href="http://blogs.sun.com/darcy/resource/JavaOne/J1_2010-ProjectCoin.pdf">JavaOne 2010 session on Coin</a> (119 slides, but a lot of these are just slides bitching about how hard it is to do; the key slides are 10 and 23 - 66). Try-with-resources (Automatic Resource Management) looks great - equivalent to C#'s using keyword. Enhanced exception handling will enable better code as well.<br /><br />4. [Optional, for the dedicated reader] Some more light bedtime reading - follow the links from the <a href="http://openjdk.java.net/projects/jdk7/features/">JDK 7 roadmap</a>, especially for Project Lambda (closures) and Jigsaw (modular Java). 
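The two Coin features called out above - try-with-resources and improved (multi-catch) exception handling - can be sketched in JDK 7 syntax. A minimal, illustrative example; the class and method names here are my own invention, not part of any API:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class CoinDemo {

    // Try-with-resources: the reader is closed automatically when the
    // block exits, normally or via an exception - no finally block needed.
    static String firstLine(String text) {
        try (BufferedReader reader = new BufferedReader(new StringReader(text))) {
            return reader.readLine();
        } catch (IOException e) {
            return "";
        }
    }

    // Multi-catch: one handler covers several exception types,
    // replacing duplicated catch blocks.
    static String parseOrDefault(String s) {
        try {
            return Integer.toString(Integer.parseInt(s.trim()));
        } catch (NumberFormatException | NullPointerException e) {
            return "default";
        }
    }

    public static void main(String[] args) {
        System.out.println(firstLine("hello\nworld")); // hello
        System.out.println(parseOrDefault(" 42 "));    // 42
        System.out.println(parseOrDefault("oops"));    // default
    }
}
```

Compare firstLine with the JDK 6 equivalent - a try/finally with its own nested try/catch around close() - and you can see why this feature alone justifies the Plan B release.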
This will then get JDK 8 on your forward-looking radar.<br /><br />Now the **real** question is what will JEE 7 look like?!<br /><div><br /></div></div>Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com0tag:blogger.com,1999:blog-9518639.post-7550919592557728902010-08-29T14:50:00.015+01:002010-08-30T01:10:29.572+01:00Umbraco CMS - complete install on Windows Azure (the Microsoft cloud)<div>We use the Umbraco CMS a lot at work - it's widely regarded as one of (if not the) best CMSs out there in the .NET world. We've also done quite a bit of R&amp;D work on Microsoft Azure cloud offering and this blog post shares a bit of that knowledge (all of the other guides out there appear to focus on getting the Umbraco database running on SQL Azure, but not how to get the Umbraco server-side application itself up and running on Azure). The cool thing is that Umbraco comes up quite nicely on Azure, with only config changes needed (no code changes).</div><div><br /></div><div>So, first let's review the toolset / platforms I used:</div><div><br /></div><div>* <a href="http://umbraco.codeplex.com/releases/view/51165">Umbraco 4.5.2</a>, built for .NET 3.5 </div><div>* Latest Windows Azure Guest OS (1.5 - Release 201006-01)</div><div>* Visual Studio 2010 Professional</div><div>* <a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=2274a0a8-5d37-4eac-b50a-e197dc340f6f&amp;displaylang=en">Azure SDK 1.2</a> </div><div>* SQL Express 2008 Management Studio</div><div>* .NET 3.5 sp1</div><div><br /></div><div><br /></div><div>Step one is simply to get Umbraco running happily in VS 2010 as a regular ASP.NET project. The steps to achieve this are <a href="http://our.umbraco.org/wiki/codegarden-2009/open-space-minutes/working-in-visual-studio-when-developing-umbraco-solutions/working-in-visual-studio-when-developing-umbraco-solutions-(2nd-way)">well documented here</a>. 
Test your work by firing up Umbraco locally, accessing the admin console and generating a bit of content (XSLTs / Macros / Documents etc.) before progressing further. (The key to working efficiently with Azure is to always have a working case to fall back on, instead of wondering what bit of your project is not cloud-friendly).</div><div><br /></div><div>Then <a href="http://blogs.msdn.com/b/jnak/archive/2010/02/08/migrating-an-existing-asp-net-app-to-run-on-windows-azure.aspx">use these steps</a> to make your Umbraco project "Azure-aware". Again, test your installation by deploying to the Azure Dev Compute and Storage Fabric on your local machine and testing that Umbraco works as it should before going to production. The Azure Dev environment is by no means perfect (see below) or a true stand-in for Azure Production, but it's a good check nonetheless.</div><div><br /></div><div>Now we need to use the <a href="http://sqlazuremw.codeplex.com/">SQL Azure Migration Wizard tool</a> to migrate the Umbraco SQL Express database. 
I used v3.3.6 (which worked fine with SQL Express, contrary to some of the comments on the site) to convert the Umbraco database to its SQL Azure equivalent - the only change the migration tool has to make is to add a clustered index on one of the tables (dbo.umbracoUserLogins) as follows - everything else migrates over to SQL Azure easily:</div><div><br /></div><br /><br /><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">CREATE CLUSTERED INDEX [ci_azure_fixup_dbo_umbracoUserLogins] ON [dbo].[umbracoUserLogins] </span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">(</span></span></div><div><span class="Apple-tab-span" style="white-space: pre;"><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;"> </span></span></span><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">[userID]</span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">)WITH (IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF)</span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">GO</span></span></div><div><br /></div><div>Then create a new database in SQL Azure and re-play the script generated by AzureMW into it to create the db schema and standing data that Umbraco expects. 
To connect to it, you'll replace a line like this in the Umbraco web.config:</div><div><br /></div><div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;"> &lt;add key="umbracoDbDSN" value="server=.\SQLExpress;database=umbraco452;user id=xxx;password=xxx" /&gt;</span></span></div></div><div><br /></div><div>with a line like this:</div><div><br /></div><div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;"> &lt;add key="umbracoDbDSN" value="server=tcp:&lt;&lt;youraccountname&gt;&gt;.database.windows.net;database=umbraco;user id=&lt;&lt;youruser&gt;&gt;@&lt;&lt;youraccount&gt;&gt;;password=&lt;&lt;yourpassword&gt;&gt;" /&gt;</span></span></div></div><div><br /></div><div>So we now have the Umbraco database running in SQL Azure, and the Umbraco codebase itself wrapped using an Azure WebRole and deployed to Azure as a package. 
If we do this using the Visual Studio tool set, we get:</div><div><br /></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">19:27:18 - Preparing...</span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">19:27:19 - Connecting...</span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">19:27:19 - Uploading...</span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">19:29:48 - Creating...</span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">19:31:12 - Starting...</span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">19:31:52 - Initializing...</span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">19:31:52 - Instance 0 of role umbraco452_net35 is initializing</span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">19:38:35 - Instance 0 of role umbraco452_net35 is busy</span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">19:40:15 - Instance 0 of role umbraco452_net35 is ready</span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">19:40:16 - Complete.</span></span></div><div><br /></div><div>Note the total time taken - Azure is deploying a new VM image for you when it does this; it's not just deploying a web app to IIS, so the time taken is always ~ 13 minutes, give or take. 
I wish it were quicker..</div><div><br /></div><div><br /></div><div><br /></div><div><span class="Apple-style-span" style="font-size: large;"><span class="Apple-style-span">Final comments</span></span></div><div><br /></div><div><div>If you deploy and it takes longer than ~13 minutes, then double check the common Azure gotchas. In my experience they are:</div><div><br /></div><div>1. Missing assemblies in production - so your project runs fine on the Dev Fabric and just hangs in Production on deploy - for Umbraco you need to make sure that Copy Local is set to true for cms.dll, businesslogic.dll and of course umbraco.dll so that they get packaged up.</div><div><br /></div><div>2. Forgetting to change the default value of DiagnosticsConnectionString in ServiceConfiguration.cscfg (by default it wants to persist to local storage, which is inaccessible in production). You'll need to use an Azure storage service and update the connection string to match; e.g. your ServiceConfiguration.cscfg should look something like this:</div><div><br /></div><div><div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">&lt;?xml version="1.0"?&gt;</span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">&lt;ServiceConfiguration serviceName="UmbracoCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"&gt;</span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">&lt;Role name="umbraco452_net35"&gt;</span></span></div><div><span class="Apple-style-span"><span 
class="Apple-style-span" style="font-size: small;">&lt;Instances count="1" /&gt;</span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">&lt;ConfigurationSettings&gt;</span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">&lt;Setting name="DiagnosticsConnectionString" value="DefaultEndpointsProtocol=https;AccountName=travelinkce;AccountKey=youraccountkey" /&gt;</span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">&lt;/ConfigurationSettings&gt;</span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">&lt;/Role&gt;</span></span></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">&lt;/ServiceConfiguration&gt;</span></span></div></div></div></div><div><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;"><br /></span></span><br /><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">You also need to run Umbraco in full-trust mode, otherwise you will get a security exception when Umbraco tries to read files that are not inside its own "local store" as defined by the .NET CAS (Code Access Security) subsystem running on the production Azure VM. 
In other words, you need the&nbsp;enableNativeCodeExecution property set to true in your ServiceDefinition.csdef like so:</span></span><br /><br />&lt;?xml version="1.0" encoding="utf-8"?&gt;<br /><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;"></span></span><br /><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">&lt;ServiceDefinition name="UmbracoCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"&gt;<br />&nbsp;&nbsp;&lt;WebRole name="umbraco452_net35" enableNativeCodeExecution="true"&gt;<br />&nbsp;&nbsp; &nbsp;&lt;InputEndpoints&gt;<br />&nbsp;&nbsp; &nbsp; &nbsp;&lt;InputEndpoint name="HttpIn" protocol="http" port="80" /&gt;<br />&nbsp;&nbsp; &nbsp;&lt;/InputEndpoints&gt;<br />&nbsp;&nbsp; &nbsp;&lt;ConfigurationSettings&gt;<br />&nbsp;&nbsp; &nbsp; &nbsp;&lt;Setting name="DiagnosticsConnectionString" /&gt;<br />&nbsp;&nbsp; &nbsp;&lt;/ConfigurationSettings&gt;<br />&nbsp;&nbsp;&lt;/WebRole&gt;<br />&lt;/ServiceDefinition&gt;</span></span><br /><br /><br />The Azure development tools (Fabric etc.) are quite immature in my opinion - very slow to start up (circa one minute) and they simply crash when you've done something wrong rather than give a meaningful error message and then exit (for example, when trying to access a local SQL Server Express database (which is wrong - fair enough), the load balancer simply crashed with a System.Net.Sockets.SocketException{"An existing connection was forcibly closed by the remote host"}). I have the same criticism of the Azure production system - do a search to see how many people spin their wheels waiting for their roles to deploy with no feedback as to what is going / has gone wrong. Azure badly needs more dev-friendly logging output.</div><div><br /></div><div>I couldn't get the .NET 4.0 build of Umbraco to work (and it should work, since .NET 4.0 is now supported on Azure). 
The problem appears to lie in missing sections in the machine.config file on my Azure machine that I haven't had the time or inclination to dig into yet.</div><div><br /></div><div>You'll also find that the following directories do not get packaged up into your Azure deployment package by default: xslt, css, scripts, masterpages. To get around this quickly, I just put an empty file in each directory to force their inclusion in the build. If these directories are missing, you will be unable to create content in Umbraco.</div><div><br /></div><div><br /></div><div><span class="Apple-style-span" style="font-size: large;"><span class="Apple-style-span">Exercises for the reader</span></span></div><div><br /></div><div>* Convert the default InProc session state used by Umbraco to SQLServer mode (otherwise you will have a problem once you scale out beyond one instance on Azure). Starting point is this article - http://blogs.msdn.com/b/sqlazure/archive/2010/08/04/10046103.aspx, but google for errata to the script - the original script supplied does not work out of the box.</div><div><br /></div><div>* Use an Azure XDrive or similar to store content in one place and cluster Umbraco.</div><div><br /></div>Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com8tag:blogger.com,1999:blog-9518639.post-43845050386950849492010-08-18T22:36:00.004+01:002010-08-18T22:59:16.602+01:00Using Ninject as your Dependency Injection container in ASP.NET MVC 3<a href="http://weblogs.asp.net/scottgu/archive/2010/07/27/introducing-asp-net-mvc-3-preview-1.aspx">MVC 3 Preview 1</a> has been available for a few weeks now from Microsoft, with Preview 2 scheduled for release sometime next month.<br /><br />As a web development framework, MVC 3 is pretty cool - simple to set up and start using, with a terse, clean syntax courtesy of the new Razor view engine. 
Coupled with Entity Framework 4 (supporting both code-first generation of database schemas and wrapping existing database schemas), MVC 3 + EF 4 has the makings of a very good web development stack.<br /><br />If you're interested in using Ninject as the Dependency Injection (DI) container in MVC 3, then you'll find the code below interesting - I couldn't find this anywhere else on the web so ended up writing it. It's the required implementation of the System.Web.Mvc.IMvcServiceLocator that gets instantiated and used in the Application_Start method in Global.asax.cs.<div><br /></div><div>Using DI with MVC 3 makes a lot of sense - we use it to decouple concrete implementations from the interface that we code against so that we can quickly swap in alternate implementations, e.g. a quick, self-contained in-memory database for unit testing using Moq or similar.</div><div><br /></div><div><a href="http://bradwilson.typepad.com/blog/2010/07/service-location-pt2-controllers.html">This link</a> from Brad Wilson shows how to set up Microsoft Unity as the dependency injection container and <a href="http://www.viddler.com/explore/mvcconf/videos/4/">this presentation</a> from Phil Haack gives a fleeting, tantalising glimpse of how the Ninject equivalent might look but there's nowhere to get the complete code you need to get it working!<br /><br />So I put the two together in order to use Ninject as my DI container. 
Here's the code (with zero comments as per my normal coding standard):<br /><br /><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">using System.Web.Mvc;<br />using System;<br />using System.Collections.Generic;<br />using Ninject;<br /><br />namespace AdminApp.Models<br />{<br /><br /> public class NinjectMvcServiceLocator : IMvcServiceLocator<br /> {<br /> public IKernel Kernel { get; private set; }<br /><br /> public NinjectMvcServiceLocator(IKernel kernel)<br /> {<br /> Kernel = kernel;<br /> }<br /><br /> public object GetService(Type serviceType)<br /> {<br /> try<br /> {<br /> return Kernel.Get(serviceType);<br /> }<br /> catch (Ninject.ActivationException e)<br /> {<br /> throw new System.Web.Mvc.ActivationException("PAK", e);<br /> }<br /> }<br /><br /><br /> public IEnumerable&lt;TService&gt; GetAllInstances&lt;TService&gt;()<br /> {<br /> try<br /> {<br /> return Kernel.GetAll&lt;TService&gt;();<br /> }<br /> catch (Ninject.ActivationException e)<br /> {<br /> throw new System.Web.Mvc.ActivationException("PAK", e);<br /> }<br /> }<br /><br /> public IEnumerable&lt;object&gt; GetAllInstances(Type serviceType)<br /> {<br /> try<br /> {<br /> return Kernel.GetAll(serviceType);<br /> }<br /> catch (Ninject.ActivationException e)<br /> {<br /> throw new System.Web.Mvc.ActivationException("PAK", e);<br /> }<br /> }<br /><br /> public TService GetInstance&lt;TService&gt;()<br /> {<br /> try<br /> {<br /> return Kernel.Get&lt;TService&gt;();<br /> }<br /> catch (Ninject.ActivationException e)<br /> {<br /> throw new System.Web.Mvc.ActivationException("PAK", e);<br /> }<br /> }<br /><br /> public TService GetInstance&lt;TService&gt;(string key)<br /> {<br /> try<br /> {<br /> return Kernel.Get&lt;TService&gt;(key);<br /> }<br /> catch (Ninject.ActivationException e)<br /> {<br /> throw new System.Web.Mvc.ActivationException("PAK", e);<br /> }<br /> }<br /><br /> public object GetInstance(Type serviceType)<br /> {<br /> try<br 
/> {<br /> return Kernel.Get(serviceType);<br /> }<br /> catch (Ninject.ActivationException e)<br /> {<br /> throw new System.Web.Mvc.ActivationException("PAK", e);<br /> }<br /> }<br /><br /> public object GetInstance(Type serviceType, string key)<br /> {<br /> try<br /> {<br /> return Kernel.Get(serviceType, key);<br /> }<br /> catch (Ninject.ActivationException e)<br /> {<br /> throw new System.Web.Mvc.ActivationException("PAK", e);<br /> }<br /> }<br /><br /><br /> public void Release(object instance)<br /> {<br /> try<br /> {<br /> Kernel.Release(instance);<br /> }<br /> catch (Ninject.ActivationException e)<br /> {<br /> throw new System.Web.Mvc.ActivationException("PAK", e);<br /> }<br /> }<br /><br /><br /><br /> }<br />}<br /></span></span><br /><br /><br /><br /><br />And here's how to instantiate and use it in Global.asax.cs:<br /><br /><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;"> var kernel = new StandardKernel(new NinjectRegistrationModule());<br /> var locator = new NinjectMvcServiceLocator(kernel);<br /> MvcServiceLocator.SetCurrent(locator);<br /></span></span><br /><br />Finally, here's a sample NinjectRegistrationModule which maps the implementation I want onto the generic interface that my code consumes:<br /><br /><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;">using Ninject.Modules;<br />using AdminApp.Controllers;<br /><br />namespace AdminApp<br />{<br />class NinjectRegistrationModule : NinjectModule<br /> {<br /> public override void Load()<br /> {<br /></span></span><span class="Apple-style-span"><span class="Apple-style-span" style="font-size: small;"><div>Bind&lt;ISpecialRepository&gt;().To&lt;DbSpecialRepository&gt;().InRequestScope();</div>}<br /> }<br />}<br /></span></span><br /></div>Humphrey 
Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com2tag:blogger.com,1999:blog-9518639.post-15501125767694180042010-07-23T14:50:00.003+01:002010-07-23T14:53:36.252+01:00Effect of the SCEA study guide on the examThe SCEA study guide book - especially chapter nine - is already having an effect on the exam. And that effect is interesting, mostly positive but with some negatives as well.<br /><br />In general, it is fair to say that the overall standard of submissions has improved, and a lot of submissions clearly contain cues from chapter nine of the book - naming conventions, diagram layout, adoption of the server A and B spec approach for the deployment diagram - it's all there in a lot of submissions.<br /><br />The book has made some of the submissions more anodyne / bland / standardized, which in turn makes me a little sentimental for the past. There's nothing like trying to traverse a crazy class diagram late at night for keeping your brain sharp!<br /><br />In my opinion, a small but not insignificant percentage of candidates (a bit less than 10%) actually end up submitting a **worse** assignment under the influence of the book, and for a very interesting reason. If you buy the book and read it and aren't an architect, then you will have an incomplete understanding of the concepts covered within it. By extension, when you apply the book material to your submission, there is a very good chance that you will make mistakes that are pretty glaring. 
So the book will make your submission worse, not better.<br /><br />As a corollary, if you buy the book and really get the material, your application of that new-found material on top of your already substantial knowledge and skills will result in a strong submission.<br /><br />In summary then, the book is not a magic book.<br /><br />The interesting medium / long-term question is whether or not the exam should always have a pass rate of X% and a fail rate of Y% or if it is acceptable to have X approach 100% as a result of the book (that's not happening but clearly it could).Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com0tag:blogger.com,1999:blog-9518639.post-56729839385946996512010-03-27T16:42:00.002+00:002010-03-27T16:47:16.476+00:00Book - feedback so farThe book has just gone back to the printers for a second run. Apparently the first print run (a few thousand I think?) was chewed up by Amazon and direct pre-orders. It's fantastic for that many people to have the book and I really hope it helps you in preparing for the exam.<br /><br />So, the feedback so far: the reviews on Amazon (both .com and .co.uk) are for the old book, not the new one. Amazon just copied the reviews across (the last one was written two years before the new book was published).<br /><br />So all I've got to go on are comments that I've received directly. Broadly speaking, reviewers fall into two camps: <br /><br />1. Those who like the ~200 page guide / map to a much larger body of research material (happy);<br /><br />2. Those who want / expect to find all of the revision material in one book (not so happy).<br /><br />Our goal was always to write a book that did not replicate the reams of material that exist for the JEE platform. We simply saw no point in doing that. Instead, we wanted to write a book that the candidate could use to:<br /><br />1. Construct a revision schedule for Part One;<br /><br />2. 
Understand how to approach Part Two - constructing your own solution for a given business problem using the JEE platform;<br /><br />3. Prepare for Part Three - defending your Part Two submission and explaining how your solution satisfies various NFRs (non-functional requirements).<br /><br />Broadly speaking, I think we've hit the goals we set. There is an errata list that will be sent to the publisher for the second print and will be published here as well for the purchasers of the first run.Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com7tag:blogger.com,1999:blog-9518639.post-33024758963722154582010-01-26T20:53:00.006+00:002010-12-29T13:30:51.139+00:00SCEA book publication and shipment dates<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/_L4dR-T6tIoY/TRs312u-JKI/AAAAAAAAABI/t7Hu_i7wKwg/s1600/9780131482036.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://1.bp.blogspot.com/_L4dR-T6tIoY/TRs312u-JKI/AAAAAAAAABI/t7Hu_i7wKwg/s320/9780131482036.jpg" width="240" /></a></div><br /><br />The book has gone to the printers! It comes off the press on Monday February 1st and gets to Pearson's warehouse on February 4th. From there it usually takes a week to get to Amazon (in the US). Here's the <a href="http://www.amazon.com/Certified-Enterprise-Architect-Study-Guide/dp/0131482033/ref=sr_1_1?ie=UTF8&amp;s=books&amp;qid=1293629160&amp;sr=8-1">Amazon US</a> and <a href="http://www.amazon.co.uk/Certified-Enterprise-Architect-Study-Guide/dp/0131482033/ref=sr_1_2?ie=UTF8&amp;s=books&amp;qid=1293629204&amp;sr=8-2">Amazon UK</a> links. 
It's also available for the <a href="http://www.amazon.co.uk/Certified-Enterprise-Architect-Study-Guide/dp/B00371V81Y/ref=sr_1_1?ie=UTF8&amp;s=digital-text&amp;qid=1293629204&amp;sr=8-1">Kindle</a>.<br /><br />People who placed pre-orders for hard copy editions will receive their shipment first - shipped direct from Pearson's warehouse next week.<br /><br />As far as the online edition goes, the Rough Cut disappears after the final update (which matches the printed book) and it then becomes part of the regular Safari Library and is accessible to all subscribers.<br /><br />If these dates change, I'll put out another update. It will be fantastic to see the book finally out there!<br /><br /><div class="separator" style="clear: both; text-align: center;"></div>Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com9tag:blogger.com,1999:blog-9518639.post-38339810751766076572009-11-29T21:52:00.002+00:002009-11-29T21:54:33.543+00:00Book - chapter nine available for downloadI've put a PDF copy of chapter nine up on www.box.net for download <a href="http://www.box.net/shared/2s55mfkogg">here</a>. Chapter nine is two things - a seminal chapter in terms of the exam content it covers (Parts II and III of the SCEA exam) and also one that Safari Rough Cuts (SRC) keeps missing in its updates. The version of the book on SRC is a lot older than this version. I'll keep this download link live until SRC is updated with the latest version. Enjoy.Humphrey Sheilhttps://plus.google.com/118284397577501285021noreply@blogger.com11