<h2>React: Controlling state to accelerate development</h2>I first started looking at the <a href="https://facebook.github.io/react/">React</a> JavaScript library from Facebook in early 2014. I had previously worked on sites using MVC frameworks like Angular and Backbone and at first it was hard to see what problem React was trying to solve. The other frameworks had an API for templating and even HTTP calls, but React could only render HTML in the browser. When I dug into the design, the goal of React soon became clear. It controls the amount of state required to render UI elements and it allows developers to create components with a single responsibility that can be safely composed into a whole system. The following is a high-level overview of how React achieves this.<br /><br /><h3>How React controls state</h3>The core principle of React is to control and encapsulate the state of the user interface. React components do this by exposing two objects, Props and State. <br /><br /><h4>Props</h4>Props are supplied by a parent component to the children it creates. The props object is set at the time the component is created and it never changes, therefore it is immutable. This enforces one-way data binding so that an element, for example an input control on a form, can be populated with a value held in the props object.<br /><br />However, when the user changes the value in the input field it will not change the value from the props object that it is bound to. The advantage of this is that values in the props object can be safely used anywhere in the UI and we know that they will not change the state of the application. The only place they can be changed is in the component that owns the original state. Components that only use props are stateless, which means their behaviour is derived from the inputs given to them.<br /><br /><h4>State</h4>State is an object owned by a component and it is private to that component. The state can only be shared with child components by passing its values as props. The children cannot change the state, only render it. If a child does need to mutate the state of the parent, the parent component provides a callback function in the props object. This allows the parent to decide how and when it will mutate the state. This extends the one-way data binding model into a one-way data flow model for the application. With React, the user interface is composed mainly of stateless components, with some stateful parent components controlling how state changes. With this model, data flows down the tree of components and messages flow back up the tree in the form of callback functions to trigger state changes.<br /><br /><h3>How React applies functional programming</h3>The design of React is heavily influenced by principles common to functional programming. In fact the first prototype of React was created in <a href="https://en.wikipedia.org/wiki/OCaml">OCaml</a>. Stateless components are essentially pure functions and the encapsulation of state mutation is a core tenet of most functional programming languages. Building a user interface by composing smaller components together in React is the same as function composition in functional programming.
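<br /><br />To make this concrete, here is a minimal sketch of a stateful parent that owns a value and a stateless child that renders it (a sketch only; the component and prop names are illustrative, not taken from a real application):<br /><br /><pre><code>// Stateless child: a pure function of its props.
// It renders the value and reports changes through a callback.
function FoodInput(props) {
  return (
    <input
      value={props.value}
      onChange={e => props.onChange(e.target.value)}
    />
  );
}

// Stateful parent: the single owner of the value.
class FoodForm extends React.Component {
  constructor(props) {
    super(props);
    this.state = { food: '' }; // the only mutable state in this tree
  }
  render() {
    // Data flows down as props; messages flow back up via the callback,
    // and only the parent decides how state changes.
    return (
      <FoodInput
        value={this.state.food}
        onChange={food => this.setState({ food: food })}
      />
    );
  }
}</code></pre>Because FoodInput is a pure function of its props, it can be rendered, reused and tested anywhere without any knowledge of where the value lives.<br /><br />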
I am pleased to see the principles of functional programming forming the basis for a very popular library, as I believe they lead to safer software that is quicker to develop.<br /><br /><h3>Conclusion</h3>React brought the principles of functional programming to the JavaScript ecosystem. It also captures a view I hold: software should be developed so that state is never changed in more than one location. I have seen many applications become brittle and impossible to extend because state is being changed in a number of places. Change becomes difficult because all the points of mutation must be found and checked to see that they still work with the change being made. A single point of mutation makes this problem go away.<br /><br />React has now reached a point of maturity where the API has stabilized, as have supporting tools like <a href="https://github.com/reactjs/redux">Redux</a> and <a href="https://github.com/facebook/react-devtools">react-devtools</a>. It is the tool I now choose to build new web applications. It forces me to think up front about all the components I need for the user interface, which ones will hold state and which ones can be stateless. For more information look at Pete Hunt’s <a href="https://facebook.github.io/react/docs/thinking-in-react.html">great post</a> or try React using the new <a href="https://github.com/facebookincubator/create-react-app">create-react-app</a> tool.<br /><br /><h2>CQS talk at Brighton Alt.Net</h2>In March I gave a talk at the <a href="http://brightonalt.net/MainPage.ashx">Brighton Alt.Net</a> meeting about applying the Command and Query Separation pattern to application design. This is a technique that I have been using for some time to help me break up systems with bloated controllers or manager classes that are doing too much.<br /><br /><iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="//player.vimeo.com/video/88442197" webkitallowfullscreen="" width="500"></iframe> <br /><a href="http://vimeo.com/88442197">CQS Talk Brighton Alt.Net</a> from <a href="http://vimeo.com/user5933710">Keith Bloom</a> on <a href="https://vimeo.com/">Vimeo</a>.
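<br /><br />The shape of the pattern is small. As a rough C# sketch (the interfaces echo what ShortBus-style libraries provide; the names are illustrative, not the exact code from the talk):<br /><br /><pre><code>using System.Collections.Generic;

// A query asks a question and never changes state
public interface IQuery<TResult> { }

public interface IQueryHandler<TQuery, TResult> where TQuery : IQuery<TResult>
{
    TResult Handle(TQuery query);
}

// A command changes state and answers nothing
public interface ICommand { }

public interface ICommandHandler<TCommand> where TCommand : ICommand
{
    void Handle(TCommand command);
}

// A bloated controller action shrinks to dispatching one of these
public class TopCustomersQuery : IQuery<IEnumerable<string>>
{
    public int Count { get; set; }
}</code></pre>Each handler has a single responsibility, which is what lets a bloated controller or manager class shrink to a series of small, independently testable pieces.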
<br /><br />In the talk I mention a few resources: <br /><ul><li><a href="https://github.com/mhinze/ShortBus">ShortBus</a> by Matt Hinze</li><li>Jimmy Bogard's <a href="http://lostechies.com/jimmybogard/2013/12/19/put-your-controllers-on-a-diet-posts-and-commands/">Put your controllers on a diet</a> series</li><li>The <a href="http://en.wikipedia.org/wiki/Command-query_separation">Command-query separation principle</a> by Bertrand Meyer</li></ul>The code from the talk is available on <a href="https://github.com/keithbloom/CommandsQueries">GitHub</a>.<br /><br /><h2>F# in Finance conference</h2><p>On Monday 25th November I attended the <a href="http://blogs.msdn.com/b/fsharpteam/archive/2013/11/13/the-quot-f-in-finance-quot-conference-london-edition-25-november-2013.aspx">F# in Finance</a> conference at the Microsoft offices in London. I was drawn to this single-day conference as I have been learning about functional programming for some time now. I am also interested in the finance sector as it seems paradoxical to me. On the one hand it appears to have ageing IT systems and an ardent use of Excel. This seems like a bizarre way to run any business, let alone a financial institution. On the other hand it can be at the forefront of innovation in software development. Indeed it is arguably the biggest commercial adopter of functional programming so far. So I was keen to hear how this industry was changing and to learn if there were any lessons I could use in my own programming.</p><p>The day consisted of 10 talks, an ambitious goal for a single day. The most interesting theme that I picked up on was how productive many of the speakers felt when writing F# compared to C#. <a href="https://twitter.com/jonharrop">Jon Harrop</a> and <a href="https://twitter.com/ptrelford">Phil Trelford</a> both talked about how modelling complex domains was vastly simpler in a functional language than in an object-oriented one. Phil explained how the <a href="http://www.trayport.com/en/products/joule">energy trading system</a> he maintains has a domain model which is just a single, two-hundred-line file. If this were to be implemented in an object-oriented language the model would span hundreds of classes.</p><p>From the discussion about domain modelling it appears that functional languages are better at separating the data from the behaviour. This is still abstract in my mind so I have much to learn. What is more concrete for me are the language features that help productivity. When asked in a panel session, the speakers said that a lack of <a href="http://qconlondon.com/london-2009/presentation/Null+References:+The+Billion+Dollar+Mistake">null values</a>, <a href="http://en.wikipedia.org/wiki/Immutable_object">immutability</a> and the built-in <a href="http://en.wikipedia.org/wiki/Actor_model">actor model</a> are the main benefits when using F#. A lack of null values and immutability seem like obvious gains. Null reference errors are among the most common errors in most systems. Mutated state is also a source of pernicious bugs. A rogue branch of code can create havoc in a well-tested system if it alters some piece of state.
The actor model is a higher-level construct, also aimed at limiting state changes in a system; in F# it is provided by the <a href="http://en.wikibooks.org/wiki/F_Sharp_Programming/MailboxProcessor">MailboxProcessor</a>.</p><p>F# in Finance was a fantastic day of very focused presentations from some superb presenters. Functional programming is a clear fit for the finance sector, where the domain can often be modelled in algebraic terms. Given that this is a sector where any competitive edge means vast profits, I am sure the uptake of functional programming will only increase. It is good to see F# and, consequently, the CLR gaining a foothold. Thanks to the presenters I now have a clearer understanding of the advantages of functional programming and will be investigating further to see how I can improve my programming skills.</p><h2>Functional JavaScript book review</h2><p><img style="float: right; display: inline" align="right" src="http://ecx.images-amazon.com/images/I/51uISP4NgAL.jpg" width="195" height="248"></p><p>I was very excited to receive my copy of <a href="http://www.amazon.co.uk/Functional-JavaScript-Introducing-Programming-Underscore-js/dp/1449360726/">Functional JavaScript</a> by <a href="http://blog.fogus.me/">Michael Fogus</a> as I am interested in, and have views on, both functional programming and JavaScript. My view of the functional programming community is that it is full of very clever people who are focused on creating software which is robust and malleable. This is probably because the concepts behind functional programming are hard to understand and because the discipline has a close relationship to various branches of mathematics. My opinion of JavaScript is that it is the most ubiquitous programming language we have ever known. It is a language with some good features, but it has to be handled with care. The need for care is even greater when using it to program the DOM, as this is a very complex API.</p><p>The use of functional programming in JavaScript is not a new idea; indeed the language has many influences from Lisp and Scheme. But it is very good to see someone write a book exploring the topic. The style of the book is very conversational and each chapter moves up through the layers of functional programming.</p><p>At the beginning the focus is on higher-order functions (functions that take other functions as parameters), moving all the way to flow-based programming and a brief overview of monadic programming. This structure demonstrates very well how functions can be composed together to create bigger programs. Functions written in each chapter re-appear in later ones to be part of a bigger whole.</p><p>I have read this book once and I am working my way through it again. It is rich with ideas for any JavaScript programmer. The concepts of functional programming certainly stretched my imperative programmer's mind. Stretched as it was, I enjoyed seeing Michael Fogus take an imperative process and re-implement it as a series of functions composed together.</p>
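<p>To give a flavour of that style, here is a small sketch of my own (not an example from the book; the underscore.js helpers are the ones the book builds on):</p><pre><code>// Imperative: collect the names of all adults, uppercased
function adultNames(people) {
  var results = [];
  for (var i = 0; i < people.length; i++) {
    if (people[i].age >= 18) {
      results.push(people[i].name.toUpperCase());
    }
  }
  return results;
}

// Functional: the same process as a composition of small functions
var isAdult = function(person) { return person.age >= 18; };
var getName = function(person) { return person.name; };
var shout   = function(s) { return s.toUpperCase(); };

function adultNamesFP(people) {
  // _.compose applies right to left: first getName, then shout
  return _.map(_.filter(people, isAdult), _.compose(shout, getName));
}</code></pre><p>Each small function can be tested and reused on its own, and the composed version states what it does rather than how it does it.</p>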
<p>Functional JavaScript is a very enjoyable read and I would recommend you pick up a copy.</p><h2>Investigating ASP.Net MVC: Extending validation with IValidatableObject</h2><h3>Introduction</h3>Frameworks are an essential part of programming. They help developers achieve complex tasks by presenting them with a simplified API over a more complex system. In my experience, it is possible to use a framework and be productive without giving too much thought to how it works. <br><br>However, I like to understand how things work. I am interested in the choices made by the framework designers. I feel that by knowing how frameworks are built, my ability to code improves and I can work with them more efficiently. <br><br>In this blog post I begin my investigation of the ASP.Net MVC framework. I will start by examining one part of the framework, the model binding process: how it works and how it can be extended. I will look at how the choices made by the framework designers influence the code I write and my understanding of the framework.<br><br><h3>How flexible is the framework?</h3>The framework designer has a tricky balancing act. A good framework is simple to understand, hides the system it is abstracting and allows for easy extension. The extension points are the API and, to create them, the framework designers have several tools to choose from. The most common are composition, inheritance and events. The choice they make will have a big influence on the code I end up writing. <br><br>The ASP.Net MVC framework is an abstraction over HTTP requests and responses. It includes all three types of extension mechanism. It has been designed to create HTML applications where the server is responsible for creating the markup which will be sent to the client. This is different from frameworks where the browser creates markup using a set of web services. The generation of HTML on the server was a guiding principle of the original design and has had the most influence on the API. <br><br><h3>Model binding, deep within the framework</h3>I am focusing on the model binding process, which takes raw HTTP requests and creates real types which can be passed to controller actions. To understand its purpose I must first understand what ASP.Net MVC does when it handles a request: <ul> <li>When an HTTP request is made the routing engine picks it up and loads the relevant controller</li> <li>The controller examines the request and decides which action will handle it</li> <li>When the action has been identified the controller delegates to the model binder to create the parameters for the action method from the request data</li> <li>When the model binder has created the objects for the action method, it checks that they are valid. If they are not, any validation errors are added to the controller's ModelState object</li></ul>Now that I understand the flow of data through the framework, I can use it in my dummy application. This application allows people to tell me their favourite food so that I can keep some statistics on the favourite foods of the world. Unfortunately, now and then, someone types in "House" to try and skew the results.
My task then is to add validation to the application to prevent this. <br><br>So far my application consists of a form, a view model object which represents the input and a controller to handle the request. <br><br><a href="http://3.bp.blogspot.com/-I5prT8HgynE/UZPCTBPTMDI/AAAAAAAAGv4/fjynNDikycM/s1600/FoodForm.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-I5prT8HgynE/UZPCTBPTMDI/AAAAAAAAGv4/fjynNDikycM/s320/FoodForm.png" /></a><br><br><script src="https://gist.github.com/keithbloom/5571384.js?file=example1.cs"></script> My controller action checks the validity of the input and will either update the statistics or return the form, where MVC will display the errors for me. My FoodViewModel class will never fail validation though, as the framework has no knowledge of what I consider an invalid request. To achieve that I have to implement some form of validation. One solution is to add the validation logic to the controller action. <br><br><script src="https://gist.github.com/keithbloom/5571384.js?file=example2.cs"></script> My controller now checks the form data to see if anyone has entered house as their favourite food. If present, I add my error to the ModelState collection, which also sets the validity of the ModelState to false. My controller will now detect invalid requests. <br><br>The controller code above demonstrates a common mistake I see in MVC applications. Here the controller is doing too much work and the code is failing to use the extension points available in the framework. Instead, the FoodViewModel can be extended to work with the model binding process to handle the validation in a more elegant and focused manner. <br><br><h3>Extending the validation process</h3>There are two ways that I can augment my FoodViewModel with validation rules. Simple validation can be achieved by decorating properties with attributes like <code>[Required]</code> or <code>[StringLength]</code>. The model binder will detect these and apply the rules accordingly. <br><br>For more complex validation the framework designers chose composition as a way for my code to participate in validation and created the IValidatableObject interface. <br><br><script src="https://gist.github.com/keithbloom/5571384.js?file=example3.cs"></script> This has a method called Validate which accepts a ValidationContext and returns an enumerable of ValidationResult objects. To show how this works I have updated FoodViewModel to implement the interface. <br><br><script src="https://gist.github.com/keithbloom/5571384.js?file=example4.cs"></script> It implements the interface by defining the Validate method so that when the model binder runs it can ask my object to validate itself. If the FavouriteFood property contains the word "House" it returns an error message. <br><br><h3>Coding to a contract</h3>The IValidatableObject interface is a contract between the model binder and my view model which allows them to work together. The FoodViewModel is declaring that it can behave as an IValidatableObject. This allows the model binder to ask if it is valid. <br><br>For the model binder this is a powerful tool. By defining this interface the model binder achieves two things: it can open itself up to the outside world and it can delegate the job of validation to someone else.
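<br><br>As a minimal sketch, the view model side of the contract looks something like this (it mirrors the FoodViewModel from the gists above; the exact error message is illustrative):<br><br><pre><code>using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class FoodViewModel : IValidatableObject
{
    [Required]
    public string FavouriteFood { get; set; }

    // Called by the model binder after the attribute-based rules have run
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (FavouriteFood == "House")
        {
            yield return new ValidationResult(
                "House is not a food",          // the error message
                new[] { "FavouriteFood" });     // the property it belongs to
        }
    }
}</code></pre>The controller stays thin: it only checks ModelState.IsValid, and the rule lives with the data it protects. <br><br>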
This code demonstrates how the model binder can implement its side of the contract: <br><br><script src="https://gist.github.com/keithbloom/5571384.js?file=example5.cs"></script> To mimic the process used by the model binder I use reflection to create an instance of the FoodViewModel and then cast it to an instance of IValidatableObject. If the cast succeeds I call the Validate method (to keep the example simple I pass in null for the validation context). Any errors that are returned I store in my error collection. Finally, I output all the messages to the console. <br><br>This code shows the power and simplicity of composition. The example code is focused on managing the process of collecting errors from other objects. It does not have any knowledge of how to validate an object but it uses a known contract to collect the results. The process of validation has been extracted and put in the IValidatableObject interface. This allows other code to extend the process by supplying its own implementation of the validation logic. When this happens the two pieces combine into a single process which does more than either could independently. This is the goal of composition: combining many simple objects to create a more complex one. <br><br><h3>Conclusion</h3>I feel that too often developers fail to think about the way a framework is intended to be used or what decisions have been made to abstract the lower-level system. A typical indication of this lack of thinking is an application which recreates existing parts of the framework. Exploring the code and the API of a framework helps me to avoid this. I also expand my knowledge of how to use it efficiently and how to design my own code. <br><br>Examining the model binding process has given me a greater knowledge of how ASP.Net MVC takes an HTTP request and generates an object for a controller action. Understanding this complex process allows me to work with the framework so that I can extend my code in the simplest way possible to achieve the goal of validation. <br><br>I also gain knowledge by studying how composition is used in a complex process. I am now able to apply this powerful design pattern to my own code. I feel that studying existing code is an excellent way to expand my knowledge and, to be honest, I find it fun to learn how things work.<br><br><h2>SQL Baseline has joined the ChuckNorris Framework</h2>I am very pleased to say that <a href="https://twitter.com/ferventcoder">Rob</a> and <a href="https://twitter.com/drusellers">Dru</a> have added my SQL Baseline tool to the <a href="https://github.com/chucknorris">Chuck Norris Framework</a>. As part of SQL Baseline’s inauguration it has been renamed PoweruP to fit alongside the likes of <a href="https://github.com/chucknorris/roundhouse">RoundhousE</a>, <a href="https://github.com/chucknorris/dropkick">DropkicK</a> and <a href="https://github.com/chucknorris/warmup">WarmuP</a>. The project has been moved over and can be found <a href="https://github.com/chucknorris/powerup">here</a>. <br /><br />I created PoweruP to help me configure RoundhousE to manage a number of existing databases.
This is not an easy task and can be a barrier which stops people trying out RoundhousE, as is shown by this conversation <br /><br /><blockquote class="twitter-tweet" data-in-reply-to="226041032867991552"><p>@<a href="https://twitter.com/dantup">dantup</a> @<a href="https://twitter.com/monkeyonahill">monkeyonahill</a> @<a href="https://twitter.com/davidfowl">davidfowl</a> @<a href="https://twitter.com/jabbr">jabbr</a> and intro @<a href="https://twitter.com/keith_bloom">keith_bloom</a>'s sql-baseline. <a href="https://t.co/5UbgTi7W" title="https://github.com/chucknorris/sql-baseline">github.com/chucknorris/sq…</a> (FYI, name change pending) :D</p>&mdash; Rob Reynolds (@ferventcoder) <a href="https://twitter.com/ferventcoder/status/226052907265585152" data-datetime="2012-07-19T20:36:27+00:00">July 19, 2012</a></blockquote><script src="//platform.twitter.com/widgets.js" charset="utf-8"></script><br />This is a shame because once RoundhousE is set up it greatly increases development speed, it is simple to maintain and it brings database development in line with application coding. What can stop people using it is the need to extract all the stored procedures, views, functions, etc, from the database. With one command PoweruP will scaffold a new RoundhousE project from an existing database, creating the scripts and putting them in the default RoundhousE folder structure. For a more detailed explanation see this post. <br /><br />I am very pleased for PoweruP to be part of the Chuck Norris framework. I hope it will help more development teams to get started using RoundhousE because it is the best tool I have found for managing changes to the database schema.<br /><br /><h2>Using 0MQ to communicate between threads</h2>In this post I show how <a href="http://www.zeromq.org/">0MQ</a> can help with concurrency in a multithreaded program. To do this, I explore what concurrency means and why it is important. I then focus on in-process concurrency and threaded programming, a topic which is notoriously tricky to do well due to the need to share some kind of state between threads. I explore why this is and how it is typically tackled. I then show how communication between threads can be achieved without sharing any state using 0MQ. Finally I propose that constructing multi-threaded applications using the 0MQ model leads to more succinct and simpler code. <br /><br />All code can be found in this <a href="https://github.com/keithbloom/blogposts-zeromq">github</a> project <br /><br /><h3>What is a concurrent program?</h3>The word concurrent means more than one thing working together to achieve a common goal. In computing this means doing one of two things: something which is computationally expensive, like encoding a video file, or something that requires some sort of IO, like retrieving the size of a number of web pages. <br /><br />The opportunity to employ concurrency has exploded with the arrival of multicore processors and the rise of hosted processing platforms like Amazon EC2 and Windows Azure. These two changes represent the two ends of the concurrency spectrum.
To achieve concurrency on a multicore processor we create threads within our application and manage how they share state, whereas achieving concurrency with something like EC2 is network based and requires a communication channel like TCP. When communicating over the network, state is handled by passing messages. <br /><br />0MQ recognises that the best way to create a concurrent program is to pass messages and not to share state. Whether it is two threads running within a process or thousands of processes running across the internet, 0MQ uses the same model of sockets and messaging to create very stable and scalable applications. <br /><br /><h3>Multiple threads, shared state and locks</h3>In .Net any program that must do more than one task at a time must create a thread. Threads are a way for Windows to abstract the management of many different streams of execution. Each thread gets its own stack and set of registers, and the OS schedules which threads execute at any one time. <br /><br />The problem with threads is that when they have to communicate with each other the typical way is to share some value in memory. This can cause data corruption as more than one thread could be accessing the data at one time, so the application has to manage access to the shared data. This is done by locking the shared data, ensuring that only one thread can manipulate it at any one time. This mechanism adds complexity to an application as it must include the locking logic. It also has an effect on performance. <br /><br /><h3>0MQ, multiple threads and no shared state</h3>0MQ makes threaded programming simpler by swapping shared state for messaging. To demonstrate this I have created a simple program which calculates the size of a directory by adding up the size of each file it has. <br /><br />As we are using 0MQ we have to understand some of the concepts it uses. The first concept is static and dynamic components. Static components are pieces of infrastructure that we can always expect to be there; they usually own an endpoint and bind to it. Dynamic components come and go and generally connect to endpoints. The next concept is the types of sockets provided by 0MQ. The implementation we’ll be looking at uses two types of socket, PUSH and PULL. The PUSH socket is designed to distribute the work fairly to all connected clients, whilst the PULL socket collects results evenly from the workers. Using these socket types prevents one thread from being flooded with tasks or left idle waiting for its result to be taken. <br /><br />Finally the 0MQ guide has a number of patterns for composing an application depending on the type of work being done. The example below calculates the size of a directory by getting the size of each file and adding them together. To achieve this task in 0MQ, a good choice is the task ventilator pattern.<br /><br />&nbsp; <a href="https://lh3.googleusercontent.com/o59q0mzbG5LK1u6QRqq-hxNUEOzJSEc1TrAxKTvkdgDf8g1N-jw9wEo_m08YmKRM3mHMZSM1DWpoPieyjw64aEKgX0x-jTOXyxoN5oswQw4cinUWt-2s" imageanchor="1"><img border="0" height="400" src="https://lh3.googleusercontent.com/o59q0mzbG5LK1u6QRqq-hxNUEOzJSEc1TrAxKTvkdgDf8g1N-jw9wEo_m08YmKRM3mHMZSM1DWpoPieyjw64aEKgX0x-jTOXyxoN5oswQw4cinUWt-2s" width="338" /></a><br /><br /> In the diagram each box is a component in our application and components communicate with each other using 0MQ sockets. There are two static components in this application, the Ventilator and the Sink.
There will only be one instance of each in the application and they will run on the same thread. There is one dynamic component, the Worker. There can be any number of workers and each one runs on its own thread. <br /><br />To calculate the size of the directory, the Ventilator is given a list of files from the directory. It sends the name of each one out on its message queue. <br /><br /><script src="https://gist.github.com/3691448.js?file=Ventilator.cs"></script>When the Sink is started, it is given the number of files to count the size of; in this instance we pass in the length of the array that we gave to the Ventilator. The Sink then pulls in the results from each of the workers and increments the running total for the size of the directory. When it has finished it returns the total size of the files found. <br /><br /><script src="https://gist.github.com/3691448.js?file=Sink.cs"></script>The Worker connects to the Ventilator and Sink endpoints and sits in an endless loop. <br /><br /><script src="https://gist.github.com/3691448.js?file=Ventilator.cs"></script>When a message arrives from the Ventilator it triggers an event which causes the Worker to read the file from the disk to find its size. When the operation completes the Worker publishes the size to the Sink’s endpoint. <br /><br /><script src="https://gist.github.com/3691448.js?file=WorkerWork.cs"></script>All the components are brought together in the controlling program. We create a 0MQ context which will be shared with all the components. This is an important point when using 0MQ with threads: there must be a single context and it must be shared amongst all the threads. We then create instances of the Ventilator and Sink, passing in the context. <br /><br /><script src="https://gist.github.com/3691448.js?file=ControllerInit.cs"></script>Next we create five workers, each on their own thread, again passing in the 0MQ context. <br /><br /><script src="https://gist.github.com/3691448.js?file=ControllerWorkerSetup.cs"></script>We do the work by building an array of files from our directory and passing this to the Ventilator. We tell the Sink how many results to expect and wait for the result to be returned. <br /><br /><script src="https://gist.github.com/3691448.js?file=ControllerDoWork.cs"></script>When we have the final number we print it on the console. At no point in the process did any thread have to update a shared value. <br /><br /><h3>Conclusion</h3>In this post I investigated the programming challenges faced when dealing with concurrency, focusing on those specific to threaded concurrency. I have shown how 0MQ approaches this problem with the view that concurrency should never involve sharing state and that communication is best handled by passing messages between processes. To demonstrate how this works I created a simple program to calculate the size of a directory and used the 0MQ task ventilator pattern to structure it. By following this pattern the software is broken down into very specific parts, each performing one job. All knowledge of how to read the size of a file is held in the Worker. If we discover a better way to read the size of a file, this component can be changed without any impact on the rest of the program. This isolation is a consequence of only allowing communication between the key components over a message channel. The code is simpler as each component does only one job.
<br /><br />All code can be found in this <a href="https://github.com/keithbloom/blogposts-zeromq">github</a> project<br /><br /><h2>0MQ Introduction</h2><h3>What is 0MQ?</h3> 0MQ is a very simple library that is used for managing the communication between different processes. It is a way of using enterprise messaging patterns without the need for an enterprise messaging server. By removing the server and using the socket API, a level of complexity is removed, which leads to a simpler model well suited to concurrent programming.<br /><br /><h3>History</h3> 0MQ has its roots firmly in the world of financial services. Originally there were two vendors, TIBCO and IBM, which each had their own protocols for enterprise messaging. This made it hard for banks to intercommunicate. In 2003 the London office of JP Morgan created the first draft of the <a href="http://www.amqp.org/">Advanced Message Queue Protocol</a> (AMQP), which was an attempt to create a standard communication protocol for messaging systems.<br /><br /> In 2005 iMatix were contracted by JP Morgan to create a message broker based on the new specification and they produced <a href="http://www.openamq.org/">OpenAMQ</a>. The new standard was received well by others in financial services and new members were added to the working group. However, the complexity of AMQP grew and led to iMatix leaving the working group. In 2008 Pieter Hintjens of iMatix wrote <a href="http://www.imatix.com/articles:whats-wrong-with-amqp">What is wrong with AMQP and how to fix it</a>. Here Hintjens applauds early versions of the specification for being concise and simple to implement but criticises later versions for their complexity. He argues that any specification that is too complex will fail. It is also clear that the experience iMatix had developing OpenAMQ gave them good insight into a new way of supporting high-speed messaging. This experience led them to conclude that the way to simplify messaging was to remove the server that hosted the queues for the clients. This led to the development of 0MQ.<br /><br /><h3>Not a message bus</h3>In traditional enterprise messaging there is a server which hosts the queues and routes the messages. If you are using IBM this may be WebSphere, a Microsoft shop would use MSMQ, whilst others may use RabbitMQ. All of these solutions involve some software being installed on a server. Clients then bind to the queues it hosts to process messages.<br /><br />0MQ is different in that it does not have a central server component; it is just a software library. For network communications you write the server and client components using the 0MQ API. Internally 0MQ uses TCP sockets to create the connection. For a lot of scenarios this removes a redundant step in the process. Take the example of a time server on the network whose job is to respond to requests for the current time. With an enterprise service bus my time server would bind to a queue on the central exchange. Any client that wanted to know the time would send a request to that queue and wait for a response. In this operation the central server is not adding much to the task.
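<br /><br />A rough sketch shows how little is involved in replacing that with 0MQ (this assumes the clrzmq C# binding and a REQ/REP socket pair; the port and message format are illustrative):<br /><br /><pre><code>using System;
using System.Text;
using ZMQ; // clrzmq binding

class TimeService
{
    static void Main()
    {
        using (var context = new Context(1))
        using (var server = context.Socket(SocketType.REP))
        {
            // The service owns the endpoint; clients connect straight to it
            server.Bind("tcp://*:5555");
            while (true)
            {
                server.Recv(Encoding.Unicode); // wait for a request
                server.Send(DateTime.Now.ToString("o"), Encoding.Unicode); // reply with the time
            }
        }
    }
}</code></pre>No broker sits between the two sides; the client connects directly to the endpoint the server binds.<br /><br />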
Using 0MQ this same server can be created very easily, as the sketch above suggests; the full version is<br /><br /> <script src="https://gist.github.com/3394540.js?file=TimeClient.cs"></script> The client that requests data from this service is<br /> <script src="https://gist.github.com/3394540.js?file=TimeServer.cs"></script> <h3>Applications that 0MQ is good for</h3> By combining messaging patterns with socket-based communication 0MQ is very good for concurrent programming. Concurrent programming can be across a network, within a machine or within a process. 0MQ uses the same patterns for all of these.<br /><br /> In the example above we created a server by binding to a TCP port, <code>timeServer.Bind("tcp://*:5555");</code>, for a service hosted on a network. To host this within a process or within a machine we just change the binding type: <ul><li>A way to connect processes on any machine and pass messages between them:<br /> <code>Bind("tcp://*:5555");</code></li><li>A way to connect processes within a machine and pass messages between them:<br /> <code>Bind("ipc://myservice");</code></li><li>A way to create threads in a process and pass messages between them:<br /> <code>Bind("inproc://myservice");</code></li></ul><h3>Conclusion</h3>0MQ is a very easy library to start using as it involves including a couple of DLLs in your project and does not need any other infrastructure to support it. It has good abstractions and it is easy to create a variety of messaging queues.<br /><br /> Where I think 0MQ is really powerful is when it is applied to multi-threaded programming. This is because 0MQ uses the same model for threaded programming as that used for concurrent programming across networks. Both of these pass messages to communicate instead of sharing state, and this avoidance of shared state between threads leads to more reliable and simpler programs. I shall explain this fully in my next blog post.<br /><br /><h2>Roundhouse with a legacy database</h2>When putting an existing database schema into source control, often the biggest hurdle is how to start. In this post I will demonstrate how Roundhouse can be added to a normal Visual Studio solution. It can then be used to manage the database in the same place as you manage the code. This brings the benefits of modern development practices like <a href="http://en.wikipedia.org/wiki/Test-driven_development">Test Driven Development</a> and <a href="http://martinfowler.com/articles/continuousIntegration.html">Continuous Integration</a> to database development. <br /><br />I have created a fictitious scenario to help explain how to do this. I have been asked to create a simple interface for the Adventure Works database; apparently the old Visual Basic version is showing its age. <br /><br />I've put all the code examples in a <a href="https://github.com/keithbloom/roundhouse-walkthrough">repository</a> on github. <br /><br /><h3>Structure</h3>As I am starting a new project I have time to plan a good structure. I know that I will have source code, external tools and database backups for Roundhouse (more on these later).
So I decide upon this tree: <br /><br /><img src="http://lh4.googleusercontent.com/ecCB7aTic3mb99yM7KDQ346BV1EgzTw3SPu9_fdCJOWg9ij4Z8-OP4xmbQzFCpXRtiLIQ98DJTRiU2l8m7eI0wfxEz0wM-ChaVvSiiRbcNl6oDQwjps" /><br /><br />With the folder structure complete, I can create the solution and add a project for the database schema. Here I am using a class library and have named it AdventureWorks.Database. I have also added two folders, db and deployment, for the Roundhouse files. <br /><br /><img src="http://lh5.googleusercontent.com/FZiUU4zSbknPY4R_JoDi0sh_cLprBSBEgt-7SL4kOtPxI2FX5wpqIEOU8MIt61Vp7EzECcBRbvtqgegqeqXv7ikCXR3wlNgtO81EzVhPvH6HhJ1Mw48" /><br /><br />Now to populate the db folder with the entities from the AdventureWorks database and put them into the relevant folders. My previous client had many legacy databases with thousands of entities, so I created sql-baseline to automate the process. Running sql-baseline requires three parameters: the server name, the database name and the place to put the scripts: <br /><br /> <script src="https://gist.github.com/2708333.js?file=snippet1.bat"></script>Sql-baseline has extracted the sprocs, views and functions and put them in the correct folders. The steps to add the files to Visual Studio are: <br /><ul> <li>Click the “Show all files” icon in Solution Explorer</li> <li>Expand the db folder</li> <li>Select the sprocs, views and functions folders</li> <li>Right-click the selected folders and select “Include in project”</li></ul><br />With the structure of the project set up, it is time to concentrate on working with Roundhouse. <br /><br /><h3>The workflow</h3>To support this process I have created a DOS batch file, called LOCAL.DBDeployment.bat, to run Roundhouse. This is in the deployment folder. <br /><br /><script src="https://gist.github.com/2708333.js?file=snippet2.bat"></script>The batch file executes the Roundhouse command line runner, which is in the Tools folder, and sets the run mode to restore. <br /><br />In a team environment this file would be omitted from source control, allowing each developer to modify the variables. When executed it restores the database using the backup in the DatabaseBackups folder at the top of the repository tree. <br /><br />To keep development as quick as possible I add an External Tool to Visual Studio. The full command text is <code>$(ProjectDir)\deployment\LOCAL.DBDeployment.bat</code>. Running the migrations is now a menu click, and I can also map a keyboard shortcut to make it quicker. <br /><br />Following these steps and conventions, Roundhouse is integrated into the development process. I have found that being able to run the process from Visual Studio keeps the focus on development and keeps context switching to a minimum. <br /><br /><h3>Roundhouse modes</h3>I have configured Roundhouse to run in what I call “restore mode”. Roundhouse has three options for running the migration scripts: <br /><ul> <li>Drop and Create - Roundhouse will drop the database and recreate it from the scripts</li> <li>Restore - Roundhouse will restore the database from a backup and then run the scripts</li> <li>Normal - Roundhouse will run the scripts against the database</li></ul><br />Drop and Create is typically used when developing a new database where the schema is changing rapidly, but it is not a good choice for an existing schema. In this situation we need confidence that the deployment to production will work, and to achieve that we must develop against the current schema.
This is why I use a backup of the production schema and have Roundhouse restore it before running the migration. In later posts I will explain how restore mode is used at every step in the deployment pipeline, which rigorously tests the changes. Normal mode is used when the changes are finally deployed to the production database. When this is complete a new backup is taken and this is used as the new baseline during development. <br /><br /><h3>Conclusion</h3>There are many benefits to adding the database to source control. <br /><br />For the development process it keeps all of the work together. When I relied on SQL scripts I would work on a feature in Visual Studio by adding tests and refactoring the code. This would be checked in and the code for the feature would evolve. A side task would be to maintain the migration scripts, deploying them and testing that they work. With the database in source control each commit includes changes to the whole system: the source code and the database schema. Anyone viewing the commit will see the complete set of changes. For the person doing the work the development workflow is simpler. By including the database you bring the benefits of test-driven development to the database changes and they become part of the “Red, Green, Refactor” TDD cycle. <br /><br />For the deployment process it constantly tests the database migration scripts. Previously this was a manual task whereby each developer would test a single database script before the deployment, on a shared server with a recent copy of the production schema. This method was prone to errors as another developer could have made a change to the production schema which the script was not tested against. By storing the changes in source control they are integrated during development, so every developer is aware of them instantly. This brings the benefits of Continuous Integration to database development. <br /><br />In my next post I will explain how Roundhouse can be added to the Continuous Integration process.<br /><br /><h2>sql-baseline: a bootstrapper for RoundhousE</h2><a href="https://github.com/keithbloom/roundhouse">Roundhouse</a> is a great tool for putting the database into source control. When Roundhouse is used on a project, changes to the schema become an afterthought; every check-in updates the schema on the build server; every check-out updates the schema on all the developers’ PCs. When a deployment is run, the schema changes are applied automatically. <br /><br />But the barrier to entry for controlling an existing database is high. Roundhouse needs all the stored procedures, views, etc. as files in a folder structure. Each script must be able to create or alter the entity. For teams supporting databases with thousands of procedures and views this work can stop them from evaluating Roundhouse. This is a shame as they are missing out on a superb tool. So I have created something to help get you started. <br /><br /><h3>What does it do?</h3>My current client falls into the category of having thousands of procedures and views which had to be scripted.
So I created a tool to extract them from SQL Server. <br /><br /><a href="https://github.com/chucknorris/powerup/">https://github.com/chucknorris/powerup/</a><br /><br />Sql-baseline is run with three options: server name, database name and location for the files:<br /><br /><code>.\sqlbaseline.exe -s:"(local)" -d:AdventureWorks -o:C:\Db\Adventure</code><br /><br />It will generate a script for all of the stored procedures and views and put them in the default Roundhouse folders: <br /><pre>\db<br /> \sprocs<br /> \views<br /></pre> Each script is created with the <a href="http://www.google.com/url?q=https%3A%2F%2Fgithub.com%2Fchucknorris%2Froundhouse%2Fwiki%2FAnytimescripts&amp;sa=D&amp;sntz=1&amp;usg=AFQjCNGaHjNCRV5BPZrkumBXhH14ly7DBw">Create if not Exists / Alter</a> template, which will check for the entity first and create it if it is missing. The files are then ready to be run by Roundhouse.<br />I have a short list of updates which are coming soon: <br /><ol><li>Export functions and indexes</li><li>Create all of the default Roundhouse folders</li><li>New option to specify which entities to create</li><li>Accept a set of custom folders</li></ol><h3>Conclusion</h3>When I decided to trial Roundhouse my first hurdle was extracting the procedures and other entities into the correct format. Using sql-baseline has enabled me to put three of my clients’ databases into source control. I hope it helps others to start using Roundhouse and controlling changes to the database.<br /><br /><h2>Tools for migrating the database</h2>In my last post, <a href="http://keithbloom.blogspot.com/2012/01/continuous-integration-for-database.html">controlling the database</a>, I showed that putting the database schema in source control brings an easier workflow for developers and more reliable deployments. In this post I will look at the tools available to automate the process. They fall into two types: schema diff tools and migration tools. <br /><br /><h3>Schema diff tools</h3>Schema diff tools work by comparing each object in a source database to those in a target database. From this comparison they produce a script. This script will update the target to match the source and includes adding and dropping columns in tables. Two examples of schema diff tools are Red Gate <a href="http://www.red-gate.com/products/sql-development/sql-compare/">SQL Compare</a> and Microsoft’s <a href="http://msdn.microsoft.com/en-us/library/xee70aty.aspx">Visual Studio Database project</a>. <br /><br /><h3>Visual Studio Database project</h3>The Visual Studio Database project adds the database to Visual Studio. The database schema then lives in the same solution as the application code. Visual Studio has a schema diff tool which can import an existing database. When it is finished there will be a file in the project for every object in the schema. When a change is made to one of the objects the project can be deployed to a target database to bring it up to date. Visual Studio creates an in-memory representation of the database so it can support refactoring of database objects. This is clever and powerful.
It provides developers with a lot of support and confidence when making changes to the schema. <br /><br />I have worked on a project which used the Visual Studio Database project. The database was new and it worked well overall. The project made sure that all developers had a current copy of the schema and deployments were simpler. There are some downsides though. <br /><br /><h3>Merge conflicts</h3>As every table, key and index is represented by an individual file, each also needs an entry in the project file. This caused merge conflicts on the project file which were either very hard or impossible to resolve. The team’s solution was to put an exclusive lock on the project file so only one person could update it at a time. <br /><br /><h3>Failed migrations</h3>The second problem is due to the nature of a schema diff tool. They compare two schemas and produce a script to bring one in line with the other. As this script is machine generated, it does not examine how to migrate the existing data from one schema to the other. For example, if a field changed from NULL to NOT NULL, what happens to the existing empty fields? When the Visual Studio Database project encounters this, it stops the migration. The onus is then on each developer to find a workaround. The Visual Studio Database project has a mechanism to avoid this in a series of scripts that are always run before or after a deployment. As these scripts are run all the time they can become very large and difficult to manage. <br /><br /><h3>Cost</h3>My final comment about the Visual Studio Database project is the cost. It is only available in the Premium and Ultimate editions. At the time of writing Amazon is selling <a href="http://www.google.com/url?q=http%3A%2F%2Fwww.amazon.co.uk%2FMicrosoft-Visual-Studio-Premium-Renewal%2Fdp%2FB0038KVCYW%2Fref%3Dsr_1_1%3Fs%3Dsoftware%26ie%3DUTF8%26qid%3D1325270035%26sr%3D1-1&amp;sa=D&amp;sntz=1&amp;usg=AFQjCNH-Vj7-2J_T2mRtxK8TXfQNbYQBvA%22">Premium</a> for £2,112 and <a href="http://www.google.com/url?q=http%3A%2F%2Fwww.amazon.co.uk%2FMicrosoft-Visual-Studio-2010-Ultimate%2Fdp%2FB0038KNER0%2Fref%3Dsr_1_1%3Fs%3Dsoftware%26ie%3DUTF8%26qid%3D1325270135%26sr%3D1-1&amp;sa=D&amp;sntz=1&amp;usg=AFQjCNE1oN-bg9wq1BhYdF-RHKVKkhoO5w">Ultimate</a> for £10,792. <br /><br /><h3>Migration tools</h3>Schema migration tools use a series of scripts to manage the changes to a database. When the migration is run against the target database it applies all the changes which have been created since the last run. These tools do not extract changes from the database. Instead the developer must write a script to migrate the schema. Three examples are <a href="http://code.google.com/p/roundhouse/">RoundhousE</a>, <a href="http://rubyonrails.org/">Ruby on Rails</a> <a href="http://guides.rubyonrails.org/migrations.html">migrations</a> and <a href="https://bitbucket.org/headspringlabs/tarantino/wiki/Home">Tarantino</a>. <br /><br /><h3>RoundhousE</h3>RoundhousE is an open source migration tool. It does not integrate with Visual Studio or provide the refactoring abilities of the Visual Studio Database project. But then it takes a different approach which removes the need for the heavyweight tooling. How does it do this? RoundhousE is based on SQL scripts kept in a set of directories. The directories are split into two main groups: any-time scripts and one-time-only scripts. <br /><br />Any-time scripts are procedures, views, functions and indexes. If the entity is missing from the database Roundhouse will run the script to create it.
If the entity has changed it will run the script to update it. <br /><br />One-time scripts are what make RoundhousE a migration tool. They contain the SQL to change the schema for a particular iteration. Each file is prefixed with a number and they are run sequentially. <br /><br /><h3>Migrations</h3>To demonstrate this I have a simple example application to track students. The first task is to make a screen which captures the student’s name. As part of this task I create a table, but instead of creating the table on my local instance of SQL Server I add this script to the RoundhousE “up” folder, which is the default location for the one-time scripts: <br /><br /><code>File called 0001_Make_student_table.sql <br /><br />CREATE TABLE Student (Id int NOT NULL, Name varchar(50)) </code><br /><br />The file name is important as RoundhousE will run the scripts in numerical order. Running RoundhousE against the local database executes the script and creates the table. <br /><br />The task is complete and so the changes are committed to source control, triggering a new build. The build server also starts off RoundhousE against its own instance of SQL Server before running the unit tests. RoundhousE runs through the scripts in the “up” folder and applies all those added since the last time it was run. <br /><br />The next feature is to also capture the student’s email address. To meet the specification the student table needs a new column, so I add the following migration script: <br /><br /><code>File called 0002_Add_email_to_student_table.sql <br /><br />ALTER TABLE Student ADD email varchar(100) NULL </code><br /><br />I check in and the same process runs, so the build server has also updated the Student table on its local database. <br /><br />When another developer pulls the latest changes from source control, all they have to do is run RoundhousE; it will run the two new scripts and their schema is also up to date. <br /><br />Deploying the changes follows the same process as the build server. The call to RoundhousE is added to the deploy script, which updates the target schema. RoundhousE will also read a version number from a file and write this to the database. This is very useful, as it is possible to see that UAT is running version 1.0.44.1 while production is running 1.0.33.2. <br /><br /><h3>Conclusion</h3>Adding a tool to manage database changes makes development easier and deployments safer. Without it the developer has to remember to run any schema changes on the build server before committing the change, otherwise the tests will fail. Without a tool the person responsible for the deployment has to compile a large SQL script. As the various environments are managed by hand it may be that UAT is out of step with production. This means that the deployment script will have to be tested on a copy of the target environment before it is run. These complications lead to day-long deployments. With a tool the changes to the database are all contained in the release package ready to be run. The tool will decide what needs to be updated since the last release.
When a team works with this method for a while releases become more frequent because they are less work and more reliable.<img src="http://feeds.feedburner.com/~r/KeithBloom/~4/bCn23yTycG8" height="1" width="1" alt=""/>Keith Bloomhttp://www.blogger.com/profile/02976390053858341358noreply@blogger.com5http://keithbloom.blogspot.com/2012/02/tools-for-migrating-database.htmltag:blogger.com,1999:blog-6362944.post-14552409091391640852012-01-03T09:30:00.000+00:002012-01-03T09:55:24.727+00:00Continuous Integration for the Database Managing changes to the database is often an overlooked problem but it is one of the biggest issues a team can face. The traditional model of development whereby a <acronym title="Database Adminisrator">DBA</acronym> is the key holder of the schema is a thing of the past for many teams. Now each developer is responsible for making schema changes. Many of them will be working on local copies of the database, which is a good thing. However, it can fall apart when the change leaves the developers and moves towards production. <br /><br />In this post I will explore why this causes a problem and suggest a better solution. <br /><br /><h3>The change script</h3>Most changes to a database schema are handled using a simple script. The typical arrangement is a script which includes all the changes for a release. The developer will then apply this to each database involved in the deployment process. First to the build server, then to test and finally to production. When the release is finished the script is stored on a common share or maybe in source control.<br /><br />This method does not work well. When Continuous Integration is used correctly, every commit to the build server represents a version of the system which could be released. The check-in should include all changes that are required to deploy that version to any environment. This means including changes to the code base and to the database. If the change relies on a script being run independently, then it it is not atomic. If there is one thing that the DBMS has taught us, it is that all changes must be atomic. <br /><br /><h3>Add to source control</h3>The simple solution to this problem is to add the database schema to source control. This brings schema changes in-line with changes to the code base. For simple changes this may just add a column to the table. A more complex piece of work may involve a script which also migrates existing data. <br /><br /><h3>Deploy with build</h3>When using Continuous Integration, every check-in triggers a new build. The build server checks out the latest version, compiles it, runs the unit tests and reports back if the build passed or failed. In the shared scripts model the developer has to apply the changes to the build database before committing the changes. If not, then the build will fail. When the schema changes are part of the commit, applying them is managed by the build server.<br /><br />So now the change is atomic. If the build succeeds it has verified that the code, schema changes, and the migration are error free. This build can now be deployed to the next step in the deployment pipeline. If the build fails the developer has some bugs to fix.<br /><br /><h3>Deploy locally</h3>One of the biggest problems with the shared script method for managing schema changes is keeping all developers local databases up to date. Often this is achieved by a change script being emailed around the team. 
The onus is then on each developer to run all the scripts to keep their database up to date. If they don‚Äôt, then the next version of the build may not run on their machine. <br /><br />When all schema migrations are part of the build process there is no onus. When a developer pulls the latest version from source control they just run an update process to apply all the schema changes. In fact it could be the same process which is used to deploy the code base to the different environments. This way every developer is also exposed to the process which runs the deployment. <br /><br /><h3>Making the process work</h3> Like any new process it has to help the people using it. In this case the people who are responsible for maintaining the database schema. In some organisations this may be two separate groups; developers and database administrators (DBAs). <br /><br />Developers may not currently have to worry about applying their changes to production or <acronymn title="User Acceptance Testing">UAT</acronym>. They could be sending a script off to a DBA who then does the work for them. Asking them to be responsible for managing their changes to the schema is asking them to do more work.<br /><br /> DBAs are used to controlling the database schema and being the guardians of change. By automating the changes they loose some control and have to adapt to a new way of working. I would argue that the role of the DBA has changed though. The view that the application code base and the database schema are two separate entities is flawed and outdated. A change to the code is a change to the code, regardless of whether it is written in C# or SQL. Therefore the DBA must make their changes using the same mechanism as the developers. They have to check-in changes in to the source repository and monitor the state of the build.<br /><br /> For both groups it enforces more collaboration. Every check-in communicates to the whole team what someone is working on and how they have approached it. For people new to Continuous Integration this can be daunting as the notion of a failed build can be unsettling. So care must be taken to communicate the benefits. When I work with teams I point out that a failed build is not a bad thing. The team has discovered that this version will not work in production and now they can fix it.<br /><br /> Once the team has engaged with the process one way to loose this positive effort is to have a build process which causes more problems than it solves. If the new way of working introduces more friction for the developers on a daily basis then it will be abandoned. If everyone has to spend an hour every day updating their local database then they will skip it. As it has to work well and be simple to use, the choice of tool to manage the schema changes is important. The tools fall in to two categories, schema diff tools and migration tools. Schema diff tools compare a source with a target schema. They find the differences and create a script which applies the changes from the source to the target. Migration tools run a series of scripts held on the file system which encapsulate a single change. Once all the scripts have been applied to the target it is in-line with the current version in source control.<br /><br /> In my next blog post I will explore the merits of both types of tool and describe my experiences with them.<br /><br /><h3>Conclusion</h3>Traditionally source control is the home for application code with the database being maintained separately. 
Historically this could even have been by a different team. The growth in popularity of Continuous Integration and testing has shown how flawed this approach is. It has become clear that change to the database must be synchronised with changes to the code base. To achieve this the schema must be managed in source control.<br /><br />This is the goal but it is not easy to achieve. Special tools must be used and the team must embrace a different way of working. The pay off though is less regression errors, quicker development and more confidence that changes will work when deployed to production. <img src="http://feeds.feedburner.com/~r/KeithBloom/~4/SdO5Iy9DIZk" height="1" width="1" alt=""/>Keith Bloomhttp://www.blogger.com/profile/02976390053858341358noreply@blogger.com1http://keithbloom.blogspot.com/2012/01/continuous-integration-for-database.htmltag:blogger.com,1999:blog-6362944.post-77049520956827115252011-10-18T11:12:00.000+01:002011-12-28T08:38:19.724+00:00Applying SRP to WebFormsMost applications based on ASP.Net WebForms fall foul of good OO design practices because of the <a href="http://msdn.microsoft.com/en-us/library/ms178472.aspx">page life cycle</a> and the plethora of events exposed by the many <a href="http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.aspx">web controls</a>. One of the key principles of good OO design is the <a href="http://en.wikipedia.org/wiki/Single_responsibility_principle">Single Responsibility Principal</a> (SRP). I often find that this is either completely ignored or not used enough when an application is based on ASP.Net WebForms. SRP states that every object should have a single responsibility and that responsibility should be encapsulated by its class.<br /><br />With WebForms, business logic is written in the many event handlers which are part of the WebForms model. This is a hang over from the Visual Basic days when true client server applications were being built. Here the UI was a procedural wrapper over a set of Stored Procedures.<br /><br />SRP forces you to ask if the code you are writing belongs in that class. If it does not, then a new class is needed for the job. Following this practice leads to a well factored code base, full of objects doing one job. In the case of the WebForm it is now only responsible for building the UI and handling the HTTP requests and response. This avoids complex and overly long WebFoms that are hard to understand and difficult to debug.<br /><br /><h3>Violating SRP</h3> In this example, the Page is a form gathering contact details from the visitor. The visitor could have arrived from a marketing campaign. The tracking codes for the campaign are a comma list of key value pairs stored in a Cookie. The four values extracted from the cookie must be included when submitted to the process which writes to the database.<br /><br />A common implementation is to extract the values from the Cookie in the page load event and store them in a field on the form. When the event fires to save the form, the values are passed to the method which writes to the database.<br /><br /> <script src="https://gist.github.com/1270396.js?file=ViolatesSRP.aspx.cs"></script> <br />I feel this method of working is poor. Sure, it will work. You can extract the values and send them to the database. However, the Page should only be responsible for managing the incoming Request, the out going Response and building the UI. Also, what if there are many forms on the site which have to capture this information? 
The code will be duplicated in many places causing problems if the name of the cookie changes. Instead I prefer to hand the task of capturing data from the cookie to a couple of classes which encapsulate the process and return a single object for the marketing data.<br /><br /><h3>Applying SRP</h3><br /> <script src="https://gist.github.com/1270396.js?file=Default.aspx.cs"></script> <br />All that this page is responsible for is passing the CookieCollection to the MarketingTrackerBuilder object. It then stores this in a private field to be passed on to the database when the form is submitted.<br /><br /> <script src="https://gist.github.com/1270396.js?file=MarketingTracker.cs"></script> <br />This class is essentially a DTO. It has no other job than to store the four pieces of information about the campaign which brought the visitor to the website. It also implements an interface. We will see why that is useful later.<br /><br /> <script src="https://gist.github.com/1270396.js?file=MaketingTrackerBuilder.cs"></script><br />Most of the work is being done in the Builder. The class knows how to extract the fields from the cookie. It is also where the name of the cookie is defined. Keeping this information here means that if anything to do with reading the cookie changes, it will only change here. Often this kind of code is scattered around many WebForm pages. Then a change to the implementation requires a search and replace on the entire code base.<br /><br />When the Build method is called it first checks that the cookie exists. If the check fails it returns an instance of a NullMarketingTracker.<br /><br /><script src="https://gist.github.com/1270396.js?file=NullMarketingTracker.cs"></script><br />The NullMarketingTracker object is the reason for using the <code>IMarketingTracker</code> interface. We are now free to substitute the type returned as long as we code to the interface. If you review the code you can see that all references to the MarketingTracker have used the IMarketingTracker interface.<br /><br />Now when values are written to the database, there is no need to check for a null strings first.<br /><br /><h3>Summary</h3>The Single Responsibility Principal is a great way to think about structuring code. By applying this to the WebForms Page object I decided that its only responsibility is to deal with the incoming request and the outgoing response. By further applying it to the code which captures the cookie data, the final design is well structured and easily maintained. If the implementation changes then the change will not ripple through the code base.<br /><br />I find that this a good way to work and a great way to keep the code behind files readable and manageable.<img src="http://feeds.feedburner.com/~r/KeithBloom/~4/ENk72CW6k5o" height="1" width="1" alt=""/>Keith Bloomhttp://www.blogger.com/profile/02976390053858341358noreply@blogger.com0http://keithbloom.blogspot.com/2011/10/applying-srp-to-webforms.htmltag:blogger.com,1999:blog-6362944.post-7119306531953365352011-10-07T15:55:00.000+01:002011-12-28T08:41:47.726+00:00Removing ignored files from a git repositoryWhen I am using TFS, Visual Studio manages the files which should not be committed. So when I create a <a href="http://git-scm.com/">git</a> repository I often forget to add the .gitignore file. The first reminder I get about my oversight is when I see all the DLLs being added during the first commit.<br /><br />Today I decided to find out how to clean up the repository. 
First I added this <code>.gitignore</code> file to my repository:<br /><br /><script src="https://gist.github.com/1267102.js?file=.gitignore"></script><br />Then I searched the internet. The first hit from Google was this <a href="http://aralbalkan.com/2389">post</a> by Aral Balkan. The content and the comments provided me with all the information I needed to manage the git repository.<br /><br /><h3>Searching and cleaning the repository</h3>An instance of a git repository can be thought of as an isolated file system. As such commands can be run against it the same way as a normal file system. <br />The first command I needed was <code>git ls-files</code> which works in the same way as <code>ls</code>. The command <code>git ls-files -i -X .gitignore</code> lists all the files in the repository which would have been excluded had I remembered to set the .gitigonre.<br />Removing a file from git is done using the <code>git rm</code>. As git is a versioned file system there is the file on disk and a reference to that file in the index. The command <code>git rm --cached</code> will remove the reference from the index but leave the file on disk.<br /><br /><h3>A script to do that</h3>Manually removing each file from the index would take some time. It would also go against all of my computing instincts. The job needs a script.<br /><br /><script src="https://gist.github.com/1267102.js?file=git-remove-ignored-files.sh"></script><br />Here I simply loop round the results from <code>git ls-files</code> sending each one to <code>git rm</code>. I am sure there are many ways to achieve the same result but this method worked well for me. I am using git bash and Windows.<img src="http://feeds.feedburner.com/~r/KeithBloom/~4/GtR5OQphDV8" height="1" width="1" alt=""/>Keith Bloomhttp://www.blogger.com/profile/02976390053858341358noreply@blogger.com4http://keithbloom.blogspot.com/2011/10/removing-ignored-files-from-git.htmltag:blogger.com,1999:blog-6362944.post-39200009185987223992011-10-05T09:49:00.000+01:002011-12-28T08:36:08.314+00:00The wonderful backbone.jsI recently gave a <a href="http://www.vimeo.com/23948808">presentation</a> on <a href="http://documentcloud.github.com/backbone/">backbone.js</a> at the <a href="http://brightonalt.net/">Brighton Alt.net</a> meeting. During this talk I demonstrated how Backbone.js can be used to organise JavaScript code into manageable layers. It’s Models and Collections manage the storing and retrieving of data. Views provide a mechanism for arranging the UI in to manageable chunks. It also has an event bus which helps reduce coupling between functions. Altogether, backbone brings order to the often chaotic world of client side development.<br /><br />For the demonstration, I made a shopping list application which is available on <a href="https://github.com/keithbloom/BackboneDemo">github</a>. Included is a web service which is used to manage the shopping list. You will need to install <a href="http://nodejs.org/">node.js</a> to run the web service.<br /><br /><h3>A traditional view of MVC</h3>When first looking at backbone, I was thinking of an MVC framework in terms of the ASP.Net implementation. Here, the framework does not impose anything upon the model. The model is full of classes to capture the state and the behaviour of the system. For my shopping list it would contain types for an item, the list class, the price of the item and the state of the item. All of these would have methods which capture the behaviour. 
This model consists of a lot of small classes working together to define the system.<br /><br />The controller is responsible for incoming requests. It will then validate the request and process it. If it is a query it will gather the required data and return it. If it was a command it will find a handler and update the model.<br /><br />When complete it will load the correct view passing in the state required to render it.<br /><br />The view uses the passed in data to create the representation requested by the client. Typically this will be an HTML page. The view is where we think of the client running server based MVC framework. Mainly as this is where we put all the client JavaScript.<br /><br /><h3>Breaking with tradition</h3>In backbone.js, the model object is very simple. It does not model the behaviour of a system accurately in fact, there may only be one model object. Therefore, it is not a system for building fully featured domain models.<br /><br />What it does is apply the MVC pattern to browser development. Models, collections and views work together to create a wall. A wall which keeps all the AJAX code for dealing with data on one side, and all the code for building and rendering DOM elements on the other side. Without this boundary it is easy for JavaScript applications to have the same function calling a web service and updating the DOM. Over time this will lead to a system which is hard to maintain. By making a very clear separation between persistence code and UI code, backbone.js helps us to write better JavaScript.<br /><br /><h3>Coding the data side</h3>The first thing I did to find out how backbone can help my development was to create a model and a collection and point it at my web service.<br /><br />I created a model object to represent an item in my shopping list:<br /><br /><script src="https://gist.github.com/1325973.js?file=example1.js"></script><br />There are three things happening above:<br /><ol><li>I have created a model called ShoppingItem. This is told to use the Products collection in the constructor. It is also given some default values to be used by new instances.</li><li>Here I create the collection of shopping items. In this simple demo I only have to set the endpoint for my web service and set the model object for the collection.</li><li>Finally, I create a new instance of the collection.</li></ol><br />The page itself has no real content, just a title. By using the console window in Firebug I can create, edit and delete new items in my shopping list<br /><br />Here I can create a new item and when the save method is called, backbone sends a POST request to the service, creating the item.<br /><br /><script src="https://gist.github.com/1325973.js?file=example2.js"></script><br /><br />Running this code in the browser will show backbone first POSTing the new item to the service. Then issuing a PUT to update the State, and finally a DELETE with the Id to remove it. Internally backbone uses either jQuery or Zepto for communication.<br /><br /><h3>Collections</h3>In backbone, a Model has to belong to a collection, in fact, it is a rare application where a single entity exists in isolation. Here is the Collection for my shopping list:<br /><br /><script src="https://gist.github.com/1325973.js?file=example3.js"></script><br /><br />A very simple collection, it is told what type of Model it holds, and the URL for the web service to persist the objects to. The model object will use this URL when communicating with the web service. 
Finally it has method called toobuy which returns a list of all items in the collection where the state is “To buy”.<br /><br /><h3>Summary</h3>In this post I have created a shopping list in JavaScript. There is enough code here to run the application from the browser console where I can create, update and delete items from my list.<br /><br />This highlights one of the first advantages of using backbone.js. I have concentrated on how my application will interact with the service before creating any UI components.<br /><br />Look at the <a href="http://documentcloud.github.com/backbone/">backbone.js</a> site for more information and a growing list of <a href="http://documentcloud.github.com/backbone/#examples">examples</a>.<img src="http://feeds.feedburner.com/~r/KeithBloom/~4/MD3Bm3URbIU" height="1" width="1" alt=""/>Keith Bloomhttp://www.blogger.com/profile/02976390053858341358noreply@blogger.com0http://keithbloom.blogspot.com/2011/10/wonderful-backbonejs.htmltag:blogger.com,1999:blog-6362944.post-37128360401463936392011-09-14T15:38:00.000+01:002011-12-28T08:36:08.312+00:00Extending the JavaScript Array typeHere is how I created some expressive code by extending the basic types in JavaScript. This example extends the Array type whose content is often filtered or transposed. Through the use of function chaining complex operations can be very expressive and concise. I find this a great way to write code which is easy to follow.<br /><br />I wanted to create a cross domain cookie based on the current domain of a page and I was provided with a list of sub domains where this should apply. Interestingly, this list included the name ‘uk’ and ‘cn’ which are also root level domain names.<br /><br />Here are some example domains:<br /><ul><li>www.keithbloom.co.uk</li><li>landingpage.keithbloom.co.uk</li><li>uk.keithbloom.com</li><li>test.keithbloom.com</li></ul><br />Here is a list of sub domains which can be removed:<br /><ul><li>www</li><li>landingpage</li><li>uk</li></ul><br /> <h3>Extending the Array</h3>Arrays seemed the obvious choice to me. The domain string can be split on the full stop to create an array and the list of safe sub domains to be removed is already an array. Taking the first example, I end up with the following two arrays:<br /><br /><script src="https://gist.github.com/1325964.js?file=example1.js"></script> Now I wish to remove any items in the subDomains which form domainNameParts and return this result.<br /><br /><script src="https://gist.github.com/1325964.js?file=example2.js"></script>The subtract function loops over the input and compares each element with each element in the mask array. If it finds a match, the array splice function will remove it. It stops before it reaches the end of the input array though, to avoid removing any of the top level domains. Otherwise my example would return keithbloom.co - useless.<br /><br />I now have a working function which can be used to create the domain for my cookie:<br /><br /><script src="https://gist.github.com/1325964.js?file=example3.js"></script>In the example I use the array function join to re-build my string and append a leading period to it. (For cross domain cookies to work, they must start with a period).<br /><br />I found this cumbersome though and wanted a more more expressive method. Fortunately, JavaScript is a dynamic language so its internal types can be extended (a technique also know as Monkey Patching). 
I can add my subtract function to the Array objects prototype:<br /><br /><script src="https://gist.github.com/1325964.js?file=example4.js"></script>I now have a more expressive way to create my domain:<br /><script src="https://gist.github.com/1325964.js?file=example5.js"></script><br />The final statement fits on one line. More importantly though it is concise and reads like a sentence. Creating code which is readable is more maintainable.<br /><br /><h3>Pitfalls</h3>This technique is a great way to extend the language and provide an expressive method for writing code. It can be dangerous though. As JavaScript runs as part of a web page there could be other scripts also running on that page. I may find that one of those scripts is also adding a subtract function to the array prototype. If this is a script I have access to, I can rename it. If it is an external script I may have to use a new name.<br /><br />One way to avoid this is to prefix a namespace to my function:<br /><br /><script src="https://gist.github.com/1325964.js?file=example6.js"></script><h3>Summary</h3>Through the extension of the basic types in JavaScript it is possible to create expressive code. The Array type lends itself to this technique as they are often used as lists which we wish to manipulate in some way. Care must be taken though as we are changing code for all the programs being run in the session.<img src="http://feeds.feedburner.com/~r/KeithBloom/~4/Ys0-ktsA8Do" height="1" width="1" alt=""/>Keith Bloomhttp://www.blogger.com/profile/02976390053858341358noreply@blogger.com0http://keithbloom.blogspot.com/2011/09/extending-javascript-array-type.htmltag:blogger.com,1999:blog-6362944.post-10901848800748726602011-08-12T15:15:00.000+01:002011-10-30T14:30:11.098+00:00Reading listBooks have improved my knowledge about programming, creating users interfaces, and how software has a life after the first deliverable. I have also found there are many awful books which are a waste of time and money. This is a shame as I believe a good book will convey a topic in more depth than a series of blog posts or examples on the Internet. Here is my list of books which I value and can recommend to you.<br /><br /><h3><a href="http://www.amazon.co.uk/Programming-Pearls-2nd-Jon-Bentley/dp/0201657880">Programming Pearls, 2nd Edition, John Bently</a></h3><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-kUgQplgHFYo/Tq1ckbxyoiI/AAAAAAAADJE/qtN2DTLFSMI/s1600/programmingpearls.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="http://1.bp.blogspot.com/-kUgQplgHFYo/Tq1ckbxyoiI/AAAAAAAADJE/qtN2DTLFSMI/s200/programmingpearls.jpg" width="158" /></a></div>Originally a series of essays for the ACM. They form a superb book on the subject of how to write software. Most of the examples are in C and an analysis of how malloc works may no longer be relevant. For me though, this added to the interest as I got to think again about how memory is managed and data structures are implemented.<br /><br />Where Bently excels is by demonstrating how a problem can be thought through and analysed. The "Back of the envelope" chapter describes how to estimate the volume of water which flows down the Mississippi river. This is a master class in lateral thinking.<br /><br />A theme that runs through the examples is the creation of test harnesses to prove that the program being developed works. 
It is refreshing to see automated testing being focused on in a book which by computing standards is now considered a classic.<br /><br /><hr style="clear: both;" /> <h3><a href="http://www.amazon.co.uk/Mythical-Month-Essays-Software-Engineering/dp/0201835959/ref=sr_1_1?s=books&amp;ie=UTF8&amp;qid=1313097302&amp;sr=1-1">The Mythical Man Month and Other Essays on Software Engineering, Frederick Brooks</a></h3><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-MKIrJtpyorA/Tq1cjFaiHJI/AAAAAAAADIw/tXJQ2h12e6U/s1600/manmonth.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="http://2.bp.blogspot.com/-MKIrJtpyorA/Tq1cjFaiHJI/AAAAAAAADIw/tXJQ2h12e6U/s200/manmonth.jpg" width="134" /></a></div>Based on his experience of working on IBM’s OS/360 in the 1960s, Frederick Brooks argues against the idea that adding more developers to a team will accelerate the production of code. He demonstrates how new developers have to learn the code base and in fact decelerate development time as experienced people stop writing code to teach them.<br /><br />This is a great read for anyone who works as a developer as Brooks’ experiences with punch cards and rooms full of documentation for one system are relevant now.<br />This is an essential read though for any manager of a business who employs software developers within their company.<br /><br /><hr style="clear: both;" /> <h3><a href="http://www.amazon.co.uk/Refactoring-Improving-Design-Existing-Technology/dp/0201485672/ref=sr_1_1?ie=UTF8&amp;qid=1313150928&amp;sr=8-1">Refactoring, Martin Fowler</a></h3><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-ISb2AIuHiNM/Tq1clCHRAEI/AAAAAAAADJI/WWmYub2r2fM/s1600/refactoring.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="http://4.bp.blogspot.com/-ISb2AIuHiNM/Tq1clCHRAEI/AAAAAAAADJI/WWmYub2r2fM/s200/refactoring.jpg" width="150" /></a></div>This is the book which has most improved my understanding of object orientated coding. Before reading this book I was unsure about changing working code no matter what state it was in.<br /><br />Using a series of refactorings Martin Fowler shows how the design and quality of a code base can be improved by making many small changes. Changes which alter the code but not the behaviour of the system. This is made possible by having a good collection of tests that assert how the code being changed behaves.<br /><br />This is a book which I return to often. It is a book which has had a profound impact on software development. 
Most of the patterns described are now built in to development tools like ReSharper and CodeRush.<br /><br /><hr style="clear: both;" /><h3><a href="http://www.amazon.co.uk/Agile-Development-Rails-Pragmatic-Programmers/dp/0977616630/ref=sr_1_2?s=books&amp;ie=UTF8&amp;qid=1313151035&amp;sr=1-2">Agile Web Development with Rails 1.2, Dave Thomas</a></h3><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-mjqwuIti0xM/Tq1cdMIABQI/AAAAAAAADIM/im1rkwCkqfc/s1600/agilerails.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="http://3.bp.blogspot.com/-mjqwuIti0xM/Tq1cdMIABQI/AAAAAAAADIM/im1rkwCkqfc/s200/agilerails.jpg" width="166" /></a></div>The first section of this book shows how to build simple web applications and in doing so it introduces key aspects of the Rails framework. The second section is a detailed look at the framework with chapters dedicated to ActiveRecord, ActionView, ActionController and ActionMailer.<br /><br />This is the book I am currently reading as I am creating a Rails app in my spare time. It is woefully out of date as Rails is now at version 3 (with 3.1 soon to be finalised). So I read all the examples wondering what has changed.<br /><hr style="clear: both;" /><h3> <a href="http://www.amazon.co.uk/Design-patterns-elements-reusable-object-oriented/dp/0201633612/ref=sr_1_1?s=books&amp;ie=UTF8&amp;qid=1313151094&amp;sr=1-1">Design Patterns, Gamma, Helm, Johnson, Vlissidies</a></h3><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-dQp69imhLW8/Tq1chdjlOrI/AAAAAAAADIY/qzpiWBJbgDE/s1600/gof.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="http://1.bp.blogspot.com/-dQp69imhLW8/Tq1chdjlOrI/AAAAAAAADIY/qzpiWBJbgDE/s200/gof.jpg" width="151" /></a></div>A classic book about object orientated design and one of the first books to present a series of patterns for writing code. Based around a case study for building a document editor, the patterns are split in to three groups; Creation Patterns, Structural Patterns and Behavioural Patterns.<br /><br />Most of the code examples are in C++ and a few are in SmallTalk and whilst I only have distant memories of C++, I found the code examples interesting and readable.<br />Some of the patterns in this book are now considered anti-patterns (Singleton and maybe Template method) but most are well worth understanding. What these patterns also provide is a vocabulary for developers to use when discussing code. 
Often a solution to a problem can be articulated by citing one of these patterns.<br /> <hr style="clear: both;" /><h3> <a href="http://www.amazon.co.uk/JavaScript-Good-Parts-Douglas-Crockford/dp/0596517742/ref=sr_1_1?s=books&amp;ie=UTF8&amp;qid=1313151119&amp;sr=1-1">JavaScript: The Good Parts, Douglas Crockford</a></h3><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-LVVDRoBBhos/Tq1chw033OI/AAAAAAAADIk/2mayyETzlbE/s1600/javascriptgoodparts.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="http://1.bp.blogspot.com/-LVVDRoBBhos/Tq1chw033OI/AAAAAAAADIk/2mayyETzlbE/s200/javascriptgoodparts.jpg" width="151" /></a></div>What Refactoring did to improve my knowledge of statically typed, object orientated programming, “JavaScript: The Good Parts” equally did to improve my knowledge of dynamic, prototype programming.<br /><br />Douglas Crockford believes that some parts of the languages are great, some are bad and the rest are just ugly. Most of the book is spent explaining how the good parts can be used to form an expressive and flexible language. The remainder highlights the bad and the ugly which, if avoided, make the good parts even better.<br /><br />This book is so rich in content and so terse that I read it three times. I now understand the power of closures in JavaScript and how best to construct objects which are secure and extensible.<br /><br /><hr style="clear: both;" /><h3> <a href="http://www.amazon.co.uk/Enterprise-Application-Architecture-Addison-Wesley-Signature/dp/0321127420/ref=sr_1_1?s=books&amp;ie=UTF8&amp;qid=1313151144&amp;sr=1-1">Patterns of Enterprise Application Architecture, Martin Fowler</a></h3><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-CCMpJPiKl4U/Tq1cjZmSbgI/AAAAAAAADI8/8iTWr0pma3s/s1600/poeaa1.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="http://2.bp.blogspot.com/-CCMpJPiKl4U/Tq1cjZmSbgI/AAAAAAAADI8/8iTWr0pma3s/s200/poeaa1.jpg" width="159" /></a></div>PoEAA follows on from Martin Fowler’s Refactoring and he has assembled a set of patterns for writing software where the code base is organised in to layers of responsibility. The most common types of layers are the data layer and the presentation (or User Interface) layer.<br /><br />Once again I am impressed by the way that Martin Fowler manages to formalise patterns in software engineering and the impact that he has on the frameworks that I use. I read this book soon after using NHibernate the .Net Object Relational Mapping tool and it felt like I was reading the specification for NHibernate. The same is true for Active Record in Ruby on Rails, and many of the Model View Controller (MVC) frameworks that exist. I must add that I do not think that Martin Fowler was the first to discover these patterns. For example Trygve Reenskaug created the MVC pattern while working at Xerox Parc. But what Martin Fowler has is the ability to collate and present the patterns so they become accessible and readable to all. 
He also draws upon the experience of many so the pattern is applicable to the time.<br /><br /><hr style="clear: both;" /><h3> <a href="http://www.amazon.co.uk/Little-Schemer-Daniel-P-Friedman/dp/0262560992/ref=sr_1_1?s=books&amp;ie=UTF8&amp;qid=1313151170&amp;sr=1-1">The Little Schemer, 4th Edition, Friedman and Felleisen</a></h3><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-fD5tB6pAH6w/Tq1cilSBgzI/AAAAAAAADIo/W2AF28vO6lM/s1600/littleschemer.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="http://4.bp.blogspot.com/-fD5tB6pAH6w/Tq1cilSBgzI/AAAAAAAADIo/W2AF28vO6lM/s200/littleschemer.jpg" width="151" /></a></div>The Little Schemer is the most unique and challenging book on programming that I have ever read. But then it is about recursion, a topic which can twist even the nimblest brain.<br /><br />Scheme is a dialect of LISP so it is a language where all data structures are lists and functions are also data. The Little Schemer builds up through its narrative “Ten Commandments” for writing idiomatic and valid Scheme programs. At first this is easy to follow as the recursion is shallow and mainly focused upon creating functions and safely processing the lists. The later chapters are much harder as the recursion gets deeper and functions start generating functions. This builds to the final masterpiece, the applicative-order Y combinator.<br /><br />I enjoyed this book. It was challenging, more challenging than Dante’s the Divine Comedy. However, it opened my mind to a world of functional programming that I am just starting to explore. I will be downloading a Scheme at some point so I can work through the code and further my understanding.<br /><br /><hr style="clear: both;" /><h3> <a href="http://www.amazon.co.uk/Domain-Specific-Languages-Addison-Wesley-Signature/dp/0321712943/ref=sr_1_1?s=books&amp;ie=UTF8&amp;qid=1313151195&amp;sr=1-1">Domain Specific Languages, Martin Fowler</a></h3><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-Nu9N3wBO_UA/Tq1cg8Hez9I/AAAAAAAADIU/T0U4uvhnUt8/s1600/dsl.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="http://4.bp.blogspot.com/-Nu9N3wBO_UA/Tq1cg8Hez9I/AAAAAAAADIU/T0U4uvhnUt8/s200/dsl.jpg" width="160" /></a></div>The final book on this list form Martin Fowler and his most recent. This has his usual style of a detailed example demonstrating the application of the patterns to follow. The topic this time is how to write Domain Specific Languages (DSLs). The focus is how DSLs can help to configure complex applications, like the main example called “Miss Grants Controller”. This is a complicated state machine which can be configured to open a door only when the correct sequence doors have been opened and lights switched on.<br /><br />This book is a small study on computer language design. It covers; lexing, syntactic analysis, the specification of grammer using BNF and the role of Abstract Syntax Trees to name but a few. As I have not previously studied language design or the writing of compilers, this was a great introduction to the topic.<br /><br />For me the best chapters came towards the end. Here Martin Fowler presents some alternative models of computation. They are alternative because they are not imperative computation which is the most common. 
They relate to DSLs as they are often harder to configure and their operation can not be immediately understood through just reading the code. So DSLs are a very useful tool to simplify the programming of these programming models. Of the four presented I was especially interested in the “Decision Table” and the “Production Rules” models as both of these solve problems I often encounter at work.<br /><br /><hr style="clear: both;" /><h3> <a href="http://www.amazon.co.uk/Designing-Web-Standards-Jeffrey-Zeldman/dp/0735712018/ref=sr_1_2?s=books&amp;ie=UTF8&amp;qid=1313151221&amp;sr=1-2">Designing with web standards, Jeffery Zeldman</a></h3><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-8xNETME5HcI/Tq1clgrcQ7I/AAAAAAAADJU/TcsY_54qzcA/s1600/webstandards.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="http://1.bp.blogspot.com/-8xNETME5HcI/Tq1clgrcQ7I/AAAAAAAADJU/TcsY_54qzcA/s200/webstandards.jpg" width="155" /></a></div>This is one of the few books I have read that completely changed how I thought about working. Prior to reading Designing with web standards I created HTML pages using tables to layout the page. I remember being very pleased with a site I made for Comet. I managed to make an image of a vacuum cleaner break the gird just as the designer had planned. To do that needed four nested tables and the image had to be cut in to several pieces. It was hard work, and it was wrong, as I found out when I read Designing with web standards. I then understood the idea that the HTML is a document. And this document is description of the content. The CSS is the presentation and the JavaScript adds any extra frills if the client supports it.<br /><br />I am sure of one thing. That back in 2003, when Jeffery Zeldman published this book, I was not the only person making web sites this way. But we all soon stopped. I have read other books since which have helped me to understand more about the detail. But it is this book which changed my thinking on the topic.<img src="http://feeds.feedburner.com/~r/KeithBloom/~4/X-7Wo6g4C4o" height="1" width="1" alt=""/>Keith Bloomhttp://www.blogger.com/profile/02976390053858341358noreply@blogger.com0http://keithbloom.blogspot.com/2011/08/reading-list.htmltag:blogger.com,1999:blog-6362944.post-83745513011981077822011-03-08T10:34:00.000+00:002011-10-30T14:09:18.483+00:00Adding the sequence number to a LINQ queryLINQ queries are a powerful way to keep your code expressive and by using differed execution, they are quick. But what if you need the value of the index which LINQ used whilst building the projection? I had this issue and found the solution was to use the Select method which accepts a <code>Func&lt;TSource, int, TResult&gt;</code> for the selector.<br /><br />With a for loop this is simple to accomplish as the index value is available in each iteration as it is controlling the loop:<br /> <script src="https://gist.github.com/1325736.js?file=example1.cs"></script> This code prints this to the console:<br /><br /><script src="https://gist.github.com/1325736.js?file=example2.cs"></script><br />Creating a LINQ query with the indexer in the projection is not as obvious. 
My first attempt was to use the Count() property of the line parameter in the projection.<br /><br /> <script src="https://gist.github.com/1325736.js?file=example3.cs"></script> <br />But when run to the console the problem became apparent<br /><br /><script src="https://gist.github.com/1325736.js?file=example4.cs"></script><br />In the projection, line.Count() is returning the string length of each line in the array. First attempts are always a good way to discover how something could work <br /><br />Fortunately the LINQ Select Method has two overloads. They both iterate over an <code>IEnumerable&lt;T&gt;</code> but it is the delegate for the selector which differs. The code above uses the first delegate which should be <code>Func&lt;TSource, TResult&gt;</code>. The second overload expects a <code>Func&lt;TSource, int, TResult&gt;</code>. Here, the parameter used for the int32 is assigned the current value for the index of the sequence.<br /><br />The first code example can now be changed to something more expressive<br /><br /><script src="https://gist.github.com/1325736.js?file=example5.cs"></script><br />Running this code displays the following in the console<br /><br /><script src="https://gist.github.com/1325736.js?file=example6.cs"></script><br />Many of the LINQ methods have this overload for the selector. Using it I have been able to continue using LINQ to specify how I want to transform the array. This means my code is more expressive. Plus, the LINQ query is quick as the projection will only be populated when the foreach loop is executed to display the result.<img src="http://feeds.feedburner.com/~r/KeithBloom/~4/Hju65j53C2M" height="1" width="1" alt=""/>Keith Bloomhttp://www.blogger.com/profile/02976390053858341358noreply@blogger.com1http://keithbloom.blogspot.com/2011/03/adding-sequence-number-to-linq-query.htmltag:blogger.com,1999:blog-6362944.post-82412883156729052352011-02-12T08:46:00.000+00:002011-10-30T15:02:11.185+00:00IIS application pool and domain indentitesFollow these steps to specify and non standard identity for an IIS application pool. For this example I will use the account domain\WebUser<br /><ol><li>In Administrative tools open the Local Security Policy program. And find the Log on as service policy in Local Policies, User Rights Assignment. Click properties and add the user domain\WebUser</li><li>Open Windows explorer and go to C:\Windows\Temp. Open the Sharing and Security and add the user to the security tab. Grant the user enough rights to read and write files</li><li>Open a command prompt and change to c:\windows\microsoft.net\Framework\v2.0.50722. Run aspnet_regiis.exe -GA domain\WebUser</li><li>In IIS open the properties of the application pool and go to the identity tab. Click Configurable and enter the username and password.</li></ol><img src="http://feeds.feedburner.com/~r/KeithBloom/~4/kZ-qbDqKvjE" height="1" width="1" alt=""/>Keith Bloomhttp://www.blogger.com/profile/02976390053858341358noreply@blogger.com0http://keithbloom.blogspot.com/2010/02/iis-application-pool-and-domain.htmltag:blogger.com,1999:blog-6362944.post-17208621120220000312011-01-10T09:05:00.000+00:002011-10-30T15:02:40.898+00:00IIS 6 and the HTTP 401.3 errorI love it when I find a new tool to use, I love it even more when it is really useful and saves me hours of work. Recently, I had the opportunity to try out ProcMon. This is what happened.<br /><br />Our test web server started returning HTTP 401.3 errors. 
The cause was quick to find; the permissions on the root website folder had been changed and the IIS accounts were missing. So the fix appeared simple, re-apply the permissions and they will cascade all the way down the tree. I added the local IUSER account but it failed to fix the problem. I spent several hours with the <a href="http://support.microsoft.com/kb/812614">MSDN documents</a> to make sure I had the correct users and groups applied, but to no avail. I could not find a way to return the server to normal operation.<br /><br ./><h3>Finding the problem</h3>The next day I felt resolved to find the problem, no more hacking around throwing users at a dialog box. For help I turned to <a href="http://http//technet.microsoft.com/en-us/sysinternals/bb896645">Process Monitor</a> (ProcMon), part of the <a href="http://technet.microsoft.com/en-us/sysinternals/default">SysInternals </a>suite of tools. ProcMon is a superb tool for these situations. It collects all activity on the machine, showing a list file, registry and network activity. Importantly for me, it also records the result of the operation.<br /><br />I fired it up, attempted to load a web page from my browser, and then stopped the trace. Tracing all the activity on a server will produce a metric ton of data; a one minute trace on my PC generates ~300,000 events. For this reason ProcMon has good filtering. You can pick from a list of events and limit by a text value. I chose to filter the list by Result, only showing those which returned ACCESS DENIED.<br /><br /><a href="http://2.bp.blogspot.com/-RuYfQgnfwEs/Tq0EtPCBCLI/AAAAAAAADIE/5XwTm7GVr5Y/s1600/ProcMon.PNG" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-RuYfQgnfwEs/Tq0EtPCBCLI/AAAAAAAADIE/5XwTm7GVr5Y/s1600/ProcMon.PNG" /></a><br /><br />With the filter applied there was only one event in the list; the IUSER account was trying to access the file from my browser request. Upon checking permissions on the actual file I found that they were different to the parent. All of the IIS accounts were missing. I forced the permissions down the tree and IIS started serving pages again.<br /><br /><h3>Not just any tool but the right tool</h3>ProcMon is the star here, without it I would have found the problem but with a lot of guess work and a great deal of time. With ProcMon I could see exactly what was happening when IIS tried to serve the page. Being able to see what happens at the core of a system is essential to fault finding and having the right tool is infinitely time saving.<img src="http://feeds.feedburner.com/~r/KeithBloom/~4/tOC-xeagQKw" height="1" width="1" alt=""/>Keith Bloomhttp://www.blogger.com/profile/02976390053858341358noreply@blogger.com0http://keithbloom.blogspot.com/2011/01/iis-6-and-http-4013-error.htmltag:blogger.com,1999:blog-6362944.post-23694347378188597692010-09-21T22:04:00.000+01:002011-10-30T08:01:51.830+00:00NAnt and the .Net 4 error<a href="http://nant.sourceforge.net/">NAnt</a> threw this odd error today. <br /><br /><script src="https://gist.github.com/1325687.js?file=example1.cs"></script>The reason for it being odd was the version of the .Net framework being used in the call: .Net 4.0, instead of 3.5. 
&nbsp;While I have 4.0 installed, I was trying to build a 3.5 project.<br /><br />A little searching and I soon had a good guide of how to <a href="http://blog.diegocadenas.com/2010/05/upgrading-your-build-server-to-use-net.html">build 4.0 projects</a> using NAnt, but this wasn't the solution to my problem. <br /><br /><h3>Why 4.0?</h3>Now it is possible to tell NAnt which framework to use, the <code>-t:</code> switch from the command line or in the main NAnt.exe.config file. I tried, and failed. NAnt threw the same error. <br /><br />I wanted to stop NAnt using .Net 4.0, but how? Well, near the bottom of the config file is a list of supported Frameworks, of which 4.0 is one.<br /><br /><script src="https://gist.github.com/1325687.js?file=example2.cs"></script>Removing the entry for 4.0 fixed the problem. Certainly this is a quick fix as NAnt has a problem with .Net 4.0 on my PC, but not a problem to be fixed today.<br /><br /><h3>Update</h3>Thanks to Rawdon for mentioning the path to the configuration file. You can find this in <code>%InstallLocation%\NAnt\bin\NAnt.exe.config</code><br /><br />Also from the comments. Paul Stewart has found the cause of the problem and written it up in this <a href="http://www.byteblocks.com/post/2011/01/19/Nant-Build-Error-SystemSecurityPermissionsFileIOPermission.aspx">post</a>.<img src="http://feeds.feedburner.com/~r/KeithBloom/~4/kCUSpeU02Ic" height="1" width="1" alt=""/>Keith Bloomhttp://www.blogger.com/profile/02976390053858341358noreply@blogger.com9http://keithbloom.blogspot.com/2010/09/nant-and-net-4-error.htmltag:blogger.com,1999:blog-6362944.post-5566086998649195752010-09-13T12:50:00.000+01:002011-10-30T07:56:16.027+00:00TFS and MSBuild propertiesMy current client uses <a href="http://en.wikipedia.org/wiki/Team_Foundation_Server">TFS</a> as build server and <a href="http://en.wikipedia.org/wiki/MSBuild">MSBuild</a> for deployment scripts. Whenever I have to change the scripts I spend time searching for the various properties which they expose. So this is my reminder, to kick start the process next time.<br /><h3>MSBuild</h3>There are many different ways of using MSBuild but the list of <a href="http://msdn.microsoft.com/en-us/library/ms164309%28VS.90%29.aspx">Reserved Properties</a> is always a good start. An example property is “$MSBuildProjectFile”, which returns the directory where the project file is located. From here I can often navigate using relative paths to the various places I need to go.<br /><br />Any talk of MSBuild would not be complete without mentioning these extension libraries; <a href="http://sdctasks.codeplex.com/">SDC Tasks</a> and <a href="http://msbuildtasks.tigris.org/">MSBuildTasks</a>. They provide a hosts of extras from re-writing XML files to creating web sites.<br /><h3>TFS</h3>The build process (Team Build) is built upon MSBuild and, like all processes which are managed with MSBuild, it provides a set of extensions for querying the environment and managing the process. The first port of call is the <a href="http://msdn.microsoft.com/en-us/library/ms400688%28VS.90%29.aspx">MSDN page</a>. 
After reading through this which is lengthy, I would recommend having a look at Martin Woodward’s post; <a href="http://www.woodwardweb.com/vsts/30_useful_team.html">useful Team Build properties</a>.<img src="http://feeds.feedburner.com/~r/KeithBloom/~4/M8-r4ggNhzE" height="1" width="1" alt=""/>Keith Bloomhttp://www.blogger.com/profile/02976390053858341358noreply@blogger.com0http://keithbloom.blogspot.com/2010/09/tfs-and-msbuild-properties.htmltag:blogger.com,1999:blog-6362944.post-24497735480280485562010-05-16T08:49:00.000+01:002011-10-30T07:51:30.168+00:00Picasa 3 OS X and a shared image databaseAt home I use Macs and for viewing photos I use Picasa. It is the best tool I have found for the task as; it is happy to let you choose where to put the images and if I make corrections it doesn't change the originals.<br /><br /><h3>Central library</h3>One annoyance though is that each computer has it's own library locally, so when I add new pictures both libraries have to scan the watched folders. &nbsp;So I spent an hour hacking around to see if I can move the local database to a share on my home server.<br /><br /><h3>Symlinks</h3>Picasa 3 keeps the image database in:<br /><br /><script src="https://gist.github.com/1325672.js?file=example1.sh"></script><br />I found the path after looking in the Preferences -&gt; Network page. &nbsp;Now I moved in to a terminal window to;<br /><ul><li>Create a PicasaDb on my home server</li><li>Copy the local database to the folder on the share</li><li>Rename my original database</li><li>Create a symlink to the database now living on the server share</li></ul><br /><script src="https://gist.github.com/1325672.js?file=example2.sh"></script><br />With all this in place I fired up Picasa and it still worked, first hurdle over. &nbsp;I then edited an image and still no problems. I check the timestamps on the database files on the server and they had been updated plus, Picasa had not created a new database locally.<br /><br />Next time I will attempt the same on the other computer to find out if Picasa will happily share a database.<img src="http://feeds.feedburner.com/~r/KeithBloom/~4/V4PHplK0uC4" height="1" width="1" alt=""/>Keith Bloomhttp://www.blogger.com/profile/02976390053858341358noreply@blogger.com5http://keithbloom.blogspot.com/2010/05/picasa-3-os-x-and-shared-image-database.htmltag:blogger.com,1999:blog-6362944.post-44087566933640289592010-01-06T09:43:00.000+00:002011-12-28T08:40:26.071+00:00Reset Remote Desktop ConnectionsBeing able to connect to the desktop of another server is an essential part of most developers working day. Whether it is to configure IIS or to kick of a deployment, starting up the Remote Desktop Connection client is often the quickest way to complete the task.<br /><br />Unfortunately the basic configuration only allows for three connections at any one time. Also, if someone just closes the client, their connection is not cleared. It will hang around in a disconnected state until someone connects to the physical machine to clear any unused connections.<br /><br />There are two DOS commands which solve this problem. The oddly named; <code>QWINSTA</code> and <code>RWINSTA</code>.<br /><pre><br />qwinsta /server:SuperServer<br /></pre>The displayed list will include the session Id which you can use with the next command, rwinsta, to reset the sessions. 
Type the following to reset session 1 on SuperServer:<br /><pre><br />rwinsta 1 /server:SuperServer<br /></pre>Be sure to pick sessions with the state “Disc” as connections marked as “Active” my really be active.<br /><br />Often the simplest tools yield the biggest gains. The discovery of these two commands has saved many hours of work. I no longer have to go through the IT support process and have an engineer go to the physical machine.<img src="http://feeds.feedburner.com/~r/KeithBloom/~4/Gz--V3xq46U" height="1" width="1" alt=""/>Keith Bloomhttp://www.blogger.com/profile/02976390053858341358noreply@blogger.com0http://keithbloom.blogspot.com/2010/01/reset-remote-desktop-connections.htmltag:blogger.com,1999:blog-6362944.post-8824373885507641852009-12-01T13:00:00.000+00:002011-10-30T07:42:52.853+00:00PowerShell and the event logOne of the strengths of PowerShell is the easy access to <a href="http://msdn.microsoft.com/en-us/library/aa394582(VS.85).aspx">WMI</a> it provides at the command line. Before PowerShell, accessing WMI involved doing all the work from within VBScript and processing the results using the facilities available in the scripting language. PowerShell on the other hand is a built on top of the .Net framework so the manipulation of the results is far easier. I now find myself stepping away from the desktop and opening the console for a lot more tasks, I always believe that you should tell the machine what you want it to do rather than doing it yourself.<br /><br />To demonstrate this the code example below will<br /><ul><li>Query the application event log of a remote server</li><li>Order the log entries by the date they occurred</li><li>Return the first 5 results from the set</li></ul><br />The cmdlet Get-WmiObject is the gateway to WMI and allowed me to complete the first step with this simple command<br /><br /><script src="https://gist.github.com/1325659.js?file=example1.ps"></script>As the results from the WMI query are stored in an array, I’m now free to manipulate the result set further using the commands available in PowerShell. Completing items two and three on my list only requires this command<br /><br /><script src="https://gist.github.com/1325659.js?file=example2.ps"></script><br />The big win here is being able to run a query on a remote server but manipulate the result set on my local machine. WMI has a large <a href="http://msdn.microsoft.com/en-us/library/aa394570(VS.85).aspx">set of providers</a> which are now only a query away from my console.<img src="http://feeds.feedburner.com/~r/KeithBloom/~4/omPCmP6DGYI" height="1" width="1" alt=""/>Keith Bloomhttp://www.blogger.com/profile/02976390053858341358noreply@blogger.com0http://keithbloom.blogspot.com/2009/12/powershell-and-event-log.html