Explorations into Sitecore and .Net<br />Dan Solovay, http://www.dansolovay.com/<br /><br />Using Powershell to find duplicate IDs in serialized items<br />Mon, 10 Jul 2017 | Developer Tools, Powershell, Sitecore, TDS<br /><br />On my current project I was troubleshooting a failing integration test that makes use of the Sitecore FakeDb Serialization module, which can declaratively load a branch of content directly from the file system. The setup method for this test was reporting a duplicate key exception, so I used PowerShell to identify the culprit.<br /><a name='more'></a>The first step was to extract the IDs from the serialized *.item files.
At the PowerShell prompt, I navigated to the root of the TDS directory and used this command to get at the item IDs:<br /><br />&gt;Get-ChildItem -recurse | Select-String '^id:' | More<br /><br />This looked like it was giving me the desired raw output:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-aoqVeupXtV8/WWNdTFbGoaI/AAAAAAAAD3Q/euT-0TDNK086vtxgpcwrT1W7f5IkUjT4QCLcBGAs/s1600/2017-07-09_6-44-52.bmp" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="187" data-original-width="1165" height="102" src="https://2.bp.blogspot.com/-aoqVeupXtV8/WWNdTFbGoaI/AAAAAAAAD3Q/euT-0TDNK086vtxgpcwrT1W7f5IkUjT4QCLcBGAs/s640/2017-07-09_6-44-52.bmp" width="640" /></a></div><br /><br />But I wasn't 100% sure what was what. For example, what was the "3" doing before the ":id:"? To get a closer look at the data returned, I piped this into Format-Table:<br /><br />&gt;Get-ChildItem -recurse | Select-String '^id:' | Format-Table | More<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-mnUEkXT86CE/WWNdTF-4j7I/AAAAAAAAD3M/ocj1fiOH8jY2sE3jhrS96WU4mUsP1gpuACLcBGAs/s1600/2017-07-08_7-14-17.bmp" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="224" data-original-width="1422" height="100" src="https://3.bp.blogspot.com/-mnUEkXT86CE/WWNdTF-4j7I/AAAAAAAAD3M/ocj1fiOH8jY2sE3jhrS96WU4mUsP1gpuACLcBGAs/s640/2017-07-08_7-14-17.bmp" width="640" /></a></div><br />This shows what fields I have to work with. To identify my duplicates, I needed to group by the "Line" field, which contained the Sitecore ID, and find those with a count greater than 1.
A quick Google search turned up an <a href="http://windowsitpro.com/powershell/exploring-powershells-group-object-cmdlet" target="_blank">article</a> on how to do grouping operations in PowerShell, confirming that I needed to group by the "Line" field. So now I had this:<br /><br />&gt;Get-ChildItem -recurse | Select-String '^id:' | Group Line | More<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-HQzwgjcd7f8/WWNeRCj_AgI/AAAAAAAAD3U/CpQ_GH3xfPI28FRzV22wisrSeZ6kNUeeACLcBGAs/s1600/2017-07-10_6-59-29.bmp" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="205" data-original-width="1446" height="90" src="https://3.bp.blogspot.com/-HQzwgjcd7f8/WWNeRCj_AgI/AAAAAAAAD3U/CpQ_GH3xfPI28FRzV22wisrSeZ6kNUeeACLcBGAs/s640/2017-07-10_6-59-29.bmp" width="640" /></a></div><br />Following the example of the article cited above, I used this to identify the duplicates:<br /><br />Get-ChildItem -recurse | Select-String '^id:' | Group Line | Sort Count -Descending | Select -First 5<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-K5gFKTgIdtQ/WWNgMBV7VJI/AAAAAAAAD3g/G2BtlDkDusswSStirxKz7B7pM1tYupymwCLcBGAs/s1600/2017-07-10_7-01-32.bmp" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="229" data-original-width="1444" height="100" src="https://2.bp.blogspot.com/-K5gFKTgIdtQ/WWNgMBV7VJI/AAAAAAAAD3g/G2BtlDkDusswSStirxKz7B7pM1tYupymwCLcBGAs/s640/2017-07-10_7-01-32.bmp" width="640" /></a></div><br />In my case I saw several IDs with a count of two, so this command gave me the information I needed.
A more universal approach is to filter the results to counts of two or above, which you can do with this:<br /><br />Get-ChildItem -recurse | Select-String '^id:' | Group Line | Where {$_.Count -gt 1}<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-TIZaD_IuKn0/WWNgMQOtytI/AAAAAAAAD3k/Dj2M9R6bIUIkdustGtdkqr-LIa8kv4omQCLcBGAs/s1600/2017-07-10_7-06-07.bmp" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="310" data-original-width="1450" height="136" src="https://3.bp.blogspot.com/-TIZaD_IuKn0/WWNgMQOtytI/AAAAAAAAD3k/Dj2M9R6bIUIkdustGtdkqr-LIa8kv4omQCLcBGAs/s640/2017-07-10_7-06-07.bmp" width="640" /></a></div><br />Or with PowerShell 3.0 and up, you can get rid of the curly braces:<br /><br />Get-ChildItem -recurse | Select-String '^id:' | Group Line | Where Count -gt 1<br /><br />I should also mention that most of the commands above have shorter aliases, which speed typing at the cost of legibility:<br /><br /><table><tbody><tr><td>Get-ChildItem</td><td>gci</td></tr><tr><td>Select-String</td><td>sls</td></tr><tr><td>Where</td><td>?</td></tr></tbody></table><br />So the search could have been written as below:<br /><br />gci -recurse | sls '^id:' | group line | ? count -gt 1<br /><br />In addition to Format-Table, Format-List (which writes out each property of each object returned) and Format-Wide (which writes out a single property of each object, in a multi-column format) are useful as you do discovery of how your query is working.
Finally, Out-GridView sends results to a window that allows sorting, filtering, and selecting columns.<br /><br />These articles were helpful as I figured out how to query with PowerShell:<br /><br /><ul><li><a href="http://windowsitpro.com/powershell/exploring-powershells-group-object-cmdlet" target="_blank">Exploring PowerShell's Group-Object cmdlet</a></li><li><a href="http://windowsitpro.com/powershell/powershell-basics-sorting-measuring-objects" target="_blank">PowerShell Basics: Sorting and Measuring Objects</a></li></ul><div>And to learn about FakeDb's Serialization feature:</div><div><ul><li><a href="https://github.com/sergeyshushlyapin/Sitecore.FakeDb/wiki/FakeDb-Serialization">FakeDb Wiki: FakeDb Serialization</a></li><li><a href="http://hermanussen.eu/sitecore/wordpress/2014/09/unit-testing-with-sitecore-fakedb-and-deserialized-data/">Knifecore: Unit testing with Sitecore.FakeDb and deserialized data</a></li></ul></div><br />http://www.dansolovay.com/2017/07/using-powershell-to-find-duplicate-ids.html<br /><br />Readable Tests<br />Sat, 22 Apr 2017 | Clean Code, Growing Object Oriented Software Guided by Tests, Testing<br /><br />There's a great little <a href="https://www.safaribooksonline.com/library/view/growing-object-oriented-software/9780321574442/ch22.html">chapter</a> in <i>Growing Object Oriented Software Guided by Tests</i> on how to write helper methods for building complex test data.
The specific techniques are somewhat less relevant in a post-<a href="https://github.com/AutoFixture/AutoFixture">AutoFixture</a> world, but the chapter also makes a powerful case that test code should read as English prose.<br /><a name='more'></a><br />The chapter walks through a couple of refactoring steps on code that constructs test data for logic working with orders, starting with something like (I'm somewhat summarizing and C#-ing the original Java examples):<br /><br /><pre>var order = new Order(<br /> new Customer(<br /> new Address(...</pre><br />This code is extracted into a builder class like this:<br /><br /><pre>var order = new OrderBuilder()<br /> .fromCustomer(<br /> new CustomerBuilder()<br /> .withAddress(...</pre><br />Then refactored to: <br /><br /><pre>var order = anOrder()<br /> .fromCustomer(<br /> aCustomer()<br /> .withAddress(</pre><br />And this builder is finally used in a helper method like this:<br /><br /><pre>havingReceived(anOrder()<br /> .withLine("Deerstalker Hat", 1)<br /> .withLine("Tweed Cape", 1));<br /> ...<br /></pre><br />Freeman and Pryce note:<br /><blockquote class="tr_bq"><i>We started with a test that looked procedural, extracted some of its behavior into builder objects, and ended up with a declarative description of what the feature does. We're nudging the test code towards the sort of language we could use when discussing the feature with someone else, even someone non-technical; we push everything else into supporting code. ... We use test data builders to reduce duplication and make the test code more expressive. It's another technique that reflects our obsession with the language of code, driven by the principle that code is there to be read.</i></blockquote>I recently had a chance to put this into practice while working on a test of a Sitecore RenderField pipeline processor, to document required behavior for Experience Editor mode.
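The progression above, from nested constructors to a fluent builder behind prose-like factory methods, works in any language. Here is a hedged Python illustration of the same test data builder idea; the class and method names are invented for the example, not taken from the book:

```python
# A minimal test data builder in the style Freeman and Pryce describe.
# Order, OrderBuilder, and an_order are invented names for illustration.
class Order:
    def __init__(self, customer, lines):
        self.customer = customer
        self.lines = lines

class OrderBuilder:
    def __init__(self):
        # Sensible defaults, so tests only state what they care about.
        self._customer = "a customer"
        self._lines = []

    def from_customer(self, customer):
        self._customer = customer
        return self  # returning self lets the calls chain fluently

    def with_line(self, product, quantity):
        self._lines.append((product, quantity))
        return self

    def build(self):
        # Copy the list so each built Order owns its own lines.
        return Order(self._customer, list(self._lines))

def an_order():
    # Factory function so the test reads like prose: an_order().with_line(...)
    return OrderBuilder()

order = an_order().with_line("Deerstalker Hat", 1).with_line("Tweed Cape", 1).build()
```

The payoff is the reading experience: the test states only the details it cares about, and the builder's defaults cover everything else.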
I found a very helpful <a href="http://jockstothecore.com/how-to-test-automation/" target="_blank">post</a> by Dmitry Harnitski that documents how to create Editor mode with Sitecore.FakeDb, and verified that the syntax in the post worked, enabling me to create a failing test showing I had not yet implemented the Experience Editor logic. To get my test to pass, I just had to add a check on Experience Editor mode. (Red: Editor mode should suppress a behavior, and my test showed it didn't. Green: I added a check on IsInEditMode to the production code.) There was not a lot to do in the "Refactor" step for the production code, but in the test code, I refactored the code from the blog post:<br /><br /><pre>var fakeSiteContext = new Sitecore.FakeDb.Sites.FakeSiteContext(<br />new Sitecore.Collections.StringDictionary<br />{<br /> {"enableWebEdit", "true"},<br /> {"masterDatabase", "master"},<br />});<br /> <br />using (new Sitecore.Sites.SiteContextSwitcher(fakeSiteContext))<br />{ <br /> Sitecore.Context.Site.SetDisplayMode(DisplayMode.Edit, DisplayModeDuration.Remember);<br /> ... <br /></pre><br />to:<br /><br /><pre>using(aSiteWithEditModeEnabled())<br />{<br /> SwitchToEditMode();<br /></pre><br />This is a small change, just gathering code into methods, but giving the methods names that make sense in the context of the test makes the test much more expressive of its intent, and greatly raises the signal-to-noise ratio for anyone reading the test.
"Code is there to be read."<br /><br />http://www.dansolovay.com/2017/04/readable-tests.html<br /><br />First steps with SpecFlow and Selenium<br />Mon, 02 Jan 2017 | BDD, Developer Tools, Growing Object Oriented Software Guided by Tests, Selenium, SpecFlow, Testing<br /><br />I've always been a unit testing guy, but reading <a href="https://www.amazon.com/Growing-Object-Oriented-Software-Guided-Tests/dp/0321503627" target="_blank">Growing Object Oriented Software Guided by Tests</a> really brought home the role an acceptance test outer structure can play in an iterative development process.<br /><a name='more'></a>The core of the book focuses on building an "auction sniper" (an automated bidding tool), and the very first step (a Sprint Zero task) is to set up an acceptance test harness that can simulate connecting to an auction and showing that the sniper lost. Just showing "Lost Auction" in a label is enough to get the test to pass, and then the rest of the book builds functionality up around that. This made a strong impression on me. My own practice has been to start the development process with controller tests, and not have anything that directly interacts with a browser; but this book got me wondering what a browser test exoskeleton would look like. Since I don't develop with Java and Swing, the tooling in Freeman &amp; Pryce's book wasn't applicable, but SpecFlow (the C# Given/When/Then test generator) and the browser test tool Selenium seemed like a logical combination. Since I've been hacking a lot on the Sitecore Instance Manager recently, I thought this would be a natural place to try out some Selenium BDD (behavior driven development, the technical term for the Given/When/Then style).
I've created a branch on my SIM fork called "<a href="https://github.com/dsolovay/Sitecore-Instance-Manager/tree/selenium-bdd" target="_blank">selenium-bdd</a>" where you can follow my progress. Since it's all NuGet-driven, you should be able to install SpecFlow and try this out yourself.<br /><div><br /></div><div>To get started, you need to install the Visual Studio plugin "SpecFlow for Visual Studio 2015". I also added the NuGet package SpecRun.SpecFlow (or SpecFlow.NUnit, see update at bottom). With these tools in place, you will now be able to add "SpecFlow feature" files through the Add New Item right-click option:</div><div class="separator" style="clear: both; text-align: center;"></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-YurwgzZAx90/WGnvuNxfPtI/AAAAAAAADxc/lj_DMofrQh0gkJYvv8P1IrR3StdtrEr5ACLcB/s1600/2017-01-02_0-02-37.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="85" src="https://4.bp.blogspot.com/-YurwgzZAx90/WGnvuNxfPtI/AAAAAAAADxc/lj_DMofrQh0gkJYvv8P1IrR3StdtrEr5ACLcB/s320/2017-01-02_0-02-37.jpg" width="320" /></a></div><div><br /></div><div><br /></div><div>This creates a file that allows you to specify a story and a number of Given, When, Then statements to describe it precisely. The Given clauses give preconditions, the When clauses an action, and the Then clauses the expected result. You can have multiple clauses of each type, or chain them with And or But, which are equivalent.
&nbsp;Here is the Feature file I ended up with to describe the SIM "install" command:</div><div><br /></div><div><pre style="background: #1e1e1e; color: gainsboro; font-family: Consolas; font-size: 16;"><span style="color: #569cd6;">Feature</span>:&nbsp;SIM&nbsp;Command&nbsp;Line<br /><span style="font-style: italic;"> In&nbsp;order&nbsp;to&nbsp;work&nbsp;better&nbsp;with&nbsp;Sitecore<br /> As&nbsp;a&nbsp;developer<br /> I&nbsp;want&nbsp;a&nbsp;command&nbsp;line&nbsp;to&nbsp;work&nbsp;with&nbsp;Sitecore&nbsp;instances<br /></span><br /><span style="color: #5f95fa; font-style: italic;">@SIMCMD<br /></span><span style="color: #569cd6;">Scenario</span>:&nbsp;Create&nbsp;instance<br /> <span style="color: #569cd6;">Given&nbsp;</span>No&nbsp;Sitecore&nbsp;instance&nbsp;named&nbsp;'<span style="color: #646464; font-style: italic;">TestExample</span>'&nbsp;exists<br /> <span style="color: #569cd6;">When&nbsp;</span>I&nbsp;create&nbsp;'<span style="color: #646464; font-style: italic;">TestExample</span>'&nbsp;with&nbsp;the&nbsp;command&nbsp;tool<br /> <span style="color: #569cd6;">Then&nbsp;</span>I&nbsp;can&nbsp;navigate&nbsp;to&nbsp;'<span style="color: #646464; font-style: italic;">TestExample</span>'<br /> <span style="color: #569cd6;">Then&nbsp;</span>I&nbsp;see&nbsp;the&nbsp;Sitecore&nbsp;Welcome&nbsp;page<br /> <span style="color: #569cd6;">Then&nbsp;</span>Delete&nbsp;'<span style="color: #646464; font-style: italic;">TestExample</span>'<br /><br /></pre></div><div><br /></div><div>The header information listing the feature and the story is just documentation. The real action is with the Given/When/Then, which automatically generate a Feature.cs file that allows the Visual Studio test runner to treat this as a test. &nbsp;Incidentally, putting the word 'TestExample' in single quotes helped the tooling understand that this was a parameterized value.&nbsp;</div><div><br /></div><div>The final piece of the puzzle is to create C# meanings for all of these rules. 
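It's worth seeing what those single quotes buy you before wiring up the bindings: each step definition matches the step text against a regular expression, and the capture groups become method arguments. Here is a hedged Python sketch of that mechanism; SpecFlow does the equivalent in C# with attributes like [When(@"I create '(.*)' with the command tool")]:

```python
# Sketch of how a Given/When/Then binding parameterizes a step: the
# step text is matched against a regex, and the capture group becomes
# the argument passed to the binding method. Python stands in here for
# what SpecFlow does internally in C#.
import re

pattern = re.compile(r"I create '(.*)' with the command tool")
match = pattern.match("I create 'TestExample' with the command tool")
site_name = match.group(1)  # the parameterized value: 'TestExample'
```

This is why quoting 'TestExample' helped the tooling: the quotes give the generated regex an obvious boundary for the capture group.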
You can right-click on the above file and select "Generate Step Definitions", which will bring up a little wizard to create these rules and write them to a C# file; these are called "rule bindings". (I will refer you to the SpecFlow Getting Started guide for this: <a href="http://specflow.org/getting-started/">http://specflow.org/getting-started/</a>)</div><div><br /></div><div>There are a couple of nice details here. If you modify or add a rule (e.g. change "Delete 'TestExample'" to "Remove 'TestExample'"), it will show up in purple, indicating that the rule doesn't exist in the rule bindings .cs file. In this case, you will probably want to use "Copy Rule to clipboard", so that you don't overwrite the rule bindings you've already written.</div><div><br /></div><div>The power of this technique is that you consolidate conditions like "Given a user has logged on" in a single place, allowing business users to read, and perhaps even write, the tests. If something changes, like the process of logging on, or the way to verify that an item is in a cart, you only need to make this change in one place, in the rule binding for the affected rule. This allows you to have a large number of tests with a finite and maintainable set of bindings.</div><div><br /></div><div>Since this scenario involved checking the existence of a web page, I decided to use the Selenium WebDriver to implement the bindings. Selenium can automate all the major browsers, but for my purposes Chrome was sufficient; this required also installing the Chromium.ChromeDriver NuGet package, which downloads the ChromeDriver.exe file.
&nbsp;Selenium can be thought of as a shim layer that provides a consistent developer UI for all the browsers.</div><div><br /></div><div>This is the bindings file I ended up with:</div><div><pre style="background: #1e1e1e; color: gainsboro; font-family: Consolas; font-size: 16;"><pre style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; font-family: Consolas;"><span style="color: #569cd6;">using</span>&nbsp;System;<br /><span style="color: #569cd6;">using</span>&nbsp;System<span style="color: #b4b4b4;">.</span>Diagnostics;<br /><span style="color: #569cd6;">using</span>&nbsp;System<span style="color: #b4b4b4;">.</span>Linq;<br /><span style="color: #569cd6;">using</span>&nbsp;Microsoft<span style="color: #b4b4b4;">.</span>VisualStudio<span style="color: #b4b4b4;">.</span>TestTools<span style="color: #b4b4b4;">.</span>UnitTesting;<br /><span style="color: #569cd6;">using</span>&nbsp;OpenQA<span style="color: #b4b4b4;">.</span>Selenium;<br /><span style="color: #569cd6;">using</span>&nbsp;OpenQA<span style="color: #b4b4b4;">.</span>Selenium<span style="color: #b4b4b4;">.</span>Chrome;<br /><span style="color: #569cd6;">using</span>&nbsp;TechTalk<span style="color: #b4b4b4;">.</span>SpecFlow;<br /> <br /><span style="color: #569cd6;">namespace</span>&nbsp;SIM<span style="color: #b4b4b4;">.</span>Specs<br />{<br />&nbsp;&nbsp;&nbsp;&nbsp;[<span style="color: #4ec9b0;">Binding</span>]<br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #569cd6;">public</span>&nbsp;<span style="color: #569cd6;">class</span>&nbsp;<span style="color: #4ec9b0;">CommandLineSteps</span>:<span style="color: #b8d7a3;">IDisposable</span><br />&nbsp;&nbsp;&nbsp;&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #4ec9b0;">ChromeDriver</span>&nbsp;driver&nbsp;<span style="color: #b4b4b4;">=</span>&nbsp;<span style="color: 
#569cd6;">new</span>&nbsp;<span style="color: #4ec9b0;">ChromeDriver</span>();<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #569cd6;">public</span>&nbsp;<span style="color: #569cd6;">void</span>&nbsp;Dispose()<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #569cd6;">if</span>&nbsp;(driver&nbsp;<span style="color: #b4b4b4;">!=</span>&nbsp;<span style="color: #569cd6;">null</span>)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;driver<span style="color: #b4b4b4;">.</span>Dispose();<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;driver&nbsp;<span style="color: #b4b4b4;">=</span>&nbsp;<span style="color: #569cd6;">null</span>;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br /> <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[<span style="color: #4ec9b0;">Given</span>(<span style="color: #d69d85;">@"No&nbsp;Sitecore&nbsp;instance&nbsp;named&nbsp;'(.*)'&nbsp;exists"</span>)]<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #569cd6;">public</span>&nbsp;<span style="color: #569cd6;">void</span>&nbsp;GivenNoSitecoreInstanceNamedExists(<span style="color: #569cd6;">string</span>&nbsp;siteName)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ThenDelete(siteName);<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #4ec9b0;">Assert</span><span style="color: #b4b4b4;">.</span>IsFalse(SiteFound(siteName));<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />&nbsp;<br 
/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[<span style="color: #4ec9b0;">When</span>(<span style="color: #d69d85;">@"I&nbsp;create&nbsp;'(.*)'&nbsp;with&nbsp;the&nbsp;command&nbsp;tool"</span>)]<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #569cd6;">public</span>&nbsp;<span style="color: #569cd6;">void</span>&nbsp;WhenICreateWithTheCommandTool(<span style="color: #569cd6;">string</span>&nbsp;siteName)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;RunSimCommand(<span style="color: #d69d85;">$"install&nbsp;--name&nbsp;</span>{siteName}<span style="color: #d69d85;">"</span>);<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br /> <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[<span style="color: #4ec9b0;">Then</span>(<span style="color: #d69d85;">@"I&nbsp;can&nbsp;navigate&nbsp;to&nbsp;'(.*)'"</span>)]<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #569cd6;">public</span>&nbsp;<span style="color: #569cd6;">void</span>&nbsp;ThenICanNavigateTo(<span style="color: #569cd6;">string</span>&nbsp;siteName)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #4ec9b0;">Assert</span><span style="color: #b4b4b4;">.</span>IsTrue(SiteFound(siteName));<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br /> <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[<span style="color: #4ec9b0;">Then</span>(<span style="color: #d69d85;">@"I&nbsp;see&nbsp;the&nbsp;Sitecore&nbsp;Welcome&nbsp;page"</span>)]<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #569cd6;">public</span>&nbsp;<span style="color: #569cd6;">void</span>&nbsp;ThenISeeTheSitecoreWelcomePage()<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span 
style="color: #b8d7a3;">IWebElement</span>&nbsp;element&nbsp;<span style="color: #b4b4b4;">=</span>&nbsp;driver<span style="color: #b4b4b4;">.</span>FindElement(<span style="color: #4ec9b0;">By</span><span style="color: #b4b4b4;">.</span>TagName(<span style="color: #d69d85;">"h1"</span>));<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #4ec9b0;">Assert</span><span style="color: #b4b4b4;">.</span>AreEqual(<span style="color: #d69d85;">"Sitecore&nbsp;Experience&nbsp;Platform"</span>,&nbsp;element<span style="color: #b4b4b4;">.</span>Text);<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br /> <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[<span style="color: #4ec9b0;">Then</span>(<span style="color: #d69d85;">@"Delete&nbsp;'(.*)'"</span>)]<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #569cd6;">public</span>&nbsp;<span style="color: #569cd6;">void</span>&nbsp;ThenDelete(<span style="color: #569cd6;">string</span>&nbsp;siteName)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;RunSimCommand(<span style="color: #d69d85;">$"delete&nbsp;--name&nbsp;</span>{siteName}<span style="color: #d69d85;">"</span>);<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #4ec9b0;">Assert</span><span style="color: #b4b4b4;">.</span>IsFalse(SiteFound(siteName));<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br /> <br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #9b9b9b;">#region</span>&nbsp;Private&nbsp;Methods<br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #569cd6;">private</span>&nbsp;<span style="color: #569cd6;">bool</span>&nbsp;SiteFound(<span style="color: #569cd6;">string</span>&nbsp;siteName)<br />&nbsp;&nbsp;&nbsp;&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;driver<span style="color: #b4b4b4;">.</span>Navigate()<span style="color: 
#b4b4b4;">.</span>GoToUrl(<span style="color: #d69d85;">$"http://</span>{siteName}<span style="color: #d69d85;">/"</span>);<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #569cd6;">bool</span>&nbsp;nameNotResolved&nbsp;<span style="color: #b4b4b4;">=</span>&nbsp;driver<span style="color: #b4b4b4;">.</span>PageSource<span style="color: #b4b4b4;">.</span>Contains(<span style="color: #d69d85;">"ERR_NAME_NOT_RESOLVED"</span>);<br /> <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #57a64a;">//&nbsp;HACK&nbsp;There&nbsp;is&nbsp;a&nbsp;moment&nbsp;in&nbsp;the&nbsp;test&nbsp;execution&nbsp;where&nbsp;IIS&nbsp;handles&nbsp;the&nbsp;page&nbsp;not&nbsp;found,&nbsp;rather&nbsp;than&nbsp;chrome.</span><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #569cd6;">bool</span>&nbsp;iisPage&nbsp;<span style="color: #b4b4b4;">=</span>&nbsp;driver<span style="color: #b4b4b4;">.</span>FindElementsByTagName(<span style="color: #d69d85;">"a"</span>)<span style="color: #b4b4b4;">.</span>Any(<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;e&nbsp;<span style="color: #b4b4b4;">=&gt;</span>&nbsp;(e<span style="color: #b4b4b4;">.</span>GetAttribute(<span style="color: #d69d85;">"href"</span>)&nbsp;<span style="color: #b4b4b4;">??</span>&nbsp;<span style="color: #d69d85;">""</span>)<span style="color: #b4b4b4;">.</span>Contains(<span style="color: #d69d85;">"go.microsoft.com/fwlink"</span>));<br /> <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #569cd6;">return</span>&nbsp;<span style="color: #b4b4b4;">!</span>nameNotResolved&nbsp;<span style="color: #b4b4b4;">&amp;&amp;</span>&nbsp;<span style="color: #b4b4b4;">!</span>iisPage;<br />&nbsp;&nbsp;&nbsp;&nbsp;}<br /> <br />&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #569cd6;">private</span>&nbsp;<span style="color: #569cd6;">static</span>&nbsp;<span style="color: #569cd6;">void</span>&nbsp;RunSimCommand(<span style="color: #569cd6;">string</span>&nbsp;arguments)<br 
/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #4ec9b0;">Process</span>&nbsp;p&nbsp;<span style="color: #b4b4b4;">=</span>&nbsp;<span style="color: #569cd6;">new</span>&nbsp;<span style="color: #4ec9b0;">Process</span>();<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;p<span style="color: #b4b4b4;">.</span>StartInfo<span style="color: #b4b4b4;">.</span>UseShellExecute&nbsp;<span style="color: #b4b4b4;">=</span>&nbsp;<span style="color: #569cd6;">false</span>;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;p<span style="color: #b4b4b4;">.</span>StartInfo<span style="color: #b4b4b4;">.</span>RedirectStandardOutput&nbsp;<span style="color: #b4b4b4;">=</span>&nbsp;<span style="color: #569cd6;">true</span>;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;p<span style="color: #b4b4b4;">.</span>StartInfo<span style="color: #b4b4b4;">.</span>FileName&nbsp;<span style="color: #b4b4b4;">=</span>&nbsp;<span style="color: #d69d85;">$@"</span>{<span style="color: #4ec9b0;">Environment</span><span style="color: #b4b4b4;">.</span>CurrentDirectory}<span style="color: #d69d85;">\..\Sim.Client\bin\SIM.exe"</span>;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;p<span style="color: #b4b4b4;">.</span>StartInfo<span style="color: #b4b4b4;">.</span>Arguments&nbsp;<span style="color: #b4b4b4;">=</span>&nbsp;arguments;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;p<span style="color: #b4b4b4;">.</span>Start();<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #569cd6;">string</span>&nbsp;output&nbsp;<span style="color: #b4b4b4;">=</span>&nbsp;p<span style="color: #b4b4b4;">.</span>StandardOutput<span style="color: #b4b4b4;">.</span>ReadToEnd();<br 
/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;p<span style="color: #b4b4b4;">.</span>WaitForExit();<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;p<span style="color: #b4b4b4;">.</span>Close();<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #4ec9b0;">Console</span><span style="color: #b4b4b4;">.</span>WriteLine(<span style="color: #d69d85;">"Command&nbsp;output:"</span>);<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #4ec9b0;">Console</span><span style="color: #b4b4b4;">.</span>WriteLine(output);<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #9b9b9b;">#endregion</span><br /> <br />&nbsp;&nbsp;&nbsp;&nbsp;}<br />}<br /></pre></pre></div><div>As you see, I create a Driver object and make sure it is disposed in my Dispose method; omitting this step will leave a lot of ChromeDriver.exe processes locking your files, which I learned the hard way.</div><div><br /></div><div>Next you see the auto-generated attributes and method names, which provide the implementation for each rule. These are pretty straight forward. The only points worth noting are:</div><div><ul><li>The actual navigation happens in the "SiteFound" method. (I'm losing some Command Query Separation karma here: clearly this is a query, but it also has a side effect. Something to refactor....)</li><li>To check if a site is found, I look for Chrome's "ERR_NAME_NOT_RESOLVED" message (there is no status code to capture because without a DNS entry chrome can't send a request). 
&nbsp;There is a brief moment in the test where it hits the IIS Home page, which I identify with the truly horrendous hack of looking for the go.microsoft.com link.&nbsp;</li><li>I chose to run the SIM commands through a command line shell, rather than directly through the SIM command classes, to better capture the full end-to-end nature of the action. &nbsp;Imagine if a parameter property was not properly bound to the command line; a test that did not directly call the command would miss that. Plus this way the tests document the command syntax.</li><ul><li>I just had an idea. I could have the command syntax in the WHEN clause: &nbsp;<i>WHEN I pass 'install -name TestInstance' to SIM</i>. That would surface the command syntax directly into the acceptance test. I like that.</li></ul><li>To check that the site is truly loaded, I use Selenium magic to read an H1 tag's value. This was starting to really feel like the examples in Freeman &amp; Pryce's book. &nbsp;</li></ul><div>A few additional things to note. First, I ran this through Visual Studio's test runner, which is a premium feature. "SpecFlow" is free, but "SpecFlow+" is 159 GBP. Not cheap, and I haven't purchased it. The makers currently add a six-second delay, and ask you to pay for the product. I saw no indication that the evaluation period is limited, but I still want to explore other ways of running these tests. &nbsp;I'll update this post (Done!) if I find any reasonable alternatives; of course, suggestions in the comments are welcome. 
To be clear, the tooling to create the tests is free; only the feature to run the tests through Visual Studio's test runner is (theoretically) not.<br /><br /></div></div><div>Second, these tests generate a number of outputs:</div><div><br /></div><div>The Test Explorer view:</div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-DnAyLF9OvmQ/WGnvFoDZ1fI/AAAAAAAADxA/UHy8vpToKashUg7pETIDSvdY3l4wzQBiACLcB/s1600/2017-01-02_1-03-51.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://1.bp.blogspot.com/-DnAyLF9OvmQ/WGnvFoDZ1fI/AAAAAAAADxA/UHy8vpToKashUg7pETIDSvdY3l4wzQBiACLcB/s320/2017-01-02_1-03-51.jpg" width="320" /></a></div><div><br /></div><div><br /></div><div>The "Output" view (which you get to from a link in the above view, not to be confused with Visual Studio's normal output window):</div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-7wSTkzDh-KY/WGnvF51wk7I/AAAAAAAADxE/U2xcQPiNJN80KSdYHEd_Ue2_iOSUmU6_QCEw/s1600/2017-01-02_1-04-29.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="520" src="https://4.bp.blogspot.com/-7wSTkzDh-KY/WGnvF51wk7I/AAAAAAAADxE/U2xcQPiNJN80KSdYHEd_Ue2_iOSUmU6_QCEw/s640/2017-01-02_1-04-29.jpg" width="640" /></a></div><div><br /></div><div><br /></div><div>The Visual Studio Output window, important because it shows links to the HTML and Log reports:</div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-LdAkZOGuYEo/WGnvF9Yn_XI/AAAAAAAADxM/PDbEPXem05s5QJA70Km3Dy-nyTu99QJYgCEw/s1600/2017-01-02_1-05-20.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="244" src="https://4.bp.blogspot.com/-LdAkZOGuYEo/WGnvF9Yn_XI/AAAAAAAADxM/PDbEPXem05s5QJA70Km3Dy-nyTu99QJYgCEw/s640/2017-01-02_1-05-20.jpg" width="640" /></a></div><div><br /></div><div><br /></div><div>An 
HTML report (stamped with "This is an evaluation copy" verbiage in red):</div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-0crhNTwSVEc/WGnvGMJxcHI/AAAAAAAADxQ/bSMFoEZgcu4AfvTr9gqUQI6uajIZ73_twCEw/s1600/2017-01-02_1-06-11.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="348" src="https://4.bp.blogspot.com/-0crhNTwSVEc/WGnvGMJxcHI/AAAAAAAADxQ/bSMFoEZgcu4AfvTr9gqUQI6uajIZ73_twCEw/s640/2017-01-02_1-06-11.jpg" width="640" /></a></div><div><br /></div><div><br /></div><div>A log file:</div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-zjfq-hrrsp0/WGnvGNaVgWI/AAAAAAAADxU/Ix-huxqcywMUpvHUuNda6TMXJfDVnDRKgCEw/s1600/2017-01-02_1-07-10.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="360" src="https://1.bp.blogspot.com/-zjfq-hrrsp0/WGnvGNaVgWI/AAAAAAAADxU/Ix-huxqcywMUpvHUuNda6TMXJfDVnDRKgCEw/s640/2017-01-02_1-07-10.jpg" width="640" /></a></div><div><br /></div><div><br /></div><div>It seems clear to me that the HTML output, combined with a Continuous Integration deployment process, could provide a detailed benchmark of which features have been implemented and which have not, giving "burn-down"-like visibility into a team's progress. This seems pretty powerful to me.<br /><br /><div><b>Update</b>: To run the tests through a test runner like NCrunch, instead of installing the package "SpecRun.SpecFlow", use the package SpecFlow.NUnit or SpecFlow.xUnit. 
&nbsp;This changes the auto-generated CS code-behind for the .feature file, so that the tests are now visible to a normal NUnit (or xUnit) test runner. &nbsp;The only difference is that you don't get the above reports, and you can no longer put breakpoints directly in your .feature file. &nbsp;However, the Console output of the test has the Given/When/Then steps and the time duration of each, so it should work fine. Here is the output, for example, from ReSharper's test runner:</div><br /><div class="separator" style="clear: both; color: black; text-align: center;"><a href="https://1.bp.blogspot.com/-HzwoHSJhxtY/WGqk-qnJXEI/AAAAAAAADxs/ZKVUQ0HuoPY5-DtEthi8Vyhg1QhsATgTwCLcB/s1600/2017-01-02_14-06-55.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="270" src="https://1.bp.blogspot.com/-HzwoHSJhxtY/WGqk-qnJXEI/AAAAAAAADxs/ZKVUQ0HuoPY5-DtEthi8Vyhg1QhsATgTwCLcB/s640/2017-01-02_14-06-55.jpg" width="640" /></a></div><div style="color: black; margin: 0px;"><br /></div></div>http://www.dansolovay.com/2017/01/first-steps-with-specflow-and-selenium.htmlnoreply@blogger.com (Dan Solovay)1tag:blogger.com,1999:blog-5589343447323430312.post-4382838060408536265Wed, 09 Nov 2016 11:00:00 +00002016-11-09T06:00:10.266-05:00Developer ToolsPowershellSOLRZapping Solr Cores with PowershellImagine you've been working on a feature that creates Sitecore Solr indexes with SIM. There's a lot of testing that goes into that. <br /><a name='more'></a>Suppose I tweak this? Does it still work? And each go-round creates another 15 indexes. You end up with a lot of indexes named "test1a_sitecore_core_index, test1a_sitecore_master_index" and so forth. I think I was up above 200 of these at some point. 
SIM has a bulk delete feature but it doesn't (<a href="https://github.com/dsolovay/Sitecore-Instance-Manager/issues/40" target="_blank">yet</a>) support deleting Solr indexes, so I did a little googling into PowerShell to see if I could whip up a script to do some house cleaning (True confession: I waited until Solr was DYING.). Here's what I came up with:<br /><br /><script src="https://gist.github.com/dsolovay/663c5dcaf7df49754b3b29a1d9aa3db3.js"></script><br /><br />Here's what the script does:<br /><br /><ul><li>Loads a required parameter "-prefix". &nbsp;This is useful since SIM creates Solr cores with the instance name of the site.</li><li>Creates an instance of the WebClient class, which we will use to communicate with the Solr API.</li><li>Loads the results of "/solr/admin/cores" into an XmlDocument object. The [xml] prefix causes the string return value to be loaded into an XmlDocument instance.</li></ul>http://www.dansolovay.com/2016/11/zapping-solr-cores-with-powershell.htmlnoreply@blogger.com (Dan Solovay)0tag:blogger.com,1999:blog-5589343447323430312.post-9029769660517848122Wed, 02 Nov 2016 10:00:00 +00002016-11-02T06:00:29.724-04:00.NetDeveloper ToolsSIMSOLRTips & TricksDisposing of a DLLSo I've been working on a feature for Sitecore Instance Manager to automate installing Sitecore instances with Solr turned on. &nbsp;This has been <a href="https://github.com/Sitecore/Sitecore-Instance-Manager/pull/94" target="_blank">pulled </a>into the Develop branch of SIM and should hopefully hit the downloadable version soon.<br /><br /><a name='more'></a>One interesting detail was how to handle the "Generate Solr Schema" step. This is normally done through the Sitecore Control Panel, and is responsible for updating the Solr schema.xml file to include definitions of fields like "_uniqueid" and "__bucketable" that are required for Sitecore. 
To automate this step, I considered analyzing the changes and writing a script to apply them, or even copying a prebaked "schema.xml" to the Solr instance directory, but this didn't feel ideal. Suppose a new version of Sitecore introduces a new Solr field (let's say "__basketable"...). My feature would instantly be obsolete. Bummer.<br /><br />It would be far better to use the actual code that is run by this option, and with a little digging, I was able to find the C# class that does the transformation, Sitecore.ContentSearch.ProviderSupport.Solr.SchemaGenerator, in the Sitecore.ContentSearch.dll. What if I loaded it directly from the bin directory of the new instance? That worked nicely.<br /><br /><script src="https://gist.github.com/dsolovay/906d87197c95fa91e504fddefa7369db.js"></script><br /><br />The DLL path is passed in, because that depends on the instance, and the path to the schema is pulled from the Solr API. I ended up refactoring the bits that load the DLL into an existing ReflectionUtil class:<br /><br /><script src="https://gist.github.com/dsolovay/67c3be8119d94c34d958523c9a66523f.js"></script><br /><br />This worked really nicely until I tried to clean up some of the sites created during my testing, and got this error:<br /><br /><img src="https://cloud.githubusercontent.com/assets/689532/19422940/3dd71ea8-93e9-11e6-9e7f-d6accc177d0e.png" /><br /><br />Of course! I loaded the DLL from that site's bin directory into the SIM process, so now I can't delete the file. I did some googling on whether you can dispose of a DLL once you've loaded it, and long story short, you can't. &nbsp;Alen Pelin <a href="https://github.com/dsolovay/Sitecore-Instance-Manager/issues/39#issuecomment-254094459" target="_blank">suggested&nbsp;</a>I load the DLL from a memory stream, but that sounded complicated (= I had no idea how to do this) so I did some more googling on disposing of DLLs and found discussion of AppDomains. 
&nbsp;These seemed to be designed expressly for this purpose--the DLL is not disposable, but the app domain it is loaded into is. Cool. However, the <a href="https://msdn.microsoft.com/en-us/library/6s0z09xw(v=vs.110).aspx" target="_blank">documentation </a>on MSDN looked pretty light, as did the Stack Overflow coverage, so it didn't look like a technique in heavy rotation. But what the hey, let's give it a try:<br /><br /><script src="https://gist.github.com/dsolovay/c14dfbb0e204bac968c3dd268aa1614c.js"></script><br /><br />So that blew up because GenerateSolrSchema is not marked as Serializable:<br /><br /><img src="https://cloud.githubusercontent.com/assets/689532/19838503/e8afb4c6-9ea6-11e6-872b-ab759709e70b.png" /><br /><br />Okay, time to take another look at Alen's suggestion. It turns out that Assembly.Load can take the assembly's raw bytes, so fixing this was as simple as opening the file with File.OpenRead(path), reading it into a byte array, and passing that to Assembly.Load (<a href="http://stackoverflow.com/a/20080196/402949" target="_blank">thanks </a>to this Stack Overflow answer too). 
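In outline, the no-lock load looks something like this (a hedged sketch with illustrative names, not the exact SIM ReflectionUtil code):

```csharp
// Sketch: load an assembly without keeping the file on disk locked.
// "AssemblyLoader" and "LoadWithoutLocking" are illustrative names.
using System.IO;
using System.Reflection;

static class AssemblyLoader
{
    public static Assembly LoadWithoutLocking(string dllPath)
    {
        // Reading the bytes ourselves (instead of Assembly.LoadFrom)
        // means no file handle remains open after this method returns,
        // so the instance's bin directory can still be deleted.
        byte[] rawAssembly = File.ReadAllBytes(dllPath);
        return Assembly.Load(rawAssembly);
    }
}
```

The tradeoff is that the assembly stays loaded for the lifetime of the SIM process, but since nothing holds the file open, deleting the site no longer fails.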
Load the stream in a using block and you're all set:<br /><br /><script src="https://gist.github.com/dsolovay/10a26e99b232d8d8aa4bae53bb7389d2.js"></script><br /><br />Now you can create and delete Sitecore+Solr instances all day long!<br /><br />As I mentioned at the top, this is not yet in the main branch of SIM, but you can play with it by building the Develop branch of <a href="https://github.com/sitecore/sitecore-instance-manager">https://github.com/sitecore/sitecore-instance-manager</a>, and you can check out this feature intro video: <br /><iframe allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/9BjQ6CpnzUg" width="560"></iframe><br /><br />http://www.dansolovay.com/2016/11/disposing-of-dll.htmlnoreply@blogger.com (Dan Solovay)0tag:blogger.com,1999:blog-5589343447323430312.post-6134121848771131935Wed, 26 Oct 2016 10:00:00 +00002016-10-26T06:00:06.463-04:00Developer ToolsTDDNCrunchIf you want to make unit testing and TDD truly addictive, you should look at NCrunch, which basically gives you while-you-type IntelliSense for unit tests.<br /><br /><a name='more'></a>In fact, I have had the odd experience of building large chunks of functionality with tests and mainline code, and then not seeing them work in my browser, because I forgot to save and compile. (F6 fixed that quickly). What NCrunch does is squirrel your unsaved files to a folder in AppData, and run them there while you type out code. Getting that level of continuous instant feedback really changes the way you write code. &nbsp;For example:<br /><br /><ul><li>You get used to seeing green dots on the left. &nbsp;Their absence makes you feel a little skittish (don't refactor this! who knows what will break). &nbsp;And the motivation to get stuff green becomes very strong (let's see if calling this works... isolate, isolate, there, it's green!)</li><li>You become creative about breaking things. 
&nbsp;Earlier today I was working with code that generated user names from first and last name columns, and I had to add a bit of additional logic. &nbsp;I wanted to see if the current logic was covered meaningfully by tests, so I changed:<br /><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">userName = firstName + lastName</span>; &nbsp; to <br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">userName = firstName<b>;//</b>+lastName; </span><br /><br />just to see if anything would happen. &nbsp;Red lights a second later (semicolon, whack, whack, red, just like that) showed me that the current logic was covered, taking me to the test I needed to modify. Friction free.</li><li>Being able to jump from code to covering tests with a right-click equally removes the friction of adding tests, since you can see at a glance what is already in place.</li><li>If a single test is failing, the red dots provide a breadcrumb trail through the executing code. This often leads to very rapid troubleshooting, since you can see at a glance whether, for example, an if block is being entered. &nbsp;Exceptions are called out with a red X, again making it a lot faster to pin down where code is failing.</li><li>The ability to tactically ignore tests can be useful when trying to fix a number of failures. Recently, in a code review session, we reviewed 12 tests that were failing. We ignored 11 of them, so that our red dots were meaningful (they corresponded to a single test). Then we analyzed the code under test, and realized what was wrong with the test. &nbsp;A few keystrokes to change the test, and we had green. Then we unignored the other 11 tests, and they went green as well.</li></ul><div>I recently did a deep dive through the documentation, and learned there was a lot more to the tool than I had realized.</div><div><ul><li>You can set up grids of workstations, and run unit tests on this grid. 
I cajoled two of my neighbors at work to turn this feature on, and now my unit tests are divided up and run where CPU cycles are available. We are still experimenting with this feature, but it looks very promising. Running unit tests for your team while you eat lunch down the street sounds pretty effective. http://www.ncrunch.net/documentation/guides_distributed-processing</li><li>There is a command-line tool that can be used on CI servers and produces very nice HTML test and code coverage reports.</li><li>The test engine does analysis of which tests are most likely to be impacted by recent code changes, and runs those tests first.</li><li>There are a number of syntax options for managing concurrency of integration tests (each thread runs in its own process, so there is no issue with static methods and values). You can specify that a given test requires exclusive or inclusive access to a resource, such as a file on disk, a database, etc.</li></ul><div>In short, there is a lot to this tool. The CI and distributed capabilities look very promising indeed. The website is <a href="http://ncrunch.net/">NCrunch.net</a>. 
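The resource-constraint syntax mentioned above comes from the NCrunch.Framework NuGet package; as best I recall it (check the NCrunch documentation for the exact attribute set), it looks roughly like this, where "SolrTestCore" is just an illustrative resource name:

```csharp
using NCrunch.Framework; // attributes the NCrunch engine understands
using NUnit.Framework;

[TestFixture]
public class SolrCoreTests
{
    // No other test that uses "SolrTestCore" runs while this one does.
    [Test, ExclusivelyUses("SolrTestCore")]
    public void RebuildIndex_WritesToCore()
    {
        // ... exercise code that rewrites the core ...
    }

    // Inclusive users may run alongside each other,
    // but never alongside an exclusive user of the same resource.
    [Test, InclusivelyUses("SolrTestCore")]
    public void Search_ReadsFromCore()
    {
        // ... exercise read-only code ...
    }
}
```

Outside NCrunch these attributes are inert, so the tests still run normally under a plain NUnit runner.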
&nbsp;And to give a feeling for the rhythm of feedback, here's a short example of NCrunch responding to a line of code being commented out:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-8UwuoAcxo24/WBAitRO08hI/AAAAAAAADvg/wKRZMbukUhYCg-fW_JhxAivpVxAxwvmEACLcB/s1600/NCrunch.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="307" src="https://3.bp.blogspot.com/-8UwuoAcxo24/WBAitRO08hI/AAAAAAAADvg/wKRZMbukUhYCg-fW_JhxAivpVxAxwvmEACLcB/s640/NCrunch.gif" width="640" /></a></div><br /></div></div><div><br /></div>http://www.dansolovay.com/2016/10/ncrunch.htmlnoreply@blogger.com (Dan Solovay)0tag:blogger.com,1999:blog-5589343447323430312.post-8467971866930276657Thu, 15 Sep 2016 23:21:00 +00002016-09-15T19:21:48.994-04:00Developer ToolsSitecoreSymposiumTDDTestingAnother Look at Sitecore and Unit TestingAt Sitecore Symposium 2016, I will be giving a <a href="http://www.sitecore.net/events/sitecore-symposium-2016/developer-track.aspx">talk</a> on unit testing in Sitecore. The focus of this talk is on the notion of testability. <br /><a name='more'></a>Picking up on the notion of <b>seams</b> (from Michael Feathers' <a href="https://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052">Working Effectively with Legacy Code</a>, but I came upon this in Roy Osherove's <a href="https://www.amazon.com/Art-Unit-Testing-examples/dp/1617290890">The Art of Unit Testing</a>), I look at how MVC introduced seams between controller code and implementation, and at how the Glass Mapper can be used to introduce seams into Sitecore code. &nbsp;I then look at 8.2, and show how its introduction of virtual methods and abstract classes allows for testability with Sitecore code much as MVC did for ASP.NET. 
&nbsp;The presentation also introduces a number of tools that can remove the friction from test-driven development, such as ReSharper, NCrunch, AutoFixture, and Fluent Assertions.<br /><br />I will post a link to a recording of this talk when it becomes available.<br /><br /><div><b>Code Samples</b><br /><a href="https://github.com/dsolovay/mvc-testability/">https://github.com/dsolovay/mvc-testability/</a><br /><a href="https://github.com/dsolovay/item-creation-demos">https://github.com/dsolovay/item-creation-demos</a><br /><a href="https://github.com/dsolovay/habitat-testability">https://github.com/dsolovay/habitat-testability</a><br /><br /><h4>My AutoSitecore project (8.2 Testing accelerator)</h4><a href="https://github.com/dsolovay/AutoSitecore">https://github.com/dsolovay/AutoSitecore</a><br /><br />Also on NuGet.<br /><br />Also check out my talk from SUGCON NA 2015. Video and related links <a href="http://www.dansolovay.com/2015/09/test-driven-sitecore-links.html">here</a>.<br /><br /><br /></div>http://www.dansolovay.com/2016/09/another-look-at-sitecore-and-unit.htmlnoreply@blogger.com (Dan Solovay)1tag:blogger.com,1999:blog-5589343447323430312.post-5158432165912939556Wed, 15 Jun 2016 03:16:00 +00002016-06-14T23:16:33.297-04:00DebuggingDeveloper ToolsSitecoreDebugging and Creating PDBs with ReSharperBeing able to debug Sitecore code is an important skill for supporting Sitecore solutions. &nbsp;There have been a number of excellent articles on how to do this, but they typically describe using JetBrains' DotPeek product as a "symbol server". (See <a href="http://bilyukov.com/debugging-sitecore-dotpeek/">http://bilyukov.com/debugging-sitecore-dotpeek/</a> and <a href="https://jammykam.wordpress.com/2015/01/11/how-to-debug-sitecore-kernel-in-visual/">https://jammykam.wordpress.com/2015/01/11/how-to-debug-sitecore-kernel-in-visual/</a>). 
An alternative, which I find somewhat simpler, is to use ReSharper to generate PDB files, and place those in your solution bin directory. I will walk you through that approach in this article.<br /><br /><a name='more'></a><br /><br />This article assumes you have JetBrains' ReSharper installed. &nbsp;It may be possible to do these steps purely with the free DotPeek, but I'm not sure how the debugging experience will work without access to decompiled sources in Visual Studio. (I may come back to that in a later post.)<br /><br />Let's try to set a breakpoint in the BeginRequest pipeline. &nbsp;This example uses a Visual Studio solution set up outside the web root (as described in this <a href="https://www.youtube.com/watch?v=cskz2oZYCYs">video</a>), but the process should work similarly if the project is inside the web root.<br /><br />First, you need to enable decompiling of third-party sources. &nbsp;This can be done by checking the following box in the ReSharper options: <br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-Wu9-gIvZcGY/V2Cz9TLRYfI/AAAAAAAADnI/yLHQNNC_pG4OaWQehA-t2bDtniWhGRTFwCLcB/s1600/2016-06-14_21-46-37.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="480" src="https://3.bp.blogspot.com/-Wu9-gIvZcGY/V2Cz9TLRYfI/AAAAAAAADnI/yLHQNNC_pG4OaWQehA-t2bDtniWhGRTFwCLcB/s640/2016-06-14_21-46-37.png" width="640" /></a></div><br />With this enabled, you will be able to navigate to internal methods in Sitecore DLLs that you reference in your project. Assuming you have a reference to Sitecore.Kernel.dll, you will be able to type Control-T to find methods. 
&nbsp;Note that your shortcut might be different: You can find yours at ReSharper/Navigate/Go To Everything:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-m1mqwSbvsiA/V2C06KQj5XI/AAAAAAAADnU/lqrIRGJzEIg0U0FtPPxY0kRxporWDZ6kwCLcB/s1600/2016-06-14_21-50-29.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="516" src="https://3.bp.blogspot.com/-m1mqwSbvsiA/V2C06KQj5XI/AAAAAAAADnU/lqrIRGJzEIg0U0FtPPxY0kRxporWDZ6kwCLcB/s640/2016-06-14_21-50-29.png" width="640" /></a></div><br />This will pull up this navigation tool.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-C7nUCv4Z43k/V2DGitEOanI/AAAAAAAADpk/J7wfOjlYEbUElfafJHh6STA_LTXC7OExQCLcB/s1600/2016-06-14_23-07-08.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://1.bp.blogspot.com/-C7nUCv4Z43k/V2DGitEOanI/AAAAAAAADpk/J7wfOjlYEbUElfafJHh6STA_LTXC7OExQCLcB/s640/2016-06-14_23-07-08.png" width="640" /></a></div><br /><br />Click "Include Library Sources", and type "ItemResolver". &nbsp;If this is your first time doing this, you will need to acknowledge a legal warning from ReSharper that you are viewing proprietary code. &nbsp;(Standard disclaimer: I'm not a lawyer and I don't work for Sitecore, but I've long viewed reading Sitecore code to be a key part of a Sitecore developer's toolset. &nbsp;If you have concerns, ask your Sitecore Regional Representative whether your license permits you to do this.)<br /><br />Okay, at this point you will see the ItemResolver class. 
&nbsp;Go down to the Process method, and put a breakpoint on the first line.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-lZnJ_wVpU_s/V2C4iCU6ssI/AAAAAAAADnw/em_MSfIiwxQr0PmoA7WWFkUlrFVZCeyngCLcB/s1600/2016-06-14_22-05-28.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-lZnJ_wVpU_s/V2C4iCU6ssI/AAAAAAAADnw/em_MSfIiwxQr0PmoA7WWFkUlrFVZCeyngCLcB/s1600/2016-06-14_22-05-28.png" /></a></div><br />Now attach to the w3wp.exe worker process. (See <a href="http://stackoverflow.com/a/36203091/402949">http://stackoverflow.com/a/36203091/402949</a>) You will see that the breakpoint has a white center and error icon, and if you mouse over it you will see this tooltip warning, indicating that symbols have not been loaded for this location:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-ZpjZOfkDing/V2C5Xqy2t-I/AAAAAAAADn8/yd2967YGXWko9_oDeziH8Z8fO75XtMbGwCLcB/s1600/2016-06-14_22-09-47.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="94" src="https://3.bp.blogspot.com/-ZpjZOfkDing/V2C5Xqy2t-I/AAAAAAAADn8/yd2967YGXWko9_oDeziH8Z8fO75XtMbGwCLcB/s640/2016-06-14_22-09-47.png" width="640" /></a></div><br />To add the symbols, go to ReSharper/Windows/Assembly Explorer, and click the Open icon to add the Sitecore.Kernel.dll to the window.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-Bhq8YxgkuoQ/V2DCGU0AWcI/AAAAAAAADos/bkOltp4Y8_sgUV_qA2fZEm1Khj6myv1fgCLcB/s1600/2016-06-14_22-15-58.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="306" src="https://1.bp.blogspot.com/-Bhq8YxgkuoQ/V2DCGU0AWcI/AAAAAAAADos/bkOltp4Y8_sgUV_qA2fZEm1Khj6myv1fgCLcB/s320/2016-06-14_22-15-58.png" width="320" /></a></div><br />Now right-click and select Generate PDBs.<br 
/><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-6yRiDAwxnvw/V2DCGUTxr1I/AAAAAAAADow/41R6vqLSyAQAI3-cXUX9OUzte67AXUtgACKgB/s1600/2016-06-14_22-16-37.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="293" src="https://3.bp.blogspot.com/-6yRiDAwxnvw/V2DCGUTxr1I/AAAAAAAADow/41R6vqLSyAQAI3-cXUX9OUzte67AXUtgACKgB/s320/2016-06-14_22-16-37.png" width="320" /></a></div><br /><br />You can select any location for them; the key thing is that you need to copy the finished PDB and put it next to the corresponding DLL that is being used to serve the website (i.e not the local copy in your Libraries folder or project bin if that is separate from your web root.) <br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-1lZpyD1Xdro/V2DCGTHVTBI/AAAAAAAADo0/uBw4DLB2QaA_RfiWm1nPo4sNiFtmb5EpACKgB/s1600/2016-06-14_22-18-07.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://4.bp.blogspot.com/-1lZpyD1Xdro/V2DCGTHVTBI/AAAAAAAADo0/uBw4DLB2QaA_RfiWm1nPo4sNiFtmb5EpACKgB/s320/2016-06-14_22-18-07.png" width="300" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-RrM1QvBtzXY/V2DCX19ljuI/AAAAAAAADpE/YgfazlQt6rcmlkksYLS-9iGUnzOGskm_gCLcB/s1600/2016-06-14_22-22-29.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="72" src="https://2.bp.blogspot.com/-RrM1QvBtzXY/V2DCX19ljuI/AAAAAAAADpE/YgfazlQt6rcmlkksYLS-9iGUnzOGskm_gCLcB/s640/2016-06-14_22-22-29.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Inside the webroot bin</td></tr></tbody></table><br />Note that 
ReSharper will create a folder called Sitecore.Kernel.pdb. That folder isn't the PDB itself; inside it is a folder with a GUID name, and inside that is the actual PDB file. Copy the PDB to your webroot bin directory:<br /><br />Now retry the debug session. &nbsp;This time your breakpoint will get hit. The screenshot below shows that you can mouse over and inspect the "args" parameter:<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-TFDg-EyN4pY/V2C90ictf6I/AAAAAAAADoY/6twpWWUh6M4BPeQLiZ4cei_ddtvBar0pwCLcB/s1600/2016-06-14_22-28-00.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="252" src="https://2.bp.blogspot.com/-TFDg-EyN4pY/V2C90ictf6I/AAAAAAAADoY/6twpWWUh6M4BPeQLiZ4cei_ddtvBar0pwCLcB/s640/2016-06-14_22-28-00.png" width="640" /></a></div><br />A couple of additional notes:<br /><br /><ul><li>The first time I reissued the request, the browser tab hung. &nbsp;I closed the tab and submitted the request again, and it worked fine.</li><li>There is a Visual Studio Debug option "Enable Just My Code". &nbsp;I usually have this unchecked, but in my experimenting the above breakpoint got hit either way.</li><li>I have not had to do any of the steps described by Jammy Kam to get meaningful local variables to appear (such as the args parameter above). However, a few local variables do get optimized away, and the steps described in Jammy Kam's article did not work for me on my most recent attempts. &nbsp;This has not been a blocker for me, as most variables are visible. If you want to dig deeper here, read Jammy's post (link at top of article) and this one (<a href="http://blog.paulgeorge.co.uk/2016/02/05/disabling-optimizations-debugging-third-party-dlls-with-reflector-pro">http://blog.paulgeorge.co.uk/2016/02/05/disabling-optimizations-debugging-third-party-dlls-with-reflector-pro</a>/).</li><li>ReSharper really gives you the keys to the kingdom of how Sitecore is put together. 
I blogged a while ago about things you can do with it, too. (http://www.dansolovay.com/2013/01/resharper-shortcuts-every-sitecore.html)</li></ul><div>Happy spelunking!</div>http://www.dansolovay.com/2016/06/debugging-and-creating-pdbs-with.htmlnoreply@blogger.com (Dan Solovay)0tag:blogger.com,1999:blog-5589343447323430312.post-4883612180156810492Wed, 13 Jan 2016 11:00:00 +00002016-01-13T06:00:06.404-05:00AgilePomodoro TechniqueTips & TricksThe Pomodoro TechniqueThere is a special, lonely dread that accompanies a big, complex task. Am I up to it? Is it harder than I think? Am I missing something fundamental? I wonder what's going on on Twitter. Hey, I got retweeted...<br /><br /><a name='more'></a>Nope. That's not going to work. There must be a better way to manage the angst of impending deliverables. And there is. <br /><br /><h4>Meet the Pomodoro Technique &nbsp;</h4><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-MtoG771mXos/VpRmcTSiJwI/AAAAAAAADj4/5SgJz4v4AcY/s1600/Timer.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="http://3.bp.blogspot.com/-MtoG771mXos/VpRmcTSiJwI/AAAAAAAADj4/5SgJz4v4AcY/s320/Timer.jpg" width="320" /></a></div><br />Pomodoro is Italian for tomato, and the technique gets its name from a kitchen timer much like the one pictured above. &nbsp;Francesco Cirillo, a student frustrated at his inability to stay focused on a task, set himself the challenge of seeing if he could remain undistracted for 10 solid minutes. His eye fell upon a kitchen timer shaped like a tomato, and the Pomodoro was born. According to his account, it took quite some time before he was able to take it through to the whole ten minutes. When he was finally able to do that, he started increasing the time, and eventually settled on a rhythm of 30 minutes: 25 minutes of focus, 5 minutes of recharging. 
&nbsp;(He has apparently mastered the technique of the five minute nap...)<br /><br />I started using this technique a couple of years back, largely as a way of managing interruptions. &nbsp;"I've got 15 minutes left in my Pomodoro, can I get back to you then?" &nbsp;I used the website <a href="http://tomatoi.st/">http://tomatoi.st</a>, and my coworkers got used to seeing an ominous downwards ticking on one of my monitors... "Oh great, I can bother Dan in 14:35, 14:34, 14:33..." &nbsp;This proved a very good way of carving out half hour chunks of focus, but I was only using a small part of the technique. &nbsp;Recently, I've read a couple of books on this, and have doubled down on the tomato timer.<br /><br /><h4>A few recent tweaks:</h4><br /><ol><li>I got a real timer. Cirillo writes eloquently of the learned effect of the ticking tomato... The ticking means you're focused. &nbsp;All is well in the world. You're on task.</li><li>I started doing planning and reviews. &nbsp;This is the real power of the technique. You keep an inventory of things you can do (analogous to a Scrum product backlog), then you pick the things you are going to do today (like a Scrum iteration backlog), and put one or more boxes after each task, for the number of pomodori you think it will take. &nbsp;You track external interruptions with a hyphen, and internal interruptions ("How did I find myself on Twitter?") with an apostrophe. Then you do a mini retro at the end of the day, and log stats. Most important is the number of pomodori you were able to do (your velocity), and whether you were successful in making it through your to-do list. &nbsp;Also of interest is the number of interruptions, internal and external. &nbsp;[Full disclosure... my pomodoro records have slipped. I need to pick up the habit again. Agilistas would call this "Pomodoro But..."]. One thing I find very appealing about the technique is that you can tweak your logging based on what you want to work on. 
&nbsp;Suppose you want to get better at handling interruptions promptly. You could use a different symbol, say a plus, for interruptions you were able to negotiate and resolve within a minute or so, and then track your plus to minus ratio over a few days until it got where you wanted it to be. Just as with Scrum retros you pick one thing you want to work on as a team in the next Sprint, you do the same thing, at a personal level, with pomodoro records.</li><li>I stopped using break time to deal with interruptions, instead moving them into separate, "cleanup" pomodori. &nbsp;Breaks are sacred. You need to keep the motor humming. &nbsp;I got to see the power of this recently in a large group meeting. &nbsp;My "sadness spidey sense" could see the energy slip out of the room as we worked through a very long task list, and I proposed we break for 10. We did. It was a different group that came back in. Ten people working at double or triple efficiency. &nbsp;Do the math.</li></ol><h4>Objections...</h4><div><ol><li>"But I'm in a flow!" Yes, and that may be good, or that may be bad. &nbsp;Coming back to tasks at regular intervals offers a constant stream of "hey we could just do this" moments that, IMHO, more than compensate for the interruption.&nbsp;</li><li>"Twenty-five minutes is too short." &nbsp;I hear that a lot, and then people often change their tune after they try it. &nbsp;It's a really good length for easing you into concentration at the start of your day, and it's a manageable length when you start getting tired towards the end of the day. &nbsp;You tell me you can go without looking at blogs or Twitter or Slack for 90 minutes on a regular basis? &nbsp;I'm not sure I believe you. &nbsp;The idea is to push yourself to do a quantum of total focus, then breathe. &nbsp;Do the reps, then recharge, then do the reps. &nbsp;But... 
if you really are constituted differently from the way I am, and can maintain absolute focus for 45 minutes, then make that your increment.</li><li>"My boss won't let me take breaks." Maybe not. &nbsp;Say you want to stand up and stretch every 30 minutes. You can probably get away with that. &nbsp;And if you produce a solid stream of focused pomodori in a day, you are going to get a lot done. &nbsp;I don't think this is going to be an issue.</li><li>"People expect immediate responses." &nbsp;Sometimes. &nbsp;Sometimes they just want a realistic prediction. "I'm in the middle of something, can I get back to you in 15 minutes?" is professional, respectful, and if the building is not burning down, usually acceptable. &nbsp;Sometimes it isn't. &nbsp;Then void the pomodoro and do what you need to do.</li><li>"There is no way I can go for 25 minutes without Twitter/Facebook/Linkedin/Vine/Twitch/Whatever." Hmmm. &nbsp;Maybe emulate Signore Cirillo, and start with 10 minutes. &nbsp;We all have to start somewhere.</li></ol><div>Sometimes, if you are not sleeping well, you look at the clock, and it says 11 PM. That's a good feeling--"I thought it was going to be 4 AM; still plenty of time left". &nbsp;You will feel exactly the same thing looking at your tomato timer, and saying, ah, still at 22 minutes, there's still plenty of time left.</div></div><div><br /></div><h4>To learn more..</h4><div><br /></div><div>Cirillo's <a href="http://pomodorotechnique.com/book/">book </a>is quite good. &nbsp;As is this <a href="http://www.amazon.com/Pomodoro-Technique-Illustrated-Pragmatic-Life/dp/1934356506">one</a>, which gets into some of the brain science that makes it work. And props to John Sonmez, who in this <a href="https://www.dotnetrocks.com/?show=980">interview</a> pointed out a key benefit of the technique as a boon to estimation ("you start thinking a blog post takes three pomodori"). Well, this one took 2.5... A little time left to proof it. ... 
tick, tick, tick,...DING!</div>http://www.dansolovay.com/2016/01/the-pomodoro-technique.htmlnoreply@blogger.com (Dan Solovay)2tag:blogger.com,1999:blog-5589343447323430312.post-5579677887605414267Wed, 06 Jan 2016 11:00:00 +00002016-01-06T06:00:12.858-05:00BooksJavaScriptProperties in JavaScriptContinuing my exploration of JavaScript, with Kyle Simpson's <a href="https://github.com/getify/You-Dont-Know-JS/tree/master/this%20&amp;%20object%20prototypes#you-dont-know-js-this--object-prototypes">this and Object Prototypes</a>&nbsp;as my guide, I'm going to look at some of the functionality introduced with ES5 to allow greater control over the behavior of object properties, which Simpson looks at in <a href="https://github.com/getify/You-Dont-Know-JS/blob/master/this%20&amp;%20object%20prototypes/ch3.md">Chapter 3</a> of his book.<br /><br /><a name='more'></a><br /><div><h4>Object Properties</h4><br />There are a number of ways of attaching a property "a" to an object "myObject":</div><div><br /></div><div>var myObject = {};</div><div>myObject.a = 2;</div><div><br /></div><div>or</div><div><br /></div><div>myObject["a"] = 2;</div><div><br /></div><div>or&nbsp;</div><div><br /></div><div>var myObject = {</div><div>&nbsp; a: 2</div><div>}</div><div><br /></div><div><h4>The Object.defineProperty function</h4></div><div><br /></div><div>Starting with EcmaScript5 (which is widely <a href="http://kangax.github.io/compat-table/es5/">supported </a>since the release of IE9), you can also add a property like this:</div><div><br /></div><div>var myObject = {};</div><div><br /></div><div>Object.defineProperty(myObject, "a", {</div><div>&nbsp; value: 2</div><div>});</div><div><br /></div><div>So why would you want to do that? Well, this adds a bit more control to how the property behaves. 
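One way to see that extra control is to compare property descriptors: a plain assignment defaults every flag to true, while Object.defineProperty defaults any unspecified flag to false. A quick sketch (the variable names here are just illustrative):

```javascript
var plain = {};
plain.a = 2; // ordinary assignment

var defined = {};
Object.defineProperty(defined, "a", { value: 2 }); // flags left unspecified

// Ordinary assignment: writable, enumerable, and configurable all default to true.
Object.getOwnPropertyDescriptor(plain, "a");
// { value: 2, writable: true, enumerable: true, configurable: true }

// defineProperty: unspecified flags default to false.
Object.getOwnPropertyDescriptor(defined, "a");
// { value: 2, writable: false, enumerable: false, configurable: false }
```

Object.getOwnPropertyDescriptor is a handy way to check what you actually got whenever a property behaves unexpectedly.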
&nbsp;(Note: A good way to run through these examples is with interactive "node", so you don't have to throw alerts or add console.log statements to inspect values, since myObject.a will print the value to the screen.)<br /><br />By default this property is read-only:<br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">myObject.a; // 2</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">myObject.a = 3; &nbsp;// Doesn't throw an error, but has no effect.</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">myObject.a; // 2</span><br /><br />You can make the property writable by specifying this:<br /><br /><span style="font-family: Courier New, Courier, monospace;">var myObject2 = {}</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">Object.defineProperty(myObject2, "a", {</span><br /><span style="font-family: Courier New, Courier, monospace;">&nbsp; value: 2,</span><br /><span style="font-family: Courier New, Courier, monospace;">&nbsp; writable: true</span><br /><span style="font-family: Courier New, Courier, monospace;">});</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">myObject2.a; //2</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">myObject2.a = 3;</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">myObject2.a; //3</span><br /><br />The read-only assignment above throws a TypeError only if &nbsp;"use strict"; (or --use-strict for the node REPL) 
has been called. &nbsp;Otherwise the assignment fails silently.<br /><br />Two other properties you can set are worth a look:<br /><br /><span style="font-family: Courier New, Courier, monospace;">configurable: [true/false]</span><br /><br />determines whether you can change these settings. &nbsp;It is a one-way street; you cannot change a property from configurable: false to configurable: true, because that would be changing its configuration. &nbsp;Interestingly, there is a loophole for "writable": you can make a field read-only after you've locked down configuration, but you cannot change it back.<br /><br /><span style="font-family: Courier New, Courier, monospace;">enumerable: [true/false]</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: inherit;">determines whether the property shows up in a statement like this:</span><br /><span style="font-family: inherit;"><br /></span><span style="font-family: Courier New, Courier, monospace;">for (var key in myObject) {...}</span><br /><br />It also appears to determine whether the property shows up on a REPL. &nbsp;If you type the name of an object with a non-enumerable property, it shows up as "{}", but an enumerable property will display: "{ a: 1 }".<br /><br />There are two additional functions available from Object, which allow you to lock down configuration for all properties at a stroke. &nbsp;<span style="font-family: Courier New, Courier, monospace;"><b>Object.seal(someObject) </b></span>sets configurable:false for all properties, and <span style="font-family: Courier New, Courier, monospace;"><b>Object.freeze(someObject)</b></span> does that and also makes the properties read-only, making the object immutable. 
<br /><br />For example, let's lock down a normal JavaScript property.<br />&gt;var car = {make: 'Honda'}<br />undefined<br />&gt;car<br />{ make: 'Honda' }<br />&gt;car.make = 'Chevy'<br />'Chevy'<br />&gt;car.make<br />'Chevy'<br />&gt;Object.freeze(car)<br />{ make: 'Chevy' }<br />&gt;car.make = 'Dodge'<br />'Dodge'<br />&gt;car.make<br />'Chevy'<br /><h4><span style="font-weight: normal;">As usual, the assignment fails silently unless in strict mode, in which case you get a TypeError when you try to assign to the (now) read-only property.</span></h4><h4><span style="font-family: inherit;"><br /></span><span style="font-family: inherit;">Getters and Setters</span></h4>Finally, there are formal getters and setters, much like in C#:<br /><br /><span style="font-family: Courier New, Courier, monospace;">var myObject = {</span><br /><span style="font-family: Courier New, Courier, monospace;">&nbsp; get a() { return 1; }</span><br /><span style="font-family: Courier New, Courier, monospace;">}</span><br /><br />On node, if you type myObject, you get this:<br />{ a: [Getter] }<br /><br />And typing myObject.a gets you this:<br />1<br /><br />So we now have a read-only property. 
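A getter does not have to return a constant; it can compute its value from other fields on each read. A small illustrative sketch (the object and property names are my own):

```javascript
var temps = {
  celsius: 25,
  // Computed on every access, so it always reflects the current celsius value.
  get fahrenheit() { return this.celsius * 9 / 5 + 32; }
};

temps.fahrenheit;   // 77
temps.celsius = 30;
temps.fahrenheit;   // 86
```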
&nbsp;We can add a setter as well:<br /><br /><span style="font-family: Courier New, Courier, monospace;">var myObject = {</span><br /><span style="font-family: Courier New, Courier, monospace;">&nbsp; set a(value) { this._a_ = value; },</span><br /><span style="font-family: Courier New, Courier, monospace;">&nbsp; get a() { return this._a_; }</span><br /><span style="font-family: Courier New, Courier, monospace;">}</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">&gt;myObject</span><br /><span style="font-family: Courier New, Courier, monospace;">{ a: [Getter/Setter] }</span><br /><span style="font-family: Courier New, Courier, monospace;">&gt;myObject.a = 1;</span><br /><span style="font-family: Courier New, Courier, monospace;">1</span><br /><span style="font-family: Courier New, Courier, monospace;">&gt;myObject</span><br /><span style="font-family: Courier New, Courier, monospace;">{ a: [Getter/Setter], _a_: 1 }</span><br /><span style="font-family: Courier New, Courier, monospace;">&gt;myObject.a</span><br /><span style="font-family: Courier New, Courier, monospace;">1</span><br /><br />So now we can do all kinds of settery things, like constrain values to an acceptable range, or log changes.<br /><br /><h4>Conclusions</h4>All this stuff may be widely known in the JavaScript community, but for me as a C# guy whose JS knowledge has been limited to peeking in JQuery documentation from time to time, reading about this functionality has been a revelation. I had no idea this was there. It's interesting how the built-in "Object" function is the mechanism for introducing this functionality, and it is clear that this is functionality I will want to make use of. &nbsp;It's very appealing to me how these methods allow you to combine the fluidity of JavaScript object manipulation with the rigors of strongly typed (C#) and functional (F#) languages. 
&nbsp;There are a lot of possibilities here.</div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><br />http://www.dansolovay.com/2016/01/properties-in-javascript.htmlnoreply@blogger.com (Dan Solovay)0tag:blogger.com,1999:blog-5589343447323430312.post-6144924313085702883Wed, 30 Dec 2015 11:00:00 +00002015-12-31T07:49:52.142-05:00GitInteractive Rebasing in GitThis post is a quick look at one of my favorite features in Git, interactive rebases. &nbsp;I like this feature because it lets you do two conflicting things: make micro-commits (like saving every couple of minutes when editing a Word doc), so you can replay your work and always go back to a working state of your code, and make clean, well-worded, self-contained commits to a project repo. &nbsp;Interactive rebasing lets you squish your commits together when you are ready to share them.<br /><br /><a name='more'></a>First a word of warning. Rebasing changes history, and should not be done on work that has been shared with others. But that doesn't mean it shouldn't be done. &nbsp;It's like proofing your work, and making it clean before you share it. &nbsp;Good professional practice. <br /><br />And a technical point, especially for Windows users. &nbsp;This command uses a text editor as a point of interaction. The Notepad editor that ships with Windows 7 and earlier (and maybe later ones, I haven't checked) garbles the newlines that the Git rebase engine produces, because Git uses Unix-style newlines. Most recent editors (Notepad++, Notepad2, VS Code) can handle this, and if you use Git Extensions (which I recommend), then your Git settings will be set up to use the Git Extensions editor. 
&nbsp;You can check whether you have an editor set up by typing:<br /><br />&gt;git config --global core.editor<br /><br />Since I have Git Extensions installed, I get back this:<br />"C:/Program Files (x86)/GitExtensions/GitExtensions.exe" fileeditor<br /><br />If you want to use Notepad++, for example, you can type in this (options included to make this function as a standalone interaction point, courtesy of this answer on Stack Overflow:&nbsp;<a href="http://stackoverflow.com/a/2486342/402949">http://stackoverflow.com/a/2486342/402949</a>, also see&nbsp;<a href="http://docs.notepad-plus-plus.org/index.php/Command_Line_Switches">http://docs.notepad-plus-plus.org/index.php/Command_Line_Switches</a>)<br /><br />&gt;git config --global core.editor "'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -noplugin -nosession -notabbar"<br /><br />Now let's get a feel for using interactive rebase. Let's create a repo:<br /><br />&gt;git init testing<br /><br />And let's add a file:<br /><br />&gt;echo file 1 content &gt; file1.txt<br /><br />And let's add that to the repo and commit it.<br /><br />&gt;git add file1.txt<br /><br />&gt;git commit -m "Add file1.txt"<br /><br />Now let's create another couple of commits....<br /><br />&gt;echo file 2 content&gt;file2.txt<br />&gt;git add file2.txt<br />&gt;git commit -m "Add file2.txt"<br /><br />&gt;echo file 3 content&gt;file3.txt<br />&gt;git add file3.txt<br />&gt;git commit -m "Add file3.txt"<br /><br />Okay, if you type git log now, you should see something like this:<br /><span style="font-family: Courier New, Courier, monospace;">C:\Users\Dan.Solovay\testing&gt;git log</span><br /><span style="font-family: Courier New, Courier, monospace;">commit 1462505217add0edfe0451a2f608cbcf72cfbda2</span><br /><span style="font-family: Courier New, Courier, monospace;">Author: dsolovay &lt;dsolovay@gmail.com&gt;</span><br /><span style="font-family: Courier New, Courier, monospace;">Date: &nbsp; Tue Dec 29 18:00:48 2015 
-0500</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">&nbsp; &nbsp; Add file3.txt</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">commit 3ad7253ba59e60642af2b0610a9e40c6af4b9f60</span><br /><span style="font-family: Courier New, Courier, monospace;">Author: dsolovay &lt;dsolovay@gmail.com&gt;</span><br /><span style="font-family: Courier New, Courier, monospace;">Date: &nbsp; Tue Dec 29 17:59:21 2015 -0500</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">&nbsp; &nbsp; Add file2.txt</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">commit 78f189fe8f38d80a58ce061e7b43a810674c55df</span><br /><span style="font-family: Courier New, Courier, monospace;">Author: dsolovay &lt;dsolovay@gmail.com&gt;</span><br /><span style="font-family: Courier New, Courier, monospace;">Date: &nbsp; Tue Dec 29 17:58:32 2015 -0500</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">&nbsp; &nbsp; Add file1.txt</span><br /><br />And let's use a more arcane command, but a really good one to know about...<br /><br />&gt;git reflog<br /><span style="font-family: Courier New, Courier, monospace;">1462505 HEAD@{0}: commit: Add file3.txt</span><br /><span style="font-family: Courier New, Courier, monospace;">3ad7253 HEAD@{1}: commit: Add file2.txt</span><br /><span style="font-family: Courier New, Courier, monospace;">78f189f HEAD@{2}: commit (initial): Add file1.txt</span><br /><br />Pretty much the same information, presented a little differently. That will soon change.<br /><br />Let's kick off an interactive rebase. 
&nbsp;The nature of the rebase command is that you rebase <b>onto</b>&nbsp;a commit, so we have to leave the initial commit alone, as our building block. &nbsp;We can launch the interactive rebase two ways:<br /><br />git rebase&nbsp;<span style="font-family: 'Courier New', Courier, monospace;">78f189f&nbsp;</span>-i &nbsp; &nbsp;(You will need to change this to your initial commit.)<br /><br />Or this:<br /><br />git rebase "HEAD^^" -i<br /><br />HEAD means most recent commit, HEAD^ is its parent, and HEAD^^ is thus your initial commit. &nbsp;But... the caret character is the escape character in the Windows command prompt, so you need to put "HEAD^^" in quotes or you will get a "More?" prompt. &nbsp;You need to remember this when you read Git documentation and use it on Windows (cmd or PowerShell).<br /><br />Either command should cause Notepad++ to open, with this display:<br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">pick 3ad7253 Add file2.txt</span><br /><span style="font-family: Courier New, Courier, monospace;">pick 1462505 Add file3.txt</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;"># Rebase 78f189f..1462505 onto 78f189f</span><br /><span style="font-family: Courier New, Courier, monospace;">#</span><br /><span style="font-family: Courier New, Courier, monospace;"># Commands:</span><br /><span style="font-family: Courier New, Courier, monospace;"># &nbsp;p, pick = use commit</span><br /><span style="font-family: Courier New, Courier, monospace;"># &nbsp;r, reword = use commit, but edit the commit message</span><br /><span style="font-family: Courier New, Courier, monospace;"># &nbsp;e, edit = use commit, but stop for amending</span><br /><span style="font-family: Courier New, Courier, monospace;"># &nbsp;s, squash = use commit, but meld into previous commit</span><br /><span 
style="font-family: Courier New, Courier, monospace;"># &nbsp;f, fixup = like "squash", but discard this commit's log message</span><br /><span style="font-family: Courier New, Courier, monospace;"># &nbsp;x, exec = run command (the rest of the line) using shell</span><br /><span style="font-family: Courier New, Courier, monospace;">#</span><br /><span style="font-family: Courier New, Courier, monospace;"># These lines can be re-ordered; they are executed from top to bottom.</span><br /><span style="font-family: Courier New, Courier, monospace;">#</span><br /><span style="font-family: Courier New, Courier, monospace;"># If you remove a line here THAT COMMIT WILL BE LOST.</span><br /><span style="font-family: Courier New, Courier, monospace;">#</span><br /><span style="font-family: Courier New, Courier, monospace;"># However, if you remove everything, the rebase will be aborted.</span><br /><span style="font-family: Courier New, Courier, monospace;">#</span><br /><span style="font-family: Courier New, Courier, monospace;"># Note that empty commits are commented out</span><br /><div><br /></div>Let's edit the file, by switching the order of the commits, like this:<br /><br /><span style="font-family: Courier New, Courier, monospace;"><b>pick 1462505 Add file3.txt</b></span><br /><span style="font-family: Courier New, Courier, monospace;"><b>pick 3ad7253 Add file2.txt</b></span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;"># Rebase 78f189f..1462505 onto 78f189f</span><br /><span style="font-family: Courier New, Courier, monospace;">#</span><br /><span style="font-family: Courier New, Courier, monospace;"># Commands:</span><br /><span style="font-family: Courier New, Courier, monospace;"># &nbsp;p, pick = use commit</span><br /><span style="font-family: Courier New, Courier, monospace;"># &nbsp;r, reword = 
use commit, but edit the commit message</span><br /><span style="font-family: Courier New, Courier, monospace;"># &nbsp;e, edit = use commit, but stop for amending</span><br /><span style="font-family: Courier New, Courier, monospace;"># &nbsp;s, squash = use commit, but meld into previous commit</span><br /><span style="font-family: Courier New, Courier, monospace;"># &nbsp;f, fixup = like "squash", but discard this commit's log message</span><br /><span style="font-family: Courier New, Courier, monospace;"># &nbsp;x, exec = run command (the rest of the line) using shell</span><br /><span style="font-family: Courier New, Courier, monospace;">#</span><br /><span style="font-family: Courier New, Courier, monospace;"># These lines can be re-ordered; they are executed from top to bottom.</span><br /><span style="font-family: Courier New, Courier, monospace;">#</span><br /><span style="font-family: Courier New, Courier, monospace;"># If you remove a line here THAT COMMIT WILL BE LOST.</span><br /><span style="font-family: Courier New, Courier, monospace;">#</span><br /><span style="font-family: Courier New, Courier, monospace;"># However, if you remove everything, the rebase will be aborted.</span><br /><span style="font-family: Courier New, Courier, monospace;">#</span><br /><span style="font-family: Courier New, Courier, monospace;"># Note that empty commits are commented out</span><br /><div><br /></div>You should see this message:<br /><span style="font-family: Courier New, Courier, monospace;">Successfully rebased and updated refs/heads/master.</span><br /><br />If you don't, you can always get back to your pre rebase state with this command: git rebase --abort<br /><br />Now lets look at the log:<br /><span style="font-family: Courier New, Courier, monospace;">&gt;git log</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">commit 
61a380892db71cc02db305492867830059c2df22</span><br /><span style="font-family: Courier New, Courier, monospace;">Author: dsolovay &lt;dsolovay@gmail.com&gt;</span><br /><span style="font-family: Courier New, Courier, monospace;">Date: &nbsp; Tue Dec 29 17:59:21 2015 -0500</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">&nbsp; &nbsp; Add file2.txt</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">commit 527a0483703d281628e7f847aec7ecf7ef6babbe</span><br /><span style="font-family: Courier New, Courier, monospace;">Author: dsolovay &lt;dsolovay@gmail.com&gt;</span><br /><span style="font-family: Courier New, Courier, monospace;">Date: &nbsp; Tue Dec 29 18:00:48 2015 -0500</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">&nbsp; &nbsp; Add file3.txt</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">commit 78f189fe8f38d80a58ce061e7b43a810674c55df</span><br /><span style="font-family: Courier New, Courier, monospace;">Author: dsolovay &lt;dsolovay@gmail.com&gt;</span><br /><span style="font-family: Courier New, Courier, monospace;">Date: &nbsp; Tue Dec 29 17:58:32 2015 -0500</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">&nbsp; &nbsp; Add file1.txt</span><br /><div><br /></div>You've just rewritten history. 
&nbsp;You can see your steps with<br /><span style="font-family: Courier New, Courier, monospace;">&gt;git reflog</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">61a3808 HEAD@{0}: rebase -i (finish): returning to refs/heads/master</span><br /><span style="font-family: Courier New, Courier, monospace;">61a3808 HEAD@{1}: rebase -i (pick): Add file2.txt</span><br /><span style="font-family: Courier New, Courier, monospace;">527a048 HEAD@{2}: rebase -i (pick): Add file3.txt</span><br /><span style="font-family: Courier New, Courier, monospace;">78f189f HEAD@{3}: rebase -i (start): checkout HEAD^^</span><br /><span style="font-family: Courier New, Courier, monospace;">1462505 HEAD@{4}: rebase -i (finish): returning to refs/heads/master</span><br /><span style="font-family: Courier New, Courier, monospace;">1462505 HEAD@{5}: rebase -i (start): checkout HEAD^^</span><br /><span style="font-family: Courier New, Courier, monospace;">1462505 HEAD@{6}: commit: Add file3.txt</span><br /><span style="font-family: Courier New, Courier, monospace;">3ad7253 HEAD@{7}: commit: Add file2.txt</span><br /><span style="font-family: Courier New, Courier, monospace;">78f189f HEAD@{8}: commit (initial): Add file1.txt</span><br /><br />Actually, you won't have HEAD@{4} and HEAD@{5}, since those are only there because I forgot to save the first time I tried this. Reflog remembers everything, at least for 90 days. This makes it a very good place to look if you ever can't find a commit (e.g. 
due to a reset error).<br /><br />Note the distinction between HEAD^^ (two commits back on this branch) and HEAD@{2} (two commits back in the reflog of this repo on this machine).<br /><br />We can get back to our pre-rebase state by putting this in another branch:<br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">git branch the-old-master 1462505</span><br /><span style="font-family: Courier New, Courier, monospace;">git checkout the-old-master</span><br /><span style="font-family: Courier New, Courier, monospace;">git log</span><br />Now you will see the original history.<br /><br />Or, you can create a branch based on the state after you added file3.txt, and before you added file2.txt. Of course, that moment never actually happened, but you photoshopped it into reality with the rebase:<br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">git checkout master</span><br /><span style="font-family: Courier New, Courier, monospace;">git branch fake_moment "HEAD^"</span><br /><span style="font-family: Courier New, Courier, monospace;">git checkout fake_moment</span><br /><span style="font-family: Courier New, Courier, monospace;">dir</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">12/29/2015 &nbsp;06:34 PM &nbsp; &nbsp;&lt;DIR&gt; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;.</span><br /><span style="font-family: Courier New, Courier, monospace;">12/29/2015 &nbsp;06:34 PM &nbsp; &nbsp;&lt;DIR&gt; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;..</span><br /><span style="font-family: Courier New, Courier, monospace;">12/29/2015 &nbsp;05:58 PM &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;14 file1.txt</span><br /><span style="font-family: Courier New, Courier, monospace;">12/29/2015 &nbsp;06:33 PM &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 
&nbsp; &nbsp; &nbsp;16 file3.txt</span><br /><span style="font-family: Courier New, Courier, monospace;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;2 File(s) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 30 bytes</span><br /><br />And you can go back to how things were by going back to master:<br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">git checkout master</span><br /><span style="font-family: Courier New, Courier, monospace;">dir</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">12/29/2015 &nbsp;06:36 PM &nbsp; &nbsp;&lt;DIR&gt; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;.</span><br /><span style="font-family: Courier New, Courier, monospace;">12/29/2015 &nbsp;06:36 PM &nbsp; &nbsp;&lt;DIR&gt; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;..</span><br /><span style="font-family: Courier New, Courier, monospace;">12/29/2015 &nbsp;05:58 PM &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;14 file1.txt</span><br /><span style="font-family: Courier New, Courier, monospace;">12/29/2015 &nbsp;06:36 PM &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;16 file2.txt</span><br /><span style="font-family: Courier New, Courier, monospace;">12/29/2015 &nbsp;06:33 PM &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;16 file3.txt</span><br /><span style="font-family: Courier New, Courier, monospace;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;3 File(s) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 46 bytes</span><br /><br /><span style="font-family: inherit;">Okay, let's do a little more. 
Let's combine the last two commits:</span><br /><span style="font-family: inherit;"><br /></span><span style="font-family: Courier New, Courier, monospace;">git rebase -i "HEAD^^"</span><br /><span style="font-family: inherit;"><br /></span><span style="font-family: inherit;">And let's change the second commit to a "squash":</span><br /><br /><span style="font-family: Courier New, Courier, monospace;">pick 527a048 Add file3.txt</span><br /><span style="font-family: Courier New, Courier, monospace;">s 61a3808 Add file2.txt</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;"># Rebase 78f189f..61a3808 onto 78f189f</span><br /><span style="font-family: Courier New, Courier, monospace;">#</span><br /><span style="font-family: Courier New, Courier, monospace;"># Commands:</span><br /><span style="font-family: Courier New, Courier, monospace;"># &nbsp;p, pick = use commit</span><br /><span style="font-family: Courier New, Courier, monospace;"># &nbsp;r, reword = use commit, but edit the commit message</span><br /><span style="font-family: Courier New, Courier, monospace;"># &nbsp;e, edit = use commit, but stop for amending</span><br /><span style="font-family: Courier New, Courier, monospace;"># &nbsp;s, squash = use commit, but meld into previous commit</span><br /><span style="font-family: Courier New, Courier, monospace;"># &nbsp;f, fixup = like "squash", but discard this commit's log message</span><br /><span style="font-family: Courier New, Courier, monospace;"># &nbsp;x, exec = run command (the rest of the line) using shell</span><br /><span style="font-family: Courier New, Courier, monospace;">#</span><br /><span style="font-family: Courier New, Courier, monospace;"># These lines can be re-ordered; they are executed from top to bottom.</span><br /><span style="font-family: Courier New, Courier, monospace;">#</span><br /><span style="font-family: Courier New, Courier, 
monospace;"># If you remove a line here THAT COMMIT WILL BE LOST.</span><br /><span style="font-family: Courier New, Courier, monospace;">#</span><br /><span style="font-family: Courier New, Courier, monospace;"># However, if you remove everything, the rebase will be aborted.</span><br /><span style="font-family: Courier New, Courier, monospace;">#</span><br /><br /><span style="font-family: Courier New, Courier, monospace;"># Note that empty commits are commented out</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: inherit;">You'll get a chance to edit the combined message:</span><br /><span style="font-family: inherit;"><br /></span><span style="font-family: Courier New, Courier, monospace;"># This is a combination of 2 commits.</span><br /><span style="font-family: Courier New, Courier, monospace;"># The first commit's message is:</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">Add file3.txt</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;"># This is the 2nd commit message:</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">Add file2.txt</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;"># Please enter the commit message for your changes. 
Lines starting</span><br /><span style="font-family: Courier New, Courier, monospace;"># with '#' will be ignored, and an empty message aborts the commit.</span><br /><span style="font-family: Courier New, Courier, monospace;"># rebase in progress; onto 78f189f</span><br /><span style="font-family: Courier New, Courier, monospace;"># You are currently editing a commit while rebasing branch 'master' on '78f189f'.</span><br /><span style="font-family: Courier New, Courier, monospace;">#</span><br /><span style="font-family: Courier New, Courier, monospace;"># Changes to be committed:</span><br /><span style="font-family: Courier New, Courier, monospace;">#<span class="Apple-tab-span" style="white-space: pre;"> </span>new file: &nbsp; file2.txt</span><br /><span style="font-family: Courier New, Courier, monospace;">#<span class="Apple-tab-span" style="white-space: pre;"> </span>new file: &nbsp; file3.txt</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">#</span><br /><div><br /></div><div>Let's accept by saving and closing. 
&nbsp;We can inspect the most recent commit with git show and see the combined message and the combined edits:</div><div><span style="font-family: Courier New, Courier, monospace;"><br /></span></div><div><span style="font-family: Courier New, Courier, monospace;">&gt;git show</span></div><div><span style="font-family: Courier New, Courier, monospace;">commit 424c80411552e87d1af316e06977e681bdec92a8</span></div><div><div><span style="font-family: Courier New, Courier, monospace;">Author: dsolovay &lt;dsolovay@gmail.com&gt;</span></div><div><span style="font-family: Courier New, Courier, monospace;">Date: &nbsp; Tue Dec 29 18:00:48 2015 -0500</span></div><div><span style="font-family: Courier New, Courier, monospace;"><br /></span></div><div><span style="font-family: Courier New, Courier, monospace;">&nbsp; &nbsp; Add file3.txt</span></div><div><span style="font-family: Courier New, Courier, monospace;"><br /></span></div><div><span style="font-family: Courier New, Courier, monospace;">&nbsp; &nbsp; Add file2.txt</span></div><div><span style="font-family: Courier New, Courier, monospace;"><br /></span></div><div><span style="font-family: Courier New, Courier, monospace;">commit 78f189fe8f38d80a58ce061e7b43a810674c55df</span></div><div><span style="font-family: Courier New, Courier, monospace;">Author: dsolovay &lt;dsolovay@gmail.com&gt;</span></div><div><span style="font-family: Courier New, Courier, monospace;">Date: &nbsp; Tue Dec 29 17:58:32 2015 -0500</span></div><div><span style="font-family: Courier New, Courier, monospace;"><br /></span></div><div><span style="font-family: Courier New, Courier, monospace;">&nbsp; &nbsp; Add file1.txt</span></div></div><div><br /></div><div>A few other things you can do:</div><div><br /></div><div>"f" works just like "s", but doesn't change the first commit's message, and doesn't give you a chance to edit it. 
You can also simply delete a commit's line to remove it from the branch history (including whatever file system changes it introduced).</div><div><br /></div><div><span style="font-family: inherit;">This may sound cumbersome, but it really gets to be a groove, and allows you to contribute well-crafted commits with very little effort. Running interactive rebase starts to feel like giving an email a quick read before hitting "Send". &nbsp;Here is a commit of mine that was originally about 10 or so micro commits:&nbsp;<a href="https://github.com/dsolovay/hexo-migrator-rss/commit/5c44479c5027b9f45f39ec188c354df0baecc650">https://github.com/dsolovay/hexo-migrator-rss/commit/5c44479c5027b9f45f39ec188c354df0baecc650</a>, adding tests to a Node.js module. &nbsp;Having the commits grouped together with a detailed commit message makes it much easier for the person reviewing the pull request to follow.</div><br />And even if you never use this technique in your day-to-day coding, I recommend stepping through it at least once or twice with a test repo. It really helps anchor the concepts of local commits vs. upstream commits, and gives you a feeling for the power and flexibility that git offers you as an author of code.<br /><br />http://www.dansolovay.com/2015/12/interactive-rebasing-in-git.htmlnoreply@blogger.com (Dan Solovay)0tag:blogger.com,1999:blog-5589343447323430312.post-6559401290194403543Wed, 23 Dec 2015 11:00:00 +00002015-12-23T06:00:01.433-05:00Developer ToolsGitA Look at Git HashesGit is a locally stored database with integrity guarantees. 
To get &nbsp;a feel for how this works, this post takes a look at a very simple repository using the commands <span style="font-family: Courier New, Courier, monospace;">ha<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">sh-object</span></span> and <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">cat-file.</span><span style="font-family: inherit;">&nbsp;Although these are not commands you would normally use (they are among the "plumbing commands" that lie behind the "porcelain commands" like </span><span style="font-family: Courier New, Courier, monospace;">clone </span><span style="font-family: inherit;">and </span><span style="font-family: Courier New, Courier, monospace;">commit</span><span style="font-family: inherit;">), they are very helpful for inspecting the objects that a Git repository manages.</span><br /><br /><a name='more'></a><br />Let's create a Git repository. On a command line, type:<br /><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&gt;git init test</span><br /><br />You should see something like:<br /><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">Initialized empty Git repository at C:/Some/Path/test/.git</span><br /><br />It's interesting to note that the response refers to the .git directory as the repository. &nbsp;The parent directory, "test", is called the "working directory" in Git documentation. It is tracked by Git, but it is not the repository; it's what the repository tracks and controls. 
&nbsp;Of course, on a non distributed version control system, the repository would reside in an external location.<br /><br />Here are the contents of an empty .git repository:<br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span> <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">12/21/2015 &nbsp;06:06 AM &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 157 config</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">12/21/2015 &nbsp;06:06 AM &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;73 description</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">12/21/2015 &nbsp;06:06 AM &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;23 HEAD</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">12/21/2015 &nbsp;06:06 AM &nbsp; &nbsp;&lt;DIR&gt; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;hooks</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">12/21/2015 &nbsp;06:06 AM &nbsp; &nbsp;&lt;DIR&gt; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;info</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">12/21/2015 &nbsp;06:06 AM &nbsp; &nbsp;&lt;DIR&gt; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;objects</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">12/21/2015 &nbsp;06:06 AM &nbsp; &nbsp;&lt;DIR&gt; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;refs</span><br /><br />Let's look at how these files and directories make Git work. 
&nbsp;The file HEAD contains text that shows what the current branch is:<br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span> <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&gt;type .git\HEAD</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">ref: refs/heads/master</span><br /><br />These 23 bytes of ASCII text are how Git knows what &nbsp;you are working on. &nbsp;The "refs/heads/master" is also a text file. Or will be as soon as you make a commit. &nbsp;It contains 40 alphanumeric characters, the hash value of the current commit. &nbsp;When you switch to a new branch in Git, it does two very simple things: (1) it creates a new file in "refs\heads\" with the name of your new branch, and with the 40 character identifier of the current commit, (2) it updates HEAD to "ref: refs/heads/&lt;your-new-branch&gt;". &nbsp;Contrast this to SVN or TFS, where creating branches is a much heavier operation, causing hierarchies to get created on other servers. &nbsp;On Git, branching is just creating a 40 byte file. &nbsp;Which is pretty fast.<br /><br />The real meat of the repository is the "objects" directory, where Git stores the objects it tracks. &nbsp;Let's put some things in there.<br /><br /><span style="font-family: Courier New, Courier, monospace;">&gt;copy con droid1.txt</span><br /><span style="font-family: Courier New, Courier, monospace;">R2-D2</span><br /><span style="font-family: Courier New, Courier, monospace;">^Z</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">&gt;copy con droid2.txt</span><br /><span style="font-family: Courier New, Courier, monospace;">C-3PO</span><br /><span style="font-family: Courier New, Courier, monospace;">^Z</span><br /><br />So now we have two files. 
&nbsp;If Git is going to manage these files, it will need a way to track their contents, to refer to the contents, and to track changes. &nbsp;Git uses <a href="https://en.wikipedia.org/wiki/SHA-1">SHA1</a>&nbsp;hashes to do this. &nbsp;They are fast to generate, and they are effectively <a href="http://stackoverflow.com/questions/10434326/hash-collision-in-git/23253149#23253149">unique</a>.<br /><br />You can compute the SHA1 hash value for any file (even one not tracked by Git) using <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">git hash-object.</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span> <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&gt;git hash-object droid1.txt</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">668b9c33030c59db9c0f11f777029cc3fc0fdaf1</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span> <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&gt;git hash-object droid2.txt</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">27c825d7d5393f79c5b14cf0dd719e3dbb391c4e</span><br /><br />This version of the command just outputs the 40 character value, but you can also use the "-w" option to store it in the "objects" directory. 
&nbsp;So after<br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span> <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&gt;git hash-object droid1.txt -w&nbsp;</span><br /><br />Git splits the hash into a two character directory name and a 38 character file name, so you would now find a directory named ".git\objects\66" with a file named "<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">8b9c33030c59db9c0f11f777029cc3fc0fdaf1"</span><span style="font-family: inherit;">, which would contain a compressed version of the file (with a very short header stating it was a "blob", and how many bytes long it is). &nbsp;Of course, you normally would not use "hash-object" to do this, you would use "git add" or a GUI tool. &nbsp;But this is what Git uses behind the scenes.</span><br /><span style="font-family: inherit;"><br /></span> <span style="font-family: inherit;">Let's go back to normal Git commands, and add these files to Git.</span><br /><span style="font-family: Courier New, Courier, monospace;"><span style="font-family: inherit;"><br /></span> <span style="font-family: inherit;">&gt;git add *</span></span><br /><span style="font-family: Courier New, Courier, monospace;">&gt;git commit -m "These are not the droids you're looking for"</span><br /><span style="font-family: Courier New, Courier, monospace;">[master (root-commit) 914ebae] These are not the droids you're looking for</span><br /><span style="font-family: Courier New, Courier, monospace;">&nbsp;2 files changed, 2 insertions(+)</span><br /><span style="font-family: Courier New, Courier, monospace;">&nbsp;create mode 100644 droid1.txt</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">&nbsp;create mode 100644 droid2.txt</span><br /><br />Up to now, the hashes in this post should match yours, but 
this will not be true with this commit hash, because it contains my name, email address, and the current date. &nbsp;So Git uses one hash to guarantee the contents of a file, and a different hash to guarantee that I was the person who added them to the repository.<br /><br />Let's take a look at the artifacts created by this commit.<br /><br /><span style="font-family: Courier New, Courier, monospace;">&gt;type .git\HEAD</span><br /><span style="font-family: Courier New, Courier, monospace;">ref: refs/heads/master</span><br /><br />No change there. I am still on the "master" branch.<br /><br /><span style="font-family: Courier New, Courier, monospace;">&gt;type .git\refs\heads\master</span><br /><span style="font-family: Courier New, Courier, monospace;">914ebae549d6f4070184c7db9e1ddbaaf80e1d3b</span><br /><span style="font-family: inherit;"><br /></span> Ah, now there is a hash value. &nbsp;A more user-friendly way of seeing this is the git log command:<br /><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&gt;git log</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">commit 914ebae549d6f4070184c7db9e1ddbaaf80e1d3b</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">Author: dsolovay &lt;dsolovay@gmail.com&gt;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">Date: &nbsp; Mon Dec 21 06:54:17 2015 -0500</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span> <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; &nbsp; These are not the droids you're looking for</span><br /><br />But what does the commit contain? 
We can use another git plumbing command, git cat-file, to inspect it:<br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span> <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&gt;git cat-file -p 914e</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">tree 9b8c255bff7beb1440cc726ebe3346816dc04d67</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">author dsolovay &lt;dsolovay@gmail.com&gt; 1450698857 -0500</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">committer dsolovay &lt;dsolovay@gmail.com&gt; 1450698857 -0500</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span> <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">These are not the droids you're looking for</span><br /><div><br /></div>A few things to note. &nbsp;I typed "914e" instead of the full hash; Git allows these short cuts. &nbsp;The value of the hash is generated from just the text above: the "tree" (we will get to that next), the author and the committer (these will be different if someone is merging in someone else's work), the time stamp for each, and the commit message. Change any of these, and you change the hash value, so the hash guarantees the contents. If this were not an initial commit, the commit would also contain the hash of the parent, thus including its ancestry as part of the SHA1 hash guarantee.<br /><br />You can see the power of this guarantee when you fire up, for example, a mongod instance. &nbsp;The hash of the git commit of the version getting used appears in the output. 
&nbsp;This shows (sorry) that this <b>is </b>the mongod you're looking for:<br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace; font-size: x-small;"><br /></span><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace; font-size: x-small;">2015-12-21T07:12:33.466-0500 Hotfix KB2731284 or later update is not installed,</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace; font-size: x-small;">will zero-out data files</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace; font-size: x-small;">2015-12-21T07:12:33.472-0500 [initandlisten] MongoDB starting : pid=8748 port=27</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace; font-size: x-small;">017 dbpath=\data\db\ 64-bit host=DanSolovay-PC</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace; font-size: x-small;">2015-12-21T07:12:33.472-0500 [initandlisten] targetMinOS: Windows 7/Windows Serv</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace; font-size: x-small;">er 2008 R2</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace; font-size: x-small;">2015-12-21T07:12:33.473-0500 [initandlisten] db version v2.6.11</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace; font-size: x-small;">2015-12-21T07:12:33.473-0500 [initandlisten] git version: d00c1735675c457f75a12d</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace; font-size: x-small;">530bee85421f0c5548</span><br /><br />And now let's inspect the tree that the commit referred to:<br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , 
monospace;">&gt;git cat-file -p 9b8c</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">100644 blob 668b9c33030c59db9c0f11f777029cc3fc0fdaf1 &nbsp; &nbsp;droid1.txt</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">100644 blob 27c825d7d5393f79c5b14cf0dd719e3dbb391c4e &nbsp; &nbsp;droid2.txt</span><br /><div><br /></div>Again, a hash of a little bit of text. The 100644 is a little bit of UNIX-ese saying this is a non-executable file, "blob" means file (the other objects Git stores are commits, trees, and tags), and then we have the hash of the contents, and the name of the file. &nbsp;So if we renamed the file, the hash wouldn't change, but the name would. &nbsp;It's important to note that the name or location of the directory is not stored in this contents, so if we were to move the directory to a new location (say to test\resources\droids) the hash of the tree object would still be&nbsp;<span style="font-family: 'courier new', courier, monospace;">9b8c255bff7beb1440cc726ebe3346816dc04d67.</span><span style="font-family: inherit;">&nbsp;In fact, if you do these steps locally</span>, your tree should also be named&nbsp;<span style="font-family: 'courier new', courier, monospace;">9b8c255bff7beb1440cc726ebe3346816dc04d67</span>. 
&nbsp;Any directory, on any computer, in any universe, at any time in history, that&nbsp;has two files named droid1.txt and droid2.txt with the contents "R2-D2" and "C-3PO" will have the hash <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">9b8c255bff7beb1440cc726ebe3346816dc04d67.</span><span style="font-family: inherit;"> Guaranteed.</span><br /><span style="font-family: inherit;"><br /></span> <span style="font-family: inherit;">If you found this exploration entertaining, I highly recommend <a href="https://git-scm.com/book/en/v2/Git-Internals-Git-Objects">Chapter 10</a>&nbsp;of Pro Git, which builds a commit by hand. Just a note that there is a Ruby script at the end of section 2 that does not format correctly on the website, so if you want to walk through that, please go to the <a href="https://progit2.s3.amazonaws.com/en/2015-12-01-4cfce/progit-en.935.pdf">PDF </a>version. &nbsp;And if you would like a deeper dive into the .git directory, <a href="http://gitready.com/advanced/2009/03/23/whats-inside-your-git-directory.html">this blog post</a> is very useful.&nbsp;</span><br /><br />http://www.dansolovay.com/2015/12/a-look-at-git-hashes.htmlnoreply@blogger.com (Dan Solovay)0tag:blogger.com,1999:blog-5589343447323430312.post-8694744712381684078Wed, 16 Dec 2015 11:00:00 +00002015-12-16T06:00:00.738-05:00JavaScriptLooking at 'this' in JavaScriptJavaScript looks a lot like C#. But the internal plumbing is utterly different. I've always found this somewhat befuddling. As a C# developer, you kind of think you know how things are working, but often you don't. Which makes figuring out what's happening when things are not working really challenging. 
I've just started reading a book in Kyle Simpson's (<a href="http://www.twitter.com/getify">@getify</a>) accurately titled series, You Don't Know JavaScript, a slim monograph on <a href="https://github.com/getify/You-Dont-Know-JS/tree/master/this%20%26%20object%20prototypes">this &amp; Object Prototypes</a>, and things are kind of clearer. Kind of.<br /><br /><a name='more'></a><br />So the following bit of C# and JavaScript look a lot alike:<br /><br />C#:<br /><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">public class Car {</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; private string make;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; private string model;&nbsp;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; private int year;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; public Car(string make, string model, int year) {</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; &nbsp; this.make = make;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; &nbsp; this.model = model;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; &nbsp; this.year = year;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp;}</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; public override string ToString() {</span><br /><span style="font-family: 
&quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; &nbsp; return string.Format("Car: {0} {1} {2}", make, model, year);</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; }</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">}</span><br /><br />JavaScript:<br /><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">function Car(make, model, year) {</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; this.make = make;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; this.model = model;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; this.year = year;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">}</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">Car.prototype.toString = function () {</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; return "Car: " + this.make + " " + this.model + " " + this.year; </span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">}</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span>Invocation is pretty similar too:<br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">static void Main()</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">{</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; var car = new Car("Honda", "Civic", 2003);</span><br 
/><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; Console.WriteLine(car);</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">}</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span>or<br /><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">var car = new Car("Honda", "Civic", 2003);</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">console.log(car.toString());</span><br /><br />But underneath this similarity is a whole lot of difference. C# has classes, and JavaScript does not. JavaScript has functions that can return typed objects, but does this without a class mechanism. Put differently, the "this" in C# is tightly circumscribed within a class membrane, and can only appear there. That makes its behavior very predictable. It refers to the current instance of the class, and that's it.<br />JavaScript, on the other hand, has functions, and these functions might be intended to create classes or they might not. And so the language specification is powerless to limit the context of this, and must therefore define its behavior in a whole heap of contexts, which the first two chapters of Kyle Simpson's book enumerate. Let's take a look at a few of them.<br /><br />Simpson deals with <b><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">new</span></b> last, and we've already taken a look at that. It's the friendly one, the one that makes sense to us classical programmers. 
But what about this:<br /><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">var message = "hello";</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">function sayMessage() {</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; console.log(this.message);</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">}</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">sayMessage();</span><br /><br />That works. Try it. But this does not:<br /><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">(function() { </span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; var message = "hello";</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; function sayMessage() {</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; console.log(this.message);</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; }</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; sayMessage();</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">})();</span><br /><br />Why does one log "hello", and the other undefined? Because the fallback rule, when nothing else determines the meaning of this, is to use the global object. In the first example, var message creates a global, so this.message finds it; inside the IIFE, this still falls back to the global object, but message is now local to the IIFE, so this.message comes up undefined. Yuck. It's kind of scary to have code that works outside of an IIFE but fails inside of one. 
Fortunately, you can "use strict" inside the function, and then the behavior is consistent:<br /><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">var message = "hello";</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">function sayMessage() {</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; "use strict";</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; console.log(this.message);</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">}</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">sayMessage();</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span>And now the output is:<br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">TypeError: Cannot read property 'message' of undefined</span><br /><br />For my money, this is a heck of a good reason to "use strict". I can picture spending a few very unpleasant hours trying to figure that one out. 
Note that "use strict" has to cover the place where the this is used (not just the call site), so this code still outputs "hello":<br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">var message = "hello";</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">function sayMessage() {</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; console.log(this.message);</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">}</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">"use strict";</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">sayMessage();</span><br /><br />The book covers a whole host of other scenarios: implicit binding, explicit binding, call and apply. 
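To give a taste of two of those (these examples are mine, not Simpson's): call and apply supply "this" explicitly at the call site, while bind returns a new function with "this" locked in for good.

```javascript
function sayMessage() {
  console.log(this.message);
}

var english = { message: "hello" };
var french = { message: "bonjour" };

// Explicit binding: the first argument to call/apply becomes "this".
sayMessage.call(english);   // hello
sayMessage.apply(french);   // bonjour

// Hard binding: bind fixes "this" once, regardless of later call sites.
var sayBonjour = sayMessage.bind(french);
sayBonjour();               // bonjour
```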
I will leave you with just one more:<br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">var anotherCar = { construct: Car};</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">anotherCar.construct("Ford", "Fiesta", 1978);</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">console.log(anotherCar.year);</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&gt;&gt; 1978</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span>Okay, so far so good. Rather than using "new" to supply the "this", we are attaching the this by hand, by assigning the function directly to a property of an object. 
But what if we then assign the function to another variable?<br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">var anotherCar = { construct: Car};</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">var c = anotherCar.construct; </span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">c("Ford", "Model T", 1908);</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">console.log(anotherCar.year);</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&gt;&gt; undefined</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span>Why? Because c is a plain reference to the function itself; calling it as c(...) severs the connection to the "anotherCar" object, so "this" falls back to the default (global) binding, and the year lands on the global object instead. The moral of the story: it's how the function is called (the call site) that determines the behavior of "this".http://www.dansolovay.com/2015/12/looking-at-this-in-javascript.htmlnoreply@blogger.com (Dan Solovay)0tag:blogger.com,1999:blog-5589343447323430312.post-3973986204960114944Wed, 09 Dec 2015 11:00:00 +00002015-12-14T22:32:56.779-05:00Developer ToolsJavaScriptMochaNode.jsTDDVS CodeNode.js Development with Visual Studio CodeLast weekend I took my first steps in Node.js development, adding a small feature to a Hexo plugin. <a href="http://hexo.io/" target="_blank">Hexo</a>, as I blogged <a href="http://www.dansolovay.com/2015/12/hexo-nodejs-blog-platform.html" target="_blank">last week</a>, is a static site generator written in&nbsp;Node.js, to which I am planning to migrate this blog. 
I made good progress at first, but soon required a debugger, and took the opportunity to learn the basics of VS Code as a Node editor and debugger. This article will walk through the basics of environment setup and debugging with VS Code.<br /><a name='more'></a><br /><!--more--><br /><br /><h2>Getting Started</h2>This article assumes two things. Well, maybe three. <br /><ol><li>You have Visual Studio Code installed. You can find it <a href="http://code.visualstudio.com/" target="_blank">here</a>. <br /></li><li>You have <code>Node.js</code> and <code>npm</code> installed. Same <a href="https://nodejs.org/en/download/" target="_blank">download</a>.</li><li>You know just enough JavaScript to be dangerous. <br /></li></ol><h2>Creating an NPM package</h2>As an exercise, let's imagine we are building a small node module called <code>linecount</code> that tallies up the lines in a file. Let's get started creating a node project.<br /><br /><pre><code>mkdir linecount<br />cd linecount<br />npm init<br /></code></pre><br />This will fire off a small wizard that creates a package.json file, which is roughly equivalent to a .csproj file for a C# project. If we step through the wizard and accept the results, we will end up with a file like this:<br /><br /><pre><code>{<br /> "name": "linecount",<br /> "version": "1.0.0",<br /> "description": "",<br /> "main": "index.js",<br /> "scripts": {<br /> "test": "echo \"Error: no test specified\" &amp;&amp; exit 1"<br /> },<br /> "author": "",<br /> "license": "ISC"<br />}<br /></code></pre><br />I'm a TDD guy, so I like to make sure my code doesn't work before I get too far ahead of myself. 
So let's execute this and confirm it's broken, using <code>node .</code> to execute the current directory.<br /><br /><pre><code>c:\src\linecount&gt;node .<br /></code></pre><pre><code>module.js:338<br /> throw err;<br /> ^<br />Error: Cannot find module 'c:\src\linecount'<br /> at Function.Module._resolveFilename (module.js:336:15)<br /> at Function.Module._load (module.js:278:25)<br /> at Function.Module.runMain (module.js:501:10)<br /> at startup (node.js:129:16)<br /> at node.js:814:3<br /></code></pre><br />So we have our "red light." Let's edit this in Code to add a little functionality. The command is <code>code .</code>. No, I don't think the one letter change from <code>node</code> to <code>code</code> is a coincidence. <br /><br /><pre><code>c:\src\linecount&gt;code .<br /></code></pre><br />That will pull up a screen like this:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://i.imgur.com/pjixglR.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://i.imgur.com/pjixglR.png" height="480" width="640" /></a></div><br /><br />If we add a file called <strong>index.js</strong>, we will be able to run <code>node .</code> without error. We will also get basic JavaScript IntelliSense and AutoComplete. For example, if we type <code>console.</code>, we get this:<br /><pre></pre><pre><code><div class="separator" style="clear: both; text-align: center;"><br /><a href="http://i.imgur.com/mKVUSxh.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://i.imgur.com/mKVUSxh.png" height="480" width="640" /></a></div><br /></code></pre><br /><h2>Adding More IntelliSense</h2>However, this is only part of the IntelliSense story. If we type something like<br /><br /><pre><code>var fs = require('fs');<br />fs.<br /></code></pre><br />We get no information about either the <code>require</code> statement or the <code>fs</code> class. 
We can fix that by installing TypeScript declaration files for <code>node</code>. First, we need to install <code>tsd</code>, a utility for downloading TypeScript definition files. <br /><br /><pre><code>npm install -g tsd<br /></code></pre><br />Now we can download the definition files with this command from a command prompt.<br /><br /><pre><code>tsd install node<br /></code></pre><br />This will create a directory called .vscode, and place a file node.d.ts inside it. Now our <code>fs.</code> gives us a rich list of options. (This is a great way to explore the capabilities of the Node API.)<br /><pre></pre><div class="separator" style="clear: both; text-align: center;"><a href="http://i.imgur.com/dhUC2pz.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://i.imgur.com/dhUC2pz.png" height="480" width="640" /></a></div><pre>&nbsp;</pre><br />There are quite a large number of TypeScript definition files available. You can use <code>tsd query</code> to get a list. We will install the one for <strong>mocha</strong> in a little bit, when we move on to testing. One more thing before we move on to debugging. 
You might want to <code>require</code> your own files, with this syntax:<br /><pre><code>var myClass = require('./myClass');<br /></code></pre><br />For example, suppose you had a file <strong>helper.js</strong> that had this code:<br /><pre><code>exports.hello = function() {<br /> console.log("hello from helper");<br />}<br /></code></pre><br />If you reference this file from <strong>index.js</strong>, you will get IntelliSense:<br /><pre><code>var helper = require('./helper');<br />helper.<br /><br /></code></pre><div class="separator" style="clear: both; text-align: center;"><a href="http://i.imgur.com/bn13fcN.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://i.imgur.com/bn13fcN.png" height="486" width="640" /></a></div><br /><h2>Debugging</h2>Okay, so far so good. But now we want to be able to put a breakpoint in <strong>helper.js</strong>. This is easily set, by clicking in the left margin as you would in Visual Studio. To fire off the debugger, however, takes a little bit of one-time (per project) configuration.<br />Click on the Debug icon (1), then the gear icon (2), then select Node.js in the drop-down list (3):<br /><pre></pre><div class="separator" style="clear: both; text-align: center;"><a href="http://i.imgur.com/twblDc9l.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://i.imgur.com/twblDc9l.png" height="360" width="640" /></a></div><pre>&nbsp;</pre><br />If you now launch the debugger, you will hit your breakpoint, with all the usual features of a debugger (variables, watches, and the call stack). <br /><br />Note: There are more complex scenarios that require a bit more configuration. 
For example, I was recently testing a module that was not directly called, but was used when this command was issued<br /><pre><code>hexo migrate rss &lt;RSS feed url&gt; --alias<br /></code></pre><br />In order to get this to work, I needed to make a few edits to my launch.json file: I had to point the "program" setting at the hexo package, located in my %appdata% folder, and had to put the remaining command line arguments in the <code>args</code> array:<br /><pre><code>"configurations": [<br /> {<br /> "name": "Launch",<br /> "type": "node",<br /> "request": "launch",<br /> "program": "c:/users/Dan.Solovay/AppData/Roaming/npm/node_modules/hexo-cli/bin/hexo",<br /> "stopOnEntry": false,<br /> "args": ["migrate", "rss", "http://www.dansolovay.com/feeds/posts/default", "--alias"],<br /> "cwd": ".",<br /> "runtimeExecutable": null,<br /> "runtimeArgs": [<br /> "--nolazy"<br /> ],<br /> "env": {<br /> "NODE_ENV": "development"<br /> },<br /> "externalConsole": false,<br /> "sourceMaps": false,<br /> "outDir": null<br /> },<br /></code></pre><br /><h2>Adding Tests</h2>Okay, so let's add some tests, using Mocha.js, a widely used test framework for Node development.<br /><br /><pre><code>npm install -g mocha<br /></code></pre><br />Now let's add a file <strong>test.js. 
&nbsp;</strong>Mocha tests are written in a nested call syntax, with a <em>describe</em> function for the unit under test, a <em>context</em> clause, and an <em>it</em> clause to indicate each test.<br /><br /><pre><code>var assert = require('assert');<br />var sut = require('./linecount');<br /><br />describe("linecount", function() {<br /> context("file with three lines", function() {<br /> var result = sut.getLineCount('./testfile.txt');<br /> it("returns 3", function() {<br /> assert.equal(3, result);<br /> });<br /> });<br />});<br /></code></pre><br />We can run this test by going to a command prompt and typing "mocha", and we can get to a naive passing result with this <strong>linecount.js</strong>:<br /><br /><pre><code> exports.getLineCount = function(fileName) {<br /><br /> return 3;<br /> }<br /></code></pre><br />Okay, not production ready, but this does show that our test framework works, which is not nothing:<br /><pre></pre><div class="separator" style="clear: both; text-align: center;"><a href="http://i.imgur.com/Ad31Z15.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://i.imgur.com/Ad31Z15.png" height="348" width="640" /></a></div><pre>&nbsp;</pre><br /><h2>Debugging Tests</h2>If we need to debug these, we can call mocha with a debug option, which will break on the first line. <br /><pre><code>mocha --debug-brk<br /></code></pre><br />Then we can use the "Attach" option of the debugger to continue through the program. You can see this is a full debugger experience, with watches, locals, and a call stack.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://i.imgur.com/5uLJT7K.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://i.imgur.com/5uLJT7K.png" height="480" width="640" /></a></div><br /><h2>To Learn More</h2>I hope this gives you a flavor of the power and flexibility of VS Code. 
To dig deeper into the topics discussed, here are a few resources that were helpful for me:<br /><ul><li><a href="https://code.visualstudio.com/Docs/editor/codebasics" target="_blank">Series introducing VS Code</a></li><li><a href="https://code.visualstudio.com/Docs/runtimes/nodejs" target="_blank">Node development overview</a></li><li><a href="https://code.visualstudio.com/docs/editor/tasks_appendix" target="_blank">Integrating grunt/gulp build and test tasks</a></li><li><a href="https://www.dotnetrocks.com/?show=1215" target="_blank">.NET Rocks podcast</a></li><li><a href="https://mochajs.org/" target="_blank">Mocha reference</a></li><li>Two recent forays into Node development:</li><ul><li><a href="https://github.com/hexojs/hexo-generator-alias/pull/5" target="_blank">https://github.com/hexojs/hexo-generator-alias/pull/5</a></li><li><a href="https://github.com/hexojs/hexo-migrator-rss/pull/5">https://github.com/hexojs/hexo-migrator-rss/pull/5</a></li></ul></ul>http://www.dansolovay.com/2015/12/nodejs-development-with-visual-studio.htmlnoreply@blogger.com (Dan Solovay)1tag:blogger.com,1999:blog-5589343447323430312.post-5774270613659338897Wed, 02 Dec 2015 11:00:00 +00002015-12-02T06:00:02.594-05:00BloggingGithubHexoHexo, a Node.js Blog PlatformIt's been five years since I wrote my first blog post, a look at XSLT that still gets pretty good traffic. I chose Blogger because it was free and easy, and the fact Google owned it gave me some confidence it would stick around. &nbsp;But it's been feeling like less and less of a good fit recently. <br /><a name='more'></a>My bread and butter as a blogger is the technical walk-through, which means code, &nbsp;which means angle brackets, which means using their WYSIWYG editor (which does weird things to styles over the course of a post) or editing HTML directly, where you have to replace each and every angle bracket with a &amp;lt; and &amp;gt;, which is a true pain, and which I don't always remember to do. 
&nbsp;I also really do not like the lack of versioning. &nbsp;I use git commits very, very regularly when I code (like hitting Save when writing a Word document), and have become accustomed to the idea I can always go back to an earlier state. The thought I might accidentally delete a paragraph from a post and have to go to the Wayback machine to find it was not a comfortable feeling. <br /><div><br /></div><div>The rise of Github and Github <a href="https://pages.github.com/" target="_blank">pages</a>, and a whole <a href="https://staticsitegenerators.net/" target="_blank">ecosystem </a>of static site generators built around them offered the promise of an easier way. I especially liked the idea of being able to author in Markdown, where code is indicated by indenting four spaces (which MarkDown Pad can do with a single tab stroke). I was planning to look at Jekyll, but came upon a <a href="http://kamsar.net/index.php/2015/04/Blogging-with-Hexo-a-Node-js-detour/" target="_blank">post </a>by Kam Figy recommending <a href="http://hexo.io/" target="_blank">Hexo</a>. Since it is built on Node.js, there's a lot less of the "we're not sure this is going to work on Windows" about it than you will run into with a Ruby-based platform, and JavaScript is a language I kind of know, unlike Ruby, which I kind of don't.</div><div><br /></div><div>The more I've dug into Hexo, the more impressed I've been. Assuming you have node installed on your computer, set up was something you could do in seconds. 
The following set of commands, taken from the project home page, got me up and running locally:</div><div><ul id="intro-cmd-wrap" style="background: rgb(238, 238, 238); border: 0px; box-sizing: inherit; color: #444444; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 15px; line-height: 15px; list-style: none; margin: 50px auto 0px; max-width: 700px; outline: 0px; padding: 15px 0px; vertical-align: baseline;"><li class="intro-cmd-item" style="border: 0px; box-sizing: inherit; font-family: 'Source Code Pro', Monaco, Menlo, Consolas, monospace; font-size: 1pc; font-style: inherit; font-weight: inherit; line-height: 2; margin: 0px; outline: 0px; padding: 0px 30px; vertical-align: baseline;">npm install hexo-cli -g</li><li class="intro-cmd-item" style="border: 0px; box-sizing: inherit; font-family: 'Source Code Pro', Monaco, Menlo, Consolas, monospace; font-size: 1pc; font-style: inherit; font-weight: inherit; line-height: 2; margin: 0px; outline: 0px; padding: 0px 30px; vertical-align: baseline;">hexo init blog</li><li class="intro-cmd-item" style="border: 0px; box-sizing: inherit; font-family: 'Source Code Pro', Monaco, Menlo, Consolas, monospace; font-size: 1pc; font-style: inherit; font-weight: inherit; line-height: 2; margin: 0px; outline: 0px; padding: 0px 30px; vertical-align: baseline;">cd blog</li><li class="intro-cmd-item" style="border: 0px; box-sizing: inherit; font-family: 'Source Code Pro', Monaco, Menlo, Consolas, monospace; font-size: 1pc; font-style: inherit; font-weight: inherit; line-height: 2; margin: 0px; outline: 0px; padding: 0px 30px; vertical-align: baseline;">npm install</li><li class="intro-cmd-item" style="border: 0px; box-sizing: inherit; font-family: 'Source Code Pro', Monaco, Menlo, Consolas, monospace; font-size: 1pc; font-style: inherit; font-weight: inherit; line-height: 2; margin: 0px; outline: 0px; padding: 0px 30px; vertical-align: baseline;">hexo server</li></ul></div><div><br /></div><div>After this, you will 
have a version of the website running on port 4000. The whole thing took me about a minute, mostly for the "npm install" step. &nbsp;After issuing the command hexo init blog &nbsp;("blog" is a directory, you could choose anything else), you will get the following structure:</div><div><br /></div><div><div>C:\Users\Dan.Solovay\testhexo&gt;dir</div><div>&nbsp;Volume in drive C is OS</div><div>&nbsp;Volume Serial Number is F67A-0624</div><div><br /></div><div>&nbsp;Directory of C:\Users\Dan.Solovay\testhexo</div><div><br /></div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">11/26/2015 &nbsp;04:09 PM &nbsp; &nbsp;&lt;DIR&gt; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;.</span></div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">11/26/2015 &nbsp;04:09 PM &nbsp; &nbsp;&lt;DIR&gt; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;..</span></div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">11/26/2015 &nbsp;04:09 PM &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;65 .gitignore</span></div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">11/26/2015 &nbsp;04:09 PM &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 442 package.json</span></div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">11/26/2015 &nbsp;04:09 PM &nbsp; &nbsp;&lt;DIR&gt; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;scaffolds</span></div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">11/26/2015 &nbsp;04:09 PM &nbsp; &nbsp;&lt;DIR&gt; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;source</span></div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">11/26/2015 &nbsp;04:09 PM &nbsp; &nbsp;&lt;DIR&gt; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;themes</span></div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">11/26/2015 
&nbsp;04:09 PM &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 1,477 _config.yml</span></div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;3 File(s) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1,984 bytes</span></div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;</span></div></div><div><br /></div><div>The .gitignore is there in case you want to set up a git repository for your blog. It ignores the built-in and generated stuff, so just your markdown files and configuration is recorded. &nbsp;package.json includes the Node modules that power Hexo. They will get loaded when you run "npm install". &nbsp;_config.yml is where you define site-wide options, like the name of your site, and the location where you want to deploy to. Your posts go in the "source" directory as Markdown files, and the generated HTML will go in a "public" folder.&nbsp;</div><div><b><br /></b></div><div><b>Basic Workflow</b></div><div>You can generate a new post by typing</div><div><br /></div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">hexo new post "My Post Name"</span></div><div><br /></div><div>or&nbsp;</div><div><br /></div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">hexo new draft "My Post Name"</span></div><div><br /></div><div>These create Markdown files either in source/_posts or source/_drafts. 
&nbsp;Drafts will not get deployed to your destination site if you do a deploy (more on that later), and can get turned into posts by typing</div><div><br /></div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">hexo publish post "My Post Name"</span></div><div><br /></div><div>This moves it over to the _posts folder, and adds a date field to the page header, if one is not already there. &nbsp;But the post is still in &nbsp;Markdown. &nbsp;The HTML gets created by typing:</div><div><br /></div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">hexo generate</span></div><div><br /></div><div>Again, you can type "hexo server" to launch a server at port 4000, and "hexo server --drafts" to include draft posts (useful for double-checking Markdown conversion, and so forth.)</div><div><b><br /></b></div><div>To give you a flavor of the power of editing in Markdown, here is a side-by-side of a test post in Markdown and what was generated using the preinstalled "landscape" layout.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://i.imgur.com/F95TGYy.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://i.imgur.com/F95TGYy.png" height="344" width="640" /></a></div><div><br /></div><div>You can see that the angle brackets in the Markdown were respected, and there was even built-in syntax highlighting. &nbsp;I find the overall look and feel very readable. &nbsp;This ease of readability really jumps out when we look at an actual post. &nbsp;Here is a page of a recent post on my current Blogger layout, and in Hexo. &nbsp;I think the syntax highlighting of the PowerShell makes the post much more readable. The kicker was how much easier the version on the right was to compose. &nbsp;Less work, better results. 
&nbsp;That is the power of using current tooling, my friends.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://i.imgur.com/YuX7KqP.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://i.imgur.com/YuX7KqP.png" height="360" width="640" /></a></div><div><br /></div><div><br /></div><h4>Deploying to Github</h4><div>Of course, you need to get your content to some place where people can see it. One compelling option is Github pages. Every github user gets a free website at the URL username.github.io, which is supported by a repository of the same name. &nbsp;In other words, if your github user account name is "myname", and you create a repository called "myname.github.io", any HTML pages there will be served by Github at the URL http://myname.github.io. And since the site is supported by a repo, you get versioning for free, at least for the generated HTML.</div><div><br /></div><div>With a very little bit of setup, you can push to this repo with a single command. &nbsp;First, you need to install the Node module "hexo-deployer-git". &nbsp;Then, &nbsp;in your _config.yml file, go to the section Deployment, &nbsp;and set the type of deployment to "git" and put the URL of the git repository for the destination site. You can also configure the commit message and branch, as described <a href="https://hexo.io/docs/deployment.html" target="_blank">here</a>. 
<br /><br />My deployment settings are: <br /><br /></div><div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"># Deployment</span></div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">## Docs: http://hexo.io/docs/deployment.html</span></div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">deploy:</span></div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; type: git</span></div><div><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; repo: https://github.com/dsolovay/dsolovay.github.io.git</span></div></div><div><br /></div><div>Now deployment can be handled by Hexo using a simple git push behind the scenes, triggered by the command <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">hexo deploy</span>. &nbsp;You can use the -g option to regenerate the site, which I needed to do to make a backdated post appear (there is probably some "updated up to here" logic I was bypassing by setting dates in the past).<br /><br /><h4>Next Steps</h4>I expect the full migration process to take me a month or two. &nbsp;I'd like to pull comments over, and this is not supported out of the box from DISQUS. &nbsp;I am working on getting redirects from my old URLs working, and should blog about that soon. &nbsp;When things look good, I'll repoint http://www.dansolovay.com to the new URL (dsolovay.github.io), but will keep my original URL http://dan-explorations.blogspot.com online. 
It is free, after all, and it's where I got my start blogging.</div><div><b><br /></b></div><div><br /></div>http://www.dansolovay.com/2015/12/hexo-nodejs-blog-platform.htmlnoreply@blogger.com (Dan Solovay)2tag:blogger.com,1999:blog-5589343447323430312.post-1670817849354732177Wed, 25 Nov 2015 11:00:00 +00002015-11-25T06:00:10.880-05:00MongoDBPowershellCreating MongoDB Shards and Replica Sets with PowerShellIn the MongoDB University course "M101N: MongoDB for .NET Developers", there is a walk-through of a UNIX script to create a 3x3 set of shards and replica sets. To brush up on my MongoDB and my PowerShell, I decided this weekend to try to rewrite and extend the script in PowerShell. A somewhat painful, but ultimately rewarding, experience: You can see what I came up with <a href="https://gist.github.com/dsolovay/f098e10df9d7dd91b1ab" target="_blank">here</a>. &nbsp;In this post I will walk through some of the PowerShell and some of the MongoDB syntax and tricks.<br /><br /><a name='more'></a><br /><br /><h3>General Approach</h3><div>I wanted to be able to run the script multiple times, so I decided to put my data directories under c:\temp, and to delete them at the start of the run. &nbsp;Similarly, I spawn all the "mongod" processes in their own windows, so they are easy to kill with a right click on the task bar. &nbsp;(I didn't want to kill all mongod processes, because I didn't want to touch the one running as a service, that supports my Development Sitecore instances.) &nbsp;Also, I used simple values for my port numbers: the mongod processes run on ports 30000 to 30008, the configuration servers on 40000, 40001, and 40002, and the mongos (which functions as a router in a sharded environment) on port 50000. 
&nbsp;I also have added some diagnostics to check on the status of various steps in the process, rather than simply waiting for an arbitrary 60 seconds, as the original script does.<br /><br /></div><h4>Clean Up, Set Up</h4><div>The script begins by establishing a temporary directory, cleaning up an old copy, and creating an output function, "report", to facilitate nicely formatted status reporting. &nbsp;The output is piped to Out-Null to keep the output stream clean.</div><div><br /></div><div><pre>$rootpath = "/temp/mongoshards/"<br /><br />new-module -scriptblock {function report($text) {<br />write-output $("-" * $text.length)<br />write-output $text<br />write-output $("-" * $text.length)<br />write-output ""<br />}} | Out-Null<br /><br />report "Remove temporary directory"<br /><br />remove-item $rootpath -recurse <br /><br />report "Create data directories"<br /><br />new-item -type directory -path $rootpath | Out-Null<br /><br />report "Create mongod instances"</pre><pre></pre><h3>Creating the Mongod processes</h3></div><div>The logic to create the mongod processes is pretty straightforward:</div><div><br /></div><div><pre>report "Create mongod instances"<br /><br />$shards = 0..2<br />foreach ($shard in $shards)<br />{<br /> <br /> $rss = 0..2<br /> foreach ($rs in $rss)<br /> {<br /> $dbpath = "$rootpath/data/shard${shard}/r${rs}"<br /> new-item -type directory -path $dbpath | Out-Null<br /> <br /> # Start mongod processes<br /> $port = 30000 + ($shard * 3) + $rs<br /> $args = "--replSet s$shard --logpath $rootpath/s${shard}_r${rs}.log --dbpath $dbpath --port $port --oplogSize 64 --smallfiles"<br /> $process = start-process mongod.exe $args <br /> }</pre><pre></pre><pre><div style="font-family: 'Times New Roman'; white-space: normal;"><br />The only trickiness here is the variable substitution, leading to paths like "data/shard0/r1", and the logic to create the port numbers, 30000, 30001, 30002 for the shard 0 processes, 30003-30005 for s1, and 30006-30008 
for s2. &nbsp;Of course, these are not yet replica sets; we handle that next.</div><br /><h4 style="font-family: 'Times New Roman'; white-space: normal;"><br />Creating the Replica Sets</h4><br /><div style="font-family: 'Times New Roman'; white-space: normal;"><br />This is done by creating a config document and passing it to rs.initiate().</div><br /><div><br /></div><br /><div><br /><pre>report "Configure replica sets"<br /> <br /> $port1 = 30000 + $shard * 3<br /> $port2 = 30000 + $shard * 3 + 1 <br /> $port3 = 30000 + $shard * 3 + 2<br /> <br /> $configBlock = "{_id: ""s$shard"", members: [ {_id:0, host:""localhost:$port1""}, {_id:1, host:""localhost:$port2""}, {_id:2, host:""localhost:$port3""}]}"<br /> echo "rs.initiate($configBlock)" | mongo --port $port1 </pre><pre></pre><span style="font-family: &quot;times new roman&quot;; white-space: normal;">The</span><span style="font-family: inherit;"> echo "javascript" | mongo </span><span style="font-family: &quot;times new roman&quot;;"><span style="white-space: normal;">is a nice bit of syntax I picked up from the course, and simplifies passing MongoDB commands from a script.
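As an aside, the directory and port arithmetic from the mongod loop above can be sketched in a few lines of JavaScript (an illustration only; the function name is mine, but the expressions mirror the PowerShell ones):

```javascript
// Illustrative sketch: compute the dbpath and port for a shard/replica
// pair, as the PowerShell loop does. Ports land at 30000-30002 for
// shard 0, 30003-30005 for shard 1, and 30006-30008 for shard 2.
function mongodLayout(shard, replica) {
  return {
    dbpath: "data/shard" + shard + "/r" + replica,
    port: 30000 + shard * 3 + replica
  };
}

console.log(mongodLayout(0, 1)); // { dbpath: 'data/shard0/r1', port: 30001 }
console.log(mongodLayout(2, 2)); // { dbpath: 'data/shard2/r2', port: 30008 }
```
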
&nbsp;Since it takes a little while for a server to win an election and become a PRIMARY, we set up a one-second polling loop to look for this event:</span></span><span style="font-family: &quot;times new roman&quot;;"><span style="white-space: normal;"></span></span> <br /><pre>report "Check PRIMARY elected for each replica set"<br /> <br />while ($True)<br />{<br /> $response1 = (echo "rs.status()" | mongo --port 30000)<br /> $response2 = (echo "rs.status()" | mongo --port 30003)<br /> $response3 = (echo "rs.status()" | mongo --port 30006)<br /> <br /><br /> if (($response1 -clike "*PRIMARY*") -and ($response2 -clike "*PRIMARY*") -and ($response3 -clike "*PRIMARY*")) {<br /> break<br /> }<br /> Start-Sleep -s 1<br /> Write-Output "."<br />}<br /> <br />report "PRIMARY elected"</pre><pre></pre><pre></pre><span style="font-family: &quot;times new roman&quot;;"><span style="white-space: normal;">Note that redirected output creates an array of strings, and the comparison operator -clike checks for a case-sensitive match against any member of such an array.</span></span> <span style="font-family: &quot;times new roman&quot;;"><span style="white-space: normal;"></span></span> <br /><h4><span style="font-family: &quot;times new roman&quot;;"><span style="white-space: normal;">Creating the Shards</span></span></h4></div><div><span style="font-family: &quot;times new roman&quot;;"><span style="white-space: normal;">Three steps remain to create the shards. &nbsp;First, we need to create the configuration servers that will store which records go where, and then we need to define each replica set as a shard.
&nbsp;Finally, we need to specify the collection and key that will be used for sharding the data.</span></span></div><div><span style="font-family: &quot;times new roman&quot;;"><span style="white-space: normal;"></span></span></div><div><pre>report "Create config servers"<br /> <br />$cfg_a = "${rootpath}/data/config_a"<br />$cfg_b = "${rootpath}/data/config_b"<br />$cfg_c = "${rootpath}/data/config_c"<br /><br />new-item -type directory -path $cfg_a<br />new-item -type directory -path $cfg_b<br />new-item -type directory -path $cfg_c<br /><br />$arg_a = "--dbpath $cfg_a --logpath ${rootpath}/cfg-a.log --configsvr --smallfiles --port 40000"<br />$arg_b = "--dbpath $cfg_b --logpath ${rootpath}/cfg-b.log --configsvr --smallfiles --port 40001"<br />$arg_c = "--dbpath $cfg_c --logpath ${rootpath}/cfg-c.log --configsvr --smallfiles --port 40002"<br /><br />start-process mongod $arg_a<br />start-process mongod $arg_b<br />start-process mongod $arg_c<br /><br />report "Config servers up"</pre><pre></pre><pre><span style="font-family: &quot;times new roman&quot;; white-space: normal;">The three configuration servers store the definitive version of what data resides where; the mongos instances keep this data in memory.</span></pre><pre><span style="font-family: &quot;times new roman&quot;; white-space: normal;"><br /></span></pre><pre><span style="font-family: &quot;times new roman&quot;;"><span style="white-space: normal;">Once the configuration servers are set up, the next step is to add the shards. Note the step to make sure that port 50000 is online.
&nbsp;Basically, if the response does not contain a line with the word "failed", the server is treated as online.</span></span></pre><pre><span style="font-family: &quot;times new roman&quot;;"><span style="white-space: normal;"><br /></span></span></pre><pre>report "Launch mongos"<br /> <br />$args_s = "--port 50000 --logpath ${rootpath}/mongos-1.log --configdb localhost:40000,localhost:40001,localhost:40002"<br />start-process mongos $args_s<br /><br />report "Check mongos online on port 50000"<br /><br />while($true)<br />{<br /> <br /> # Redirect stderr to $null (2&gt; null would create a file named "null")<br /> $output = echo "" | mongo localhost:50000 2&gt; $null<br /> <br /> if (-not ($output -like "*failed*")) {break} <br /> <br /> Start-Sleep -s 1<br /> Write-Output "."<br />}<br />report "Mongos available at port 50000"<br /><br />report "Configure shards"<br /><br />echo "db.adminCommand( { addshard: ""s0/localhost:30000"" })" | mongo --quiet --port 50000 <br /><br />echo "db.adminCommand( { addshard: ""s1/localhost:30003"" })" | mongo --quiet --port 50000 <br /><br />echo "db.adminCommand( { addshard: ""s2/localhost:30006"" })" | mongo --quiet --port 50000 <br /><br />echo "db.adminCommand( { enableSharding:""school"" })" | mongo --port 50000 <br /><br />echo "db.adminCommand( { shardCollection:""school.students"", key:{student_id:1} })" | mongo --port 50000 <br /></pre></div><div></div><div><h3>Loading some data</h3></div><div><pre><span style="font-family: &quot;times new roman&quot;; white-space: normal;">To get some data, I use a short JavaScript file (from MongoDB University) that pushes a list of students and course grades.
&nbsp; &nbsp;Once this is done, I display the counts for the combined sharded collection, and for each of the specific shards, and the output of sh.status(), which shows the split points that MongoDB is using to distribute data.&nbsp;</span></pre><pre><span style="font-family: &quot;times new roman&quot;; white-space: normal;"><br /></span></pre><pre>report "Generate 100,000 documents" <br /><br />$mongoUniversityScript = "db=db.getSiblingDB(`"school`");<br />types = ['exam', 'quiz', 'homework', 'homework'];<br />// 10,000 students<br />for (i = 0; i &lt; 10000; i++) {<br /><br /> // take 10 classes<br /> for (class_counter = 0; class_counter &lt; 10; class_counter ++) {<br /> scores = []<br /> // and each class has 4 grades<br /> for (j = 0; j &lt; 4; j++) {<br /> scores.push({'type':types[j],'score':Math.random()*100});<br /> }<br /><br /> // there are 500 different classes that they can take<br /> class_id = Math.floor(Math.random()*501); // get a class id between 0 and 500<br /><br /> record = {'student_id':i, 'scores':scores, 'class_id':class_id};<br /> db.students.insert(record);<br /><br /> }<br /><br />}"<br /><br />echo $mongoUniversityScript | mongo --port 50000 --quiet<br /><br />report "Total records, records in shard 1, 2, and 3"<br /><br />echo "db.students.count()" | mongo school --port 50000<br /><br />echo "db.students.count()" | mongo school --port 30000<br /><br />echo "db.students.count()" | mongo school --port 30003<br /><br />echo "db.students.count()" | mongo school --port 30006<br /><br />report "sh.status() output" <br /><br />echo "sh.status()" | mongo --port 50000</pre><pre></pre><pre></pre><pre><span style="font-family: &quot;times new roman&quot;; white-space: normal;">Again, I have a link to the full script at the top of the page. &nbsp;I'm a rank beginner at PowerShell, so please feel free to make suggestions about how style and substance could be improved.
&nbsp;</span></pre><pre></pre></div></pre></div>http://www.dansolovay.com/2015/11/creating-mongodb-shards-and-replica.htmlnoreply@blogger.com (Dan Solovay)0tag:blogger.com,1999:blog-5589343447323430312.post-7455206155974929224Wed, 18 Nov 2015 11:00:00 +00002015-11-18T06:00:01.961-05:00SitecoreTips & TricksCustom Sitecore Log filesStarting with Sitecore 6.2, and especially since Sitecore 7 we've seen an increase in specific log files: WebDAV, Crawling, Search, Publishing, FXM. This is completely configurable, and can be leveraged to keep your application logic separate from the Sitecore log file while still using Sitecore's logging capabilities.<br /><a name='more'></a>To create a custom log file, you need to do the following three things:<br /><br />1. Add an appender in the &lt;log4net&gt; section of web.config (or Sitecore.config after 8.1).<br /><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; &nbsp;&lt;appender name="MyAppender" type="log4net.Appender.SitecoreLogFileAppender, Sitecore.Logging"&gt;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; &nbsp; &nbsp; &lt;file value="$(dataFolder)/logs/MyLog.log.{date}.txt" /&gt;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; &nbsp; &nbsp; &lt;appendToFile value="true" /&gt;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; &nbsp; &nbsp; &lt;layout type="log4net.Layout.PatternLayout"&gt;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; &nbsp; &nbsp; &nbsp; &lt;conversionPattern value="%4t %d{ABSOLUTE} %-5p %m%n" /&gt;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; &nbsp; &nbsp; &lt;/layout&gt;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , 
monospace;">&nbsp; &nbsp; &nbsp; &lt;encoding value="utf-8" /&gt;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; &nbsp; &lt;/appender&gt;</span><br /><br />These are the values Sitecore uses for its log file. &nbsp;You might want to add &lt;immediateFlush value="true" /&gt; if you want to avoid waiting for the buffer to flush.<br /><br />2. Reference the Appender in a logger element.<br /><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &lt;logger name="My.Namespace" additivity="false"&gt;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><span class="Apple-tab-span" style="white-space: pre;"> </span> &nbsp;&lt;level value="INFO" /&gt;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><span class="Apple-tab-span" style="white-space: pre;"> </span> &nbsp;&lt;appender-ref ref="MyAppender" /&gt;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><span class="Apple-tab-span" style="white-space: pre;"> </span> &nbsp;&lt;/logger&gt;</span><br /><br />Name the logger after your root namespace. If you want to divide different parts of your application into separate logs, you can drive this by being more specific in your namespace. &nbsp;The level value controls what types of messages get written. You can use Debug messages in your classes, and only show this information if you set the level to DEBUG. &nbsp;INFO is the normal setting, which also shows FATAL, ERROR, and WARNING messages, as well as Sitecore's custom AUDIT message.<br /><br />3.
Write messages using Sitecore's log class, passing a reference to the current object.<br /><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">Log.Info("Your message", this);</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">Log.Debug("Your message", this);</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">Log.Error("Your message", exception, this); &nbsp;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">Log.Audit("Your message", this);</span><br /><br />The Log.Debug message will only show up if logging is set to DEBUG, which makes this setting ideal for detailed instrumentation. &nbsp;In addition, AUDIT captures information about who performed the action, writing the results as INFO messages with the domain and username of the user performing the action.<br /><br />A few additional notes:<br />1. &nbsp;With Sitecore 8.1, the Log4Net section is moved inside the &lt;sitecore&gt; configuration node, which means it can be extended using Sitecore App_Config/Include patch config files.<br />2. &nbsp;You can have several loggers point to the same appender, which might allow you to group your info with that of a specific Sitecore namespace, or to run part of your application with DEBUG logging, and the rest with INFO logging.<br />3. Sitecore Log Analyzer is a great tool for working with log data once you've collected it. &nbsp;For example, it allows you to view just errors, or just one kind of error.<br />4. Log4Net <a href="https://logging.apache.org/log4net/release/manual/internals.html" target="_blank">documentation </a>recommends checking whether a log level is enabled before writing to it, to save the cost of building the message string (e.g. the cost of string concatenation).
This is no longer possible with Sitecore's wrapper around log4net, as Log.IsDebugEnabled(), for example, is a static method, so there is no way to examine the level setting for a given namespace.<br /><br />Props to Mark Ursino for blogging about custom log files way back in 2011, and to Kamraz Juman &nbsp; for pointing out the Log.Info("message", this) syntax in the comments. &nbsp;<a href="http://firebreaksice.com/write-to-a-custom-sitecore-log-with-log4net/" target="_blank">http://firebreaksice.com/write-to-a-custom-sitecore-log-with-log4net/ </a>http://www.dansolovay.com/2015/11/custom-sitecore-log-files.htmlnoreply@blogger.com (Dan Solovay)0tag:blogger.com,1999:blog-5589343447323430312.post-5291911961307812530Wed, 11 Nov 2015 11:00:00 +00002015-11-12T19:54:46.621-05:00Dependency InjectionDeveloper ToolsTDDAutofac Aggregate ServiceThere is a nice little feature of Autofac called Aggregate Service, that works very well with NSubstitute's recursive mock functionality to make tests much more maintainable and resilient to dependency modifications.<br /><a name='more'></a><br />Let's take a useful if contrived example of dependency injection at work, a Greeter class that writes "Good Morning" if the hour is before 13:<br /><br /><pre><code><br />{<br /> public class Greeter<br /> {<br /> private readonly IDateTime _dateTime;<br /> private readonly IWriter _writer;<br /> <br /> public Greeter(IDateTime dateTime, IWriter writer)<br /> {<br /> _dateTime = dateTime;<br /> _writer = writer;<br /> }<br /> <br /> public void WriteDate()<br /> {<br /> if (_dateTime.Now.Hour &gt;= 13)<br /> {<br /> _writer.WriteLine("Good afternoon");<br /> }<br /> else<br /> {<br /> _writer.WriteLine("Good morning");<br /> } <br /> }<br /> }<br /><br /></code></pre><p>IDateTime is a simple wrapper for DateTime.Now, and IWriter for Console.WriteLine, so that the logic is testable:</p><pre><code><br /><br /> [Fact]<br /> public void Greeter_Hour0_WritesGoodMorning()<br /> {<br /> var fakeTime 
= Substitute.For&lt;IDateTime&gt;();<br /> fakeTime.Now.Returns(new DateTime(1, 1, 1, 0, 0, 0));<br /> var writer = Substitute.For&lt;IWriter&gt;();<br /> var sut = new Greeter(fakeTime, writer);<br /> <br /> sut.WriteDate();<br /> <br /> writer.Received().WriteLine("Good morning");<br /> <br /> }<br /> <br /> [Fact]<br /> public void Greeter_Hour13_WritesGoodAfternoon()<br /> {<br /> var fakeTime = Substitute.For&lt;IDateTime&gt;();<br /> fakeTime.Now.Returns(new DateTime(1, 1, 1, 13, 0, 0));<br /> var writer = Substitute.For&lt;IWriter&gt;();<br /> var sut = new Greeter(fakeTime, writer);<br /> <br /> sut.WriteDate();<br /> <br /> writer.Received().WriteLine("Good afternoon");<br /> <br /> }<br /></code></pre><br /><p>The injection is wired up at application start:</p><pre><code><br />using System.Collections.Generic;<br />using System.Linq;<br />using System.Text;<br />using System.Threading.Tasks;<br />using Autofac;<br /><br />namespace DIDemo<br />{<br /> class Program<br /> {<br /> static void Main(string[] args)<br /> {<br /> var builder = new ContainerBuilder();<br /> builder.RegisterType&lt;DateTimeProvider&gt;().As&lt;IDateTime&gt;();<br /> builder.RegisterType&lt;ConsoleWriter&gt;().As&lt;IWriter&gt;();<br /> builder.RegisterType&lt;Greeter&gt;().AsSelf();<br /> var container = builder.Build();<br /><br /> using (var c = container.BeginLifetimeScope())<br /> {<br /> var obj = c.Resolve&lt;Greeter&gt;();<br /> obj.WriteDate();<br /> }<br /> }<br /> }<br />}<br /></code></pre><p>Okay, this works reasonably well. The class with the logic has no direct contact with the implementation of the operating system calls DateTime.Now and Console.WriteLine, so we can script and monitor the interactions with the operating system, and isolate our code. But what happens if we want to add another dependency to the constructor later on? 
We will need to modify all the places where Greeter is instantiated, which may be in a number of tests.</p><p>Autofac's AggregateService module can be helpful here. This is a separate NuGet download from the rest of Autofac, Autofac.Extras.AggregateService, and pulls in a dependency on Castle.Core. It lets you move your dependencies into properties of a single interface, which I like to nest inside the class it supports. After this change, Greeter.cs looks like this:</p><pre><code><br />using System;<br /> <br />namespace DIDemo<br />{<br /> public class Greeter<br /> {<br /> private readonly IDepend _depend;<br /> <br /> public interface IDepend<br /> {<br /> IDateTime DateTime { get; set; }<br /> IWriter Writer { get; set; }<br /> }<br /> <br /> public Greeter(IDepend depend)<br /> {<br /> _depend = depend;<br /> }<br /> <br /> public void WriteDate()<br /> {<br /> if (_depend.DateTime.Now.Hour &gt;= 13)<br /> {<br /> _depend.Writer.WriteLine("Good afternoon");<br /> }<br /> else<br /> {<br /> _depend.Writer.WriteLine("Good morning");<br /> }<br /> <br /> }<br /> }<br />}<br /></code></pre><p>The nice thing here is that the constructor will not change if a new dependency is added, and most of your unit tests can ignore a new dependency, because NSubstitute's default recursive behavior will hydrate the properties with NSubstitute implementations. So you can script IDepend.DateTime.Now if you care about it, or you can leave it alone confident that NSubstitute will not return null. This keeps the tests loosely specified.<br /><br />Here is what one of the tests looks like now with the IDepend construction.
Note how the DateTime property gets filled in automagically by NSubstitute:<br /></p><pre><code><br />[Fact]<br />public void Greeter_HourIsZero_ReturnsGoodMorning()<br />{<br /> var d = Substitute.For&lt;Greeter.IDepend&gt;();<br /> d.DateTime.Now.Returns(new DateTime(1, 1, 1, 0, 0, 0));<br /> var sut = new Greeter(d);<br /> <br /> sut.WriteDate();<br /> <br /> d.Writer.Received().WriteLine("Good morning");<br /> <br />}<br /></code></pre><p>To wire this up, you need to add one line of code to register the aggregation interface at application start:</p><pre><code>builder.RegisterAggregateService(typeof(Greeter.IDepend));</code></pre><br /><p>If you start employing this technique widely, you will probably want to standardize on a convention, such as always using an IDepend nested interface. You can rewrite the above line to generalize it:</p><pre><code><br />var assembly = Assembly.GetExecutingAssembly();<br />var dependencyInterfaces = assembly.GetTypes()<br /> .Where(t =&gt; t.IsInterface && t.Name == "IDepend" && t.IsNested);<br /> <br />// IEnumerable&lt;T&gt; has no ForEach method, so use a foreach loop<br />foreach (var dependencyInterface in dependencyInterfaces)<br />{<br /> builder.RegisterAggregateService(dependencyInterface);<br />}<br /></code></pre><br /><p>And if you have ReSharper, you can create a code template (ReSharper -> Tools -> Template Explorer) to make adding the IDepend interface and constructor instantiation a one-word process. First, the template:</p><pre><code><br />private IDepend _depend;<br /> <br />public interface IDepend<br />{<br /> $END$<br />}<br /> <br /> <br />public $CLASS$(IDepend depend)<br />{<br /> _depend = depend;<br />}<br /></code></pre><br /><p>I've named this "depend", and configured $CLASS$ to take the name of the containing type.
This makes adding the dependency a simple six-letter-plus-tab operation:</p><a href="https://media.giphy.com/media/3o85xuA3wQdIyCyWDm/giphy.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://media.giphy.com/media/3o85xuA3wQdIyCyWDm/giphy.gif" /></a> <i>Thanks to Chris Smith of Velir for showing me the AggregateService feature. And for getting me started on DI in the first place.</i>http://www.dansolovay.com/2015/11/autofac-aggregate-service.htmlnoreply@blogger.com (Dan Solovay)3tag:blogger.com,1999:blog-5589343447323430312.post-5141964960479290833Wed, 04 Nov 2015 11:00:00 +00002015-11-04T20:55:37.655-05:00Developer ToolsMongoDBSitecorexDBA RoboMongo Trick<a href="http://robomongo.org/" target="_blank">RoboMongo </a>is a nice tool for digging around xDB databases. It provides the full feature set of the JavaScript shell (you can save variables, run help commands, etc.) and allows you to do things like edit documents directly through the JSON display.<br /><a name='more'></a><br /><br />One small annoyance, however, is that it is difficult to search by GUIDs. &nbsp;Guids in MongoDB are a tricky business. &nbsp;They are represented as binary objects, displayed in the format<br /><br /><b>BinData(3,"n1UNEaXe6kKcHIpd9+cO+Q==")</b><br /><br />You can get at the underlying hex value by typing<br /><br /><b>BinData(3,"n1UNEaXe6kKcHIpd9+cO+Q==").hex()</b><br /><br />which will give you<br /><br /><b>9f550d11a5deea429c1c8a5df7e70ef9</b><br /><br />which looks a lot like a GUID. But it is not one, for arcane historical reasons. &nbsp;(Basically, Microsoft has a somewhat idiosyncratic way of arranging the bytes in the Guid.ToString() method.)
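To make the byte shuffling concrete, here is a small sketch of my own (not part of uuidhelpers.js) that converts the hex of a legacy subtype-3 BinData value into the .NET Guid.ToString() form by reversing the byte order of the first three groups:

```javascript
// Sketch: convert legacy (subtype 3) BinData hex to the .NET GUID string.
// The first three groups (4, 2, and 2 bytes) are stored little-endian,
// so their byte order must be reversed; the last 8 bytes are unchanged.
function hexToCsGuid(hex) {
  var swap = function (s) { return s.match(/../g).reverse().join(""); };
  return [
    swap(hex.slice(0, 8)),
    swap(hex.slice(8, 12)),
    swap(hex.slice(12, 16)),
    hex.slice(16, 20),
    hex.slice(20, 32)
  ].join("-");
}

console.log(hexToCsGuid("9f550d11a5deea429c1c8a5df7e70ef9"));
// 110d559f-dea5-42ea-9c1c-8a5df7e70ef9 (the Sitecore Home item)
```
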
&nbsp;More info here:&nbsp;<a href="http://stackoverflow.com/questions/3320501/why-isnt-guid-tostringn-the-same-as-a-hex-string-generated-from-a-byte-arra">http://stackoverflow.com/questions/3320501/why-isnt-guid-tostringn-the-same-as-a-hex-string-generated-from-a-byte-arra</a><br /><br />The value BinData(3,"n1UNEaXe6kKcHIpd9+cO+Q==") actually corresponds to 110d559f-dea5-42ea-9c1c-8a5df7e70ef9, the ID of the Sitecore Home item. &nbsp;There are two ways you can make this conversion. &nbsp;While using the Mongo shell, you can load the JavaScript file <a href="https://github.com/mongodb/mongo-csharp-driver/blob/master/uuidhelpers.js" target="_blank">uuidhelpers.js</a> (save it to c:\tools\uuidhelpers.js, then run load("c:\\tools\\uuidhelpers.js") in the shell). &nbsp;This will give you two useful functions:<br /><br />CSUUID("some guid"), which generates a proper mongo BinData guid for your .NET Guid, and .toCSUUID(), which gives a GUID display for a BinData item:<br /><br /><b>BinData(3,"n1UNEaXe6kKcHIpd9+cO+Q==").toCSUUID()</b><br /><br />returns<br /><br /><b>CSUUID("110d559f-dea5-42ea-9c1c-8a5df7e70ef9")</b><br /><b><br /></b> It's a nice feature of this script, and very much in keeping with the way MongoDB does things, that the representation of the GUID is in the form of the constructor for the BinData object.
It makes round-tripping painless, since you can now do things like:<br /><br />db.Interactions.findOne({"Pages.Item._id":&nbsp;CSUUID("110d559f-dea5-42ea-9c1c-8a5df7e70ef9") })<br /><br />Which makes a lot more sense than<br /><br />db.Interactions.findOne({"Pages.Item._id":&nbsp;BinData(3,"n1UNEaXe6kKcHIpd9+cO+Q==")&nbsp;})<br /><br />to us Sitecore/.Net people.<br /><br />Or you could use RoboMongo, which has a nifty Legacy UUID menu:<br /><br /><img src="http://i.imgur.com/HGzdzIh.png" height="249" width="640" /><br /><br /><div>The .Net setting will display Guids like this:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://i.imgur.com/VZp1Op0.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://i.imgur.com/VZp1Op0.png" height="160" width="640" /></a></div><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />This is an improvement, but still not ideal, as the NUUID value is not a JS constructor, so you can't use the Guid to query for documents.<br /><br />For example, this would not work to find all interactions that visited the home item:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://i.imgur.com/Gkuo5zX.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://i.imgur.com/Gkuo5zX.png" height="98" width="640" /></a></div><br /><br />Similarly, you cannot go from a contact ID here to one in the Contacts collection, since you cannot use the displayed GUID field.
&nbsp; This is even a problem if you use Do Not Encode, which still displays the ID in a non-queryable format LUUID(" guid "), which will again return a not-defined JS error:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://i.imgur.com/zdULqYx.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://i.imgur.com/zdULqYx.png" height="138" width="640" /></a></div><br /><br />The way around this is simple. These formats only come into play if the variable is part of a larger JSON display. &nbsp;If you query the value in isolation, you get the original BinData value:<br /><br />db.Interactions.findOne().Pages[0].Item._id<br /><br />returns a BinData representation:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://i.imgur.com/kYsOWrJ.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://i.imgur.com/kYsOWrJ.png" height="184" width="640" /></a></div><br />You can also store this to a variable, like this:<br /><br /><b>var homeId = db.Interactions.findOne().Pages[0].Item._id</b><br /><br />Which allows you to run queries like this one:<br /><br /><b>db.Interactions.find({"Pages.Item._id": homeId})</b><br /><br />And now we have round-tripping. Also useful, as I mentioned above, for looking up contacts by ID in the db.Contacts collection.<br /><br /></div>http://www.dansolovay.com/2015/11/a-robomongo-trick.htmlnoreply@blogger.com (Dan Solovay)0tag:blogger.com,1999:blog-5589343447323430312.post-5906674678738350935Wed, 28 Oct 2015 10:00:00 +00002015-10-28T06:00:02.698-04:00Developer ToolsTDDTestingNSubstituteOne of the key benefits of unit testing is that it pushes you to code to abstractions. Each class has a clear inside and outside, and your tests document the inside bits. The outside stays wrapped in an interface or abstract class so we can change its behavior in tests. 
When you go down this path, having a tool to make fake versions of interfaces in a simple and painless way is key, which is why I recommend <a href="http://nsubstitute.github.io/">NSubstitute</a>. It is elegantly simple, very flexible, and makes it hard to get into trouble. In this post I will go over some fundamentals of NSubstitute, and compare them to how things work in the better-known <a href="https://github.com/Moq/moq4">Moq</a> library.<br /><a name='more'></a><br /><br /><h3>A Sample Interface</h3>In this article, I'm going to make use of an interface <code>ISample</code>, and two simple classes <strong><code>AllVirtual</code></strong> and <strong><code>NonVirtual</code></strong>:<br /><pre><code><br />public interface ISample<br />{<br /> DateTime GetCurrentDateTime();<br /> <br /> int GetCurrentCount();<br /><br /> string SomeString { get; set; }<br /><br /> IEnumerable&lt;string&gt; Names { get; }<br /><br /> ISample SingleChild { get; set; }<br /><br /> IEnumerable&lt;ISample&gt; GetChildren();<br /><br /> AllVirtual GetAllVirtual();<br /><br /> NonVirtual GetNonVirtual();<br />}<br /><br />public class AllVirtual<br />{<br /> public virtual string Name { get; set; }<br />}<br /><br />public class NonVirtual<br />{<br /> public string Name { get; set; }<br />}<br /></code></pre><br />A few things to point out: this interface has a mixture of properties and methods, value and reference types, single values and enumerations. By and large it sticks to returning interfaces (avoiding "leaky abstractions"), but I have a couple of direct class references, GetAllVirtual and GetNonVirtual, which I'll come back to in a bit. Finally, the interface is public. NSubstitute requires that out of the box, but there is a way around that too. Okay, now we have our interface.
Let's get started faking it.<br /><br /><h3>Making the Fake</h3><br />With NSubstitute, this is a one-line operation.<br /><pre><code><br />var fakeSample = Substitute.For&lt;ISample&gt;();<br /></code></pre><br />Contrast this with the two-step operation with Moq:<br /><pre><code><br />var mock = new Mock&lt;ISample&gt;();<br />// do mock setup stuff<br />var fakeSample = (ISample)mock.Object;<br /></code></pre><br />I realize this is an aesthetic point, but I find the former much cleaner, especially the use of the generic call to remove the need for an explicit cast. Repeated for every interface in every test you write, this reduction in cruft makes a real difference.<br /><br /><h3>Default Behaviors</h3>Without any further setup, your fake ISample is set to do some basic things:<br /><pre><code><br />[Fact]<br />public void NSub_NoSpecialSetup_ReturnsRecursiveFakes()<br />{<br /> <br /> var fakeSample = Substitute.For&lt;ISample&gt;();<br /><br /> Assert.Equal(DateTime.MinValue, fakeSample.GetCurrentDateTime());<br /> Assert.Equal(0, fakeSample.GetCurrentCount());<br /> Assert.Equal("", fakeSample.SomeString);<br /> Assert.NotNull(fakeSample.SingleChild);<br /> Assert.NotNull(fakeSample.SingleChild.SingleChild.SingleChild); // Recursive behavior.<br /> Assert.NotNull(fakeSample.Names);<br /> Assert.Equal(0, fakeSample.Names.Count());<br /> Assert.NotNull(fakeSample.GetChildren());<br /> Assert.Equal(0, fakeSample.GetChildren().Count());<br /><br /> Assert.NotNull(fakeSample.GetAllVirtual());<br /> Assert.Null(fakeSample.GetNonVirtual());<br />}<br /></code></pre>As you can see, except for the last method, this object never returns null. For simple values, it returns logical defaults: empty strings, DateTime.MinValue, or zero. For IEnumerable it returns empty collections, and for interfaces and classes that have all virtual methods, it returns NSubstitute fakes.
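NSubstitute is a .NET library, but the recursive-fake idea itself is language-agnostic. As an analogy of mine (not how NSubstitute is actually implemented), the same never-null chaining can be mimicked in a few lines of JavaScript with a Proxy:

```javascript
// Analogy only: a "recursive fake" where any property access or call
// returns another fake, so chained member access never yields null.
function makeFake() {
  return new Proxy(function () {}, {
    get: function (target, prop) { return makeFake(); },
    apply: function () { return makeFake(); }
  });
}

var fake = makeFake();
var child = fake.SingleChild.SingleChild.SingleChild; // never null
console.log(typeof child); // "function" (a callable fake)
```

As with NSubstitute's recursive fakes, a test using such an object only needs to script the members it actually cares about; everything else quietly returns another fake instead of null.
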
This allows you to do a lot with virtually no setup, leading to much more expressive tests, in which you only set up stuff you care about in that test.<br /><br />The restriction to all virtual methods is useful. NSubstitute generates proxy subclasses of the objects it tests, and since non-virtual methods cannot be overridden, this prevents you from accidentally calling real code. You can always get around this by explicitly defining a return value for this property, presumably after inspecting its methods to make sure they don't reformat hard drives or launch missiles. <br /><br />Moq will provide defaults for simple values (but not strings), but for classes and interfaces, you need to explicitly enable this behavior. And Moq does not make a distinction between all virtual classes and classes with non-overridable logic, as NSubstitute does.<br /><pre><code><br />[Fact]<br />public void Moq_WithDefaultValues_ReturnsRecursiveMocks()<br />{<br /> var mock = new Mock&lt;ISample&gt;{ DefaultValue = DefaultValue.Mock }; // DefaultValue setting allows recursive mocks.<br /> var fakeSample = (ISample)mock.Object;<br /><br /> Assert.Equal(DateTime.MinValue, fakeSample.GetCurrentDateTime());<br /> Assert.Equal(0, fakeSample.GetCurrentCount());<br /> Assert.Null(fakeSample.SomeString); // differs from NSubstitute.<br /> Assert.NotNull(fakeSample.SingleChild.SingleChild.SingleChild);<br /> Assert.NotNull(fakeSample.Names);<br /> Assert.Equal(0, fakeSample.Names.Count());<br /> Assert.NotNull(fakeSample.GetChildren());<br /> Assert.Equal(0, fakeSample.GetChildren().Count());<br /><br /> Assert.NotNull(fakeSample.GetAllVirtual());<br /> Assert.NotNull(fakeSample.GetNonVirtual()); // differs from NSubstitute.<br />}<br /></code></pre><br /><h3>Scripting Output</h3>Of course, sometimes you want to define the output the class will provide. 
This is done with the "Returns" extension method:<br /><pre><code><br />fakeSample.SomeString.Returns("hello");<br />fakeSample.GetCurrentDateTime().Returns(new DateTime(2016, 2, 29));<br />fakeSample.GetCurrentCount().Returns(-1);<br />fakeSample.Names.Returns(new [] {"Larry", "Curly", "Moe"});<br /></code></pre>The use of an extension method syntax allows you to script the test object directly, rather than having to work with an intermediate "Mock" object:<br /><pre><code><br />var mock = new Mock&lt;ISample&gt;();<br />mock.Setup(s =&gt; s.SomeString).Returns("test");<br />var fakeSample = (ISample)mock.Object;<br /></code></pre><h3>Verifying Calls</h3>You can verify calls on your substitutes with the "Received()" and "ReceivedWithAnyArgs()" methods.<br /><pre><code><br />fakeSample.Received().Log("This happened."); <br />fakeSample.Received().Log(Arg.Any&lt;string&gt;());<br />fakeSample.Received().Log(Arg.Is&lt;string&gt;(s =&gt; s.Contains("happened")));<br /></code></pre>The first option is most expressive, the second is the loosest (loose is good with fakes), and the third, using argument-specific predicates, allows you to verify only the argument you care about. The ability to fine-tune your verification down to a single argument, or a single fact about an argument, supports the notion of having each test cover a single logical concept. It is noteworthy that NSubstitute has no equivalent of Moq's "VerifyAll()", which asserts that every call configured on the mock was actually made; coupling tests to their setup that way creates very brittle, unmaintainable tests. Sometimes omitting a feature is a good thing. This is the prerogative of a newer framework.<br /><br /><h3>Supporting the complex</h3>NSubstitute gets you a lot with very little setup, but sometimes you want to create more complex scenarios, especially if you are reproducing a bug in a test. 
To give a flavor of the more esoteric things NSubstitute can do, let's create a fake that throws an exception the third time it's called:<br /><pre><code><br />[Fact]<br />public void Fake_CalledThreeTimes_Throws()<br />{<br /> var fakeSample = Substitute.For&lt;ISample&gt;();<br /> <br /> int callCount = 0;<br /> fakeSample.When(x =&gt; x.GetChildren()).Do(info =&gt;<br /> {<br /> callCount++;<br /> if (callCount == 3)<br /> {<br /> throw new Exception("Boom!");<br /> }<br /> });<br /><br /> fakeSample.GetChildren();<br /> fakeSample.GetChildren();<br /> Assert.Throws&lt;Exception&gt;(() =&gt; fakeSample.GetChildren());<br />}<br /></code></pre>The <a href="http://nsubstitute.github.io/help.html">docs</a>, which are exceptionally clear and well written, cover these features well. <br /><br /><h3>Supporting Arrange/Act/Assert</h3>By allowing you to work directly with the fake object for both scripting output and verifying input, NSubstitute lends itself to the arrange/act/assert pattern, where you set up a scenario, call the code under test, and then state, in simple, limited terms, the expected state of the system. This is much more expressive than stating expected results at the beginning of the test, and then uttering the magic invocation "Verify!". It's much more of a coherent narrative: this was the scenario, this happened, and this was the result. Simple tests are much more likely to be read, used, and maintained.<br /><br />Because of its simplicity, I recommend using NSubstitute even on projects and teams that use Moq. No need to rewrite old tests, but complex prior choices shouldn't hold back simpler options going forward. <br /><br /><h3>Fake it till you make it</h3>The real power of fake-based testing is that your development starts to follow a mini-kanban approach, where you limit your work in progress to a simple piece of logic, and wrap complex stuff behind interfaces, which you can get to later. 
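To make that workflow concrete, here is a hypothetical sketch (IPriceFeed, OrderTotaler, and the prices are invented for illustration, not taken from this post): the class under test is finished against an interface that has no real implementation yet.

```csharp
using System.Collections.Generic;
using System.Linq;
using NSubstitute;
using Xunit;

// Hypothetical: IPriceFeed has no production implementation yet;
// a substitute stands in so OrderTotaler can be test-driven first.
public interface IPriceFeed
{
    decimal GetPrice(string sku);
}

public class OrderTotaler
{
    private readonly IPriceFeed feed;

    public OrderTotaler(IPriceFeed feed)
    {
        this.feed = feed;
    }

    public decimal Total(IEnumerable<string> skus)
    {
        // Sum the feed's price for each SKU.
        return skus.Sum(sku => feed.GetPrice(sku));
    }
}

public class OrderTotalerTests
{
    [Fact]
    public void Total_SumsPricesFromFeed()
    {
        var feed = Substitute.For<IPriceFeed>();
        feed.GetPrice("A").Returns(10m);
        feed.GetPrice("B").Returns(5m);

        var total = new OrderTotaler(feed).Total(new[] { "A", "B" });

        Assert.Equal(15m, total);
    }
}
```

When a real price feed is eventually needed, it gets its own test-driven implementation, and the substitute simply retires.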
Once you have the current class working, you pull the next interface off the shelf, and build its implementation. This leads to producing code in a steady stream of robust small components. <br /><br /><i>Thanks to Roy Osherove, whose post <a href="http://osherove.com/blog/2012/6/27/the-future-of-isolation-frameworks-and-how-moq-isnt-it-for-n.html">The Future of Isolation Frameworks</a> inspired my original interest in NSubstitute.</i><br /><br /><br /> http://www.dansolovay.com/2015/10/nsubstitute.htmlnoreply@blogger.com (Dan Solovay)1tag:blogger.com,1999:blog-5589343447323430312.post-5645037461661669667Wed, 21 Oct 2015 10:00:00 +00002016-07-27T11:27:31.417-04:00Developer ToolsSitecoreTestingUnit Testing an Event Handler with Sitecore.FakeDBIn my last <a href="http://www.dansolovay.com/2015/10/testable-event-handlers-using-sitecores.html" target="_blank">post</a>, we looked at how to make an event handler unit testable by wrapping an interface around the Sitecore log, so that we could use NSubstitute to verify the calls made to it.<br /><br />This is only part of the story, of course. For our tests to be meaningful, we need to be able to "<a href="http://c2.com/cgi/wiki?ArrangeActAssert" target="_blank">arrange</a>" the input that goes into our code, before we make it "act" and before we "assert" how it should behave. Since we are building an ItemSaved handler, we need a way to create Sitecore items in our unit tests. &nbsp;There is a tool, <a href="https://github.com/sergeyshushlyapin/Sitecore.FakeDb" target="_blank">Sitecore.FakeDB</a>, that makes this straightforward.<br /><a name='more'></a><br /><h3>Sitecore.FakeDB</h3>This library, developed by Sitecore Ukraine tech lead&nbsp;Sergey Shushlyapin, allows you to create Sitecore items within a&nbsp;unit test. These items are created in memory, and so no database connection is required or used. 
&nbsp;Here's a sample from the project's Github <a href="https://github.com/sergeyshushlyapin/Sitecore.FakeDb/wiki" target="_blank">wiki</a>:<br /><pre><code><br />[Fact]<br />public void HowToCreateSimpleItem()<br />{<br /> using (var db = new Db<br /> {<br /> new DbItem("Home") { { "Title", "Welcome!" } }<br /> })<br /> {<br /> Sitecore.Data.Items.Item home = db.GetItem("/sitecore/content/home");<br /> Xunit.Assert.Equal("Welcome!", home["Title"]);<br /> }<br />}<br /></code></pre><br />In this sample, a fake database is created, and an item named "home" is added to it, with a field "Title" holding the value "Welcome!". I find the crisp syntax this library uses very appealing. The use of collection initializers (the bit in the curly braces) and the fact that a template definition is not required mean there is little ceremony required to set this up, which will make your tests more expressive.<br /><br /><h3>Avoiding Brittleness</h3>In my last post, I had a somewhat silly test that would fail if any change was made to the syntax of the log message.<br /><pre><code><br /> [Fact]<br /> public void Handler_WhenCalled_LogsMessage()<br /> {<br /> handler.OnItemSaved(null, null);<br /><br /> log.Received().Info("Item saved and this code was tested!", handler);<br /> }<br /></code></pre>In real life, you don't want to do this. Having tests this tightly specified means you will get a steady stream of "false positives" whenever you edit your code. It's better to make loosely specified claims about what your code does (returns a non-empty message, returns a string containing the name of the item). This will give you better information about what the code is currently doing, and will result in more meaningful failures. 
So let's change that assertion to this looser one:<br /><pre><code><br />[Fact]<br /> public void Handler_WhenCalled_LogsMessage()<br /> {<br /> var args = GetSitecoreEventArgs(new DbItem("test"));<br /><br /> handler.OnItemSaved(null, args);<br /><br /> log.Received().Info(Arg.Is&lt;string&gt;(s =&gt; !string.IsNullOrWhiteSpace(s)), handler);<br /> }<br /></code></pre>NSubstitute's Arg.Is&lt;T&gt;(Func&lt;T, bool&gt; predicate) allows us to define our assertion as a fact that must be true--for example, that the message is not empty--rather than requiring a fixed value for the entire string. We will use this approach going forward to define our tests.<br /><br /><h3>Arranging Sitecore.FakeDB</h3>I will not cover the fairly straightforward steps necessary to add FakeDB to a test project; this is explained very well on their <a href="https://github.com/sergeyshushlyapin/Sitecore.FakeDb/wiki/Installation">wiki</a>. However, I will talk about how I set this up in the current xUnit test project. xUnit does not use SetUp or TearDown methods; instead it instantiates the test class before each test, and if the test class implements IDisposable, it runs Dispose after every test. 
That feature allows us to do away with the <code>using</code> syntax:<br /><pre><code><br />public class ItemSavedLoggerTests: IDisposable<br />{<br /> private TestableItemSavedHandler handler;<br /> private ILogWrapper log;<br /> private Db db;<br /><br /> public ItemSavedLoggerTests()<br /> {<br /> log = Substitute.For&lt;ILogWrapper&gt;();<br /> handler = new TestableItemSavedHandler(log);<br /> db = new Db();<br /> }<br /><br /> public void Dispose()<br /> {<br /> if (db != null)<br /> {<br /> db.Dispose();<br /> }<br /> }<br /></code></pre>And since Sitecore's ItemSaved handler expects EventArgs of type SitecoreEventArgs, with the saved item attached to the first Properties element, we can set up a convenience method to arrange these:<br /><pre><code><br /> private SitecoreEventArgs GetSitecoreEventArgs(DbItem dbItem)<br /> {<br /> this.db.Add(dbItem);<br /> var item = db.GetItem("/sitecore/content/" + dbItem.Name);<br /> var args = new SitecoreEventArgs("name", new object[] { item }, new EventResult());<br /> return args;<br /> }<br /></code></pre>This code accepts a FakeDB item, which can be defined according to the requirements of each test; looks up the corresponding Sitecore item; and inserts it into a freshly minted SitecoreEventArgs object, which can be used to exercise the handler code. This keeps the test itself focused on the contents of the item.<br /><br /><h3>Putting It Together</h3>Here is the entire test suite, and the code it supports. A couple of things to point out. I try to keep the arrange part of the test squarely focused on the requirements of the test at hand. If the test doesn't require a template name or ID, one is not provided. This makes it clear what data the system is interacting with in a given test. So no templates if the test is about item names, and no item names if the test is about templates. 
(I do break this pattern for the name and result fields of the SitecoreEventArgs, because values are required by the constructor.) Finally, as is usually the case, the test suite is a lot longer than the implementation code. This is normal; the test suite is an inventory of all the functionality delivered by the main-line code. Often as the tests get longer, the main code gets shorter. As Robert Martin <a href="http://blog.cleancoder.com/uncle-bob/2014/12/17/TheCyclesOfTDD.html">put it</a>, "As the tests get more specific, the code gets more generic."<br /><br /><script src="https://gist.github.com/dsolovay/dd2cd49c1faa8ec00256.js"></script>http://www.dansolovay.com/2015/10/unit-testing-event-handler-with.htmlnoreply@blogger.com (Dan Solovay)2tag:blogger.com,1999:blog-5589343447323430312.post-5837277480604537622Wed, 14 Oct 2015 10:00:00 +00002015-10-14T07:23:41.185-04:00ConfigSitecoreTestingTestable Event Handlers using Sitecore's Configuration FactoryIt's a not widely known feature of Sitecore configuration that it allows you to specify parameters for event handlers and pipeline processors. This can be used in combination with wrapper interfaces to write unit-testable Sitecore code.<br /><a name='more'></a><br />Let's take as an example an event handler for the item saved event. Say we want to add functionality to write a log message whenever an item is saved. This is pretty straightforward to write, if not to test:<br /><br /><script src="https://gist.github.com/dsolovay/6bf1ec0e9750f91a1237.js"></script><br /><br />This will work, but it is impossible to unit test, because the Sitecore Log class is static and does not implement an interface, so we cannot replace it in our test with a test double to inspect whether it was called. <br /><br />One way around this is to create a light-weight wrapper object that implements an interface. If our code only talks to that interface, then we can use NSubstitute to confirm that the message is logged. 
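Since the wrapper code itself lives in the gists, here is a minimal sketch of the shape such a wrapper takes (member names are inferred from the tests shown earlier in this series; treat the details as illustrative):

```csharp
using Sitecore.Diagnostics;

// Illustrative sketch: the interface our code depends on,
// and a thin wrapper that delegates to the static Sitecore logger.
public interface ILogWrapper
{
    void Info(string message, object owner);
}

public class LogWrapper : ILogWrapper
{
    public void Info(string message, object owner)
    {
        // The only place production code touches the static Log class.
        Log.Info(message, owner);
    }
}
```

Because the wrapper contains no logic of its own, leaving it untested is a small and acceptable gap.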
And we use the Sitecore configuration factory to wire in the log wrapper class.<br /><br />Here's what the wrapper class and interface look like. Our code will only have knowledge of the interface.<br /><br /><script src="https://gist.github.com/dsolovay/f345b311b7e2d1796ffb.js?file=LogWrapper.cs"></script><br /><br />And here's our new event handler:<br /><br /><script src="https://gist.github.com/dsolovay/f345b311b7e2d1796ffb.js?file=TestableHandler.cs"></script><br /><br />We can use NSubstitute to create a fake version of the interface and confirm it was called with the message we expect:<br /><br /><script src="https://gist.github.com/dsolovay/f345b311b7e2d1796ffb.js?file=ItemSavedLoggerTests.cs"></script><br /><br />And finally, we use Sitecore's configuration factory to slot the LogWrapper class into the constructor parameter:<br /><br /><script src="https://gist.github.com/dsolovay/f345b311b7e2d1796ffb.js?file=TestableWrapper.config"></script><br /><br />And now in our log file:<br /><br /><span style="font-family: Courier New, Courier, monospace;">20212 00:04:31 INFO &nbsp;Item saved and fingers crossed.</span><br /><span style="font-family: Courier New, Courier, monospace;">20212 00:04:31 INFO &nbsp;Item saved and this code was tested!</span><br /><div><br /></div>The same approach works for any object created by the configuration factory, such as pipeline processors. &nbsp;Of course, to make handlers and processors fully unit testable, you will need a way to create items in unit tests. 
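Before moving on, to recap the configuration piece: a handler registration that takes a constructor parameter follows roughly this shape (the type and assembly names here are placeholders, not the actual ones from the gist):

```xml
<!-- Hypothetical patch file: the configuration factory instantiates each
     nested <param> and passes it to the handler's constructor. -->
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <events>
      <event name="item:saved">
        <handler type="MySite.Events.TestableItemSavedHandler, MySite" method="OnItemSaved">
          <param type="MySite.Logging.LogWrapper, MySite" />
        </handler>
      </event>
    </events>
  </sitecore>
</configuration>
```

The factory resolves the `param` element to a LogWrapper instance at startup, so production code gets the real logger while tests construct the handler directly with a substitute.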
I will come back to how to do that using <a href="https://github.com/sergeyshushlyapin/Sitecore.FakeDb">Sitecore.FakeDB</a> in a later post.<br /><br />&nbsp;To learn more about the configuration factory, here are a few links:<br /><br /><ul><li><a href="http://www.sitecore.net/learn/blogs/technical-blogs/john-west-sitecore-blog/posts/2011/02/the-sitecore-aspnet-cms-configuration-factory.aspx" target="_blank">The Sitecore Configuration Factory</a>&nbsp;by John West</li><li><a href="http://sitecore-community.github.io/docs/documentation/Sitecore%20Fundamentals/Sitecore%20Configuration%20Factory/" target="_blank">Sitecore Community Docs</a></li><li>Two posts by Mike Reynolds:&nbsp;</li><ul><li><a href="http://sitecorejunkie.com/2014/07/06/leverage-the-sitecore-configuration-factory-inject-dependencies-through-class-constructors/" target="_blank">Leverage the Sitecore Configuration Factory: Inject Dependencies Through Class Constructors</a></li><li><a href="http://sitecorejunkie.com/2014/07/09/leverage-the-sitecore-configuration-factory-inject-value-types-as-dependencies/" target="_blank">Leverage the Sitecore Configuration Factory: Inject Value Types as Dependencies</a></li></ul></ul><br />http://www.dansolovay.com/2015/10/testable-event-handlers-using-sitecores.htmlnoreply@blogger.com (Dan Solovay)0tag:blogger.com,1999:blog-5589343447323430312.post-5953480356555124313Sun, 27 Sep 2015 06:19:00 +00002015-10-10T15:01:27.273-04:00Developer ToolsTDDTestingTest Driven Sitecore LinksI'm honored to be speaking at the <a href="http://sugcon.com/" target="_blank">SUGCON </a>conference in New Orleans next Thursday on Test Driven Development with Sitecore. &nbsp;Here are some links related to the talk. <br /><a name='more'></a><h4></h4><h4>Update:</h4><div>Here's a video recording of the talk. 
Thanks to Chris Auer for recording this.<br /><br /><iframe allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/LqkjpF2Vy3o" width="560"></iframe> </div><h4></h4><h4>Test Driven Development</h4><div><ul><li>I learned Unit Testing from this book by Roy Osherove:&nbsp;https://www.manning.com/books/the-art-of-unit-testing-second-edition (well, the first edition). &nbsp;It stresses three concepts very well: add seams to your code to allow for tests, write your tests in a loosely specified way so they don't all fail at once, and write in-memory unit tests. &nbsp;It offers a classic definition of a unit test, also available <a href="http://artofunittesting.com/definition-of-a-unit-test/" target="_blank">here.</a></li><li>I've just started looking into Kent Beck's original <a href="http://www.amazon.com/Test-Driven-Development-By-Example/dp/0321146530" target="_blank">book</a>, but I'm finding a lot of promising stuff in there. &nbsp;(I like going back to the classics.) &nbsp;A few tidbits: "Write tests until paranoia turns into boredom." &nbsp;He recently wrote on <a href="https://www.quora.com/Why-does-Kent-Beck-refer-to-the-rediscovery-of-test-driven-development" target="_blank">Quora </a>about his original inspiration for TDD.</li><li>I just came upon this <a href="http://stackoverflow.com/a/7876055/402949" target="_blank">discussion </a>of the relative strengths of unit and integration testing on Stack Overflow. &nbsp;Key idea: integration tests show you something is wrong, unit tests show you where it is wrong.</li><li>One of the hardest aspects of getting started is developing the "muscle memory" of writing tests for small bits of functionality at a time, getting a feeling for the rhythm of it, and gaining confidence that this will quickly grow complex features. &nbsp;The Kata <a href="http://osherove.com/tdd-kata-1/" target="_blank">exercise </a>on Roy's blog is a really good way to get started here. 
&nbsp;As a workshop exercise, it works very well with <a href="http://c2.com/cgi/wiki?PairProgrammingPingPongPattern" target="_blank">Pair Programming Ping Pong</a>.</li><li>I talk about Robert Martin as an exponent of the power of this technique. Here is a classic, if strongly phrased, <a href="http://butunclebob.com/ArticleS.UncleBob.TheThreeRulesOfTdd" target="_blank">vision </a>of the Red/Green/Refactor flow, and a passionate defense of TDD, from Uncle Bob Martin, and a more recent <a href="http://blog.cleancoder.com/uncle-bob/2014/12/17/TheCyclesOfTDD.html" target="_blank">article </a>discussing the different levels of granularity.</li><li>So there was this TDD is Dead thing last year. &nbsp;David Heinemeier Hansson's <a href="http://david.heinemeierhansson.com/2014/tdd-is-dead-long-live-testing.html" target="_blank">post</a>, and the video <a href="http://martinfowler.com/articles/is-tdd-dead/" target="_blank">debate </a>with DHH, Kent Beck and Martin Fowler. Great stuff, very engaging.</li></ul><div><h4>Sitecore Integration Testing</h4></div></div><div><br /><ul><li>The idea of testing Sitecore functionality from a browser application was <a href="https://adeneys.wordpress.com/2008/09/19/automated-testing-and-sitecore-part-1/" target="_blank">proposed </a>by Alistair Deneys in 2008. &nbsp;His blog, and talks, continue to be a rich source of information on automated testing of Sitecore.&nbsp;https://adeneys.wordpress.com/category/testing/</li><li>In 2010, Mike Edwards <a href="http://www.glass.lu/Blog/Archive/Unit%20Testing%20Sitecore%20in%20the%20NUnit%20GUI%20-%20Part%201%20-%20Setting%20Up" target="_blank">showed </a>it was possible to run unit tests from a console app if configuration was put in the correct locations. &nbsp;</li><li>In order to get this to work, you need to tell Sitecore to look for configuration files inside the project, since the code that locates App_Config will point it to c:\ if not run inside a web application. 
&nbsp;I wrote a post recommending a technique (setting Sitecore.Context.IsUnitTesting to true) that does not work for Sitecore 8. Fortunately, an approach <a href="http://getfishtank.ca/blog/unit-testing-in-sitecore-with-context-and-app-config" target="_blank">recommended </a>by Dan Cruikshank still works (setting <span style="font-family: Courier New, Courier, monospace;">State.HttpRuntimeAppDomainAppPath</span> to the project root). &nbsp;My <a href="http://www.dansolovay.com/2013/01/sitecore-nunit-testing-simplified.html" target="_blank">post </a>is still useful for showing how to set up the AfterBuild step, which I learned, and tweaked a bit, from Alistair's writings.&nbsp;</li></ul></div><div><h4>Sitecore Fake DB</h4></div><div><ul><li>The best source here is the <a href="https://github.com/sergeyshushlyapin/Sitecore.FakeDb" target="_blank">GitHub </a>repository and wiki. &nbsp;The range of features of this tool is extensive, and goes far beyond what I show in my talk.&nbsp;</li></ul></div><div><h4>Unit Testing with Glass</h4></div><div><ul><li>The idea of testability and how to test Glass code comes up from time to time as a theme in Mike Edwards's Glass blog and tutorials, for example: <a href="http://www.glass.lu/Blog/TestingPipelines">http://www.glass.lu/Blog/TestingPipelines</a>,&nbsp;</li></ul><h4>Tools</h4></div><div><ul><li>The <a href="https://xunit.github.io/docs/why-did-we-build-xunit-1.0.html" target="_blank">thinking </a>behind xUnit. In a word, creating a new object for every test run gives better isolation.</li><li><a href="http://osherove.com/blog/2012/6/27/the-future-of-isolation-frameworks-and-how-moq-isnt-it-for-n.html" target="_blank">Roy Osherove</a> on why you should use <a href="http://nsubstitute.github.io/" target="_blank">NSubstitute</a> or FakeItEasy over Moq.&nbsp;</li><li><a href="http://www.ncrunch.net/" target="_blank">NCrunch</a>, the TDD power tool. &nbsp;Costly, but will get you hooked on test coverage. 
&nbsp;Instant feedback as you write code.</li></ul></div><div><br /></div><blockquote class="twitter-tweet" lang="en"><div dir="ltr" lang="en">just got to teach a bright kid extreme tdd. only change functionality with a red test, only change design when all green. dang that was fun.</div>— Kent Beck (@KentBeck) <a href="https://twitter.com/KentBeck/status/294965673921245184">January 26, 2013</a></blockquote><script async="" charset="utf-8" src="//platform.twitter.com/widgets.js"></script>http://www.dansolovay.com/2015/09/test-driven-sitecore-links.htmlnoreply@blogger.com (Dan Solovay)0tag:blogger.com,1999:blog-5589343447323430312.post-1874273754814845067Sat, 04 Jul 2015 04:46:00 +00002015-07-05T21:29:39.884-04:00AgileBooksScrumA Scrum Reading ListOn a recent project I played a pretty active, if informal, role as an agile coach, in addition to my development duties. Which meant I did a lot of reading on plane flights, watched a lot of Jeff Sutherland <a href="http://bit.ly/jeffonscrum">videos</a>, and got to the point where I could quote the Scrum Guide in chapter and verse. I'm not a certified anything (well at least as far as Agile is concerned), but the material started to make sense to me in a fairly coherent way, so I thought it might be useful to assemble a list of resources that influenced my thinking, and what I learned from them.<br /><a name='more'></a><br /><h2>The Scrum Guide</h2>We've got to start <a href="http://scrumguides.org/scrum-guide.html">here</a>. If you are doing daily stand-ups and have someone on the team with the title of Scrum Master, you owe it to yourself and your team to read the rules of the game you are playing. A few things are not obvious.<br /><ul><li>The Product Owner is god of the backlog. She is not a committee, and she cannot be overruled. This allows the developers to proceed with confidence, and not manage the mixed messages and swirling tides of change that are endemic to an organization of any size. 
Just as a well-defined interface allows you to write clean, maintainable code, an empowered product owner allows a development team to act with boldness and efficiency.</li><li>The Development Team is god of the sprint. The authority of the product owner ends when the sprint begins. ("Only the Development Team can change its Sprint Backlog during a Sprint.") This allows the development team to make decisions about how to operate at peak efficiency in both the short and the long term. Business priorities influence what comes into the sprint, but then development priorities take over for the length of the iteration.</li><li>The Scrum Master is god of the time-box. And this is really important. The key thing about Scrum is that everything is time-boxed: the daily scrum, the planning session, the review, the retro. And the sprint. And the workday. And this is important because the point of Scrum is to gain knowledge, to find out what is hard and what is not. And developers do not like to walk away from problems or discussions. This is where the Scrum Master needs to blow the whistle and say: "Done, cooking challenge over. Time for the judges to taste what you've made."</li><li>The Development Team is a thing. "Scrum recognizes no sub-teams in the Development Team, regardless of particular domains that need to be addressed like testing or business analysis; there are no exceptions to this rule." In other words, if there are certain tasks that only a "front-end" dev can do, or only a QA person is allowed to do, you are not doing Scrum. You may have experts in each area, but in a pinch, anyone can do anything. As Jeff Sutherland pungently put it on Twitter: <br /><blockquote class="twitter-tweet" lang="en"><div dir="ltr" lang="en">If you are the only person on your <a href="https://twitter.com/hashtag/Scrum?src=hash">#Scrum</a> team that can do a particular job, you should be fired. 
<a href="https://twitter.com/hashtag/Crossfunctional?src=hash">#Crossfunctional</a> <a href="https://twitter.com/hashtag/Agile?src=hash">#Agile</a> <a href="https://twitter.com/hashtag/XMfg?src=hash">#XMfg</a> <a href="https://twitter.com/hashtag/ScrumLab?src=hash">#ScrumLab</a></div>— Jeff Sutherland (@jeffsutherland) <a href="https://twitter.com/jeffsutherland/status/586161392396312577">April 9, 2015</a></blockquote><script async="" charset="utf-8" src="//platform.twitter.com/widgets.js"></script><br /></li><li>The Sprint Goal is a thing. This is what the team commits to doing, and this is what the team reports on each day. And this is not the same as the list of backlog items that the team plans to get done in the sprint. The fact that the team commits to a goal and not a list of items is <a href="https://www.scrum.org/About/All-Articles/articleType/ArticleView/articleId/95/Commitment-vs-Forecast-A-subtle-but-important-change-to-Scrum">intentional</a>, since the team may learn that the specific backlog items are not feasible in the sprint, setting up a constructive negotiation with the PO over how to satisfy the goal. Again, this is important. Scrum is predicated on the notion that we don't know how hard things will turn out to be: we do sprints to learn that. And for the data coming out of sprints to be of maximum usefulness, it's important to be trying to do specific, focused things. "We found out Search was a lot harder than we thought" is useful data. "We found out tickets 3144, 3299, and 4417 are a lot harder than we thought" is not. Goals provide focus and clarity. As Roman Pichler <a href="http://www.romanpichler.com/blog/effective-sprint-goals/">writes</a>:<br /><blockquote>When selecting your sprint goal, remember that trying out new things requires failure. Failure creates the empirical data required to make informed assumptions about what should and can be done next. 
Failing early helps you succeed in the long term.</blockquote></li><li>It's a Sprint Review, not just a demo. The point is to get the developers and the stakeholders interacting and collaborating. The scrum guide section on the Sprint Review contains a pretty detailed recipe for making this happen.</li><li>Transparency is the whole point of Scrum, and transparency is hard. <br /><blockquote>The Scrum Master’s job is to work with the Scrum Team and the organization to increase the transparency of the artifacts. This work usually involves learning, convincing, and change. Transparency doesn’t occur overnight, but is a path.</blockquote></li></ul><h2>Software in 30 Days</h2>Written by the two creators of Scrum, this <a href="http://www.amazon.com/Software-30-Days-Customers-Competitors/dp/1118206665">book</a> can be considered authoritative. I found the first few chapters extremely useful, especially the first two chapters, which lay out a case for empiricism ("let's find out how doable this is") over predictive ("we'll get it done by date X") approaches. If anyone on your team uses the phrase "failed a sprint," have them read chapter two: <br /><blockquote>Empiricism means that we will not be sure of how much work we will get done until it is done.</blockquote>The development team's job is, in addition to writing software, to provide data to the organization. The organization can then decide whether the product is the right thing for the team to be doing. The book is written in a somewhat dry, case-study focused manner, but is dense with information on how to build a scrum capacity in an organization.<br /><h2>Scrum: The Art of Doing Twice the Work in Half the Time</h2>This is a much more personal <a href="http://www.amazon.com/gp/product/038534645X/">book</a>, written by Jeff Sutherland in partnership with his son JJ Sutherland, who used scrum to organize the news-gathering operations of the NPR Cairo bureau during the Tahrir Square uprising. 
The book focuses on the psychological and philosophical aspects of Scrum, and is strongly autobiographical. It begins with the story of how Jeff Sutherland turned around a platoon at West Point that had more than a century of tradition as the worst of the school, through focused, actionable feedback. This lays a foundation for the belief that any team can be turned around, if given the right conditions for success. It's a great read, and full of actionable insights:<br /><ul><li>The Fundamental Attribution Error: You view your own actions as influenced by circumstances, but those of other people as revealing of their fundamental character. You are right about yourself, and wrong about other people. Change the environment, and you will change behaviors.<br /></li><li>Cross functionality. Really good teams don't have single-person skill sets. They fluidly adapt to circumstances. He uses the example of special forces teams, where each role is cross-trained so the team is not vulnerable to losing any single member.<blockquote>Each team has all the capabilities to carry out the mission from start to finish. And they cross train each skill set. They want to make sure, for example, that if both of the medics get killed, the communications specialist can patch up the weapons specialist.</blockquote>Your scrum team will hopefully not be under direct fire, but you can be assured that your QA person will have more than he or she can do at the end of the sprint. If you can all pitch in, you will get vastly more done.</li><li>The ordered backlog as the key to the value of Scrum. Get people working on the high value stuff, and declare victory and move on when the high value/low effort stuff is done. Live by the 80/20 rule.</li><li>Stop the waste (Chapter 5, "Waste is a Crime"). Don't multitask, and don't start things you won't finish. The cost of context switching is deadly. 
After reading this, I stopped thinking it was a good thing to get started on a story I couldn't finish in a sprint. That's just adding to work in progress, and work in progress is the killer of teams. I also started pushing back on ceremonies like having the entire team on hand on the evening of a deployment. That's just stealing from the next day's productivity. These kinds of things are (politically) easy to accept, and are destructive of team effectiveness. Getting religion on waste is a powerful corrective.</li><li>There are a number of nice practical tips on running Scrum, such as using dog sizes (Is this feature a Chihuahua, a Labrador, or a Great Dane?) to introduce story points, and putting metrics on how valuable a story point is, which creates a healthy pressure to find simple, high-value stories. &nbsp;An effective Product Owner will maximize the dollar value of a story point.</li></ul><div></div><h2>Extreme Programming Explained</h2>Kent Beck's <a href="http://www.amazon.com/Extreme-Programming-Explained-Embrace-Edition/dp/0321278658">book</a> is from the early days of Agile, but it is very helpful in anchoring the key concepts. I'll call out a few:<br /><ul><li>He tells the story of being brought on to a project as a Smalltalk consultant, but being mystified at why the code was so incoherent. Then he realized that the four senior devs on the project had all staked out corner offices, and never interacted directly. He fixed this with the first bullpen, and mandatory pair programming. After reading this, I started making it a habit to move my seating every time I started on a project. (As a remote worker now, I make liberal use of Skype, screen shares, and humor to bridge this gap. The main point is that human connections lead to coherent code.)</li><li>He talks in terms of listening, to the customer, to other programmers, and to the system. 
<blockquote>More coaching phrases are "Don't ask me, ask the system" and "Have you written a test case for that yet?" Concrete feedback about the current state of the system is absolutely priceless.</blockquote></li><li>Assume simplicity. "We should assume that the simplest design we can imagine possibly working will work." This became the Agile Manifesto principle: "Simplicity--the art of maximizing the amount of work not done--is essential."</li><li>Most importantly, he talks about fostering a culture of mutual support, a "mentality of sufficiency," drawing on the work of anthropologist Colin Turnbull, whose books "The Forest People" and "The Mountain People" contrast worlds of sufficiency, where everyone shares, and scarcity, where everyone fights. This is an important thing to get right. This is the difference between developers who share or hoard knowledge, who write clean, maintainable code, or who seek security in silos and obscurity.</li></ul><h2>Other Resources</h2><ul><li>Jeff Sutherland did a series of videos discussing elements of scrum, which I have put into a <a href="http://bit.ly/jeffonscrum">playlist</a>. I particularly recommend "<a href="https://www.youtube.com/watch?v=1yZ3J8C4MK0&amp;index=1&amp;list=PLCPNf7xk6TjXaDciW0gKy7c9m3LQT0Cad">The Nokia Test</a>," which gives a list of eight actionable levers for improving the efficiency of a team (ordering the backlog, avoiding disrupting sprints, testing during the sprint).</li><li>I haven't read Ken Schwaber's <a href="http://www.amazon.com/Agile-Project-Management-Developer-Practices/dp/073561993X/ref=sr_1_1?s=books&amp;ie=UTF8&amp;qid=1435982913&amp;sr=1-1&amp;pebp=1435982919897&amp;perid=1BYMSFVBYM8HJTZ03AEM">Agile Project Management with Scrum</a>, but have consulted parts of it, and it looks useful. 
He describes the role of the Scrum Master as a sheepdog, "keeping the flock together and the wolves away," and recounts doing just that by informing a stakeholder that he can ask to have the sprint cancelled if he wants to get his pet feature in. Making the cost of disruption plain to the organization, rather than something that is only felt by developers, is a key output of Scrum.</li><li><a href="http://www.amazon.com/Agile-Retrospectives-Making-Teams-Great/dp/0977616649">Agile Retrospectives: Making Good Teams Great</a> is a very useful source book for running retros. A really valuable point is the importance of getting all members of the team on the same page by gathering and visualizing everyone's input, so that the whole team is working from the same set of facts and experiences.</li><li>David Starr has an excellent Pluralsight <a href="http://www.pluralsight.com/courses/scrum-fundamentals">course</a> on Scrum. Among other things, it really makes the point well that the Scrum backlog is not a commitment, because you cannot commit to scope. There are no crystal balls, just experiments and data.</li></ul>http://www.dansolovay.com/2015/07/a-scrum-reading-list.html (Dan Solovay)<br /><br /><h2>MongoDB for Sitecore</h2>Posted Sun, 08 Feb 2015 07:18:00 +0000 by Dan Solovay. Labels: MongoDB, Sitecore.<br /><br />Starting with Sitecore 7.5, MongoDB has become an integral part of the Sitecore ecosystem.
This post will walk developers through installing MongoDB, cover some of the basics of MongoDB CRUD operations, and then look at how to access Sitecore data with MongoDB.<br /><a name='more'></a><br /><h2>Installation</h2>If you fire up a fresh install of Sitecore 8, and you do not have MongoDB installed, you will see the following errors in your log file:<br /><pre><code>5004 14:07:43 ERROR MongoDbDictionary.Store() has failed.<br />Exception: Sitecore.Analytics.DataAccess.DatabaseNotAvailableException<br />Message: Database not available<br />Source: Sitecore.Analytics.MongoDB<br /> at Sitecore.Analytics.Data.DataAccess.MongoDb.MongoDbCollection.Execute(Action action, ExceptionBehavior exceptionBehavior)<br /> at Sitecore.Analytics.Data.DataAccess.MongoDb.MongoDbCollection.Save(Object value)<br /> at Sitecore.Analytics.Data.DataAccess.MongoDb.MongoDbDictionary.Store(Object value)<br /></code></pre><br />Fortunately, this is a pretty easy error to resolve. To make it go away, do the following:<br /><ol><li>Go to http://mongodb.org/downloads, and download the latest MongoDB zip file for Windows. Unzip the file to a folder of your choosing. In this walkthrough, I will use <code>C:\mongo\</code>. You should see a <code>bin</code> directory, and a few text documents (a readme, a license, etc.).</li><li>Create a folder with the path <code>C:\data\db</code>. This is the built-in default location that MongoDB uses to store database files, but this can be changed if desired.</li><li>Open a command prompt, and run the following: <code>C:\mongo\bin\mongod.exe</code>. Recycle your Sitecore 8 app pool.
The error should be gone, and the mongod console window will show some requests from Sitecore:<br /><pre><code>2015-02-07T14:45:45.402-0500 [conn2] CMD: dropIndexes sc8hacking_analytics.AutomationStates<br />2015-02-07T14:45:45.636-0500 [conn2] CMD: dropIndexes sc8hacking_analytics.Interactions</code></pre> Congratulations, you are now running MongoDB! </li></ol><br /><h2>Running MongoDB as a Windows Service</h2>You probably don't want to open a command prompt every time you work with Sitecore, so let's set this up as a Windows service. This is just about as easy as running from the command prompt. The only extra wrinkle is that you need to specify a log file location:<br /><ol><li>Close the existing command window if you still have it open. If you enter Ctrl-C, MongoDB will shut down gracefully:<br /><pre><code>2015-02-07T14:51:23.594-0500 [consoleTerminate] got CTRL_C_EVENT, will terminate after current cmd ends<br />2015-02-07T14:51:23.597-0500 [consoleTerminate] now exiting<br /> [...]<br />2015-02-07T14:51:23.849-0500 [consoleTerminate] dbexit: really exiting now<br /></code></pre></li><li>Choose a location and name for your MongoDB log file. Let's put ours in c:\data\mongolog.txt.</li><li>Open a command prompt, and run this command: <code>c:\mongo\bin\mongod --logpath c:\data\mongolog.txt --install</code></li><li>Go to your Local Services, and you should find a new service called MongoDB. It will be set to Automatic, so it will start on the next reboot. Click Start to get it running.
You can check the log file to confirm it is running.</li></ol><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-IKEz5nlCvqo/VNaCj5mosRI/AAAAAAAADZs/llJBKhb7f2I/s1600/MongoDB%2B%2BService.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://1.bp.blogspot.com/-IKEz5nlCvqo/VNaCj5mosRI/AAAAAAAADZs/llJBKhb7f2I/s1600/MongoDB%2B%2BService.png" width="284" /></a></div><br />Note that you can specify a number of options when creating the service, such as --dbpath to specify where you want databases to be stored, or --port to change from the MongoDB default of 27017. You can also define a configuration file, so that you can change options without having to recreate the service, using the --config option. The MongoDB configuration format is described <a href="http://docs.mongodb.org/manual/reference/configuration-options/">here</a>.<br /><br /><h2>Exploring MongoDB</h2>Now that we have MongoDB running, let's take a look at how we can write and read data from it. I recommend getting started with the shell that is provided by the MongoDB distribution. When you are confident with the mechanics of direct interaction with MongoDB, you can select a tool you prefer for day-to-day work. (<a href="http://robomongo.org/">Robomongo</a> is my personal preference, because it respects the JavaScript focus of the shell, allowing you to define variables and functions.)<br /><h3>The MongoDB Shell</h3><br />You can interact with the mongod process by launching the mongo shell with <code>c:\mongo\bin\mongo</code> (note "mongo", not "mongod"). By default, this will connect to localhost:27017, your local MongoDB instance. The MongoDB shell is a JavaScript interpreter, so you can do things like create functions and define variables.
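For example, you can build up query documents with ordinary functions. This is a minimal sketch of the kind of thing you might paste into the shell; the helper names are my own invention, not part of MongoDB:

```javascript
// Hypothetical convenience helpers, usable in the mongo shell because it is
// an ordinary JavaScript interpreter.
function byName(name) {
  // builds a query document like { name: "Dan" }
  return { name: name };
}

function byNamePrefix(prefix) {
  // builds a regular-expression query like { name: /^D/ }
  return { name: new RegExp("^" + prefix) };
}

// In the shell you would use these as, e.g.:
//   db.mynewcollection.find(byName("Dan"))
//   db.mynewcollection.find(byNamePrefix("D"))
console.log(JSON.stringify(byName("Dan")));
// → {"name":"Dan"}
```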
You will find it is handy to be able to store interim results in variables, and to create convenience methods. It also has good online documentation, both at a root level<br /><br /><blockquote class="tr_bq"><code>&gt;help</code></blockquote><br />and at an object level<br /><blockquote class="tr_bq"><code>&gt;db.help()</code></blockquote><blockquote class="tr_bq"><code>&gt;db.collection.help()</code></blockquote><br /><h3>Creating Data</h3><br />Let's take a look at how we can store data in MongoDB. The first thing we need to do is create a database. As with much in MongoDB, this is a purely declarative operation. Simply refer to it and start using it, and MongoDB takes care of the rest:<br /><br /><blockquote class="tr_bq"><code>&gt;use mynewdb</code></blockquote><br />If you were to look at your c:\data\db folder, you would not notice anything new, but if you now do this:<br /><blockquote class="tr_bq"><code>&gt; db.mynewcollection.insert({name: "Dan"})</code></blockquote>you would see two new files, "mynewdb.0" and "mynewdb.ns", which store the data and the namespace metadata.<br /><br />It's interesting to compare this to the syntax to create a database in SQL Server, which involves specifying a location for the data and log files when you issue the command. With MongoDB, this configuration is owned by the MongoDB instance, so that from the application's point of view, MongoDB database creation is frictionless. (I was struck by this when I enabled the MongoDB session state provider with Sitecore 7.5. All I had to do was make the configuration change and add a connection string, and the Sitecore application created the database and schema on its own.) In a similar way, we created a collection (roughly equivalent to a "table" in SQL) simply by referring to it.<br /><br />Let's take another look at the insert command. You'll notice that the thing being inserted was a JSON document.
MongoDB stores JSON documents (hence the name "document database"), and does not impose any rules on how they are structured. You can view it as a place to park JSON, but one with really powerful querying and indexing capabilities.<br /><br />The one requirement that MongoDB does impose on documents is that each one must have a field called "_id", with a unique value per document. All MongoDB drivers, including the shell, will add this value for you if you do not supply it. So if I were to query for this document, I would find an _id created for me:<br /><code>&gt; db.mynewcollection.find()<br /><br />{ "_id" : ObjectId("54d6d4c5b054372742f03578"), "name" : "Dan" }</code><br /><br />There is no requirement to use the ObjectId data type for IDs; you could use a Guid, a product code, or even a subdocument. ObjectIds are guaranteed to be unique per collection, and have the neat property of containing an embedded, accessible creation timestamp, which you can access with getTimestamp():<br /><br /><blockquote class="tr_bq">&gt; <code>db.mynewcollection.findOne()._id.getTimestamp()</code></blockquote>Let's update this document by adding a few interests:<br /><code><br />&gt;db.mynewcollection.update({name: "Dan"}, {$push: {interests: "Sitecore"} })<br />WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })<br />&gt;db.mynewcollection.update({name: "Dan"}, {$push: {interests: "MongoDB"} })<br />WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })<br />&gt;db.mynewcollection.findOne()</code><br /><br />And some address information:<br /><pre><code>&gt; db.mynewcollection.update({name: "Dan"}, {$set: {address: {work: {city: "Somerville", state: "MA"}}}})<br />WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })<br /></code></pre><pre><code><span style="font-family: &quot;times
new roman&quot;; white-space: normal;">So now we have a rich document, with an array and a nested sub-document:</span></code></pre><pre><code>&gt; db.mynewcollection.findOne()<br />{<br /> "_id" : ObjectId("54d6d4c5b054372742f03578"),<br /> "name" : "Dan",<br /> "interests" : [<br /> "Sitecore",<br /> "MongoDB"<br /> ],<br /> "address" : {<br /> "work" : {<br /> "city" : "Somerville",<br /> "state" : "MA"<br /> }<br /> }<br />}<br /></code></pre>It's in querying for documents like this that MongoDB really shines. We could find documents for people named "Dan":<br /><blockquote class="tr_bq"><code>&gt;db.mynewcollection.find({name: "Dan" })</code></blockquote>or use a regular expression to find names beginning with "D":<br /><blockquote class="tr_bq"><code>&gt;db.mynewcollection.find({name: /^D/ })</code></blockquote>or find people who have an interest in Sitecore:<br /><blockquote class="tr_bq"><code>&gt;db.mynewcollection.find({interests: "Sitecore"})</code></blockquote>or who work in Massachusetts:<br /><blockquote class="tr_bq"><code>&gt;db.mynewcollection.find({"address.work.state": "MA"})</code></blockquote>or get a count of people who work in MA and have an interest in MongoDB:<br /><blockquote class="tr_bq"><code>&gt;db.mynewcollection.count({interests: "MongoDB", "address.work.state": "MA"})</code></blockquote>Here we see a number of MongoDB querying features:<br /><br /><ul><li>Arrays are treated like tags, so if you specify an array element, then all documents that have that array element will match.</li><li>You can use "dot notation" to specify subdocuments.</li><li>You can use regular expressions, which are especially efficient if an index is in place on the field and the expression is specified to match strings beginning with a pattern ("prefix query").</li></ul><div>For a full comparison of MongoDB and SQL querying syntax, I recommend the MongoDB comparison chart, available at <a
href="http://bit.ly/mongo2sql">bit.ly/mongo2sql</a><br /><br /></div><h2>Looking at Sitecore data</h2>OK, let's now use the mongo shell to explore Sitecore analytics data. First, we can use "show dbs" to find the analytics database, and "show collections" to see what document collections are in this database:<br /><br /><pre><code>&gt; show dbs<br />admin (empty)<br />local 0.078GB<br />mynewdb 0.078GB<br />sc8hacking_analytics 0.078GB<br />sc8hacking_tracking_contact 0.078GB<br />sc8hacking_tracking_live 0.078GB<br />&gt; use sc8hacking_analytics<br />switched to db sc8hacking_analytics<br />&gt; show collections<br />ClassificationsMap<br />Contacts<br />Devices<br />Interactions<br />OperationStatuses<br /></code></pre><br /><br />Sitecore uses the "Interactions" collection to store information about a user visit. Let's take a look at one of these documents:<br /><br /><pre><code><br />&gt; db.Interactions.findOne()<br />{<br /> "_id" : BinData(3,"iUI0c5nx30mRxsHzpiUgqA=="),<br /> "_t" : "VisitData",<br /> "ContactId" : BinData(3,"a1k5n+FLBkG6myUtAeBn4A=="),<br /> "StartDateTime" : ISODate("2015-02-07T19:45:40.258Z"),<br /> "EndDateTime" : ISODate("2015-02-07T19:45:40.258Z"),<br /> "SaveDateTime" : ISODate("2015-02-07T20:13:45.312Z"),<br /> "ChannelId" : BinData(3,"8uQYtBMQQkugU7bU3KmIvw=="),<br /> "Browser" : {<br /> "BrowserVersion" : "42.0",<br /> "BrowserMajorName" : "Chrome",<br /> "BrowserMinorName" : "42.0"<br /> },<br /> "Screen" : {<br /> "ScreenHeight" : 480,<br /> "ScreenWidth" : 640<br /> },<br /> "ContactVisitIndex" : 2,<br /> "Ip" : BinData(0,"fwAAAQ=="),<br /> "Language" : "en",<br /> "LocationId" : BinData(3,"1B2M2Y8AsgTpgAmY7PhCfg=="),<br /> "MvTest" : {<br /> "ValueAtExposure" : 0<br /> },<br /> "OperatingSystem" : {<br /> "_id" : "WinNT"<br /> },<br /> "Pages" : [<br /> {<br /> "DateTime" : ISODate("2015-02-07T19:45:40.259Z"),<br /> "Duration" : 0,<br /> "Item" : {<br /> "_id" : BinData(3,"n1UNEaXe6kKcHIpd9+cO+Q=="),<br
/> "Language" : "en",<br /> "Version" : 2<br /> },<br /> "PageEvents" : [<br /> {<br /> "Name" : "Error",<br /> "ItemId" : BinData(3,"n1UNEaXe6kKcHIpd9+<br />cO+Q=="),<br /> "Timestamp" : NumberLong(0),<br /> "Data" : "Sitecore.Analytics.DataAccess.<br />DatabaseNotAvailableException: Database not available\r\n at Sitecore.Analytic<br />s.Data.DataAccess.MongoDb.MongoDbCollection.Execute(Action action, ExceptionBeha<br />vior exceptionBehavior)\r\n at Sitecore.Analytics.Data.DataAccess.MongoDb.Mong<br />oDbCollection.Save(Object value)\r\n at Sitecore.Analytics.Data.DataAccess.Mon<br />goDb.MongoDbDictionary.Store(Object value)",<br /> "DataKey" : "Sitecore.Analytics.DataAcce<br />ss.DatabaseNotAvailableException: Database not available",<br /> "Text" : "Sitecore.Analytics.DataAccess.<br />DatabaseNotAvailableException: Database not available",<br /> "PageEventDefinitionId" : BinData(3,"SiW<br />/yMycFk6QCYK3zTPkvg=="),<br /> "DateTime" : ISODate("2015-02-07T19:45:4<br />0.397Z"),<br /> "Value" : 0<br /> }<br /> ],<br /> "SitecoreDevice" : {<br /> "_id" : BinData(3,"339d/sCJmU2ao7X70AnJ8w=="),<br /> "Name" : "Default"<br /> },<br /> "MvTest" : {<br /> "ValueAtExposure" : 0<br /> },<br /> "Url" : {<br /> "Path" : "/"<br /> },<br /> "VisitPageIndex" : 1<br /> }<br /> ],<br /> "SiteName" : "website",<br /> "TrafficType" : 20,<br /> "UserAgent" : "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (K<br />HTML, like Gecko) Chrome/42.0.2292.0 Safari/537.36",<br /> "Value" : 0,<br /> "VisitPageCount" : 1<br />}<br />&gt; <br /></code></pre><br />A few things are worth noting here:<br /><br /><ul><li>Note the number of BinData values. This is how MongoDB represents GUIDs. &nbsp;I will come back to how you can translate these into .NET Guids in the next section.</li><li>Sitecore uses a number of subdocuments. 
For example, the Browser is stored as a subdocument, so you can query for sessions using Chrome with "Browser.BrowserMajorName", or a specific version with "Browser.BrowserVersion".</li><li>Pages are stored in a nested array, and page events are stored as a nested array within Pages. This nesting of object hierarchies is typical of how MongoDB stores data, and avoids the performance hit of the seeks required when joining tables. In DMS, this data would be stored in five tables: Visitors, Visits, Pages, PageEvents, and PageEventDefinitions.</li><li>Session data is written to MongoDB when the SessionEnd event is fired. We can see amusing proof of this in the fact that the page event captured is the exception I referred to at the beginning of this post. Evidently, the error that MongoDB was unavailable was held in memory until MongoDB became available. You can work around this while doing development by recycling your app pool, or wiring up a button to call Session.Abandon() to end the ASP.NET session.</li><li>The xDB Overview and Architecture <a href="http://sdn.sitecore.net/upload/sitecore7/75/xdb_overview_and_architecture_sc75-usletter.pdf">document</a> states: "In the xDB, currently the only type of interaction is an online visit but in the future this will be expanded to include offline interactions." We can see this provided for in the field "_t", set to "VisitData". This refers to the Sitecore analytics class (Sitecore.Analytics.Model.VisitData) that has been serialized into this document. By specifying the type, Sitecore provides for storing other sorts of interactions in this collection. This will allow things like aggregating engagement value (in the "Value" field) across multiple interaction types, seeing for example whether web, in-store, or telephone contacts are generating the most engagement value.
Here we see how MongoDB's lack of a defined schema lets Sitecore store a variety of classes that share the same base class, and query at either the base or subclass level. This sort of open-ended design, allowing for future evolution of the product without requiring schema changes, shows why MongoDB was a compelling choice as a foundational technology for the xDB.</li></ul><br /><h3>Guids and MongoDB</h3>As I mentioned above, Guids are a bit of a challenge to work with in the MongoDB shell, since the binary object is represented in a manner completely different from .NET. Fortunately, MongoDB provides for this in a JavaScript file, <a href="https://github.com/mongodb/mongo-csharp-driver/blob/master/uuidhelpers.js">uuidhelpers.js</a>, available in the C# driver GitHub repository. If you download this file and reference it when you start the mongo shell (<code>&gt;c:\mongo\bin\mongo c:\uuidhelpers.js --shell</code>), you have two methods available: .toCSUUID(), which converts a BinData object into a .NET Guid, and CSUUID("some guid"), which creates a BinData object from a string representing a C# Guid. So with this file loaded, we can now see what Sitecore item the user viewed on the first page of the session, represented by the MongoDB shell as BinData(3,"n1UNEaXe6kKcHIpd9+cO+Q==").<br /><br /><code>&gt; db.Interactions.findOne().Pages[0].Item._id.toCSUUID()<br />CSUUID("110d559f-dea5-42ea-9c1c-8a5df7e70ef9")<br /></code><br /><br />Because there's no place like home. :)<br /><br />http://www.dansolovay.com/2015/02/mongodb-for-sitecore.html (Dan Solovay)
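For the curious, the byte shuffling that toCSUUID() performs can be sketched outside the shell as well. This is a minimal Node.js illustration of my own, not part of the MongoDB tooling (uuidhelpers.js is the supported route): the "CSharp legacy" binary format (BinData subtype 3) stores the bytes as .NET's Guid.ToByteArray() does, with the first three Guid fields little-endian and the remaining eight bytes as-is.

```javascript
// Sketch: decode a "CSharp legacy" UUID (BinData subtype 3) into a .NET
// Guid string, mirroring what uuidhelpers.js's toCSUUID() does.
function binDataToCSUUID(base64) {
  const b = Buffer.from(base64, "base64");
  // helper: hex-encode the bytes at the given indexes, in order
  const hex = (...idx) =>
    idx.map((i) => b[i].toString(16).padStart(2, "0")).join("");
  return [
    hex(3, 2, 1, 0),             // Data1: 4 bytes, stored little-endian
    hex(5, 4),                   // Data2: 2 bytes, little-endian
    hex(7, 6),                   // Data3: 2 bytes, little-endian
    hex(8, 9),                   // first 2 bytes of Data4, as stored
    hex(10, 11, 12, 13, 14, 15), // remaining 6 bytes of Data4, as stored
  ].join("-");
}

// The item ID from the first page of the sample interaction:
console.log(binDataToCSUUID("n1UNEaXe6kKcHIpd9+cO+Q=="));
// → 110d559f-dea5-42ea-9c1c-8a5df7e70ef9
```

which matches the CSUUID value the shell reports for the Sitecore home item.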