tl;dr: I'm back on ReSharper 8.2.3 and am a bit worried about the state of the 9.x series of ReSharper. Ordinarily, JetBrains has eliminated performance, stability and functionality issues by the first minor update (9.1), to say nothing of the second (9.2).

Test Runner

In the previous article, my main gripe was with the unit-test runner, which was unusable due to flakiness in the UI, execution and change-detection. With the release of 9.2, the UI and change-detection problems have been fixed, but the runner is still quite flaky at executing tests.

What follows is the text of the report that I sent to JetBrains when they asked me why I uninstalled R# 9.2.

As with 9.0 and 9.1, I am unable to productively use the 9.2 Test Runner with many of my NUnit tests. These tests are not straight-up, standard tests, but R# 8.2.3 handled them without any issues whatsoever.

What's special about my tests?

There are quite a few base classes providing base functionality. The top layers provide scenario-specific input via a generic type parameter.
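For context, the layered fixtures look roughly like the following sketch. All of the type names below are invented for illustration; the real suite is layered more deeply, but the generic-parameter pattern is the same.

```csharp
using NUnit.Framework;

// Invented names for illustration only.
public interface IDataProvider { }
public class ProviderB : IDataProvider { }

// Base layers provide shared setup and helper functionality.
public abstract class ProtocolTestsBase<TProvider>
  where TProvider : IDataProvider, new()
{
  protected TProvider Provider { get; private set; }

  [SetUp]
  public void CreateProvider() => Provider = new TProvider();
}

// The top layer supplies scenario-specific input via the type parameter.
[TestFixture]
public class ProviderBProtocolATests : ProtocolTestsBase<ProviderB>
{
  [Test]
  public void ProviderIsCreated() => Assert.That(Provider, Is.Not.Null);
}
```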

The test runner in 9.2 is not happy with this at all. The test explorer shows all of the tests correctly, with the test counts correct. If I select a node for all tests for ProviderB and ProtocolA (696 tests in 36 fixtures), R# loads 36 non-expandable nodes into the runner and, after a bit of a wait, marks them all as inconclusive. Running an individual test-fixture node does not magically cause the tests to load or appear and also shows inconclusive (after a while; it seems the fixture setup executes as expected but the results are not displayed).

If I select a specific, concrete fixture and add or run those tests, R# loads and executes the runner correctly. If I select multiple test fixtures in the explorer and add them, they also show up as expandable nodes, with the correct test counts, and can be executed individually (per fixture). However, if I elect to run them all by running the parent node, R# once again marks everything as inconclusive.

As I mentioned, 8.2.3 handles this correctly and I feel R# 9.2 isn't far off -- the unit-test explorer does, after all, show the correct tests and counts. In 9.2, it's not only inconvenient, but I'm worried that my tests are not being executed with the expected configuration.

Also, I really missed the StyleCop plugin for 9.2. There's a beta version for 9.1 that caused noticeable lag, so I'm still waiting for a more unobtrusive version for 9.2 (or any version at all).

While it's possible that there's something I'm doing wrong, or there's something in my installation that's strange, I don't think that's the problem. As I mentioned, test-running for the exact same solution with 8.2.3 is error-free and a pleasure to use. In 9.2, the test explorer shows all of the tests correctly, so R# is clearly able to interpret the hierarchy and attributes (noted above) as I've intended them to be interpreted. This feels very much like a bug or a regression for which JetBrains doesn't have test coverage. I will try to work with them to help them get coverage for this case.

Real-Time StyleCop rules

Additionally, the StyleCop plugin is absolutely essential for my workflow and there still isn't an official release for any of the 9.x versions. ReSharper 9.2 isn't supported at all yet, even in prerelease form. The official Codeplex page shows the latest official version as 4.7, released in January of 2012 for ReSharper 8.2 and Visual Studio 2013. One would imagine that VS2015 support is in the works, but it's hard to say. There is a page for StyleCop in the ReSharper extensions gallery but that shows a beta4, released in April of 2015, that only works with ReSharper 9.1.x, not 9.2. I tested it with 9.1.x, but it noticeably slowed down the UI. While typing was mostly unaffected, scrolling and switching file-tabs was very laggy. Since StyleCop is essential for so many developers, it's hard to see why the plugin gets so little love from either JetBrains or Microsoft.

GoTo Word

The "Go To Word" plugin is not essential but it is an extremely welcome addition, especially with so much more client-side work depending on text-based bindings that aren't always detected by ReSharper. In those cases, you can find -- for example -- all the references of a Knockout template by searching just as you would for a type or member. Additionally, you benefit from the speed of the ReSharper indexing engine and search UI instead of using the comparatively slow and ugly "Find in Files" support in Visual Studio. Alternatives suggested in the comments to the linked issue above all depend on building yet another index of data (e.g. Sando Code Search Tool). JetBrains has pushed off integrating go-to-word until version 10. Again, not a deal-breaker, but a shame nonetheless, as I'll have to do without it in 9.x until version 10 is released.

With so much more client-side development going on in Visual Studio and with dynamic languages and data-binding languages that use name-matching for data-binding, GoToWord is more and more essential. Sure, ReSharper can continue to integrate native support for finding such references, but until that happens, we're stuck with the inferior Find-in-Files dialog or other extensions that increase the memory pressure for larger solutions.

Encodo first published a Git Handbook for employees in September 2011 and last updated it in July of 2012. Since then, we've continued to use Git, refining our practices and tools. Although a lot of the content is still relevant, some parts are quite outdated and the overall organization suffered through several subsequent, unpublished updates.

What did we change from version 2.0?

We removed all references to the Encodo Git Shell. This shell was a custom environment based on Cygwin. It configured the SSH agent, set up environment variables and so on. Since tools for Windows have improved considerably, we no longer need this custom tool. Instead, we've moved to PowerShell and PoshGit to handle all of our Git command-line needs.

We removed all references to Enigma. This was a Windows desktop application developed by Encodo to provide an overview, eager-fetching and batch tasks for multiple Git repositories. We stopped development on this when SmartGit included all of the same functionality in versions 5 and 6.

We removed all detailed documentation for Git submodules. Encodo stopped using submodules (except for one legacy project) several years ago. We used to use submodules to manage external binary dependencies but have long since moved to NuGet instead.

We reorganized the chapters to lead off with a quick overview of Basic Concepts followed by a focus on Best Practices and our recommended Development Process. We also reorganized the Git-command documentation to use a more logical order.

Chapter 3, Best Practices and chapter 4, Development Process have been included in their entirety below.

3 Best Practices

3.1 Focused Commits

Focused commits are required; small commits are highly recommended. Keeping the number of changes per commit tightly focused on a single task helps in many cases.

When merge conflicts occur, they are easier to resolve

They can be more easily merged/rebased by Git

If a commit addresses only one issue, it is easier for a reviewer or reader to decide whether it should be examined.

For example, if you are working on a bug fix and discover that you need to refactor a file as well, or clean up the documentation or formatting, you should finish the bug fix first, commit it and then reformat, document or refactor in a separate commit.

Even if you have made a lot of changes all at once, you can still separate changes into multiple commits to keep those commits focused. Git even allows you to split changes from a single file over multiple commits (the Git Gui provides this functionality as does the index editor in SmartGit).
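As a concrete sketch of splitting unrelated work into focused commits, using a throwaway repository (paths and commit messages are invented):

```shell
# Throwaway repository to demonstrate focused commits.
repo=/tmp/encodo-demo-focused
rm -rf "$repo" && mkdir -p "$repo"
git -C "$repo" init -q
git -C "$repo" config user.email "dev@example.com"   # placeholder identity
git -C "$repo" config user.name "Dev"

# Two unrelated changes are sitting in the working tree at once:
echo "bug fix" > "$repo/Parser.cs"
echo "cleaned up" > "$repo/Style.cs"

# Stage and commit them separately so each commit stays focused.
git -C "$repo" add Parser.cs
git -C "$repo" commit -q -m "Fix null handling in Parser"
git -C "$repo" add Style.cs
git -C "$repo" commit -q -m "Clean up formatting in Style"

# For multiple changes within a *single* file, `git add --patch <file>`
# (or the index editor in SmartGit / Git Gui) stages individual hunks.
git -C "$repo" log --oneline
```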

3.2 Snapshots

Use the staging area to take quick snapshots: you can record a known-good state without committing it and still compare it against more recent changes.

For example, suppose you want to refactor the implementation of a class.

Make some changes and run the tests; if everything's OK, stage those changes

Make more changes; now you can diff these new changes not only against the version in the repository but also against the version in the index (that you staged).

If the new version is broken, you can revert to the staged version or at least more easily figure out where you went wrong (because there are fewer changes to examine than if you had to diff against the original).

If the new version is ok, you can stage it and continue working
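The snapshot workflow above can be sketched as follows, using a throwaway repository (file names and contents are invented):

```shell
# Throwaway repository demonstrating snapshots via the staging area.
repo=/tmp/encodo-demo-snapshot
rm -rf "$repo" && mkdir -p "$repo"
git -C "$repo" init -q
git -C "$repo" config user.email "dev@example.com"
git -C "$repo" config user.name "Dev"

echo "original implementation" > "$repo/Widget.cs"
git -C "$repo" add Widget.cs
git -C "$repo" commit -q -m "Initial version"

# 1. Refactor; the tests pass, so snapshot the good state in the index.
echo "refactored; tests pass" > "$repo/Widget.cs"
git -C "$repo" add Widget.cs

# 2. Keep working; now you can diff against the snapshot, not just HEAD.
echo "further changes; broken" > "$repo/Widget.cs"
git -C "$repo" diff           # working tree vs. index: only the newest changes
git -C "$repo" diff --cached  # index vs. HEAD: the snapshot vs. the repository

# 3. The new version is broken, so fall back to the staged snapshot.
git -C "$repo" restore Widget.cs   # Git 2.23+; older: git checkout -- Widget.cs
```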

3.3 Developing New Code

Where you develop new code depends entirely on the project release plan.

Code for releases should be committed to the release branch (if there is one) or to the develop branch if there is no release branch for that release

If the new code is a larger feature, then use a feature branch. If you are developing a feature in a hotfix or release branch, you can use the optional base parameter to base the feature on that branch instead of the develop branch, which is the default.
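In plain Git commands, basing a feature branch on the default branch versus an explicit base looks like this (branch names follow the conventions used here; the repository is a throwaway):

```shell
# Throwaway repository demonstrating feature-branch bases.
repo=/tmp/encodo-demo-feature
rm -rf "$repo" && mkdir -p "$repo"
git -C "$repo" init -q
git -C "$repo" config user.email "dev@example.com"
git -C "$repo" config user.name "Dev"
git -C "$repo" commit -q --allow-empty -m "initial"

# A release branch exists alongside the development branch.
git -C "$repo" branch release/v1.1

# Default: the feature branch is based on the current development branch.
git -C "$repo" checkout -q -b feature/search

# With an explicit base: the feature is based on the release branch instead.
git -C "$repo" checkout -q -b feature/urgent-tweak release/v1.1
git -C "$repo" branch --list 'feature/*'
```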

3.4 Merging vs. Rebasing

Follow these rules for which command to use to combine two branches:

If both branches have already been pushed, then merge. There is no way around this, as you won't be able to push a non-merged result back to the origin.

If you work with branches that are part of the standard branching model (e.g. release, feature, etc.), then merge.

If both you and someone else made changes to the same branch (e.g. develop), then rebase. This will be the default behavior during development
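The shared-branch case can be sketched with two clones of a bare "origin" (developer names and files are invented):

```shell
# Simulate two developers sharing the same branch through a bare "origin".
work=/tmp/encodo-demo-rebase
rm -rf "$work" && mkdir -p "$work"
git init -q --bare "$work/origin.git"

git clone -q "$work/origin.git" "$work/alice" 2>/dev/null
git -C "$work/alice" config user.email "alice@example.com"
git -C "$work/alice" config user.name "Alice"
echo base > "$work/alice/readme.txt"
git -C "$work/alice" add readme.txt
git -C "$work/alice" commit -q -m "base"
git -C "$work/alice" push -q origin HEAD

git clone -q "$work/origin.git" "$work/bob"
git -C "$work/bob" config user.email "bob@example.com"
git -C "$work/bob" config user.name "Bob"

# Both developers commit to the same branch...
echo bob > "$work/bob/bob.txt"
git -C "$work/bob" add bob.txt
git -C "$work/bob" commit -q -m "Bob's change"

echo alice > "$work/alice/alice.txt"
git -C "$work/alice" add alice.txt
git -C "$work/alice" commit -q -m "Alice's change"
git -C "$work/alice" push -q origin HEAD

# ...so Bob's push would be rejected. He rebases instead of merging,
# which keeps the shared history linear (no merge commits).
git -C "$work/bob" pull -q --rebase
git -C "$work/bob" push -q origin HEAD
git -C "$work/bob" log --oneline
```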

4 Development Process

A branching model is required in order to successfully manage a non-trivial project.

Whereas a trivial project generally has a single branch and few or no tags, a non-trivial project has a stable release (with tags and possible hotfix branches) as well as a development branch (with possible feature branches).

A common branching model in the Git world is called Git Flow. Previous versions of this manual included more specific instructions for using the Git Flow plugin, but experience has shown that a less complex branching model is sufficient and that using standard Git commands is more transparent.

However, since Git Flow is a very widely used branching model, retaining the naming conventions helps new developers more easily understand how a repository is organized.

4.1 Branch Types

The following list shows the branch types as well as the naming convention for each type:

master is the main development branch. All other branches should be merged back to this branch (unless the work is to be discarded). Developers may apply commits and create tags directly on this branch.

feature/name is a feature branch. Feature branches are for changes that require multiple commits or coordination between multiple developers. When the feature is completed and stable, it is merged to the master branch after which it should be removed. Multiple simultaneous feature branches are allowed.

release/vX.X.X is a release branch. Although a project can be released (and tagged) directly from the master branch, some projects require a longer stabilization and testing phase before a release is ready. Using a release branch allows development on the develop branch to continue normally without affecting the release candidate. Multiple simultaneous release branches are strongly discouraged.

hotfix/vX.X.X is a hotfix branch. Hotfix branches are always created from the release tag for the version in which the hotfix is required. These branches are generally very short-lived. If a hotfix is needed in a feature or release branch, it can be merged there as well (see the optional arrow in the following diagram).

The main difference from the Git Flow branching model is that there is no explicit stable branch. Instead, the last version tag serves the purpose just as well and is less work to maintain. For more information on where to develop code, see 3.3 Developing New Code.
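The hotfix flow above, expressed as plain Git commands against a throwaway repository (commit messages are invented; empty commits stand in for real work):

```shell
# Throwaway repository demonstrating a hotfix branched from a release tag.
repo=/tmp/encodo-demo-hotfix
rm -rf "$repo" && mkdir -p "$repo"
git -C "$repo" init -q
git -C "$repo" config user.email "dev@example.com"
git -C "$repo" config user.name "Dev"

git -C "$repo" commit -q --allow-empty -m "development"
git -C "$repo" tag v1.0                       # v1.0 is released and tagged here
git -C "$repo" commit -q --allow-empty -m "development continues"

# A bug turns up in v1.0: branch the hotfix from the release *tag*.
git -C "$repo" checkout -q -b hotfix/v1.0.1 v1.0
git -C "$repo" commit -q --allow-empty -m "Fix the bug"
git -C "$repo" tag v1.0.1                     # release the hotfix

# Merge the fix back to the main branch and clean up the short-lived branch.
git -C "$repo" checkout -q -
git -C "$repo" merge -q --no-edit hotfix/v1.0.1
git -C "$repo" branch -q -d hotfix/v1.0.1
```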

4.2 Example

To get a better picture of how these branches are created and merged, the following diagram depicts many of the situations outlined above.

The diagram tells the following story:

Development began on the master branch

v1.0 was released directly from the master branch

Development on feature B began

A bug was discovered in v1.0 and the v1.0.1 hotfix branch was created to address it

Development on feature A began

The bug was fixed, v1.0.1 was released and the fix was merged back to the master branch

Development continued on master as well as features A and B

Changes from master were merged to feature A (optional merge)

Release branch v1.1 was created

Development on feature A completed and was merged to the master branch

v1.1 was released (without feature A), tagged and merged back to the master branch

We've recently set up a few new workstations with Windows 8.1 and wanted to share the process we use, in case it might come in handy for others.

Windows can take a long time to install, as can Microsoft Office and, most especially, Visual Studio with all of its service packs. If we installed everything manually every time we needed a new machine, we'd lose a day each time.

To solve this problem, we decided to define the Encodo Windows Base Image, which includes all of the standard software that everyone should have installed. Using this image saves a lot of time when you need to either install a new workstation or you'd like to start with a fresh installation if your current one has gotten a bit crufty.

Encodo doesn't have a lot of workstations, so we don't really need anything too enterprise-y, but we do want something that works reliably and quickly.

After a lot of trial and error, we've come up with the following scheme.

Maintain a Windows 8.1 image in a VMDK file

Use VirtualBox to run the image

Use Chocolatey for (almost) all software installation

Use Ubuntu Live on a USB stick (from which to boot)

Use Clonezilla to copy the image to the target drive

Installed Software

The standard loadout for developers comprises the following applications.

These are updated by Windows Update.

Windows 8.1 Enterprise

Excel

PowerPoint

Word

Visio

German Office Proofing Tools

Visual Studio 2013

These applications must be updated manually.

ReSharper Ultimate

Timesnapper

The rest of the software is maintained with Chocolatey.

beyondcompare (file differ)

conemu (PowerShell enhancement)

fiddler4 (HTTP traffic analyzer)

firefox

flashplayerplugin

git (source control)

googlechrome

greenshot (screenshot tool)

jitsi (VOIP/SIP)

jre8 (Java)

keepass (Password manager)

nodejs

pidgin (XMPP chat)

poshgit (PowerShell/Git integration)

putty (SSH)

smartgit (Git GUI)

stylecop (VS/R# extension)

sublimetext3 (text editor)

sumatrapdf (PDF viewer)

truecrypt (Drive encryption)

vlc (video/audio player/converter)

winscp (SSH file-copy tool)

wireshark (TCP traffic analyzer)

Maintaining the Image

This part has gotten quite simple.

Load the VM with the Windows 8.1 image

Apply Windows Updates

Update ReSharper, if necessary

Run choco upgrade all to update all Chocolatey packages

Shut down the VM cleanly

Writing the image to a new SSD

The instructions we maintain internally are more detailed, but the general gist is to do the following:

Install the SSD in the target machine

Plug in the Ubuntu Live USB stick

Plug in the USB drive that has the Windows image and Clonezilla on it

Boot to the Ubuntu desktop

Make sure you have network access

Install VirtualBox in Ubuntu from the App Center

Create a VMDK file for the target SSD

Start VirtualBox and create a new VM with the Windows image and SSD VMDK as drives and Clonezilla configured as a CD

Use sysprep /generalize to reset Windows to an out-of-box experience (OOBE) for the new owner

Conclusion

We're pretty happy with this approach and the loadout but welcome any feedback or suggestions to improve them. We've set up two notebooks in the last three weeks, but that's definitely a high-water mark for us. We expect to use this process one more time this year (in August, when a new hire arrives), but it's nice to know that we now have a predictable process.

Added support for parameterized custom SQL queries with the ICustomCommandBuilder. This was added by customer request, for applications that formulate queries that are beyond what the Quino ORM is currently capable of mapping. A blog post with more detail on how this works is forthcoming. (QNO-4802)

Further cleanup and consolidation in the data-driver hierarchy. This work was a direct result of Daniel Roth's Bachelor's thesis work in which he integrated NHibernate as an alternative ORM for Quino. (QNO-4808; still to do by RTM: QNO-4749)

Discontinued support for DataContract and DataMember attributes in metadata and generated code. (QNO-4823, QNO-4826)

Goodbye, old friends

This release addressed some issues that have been bugging us for a while (almost 3 years in one case).

QNO-3765 (32 months): After a schema migration caused by a DatabaseException on login, restart the application

QNO-4507 (14 months): Business objects for modules should not rely on GlobalContext in generated code

You will not be missed.

Breaking changes

As we've mentioned before, this release is absolutely merciless in regard to backwards compatibility. Old code is not retained and marked as Obsolete. Instead, a project upgrading to 2.0 will encounter compile errors.

That said, if you arm yourself with a bit of time, ReSharper and the release notes (and possibly keep an Encodo employee on speed-dial), the upgrade is not difficult. It consists mainly of letting ReSharper update namespace references for you. In cases where the update is not so straightforward, we've provided release notes.

V1 generated code support

One of the few things you'll be able to keep (at least for a minor version or two) is the old-style generated code. We made this concession because, while even a large solution can be upgraded from 1.13.0 to 2.0 relatively painlessly in about an hour (we've converted our own internal projects to test), changing the generated-code format is potentially a much larger change. Again, an upgrade to the generated-code format isn't complicated but it might require more than an hour or two's worth of elbow grease to complete.

Therefore, not only will you be able to retain your old generated code, but the code generator will continue to support the old-style code-generation format for further development. Expect the grace period to be relatively short, though.

Regardless of whether you elect to keep the old-style generated code, you'll have to do a little bit of extra work just to be able to generate code again.

Manually update a couple of generated files, as shown below.

Compile the solution

Generate code with the Quino tools

Before you can regenerate, you'll have to manually update your previously generated code in the main model file, as shown below.

Still to do by RTM

As you can see, we've already done quite a bit of work in beta1 and beta2. We have a few more tasks planned for the feature-complete release candidate for 2.0.

Move the schema-migration metadata table to a module.

The Quino schema migration extracts most of the information it needs from the database schema itself. It also stores extra metadata in a special table. This table has been with Quino since before modules were supported (over seven years) and hence was built in a completely custom manner. Moving this support to a Quino metadata module will remove unnecessary implementation and make the migration process more straightforward. (QNO-4888)
Separate collection algorithm from storage/display method in IRecorder and descendants.

The recording/logging library has a very good interface but the implementation for the standard recorders has become too complex as we added support for multi-threading, custom disposal and so on. We want to clean this up to make it easier to extend the library with custom loggers. (QNO-4888)
Split up Encodo and Quino assemblies based on functionality.

There are only a very few dependencies left to untangle (QNO-4678, QNO-4672, QNO-4670); after that, we'll split up the two main Encodo and Quino assemblies along functional lines. (QNO-4376)
Finish integrating building and publishing NuGet and symbol packages into Quino's release process.

And, finally, once we have the assemblies split up to our liking, we'll finalize the NuGet packages for the Quino library and leave the direct-assembly-reference days behind us, ready for Visual Studio 2015. (QNO-4376)

That's all we've got for now. See you next month for the next (and, hopefully, final) update!

part I discusses the history of the configuration system in Quino as well as a handful of principles we kept in mind while designing the new system

part II discusses the basic architectural changes and compares an example from the old configuration system to the new.

part III takes a look at configuring the "execution order" -- the actions to execute during application startup and shutdown

Introduction

Registering with an IOC is all well and good, but something has to make calls into the IOC to get the ball rolling.

Something has to actually make calls into the IOC to get the ball rolling.

Even service applications -- which start up quickly and wait for requests to do most of their work -- have basic operations to execute before declaring themselves ready.

Things can get complex when starting up registered components and performing basic checks and non-IOC configuration.

In which order are the components and configuration elements executed?

How do you indicate dependencies?

How can an application replace a piece of the standard startup?

What kind of startup components are there?

Part of the complexity of configuration and startup is that developers quickly forget all of the things that they've come to expect from a mature product and start from zero again with each application. Encodo and Quino applications take advantage of prior work to include standard behavior for a lot of common situations.

Configuration Patterns

Some components can be configured once and directly by calling a method like UseMetaTranslations(string filePath), which includes all of the configuration options directly in the composition call. This pattern is perfect for options that are used only by one action or that it wouldn't make sense to override in a subsequent action.

So, for simple actions, an application can just replace the existing action with its own, custom action. In the example above, an application for which translations had already been configured would just call UseMetaTranslations() again in order to override that behavior with its own.

Most applications will replace standard actions or customize standard settings.

Some components, however, will want to expose settings that can be customized by actions before they are used to initialize the component.

For example, there is an action called SetUpLoggingAction, which configures logging for the application. This action uses IFileLogSettings and IEventLogSettings objects from the IOC during execution to determine which types of logging to configure.

An application is, of course, free to replace the entire SetUpLoggingAction action with its own, completely custom behavior. However, an application that just wanted to change the log-file behavior or turn on event-logging could use the Configure<TService>() method, as shown below.
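A configuration call of that kind might look something like the following sketch. The Enabled property is an assumption for illustration, not the documented surface of IFileLogSettings or IEventLogSettings.

```csharp
// Hypothetical sketch: tweak logging settings before SetUpLoggingAction
// consumes them during startup. Property names are illustrative only.
application.Configure<IFileLogSettings>(
  settings => settings.Enabled = true
);
application.Configure<IEventLogSettings>(
  settings => settings.Enabled = true
);
```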

Actions

A Quino application object has a list of StartupActions and a list of ShutdownActions. Most standard middleware methods register objects with the IOC and add one or more actions to configure those objects during application startup.

Actions have existed for quite a while in Quino. In Quino 2, they have been considerably simplified and streamlined to the point where all but a handful are little more than a functional interface2.

The list below will give you an idea of the kind of configuration actions we're talking about.

Load configuration data

Process command line

Set up logging

Upgrade settings/configuration (e.g. silent upgrade)

Log a header (e.g. user/date/file locations/etc.; for console apps, this might be mirrored to the console)

Load plugins

Set up standard locations (e.g. file-system locations)

For installed/desktop/mobile applications, there's also:

Initialize UI components

Provide loading feedback

Check/manage multiple running instances

Check software update

Login/authentication

Quino applications also have actions to configure metadata:

Configure expression engine

Load metadata

Load metadata-overlays

Validate metadata

Check data-provider connections

Check/migrate schema

Generate default data

Application shutdown has a smaller set of vital cleanup chores that:

dispose of connection managers and other open resources

write out to the log, flush it and close it

show final feedback to the user

Anatomy of an Action

The following example is for the 1.x version of the relatively simple ConfigureDisplayLanguageAction.

As you can see, quite a bit of code and declaration text was removed, all without sacrificing any functionality. The final form is quite simple, inheriting from a simple base class that manages the name of the action and overrides a single parameter-less method. It is now much easier to see what an action does and the barrier to entry for customization is much lower.
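Based purely on that description, the streamlined form might look something like this sketch; the base-class name and method signature are illustrative assumptions, not the actual Quino API.

```csharp
// Hypothetical reconstruction of the simplified 2.x shape: a simple base
// class manages the action's name; the action overrides one parameter-less
// method. Names here are invented for illustration.
public class ConfigureDisplayLanguageAction : ApplicationActionBase
{
  public ConfigureDisplayLanguageAction()
    : base("ConfigureDisplayLanguage")
  {
  }

  protected override void Execute()
  {
    // Set the display language for the current application.
  }
}
```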

In the following sections, we'll take a look at each of the problems indicated above in more detail.

Remove the ConfigurationOptions parameter

These options are a simple enumeration with values like Client, Testing, Service and so on. They were used only by a handful of standard actions.

These options made it more difficult to decide how to implement the action for a given task. If two tasks were completely different, then a developer would know to create two separate actions. However, if two tasks were similar, but could be executed differently depending on application type (e.g. testing vs. client), then the developer could still have used two separate actions, but could also have used the configuration options. Multiple ways of doing the exact same thing is all kinds of bad.

Multiple ways of doing the exact same thing is all kinds of bad.

Parameters like this conflict conceptually with the idea of using composition to build an application. To keep things simple, Quino applications should be configured exclusively by composition. Composing an application with service registrations and startup actions and then passing options to the startup introduced an unneeded level of complexity.

Instead, an application now defines a separate action for each set of options. For example, most applications will need to set up the display language to use -- be it for a GUI, a command-line or just to log messages in the correct language. For that, the application can add a ConfigureDisplayLanguageAction to the startup actions or call the standard method UseCore(). Desktop or single-user applications can use the ConfigureGlobalDisplayLanguageAction or call UseGlobalCore() to make sure that global language resources are also configured.

Remove the TApplication generic parameter

The generic parameter to this interface complicates the IApplication<TApplication> interface and causes no end of trouble in MetaApplication, which actually inherits from IApplication<IMetaApplication> for historical reasons.

There is no need to maintain statelessness for a single-use object.

Originally, this parameter guaranteed that an action could be stateless. However, each action object is attached to exactly one application (in the IApplication<TApplication>.StartupActions list). So the action that is attached to an application is technically stateless, and a completely different application than the one to which the action is attached could be passed to IApplicationAction.Execute()...which makes no sense whatsoever.

Luckily, this never happens, and only the application to which the action is attached is passed to that method. If that's the case, though, why not just create the action with the application as a constructor parameter when the action is added to the StartupActions list? There is no need to maintain statelessness for a single-use object.
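The constructor-parameter approach can be sketched as follows; the type names are invented for illustration and do not come from the Quino API.

```csharp
// Sketch: the action receives its one-and-only application up front, so
// neither the action interface nor IApplication needs a TApplication
// generic parameter. Names are illustrative only.
public class CheckSchemaAction : IApplicationAction
{
  private readonly IApplication _application;

  public CheckSchemaAction(IApplication application)
  {
    _application = application;
  }

  // Execute() no longer needs an application parameter.
  public void Execute()
  {
    // Work with _application directly.
  }
}
```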

This way, there is no generic parameter for the IApplication interface, all of the extension methods are much simpler and applications are free to create custom actions that work with descendants of IApplication simply by requiring that type in the constructor parameter.

Debugging is important

A global exception handler is terrible for debugging

The original startup avoided exceptions, preferring an integer return result instead.

In release mode, a global exception handler is active and is there to help the application exit more or less smoothly -- e.g. by logging the error, closing resources where possible, and so on.

A global exception handler is terrible for debugging, though. For exceptions that are caught, the default behavior of the debugger is to stop where the exception is caught rather than where it is thrown. Instead, you want exceptions raised by your application to stop the debugger where they are thrown.

So that's part of the reason why the startup and shutdown in 1.x used return codes rather than exceptions.

Multiple valid code paths

The other reason Quino used result codes is that most non-trivial applications actually have multiple paths through which they could successfully run.

Exactly which path the application should take depends on startup conditions, parameters and so on. Some common examples are:

Show command-line help

Migrate an application schema

Import, export or generate data

To show command-line help, an application executes its startup actions in order. It reaches the action that checks whether the user requested command-line help. This action processes the request, displays that help and then wants to exit the application smoothly. The "main" path -- perhaps showing the user a desktop application -- should no longer be executed.

Non-trivial applications have multiple valid run profiles.

Similarly, suppose the action that checks the database schema determines that the schema in the data provider doesn't match the model. In this case, it would like to offer the user (usually a developer) the option to update the schema. Once the schema is updated, though, startup should be restarted from the beginning, trying again to run the main path.

Use exceptions to indicate errors

The Quino 1.x startup addressed the design requirements above with return codes, but this imposed an undue burden on implementors. There was also confusion as to when it was OK to actually throw an exception rather than returning a special code.

Instead, the Quino 2.x startup always uses exceptions to indicate errors. There are a few special types of exceptions recognized by the startup code that can indicate whether the application should silently -- and successfully -- exit or whether the startup should be attempted again.
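The two special cases described above might be handled by a startup loop shaped roughly like this; the exception-type names are invented for illustration, not the actual Quino types.

```csharp
// Hedged sketch of an exception-driven startup loop. The exception names
// are placeholders for the special types recognized by the startup code.
var restart = true;
while (restart)
{
  restart = false;
  try
  {
    foreach (var action in application.StartupActions)
    {
      action.Execute();
    }
  }
  catch (CancelStartupException)
  {
    // E.g. after showing command-line help: exit silently and successfully.
  }
  catch (StartupRestartException)
  {
    // E.g. after migrating the schema: run the startup again from the top.
    restart = true;
  }
}
```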

Conclusion

There is, of course, more detail we could go into on much of what we discussed in these three articles, but this should suffice as an overview of the Quino configuration library.

If C# had them, that is. See Java 8 for an explanation of what they are.

In this article, we'll continue the discussion about configuration started in part I. We wrapped up that part with the following principles to keep in mind while designing the new system.

Consistency

Opt-in configuration

Inversion of Control

Configuration vs. Execution

Common Usage

Borrowing from ASP.NET vNext

Quino's configuration inconsistencies and issues have been well-known for several versions -- and years -- but the opportunity to rewrite it comes only now with a major-version break.

Luckily for us, ASP.NET has been going through a similar struggle and evolution. We were able to model some of our terminology on the patterns from their next version. For example, ASP.NET has moved to a pattern where an application-builder object is passed to user code for configuration. The pattern there is to include middleware (what we call "configuration") by calling extension methods starting with "Use".

Quino has had a similar pattern for a while, but the method names varied: "Integrate", "Add", "Include"; these methods have now all been standardized to "Use" to match the prevailing .NET winds.
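In the ASP.NET vNext pattern, such a middleware method registers the services it needs and returns the builder to allow chaining. A Quino-style equivalent might look like the following sketch; the method and type names are illustrative assumptions, not the shipped API.

```csharp
// Hypothetical middleware method following the "Use" convention.
public static class LoggingExtensions
{
  public static IApplication UseLogging(this IApplication application)
  {
    // Register the services this middleware needs in the IOC container.
    application.Services.RegisterSingle<IRecorder, ConsoleRecorder>();

    // Return the application to allow fluent chaining.
    return application;
  }
}

// Usage: an application opts in to exactly the middleware it wants.
application.UseLogging().UseSoftwareUpdater();
```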

Begone configuration and feedback

Additionally, Quino used to make a distinction between an application instance and its "configuration" -- the template on which an application is based. No more. Too complicated. This design decision, coupled with the promotion of a platform-specific "Feedback" object to first-level citizen, led to an explosion of generic type parameters.1

The distinction between configuration (template) and application (instance) has been removed. Instead, there is just an application object to configure.

The feedback object is now to be found in the service locator. An application registers a platform-specific feedback to use as it would any other customization.

Hello service locator

ASP.NET vNext has made the service locator a first-class citizen. In ASP.NET, applications receive an IApplicationBuilder in one magic "Configure" method and receive an IServiceCollection in another magic "ConfigureServices" method.

In Quino 2.x, the application is in charge of creating the service container, though Quino provides a method to create and configure a standard one (SimpleInjector). That service locator is passed to the IApplication object and subsequently accessible there.

Services can of course be registered directly or by calling pre-packaged Middleware methods. Unlike ASP.NET vNext, Quino 2.x makes no distinction between configuring middleware and including the services required by that middleware.
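Put together, startup might look something like the following sketch; the helper-method names are assumptions based on the description above.

```csharp
// The application -- not the framework -- creates the service container.
// Quino provides a helper for a standard, SimpleInjector-based container
// (the helper name here is a guess).
var services = ServiceLocatorTools.CreateStandardContainer();
var application = new Application(services);

// Register a service directly...
application.Services.RegisterSingle<IMailSender, SmtpMailSender>();

// ...or pull in a pre-packaged bundle of services via middleware.
application.UseMeta();
```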

Begone configuration hierarchy

Quino's configuration library has its roots in a time before we were using an IOC container. The configuration was defined as a hierarchy of configuration classes that modeled the following layers.

A base implementation that makes only the most primitive assumptions about an application. For example, that it has a RunMode ("debug" or "release") or an exit code or that it has a logging mechanism (e.g. IRecorder).

The "Core" layer comprises application components that are very common, but do not depend on Quino's metadata.

And, finally, the "Meta" layer includes configuration for application components that extend the core with metadata-dependent versions as well as specific components required by Quino applications.

While these layers are still somewhat evident, the move to middleware packages has blurred the distinction between them. Instead of choosing a concrete configuration base class, an application now calls a handful of "Use" methods to indicate what kind of application to build.

There are, of course, still helpful top-level methods -- e.g. UseCore() and UseMeta() methods -- that pull in all of the middleware for the standard application types. But, crucially, the application is free to tweak this configuration with more granular calls to register custom configuration in the service locator.

This is a flexible and transparent improvement over passing esoteric parameters to monolithic configuration methods, as in the previous version.

An example: Configure a software updater

Just as a simple example, whereas a Quino 1.x standalone application would set ICoreConfiguration.UseSoftwareUpdater to true, a Quino 2.x application calls UseSoftwareUpdater(). Where a Quino 1.x Winform application would inherit from the WinformFeedback in order to return a customized ISoftwareUpdateFeedback, a Quino 2.x application calls UseSoftwareUpdateFeedback().

The software-update feedback class is defined below and is used by both versions.
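The original listing isn't reproduced here; based on the surrounding description, the shared feedback class might have looked something like this reconstruction, in which all member names are assumptions.

```csharp
// Hypothetical shape of the custom software-update feedback shared by
// the 1.x and 2.x examples.
public class CustomSoftwareUpdateFeedback : SoftwareUpdateFeedback
{
  public override bool ConfirmUpdate(IApplication application, UpdateInfo update)
  {
    // Ask the user whether to install the newly detected version.
    var answer = MessageBox.Show(
      string.Format("Install version {0}?", update.Version),
      "Software update",
      MessageBoxButtons.YesNo
    );

    return answer == DialogResult.Yes;
  }
}
```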

That's where the similarities end, though. The code samples below show the stark difference between the old and new configuration systems.

Quino 1.x

As explained above, Quino 1.x did not allow registration of a sub-feedback like the software-updater. Instead, the application had to inherit from the main feedback and override a method to create the desired sub-feedback.

The method-override in the feedback was hideous and scared off a good many developers. Not only that, the pattern was to use a magical, platform-specific WinformDxMetaConfigurationTools.Run method to create an application, run it and dispose it.
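A hedged reconstruction of that 1.x pattern follows; the exact signatures are assumptions, but the shape -- a generics-heavy override plus a magic Run method -- matches the description.

```csharp
// Inherit from the platform feedback just to swap in one sub-feedback.
internal class CustomWinformFeedback : WinformFeedback
{
  // The generic-parameter soup here is illustrative of the "explosion of
  // generic type parameters" mentioned earlier, not the literal signature.
  public override ISoftwareUpdateFeedback<TApplication>
    GetSoftwareUpdateFeedback<TApplication, TConfiguration, TFeedback>()
  {
    return new CustomSoftwareUpdateFeedback();
  }
}

// Configuration flag plus magical, platform-specific entry point: it
// creates the application, runs it and disposes it with no hook for
// the program in between.
var configuration = new CustomConfiguration
{
  UseSoftwareUpdater = true
};

WinformDxMetaConfigurationTools.Run(configuration, new CustomWinformFeedback());
```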

Quino 2.x

Software-update feedback-registration in Quino 2.x adheres to the principles outlined at the top of the article: it is consistent and uses common patterns (functionality is included and customized with methods named "Use"), configuration is opt-in, and the IOC container is used throughout (albeit implicitly with these higher-level configuration methods).

Additionally, the program has complete control over creation, running and disposal of the application. No more magic and implicit after-the-fact configuration.
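The 2.x equivalent, sketched below, shows how much flatter the pattern is. The "Use" naming follows the convention described above, but the exact method and type names are assumptions.

```csharp
// The program creates, runs and disposes the application itself.
using (var application = new MetaApplication())
{
  // Opt in to the platform, the updater and the custom feedback.
  application.UseMetaWinformDx();
  application.UseSoftwareUpdater();
  application.UseSoftwareUpdateFeedback(new CustomSoftwareUpdateFeedback());

  application.Run();
}
```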

What comes after configuration?

In the next and (hopefully) final article, we'll take a look at configuring execution -- the actions to execute during startup and shutdown. Registering objects in a service locator is all well and good, but calls into the service locator have to be made in order for anything to actually happen.

Keeping this system flexible and addressing standard application requirements is a challenging but not insurmountable problem. Stay tuned.

The CustomWinformFeedback in the Quino 1.x code at the end of this article provides a glaring example.↩

In this article, I'll continue the discussion about configuration improvements mentioned in the release notes for Quino 2.0-beta1. With beta2 development underway, I thought I'd share some more of the thought process behind the forthcoming changes.

Software Libraries

What sort of patterns integrate and customize the functionality of libraries in an application?

An application comprises multiple tasks, only some of which are part of that application's actual domain. For those parts not in the application domain, software developers use libraries. A library captures a pattern or a particular way of doing something, making it available through an abstraction. These simplify and smooth away detail irrelevant to the application.

A runtime and its standard libraries provide many such abstractions: for reading/writing files, connecting to networks and so on. Third-party libraries provide others, like logging, IOC, task-scheduling and more.

Because Encodo's been writing software for a long time, we have a lot of patterns that we've come up with for our applications. These libraries are split into two main groups:

Encodo.*: extensions to the .NET framework or third-party libraries that don't depend on Quino metadata.

Quino.*: extensions to the .NET framework, third-party libraries or Encodo libraries that depend on Quino metadata.

A sort of "meta" library that lies on top of all of this is configuration and startup of applications that use these libraries. That is, what sort of patterns integrate and customize the functionality of libraries in an application?

Balancing K.I.S.S. and D.R.Y.

Almost nowhere in an application is the balance between K.I.S.S. and D.R.Y. more difficult to maintain than in configuration and startup.

So if we already know all of that, why does Quino need a new configuration library?

As mentioned above, there is a lot of commonality between applications in this area. An application will definitely want to incorporate such common configuration from a library. Updates and improvements to that library will then be applied as for any other. This is a good thing.

However, an application will also want to be able to tweak almost any given facet of this shared configuration. That is: just keep the good parts, have those upgraded when they're changed, but apply customization and extend functionality for the application's domain. Easy, right?

It is here that a good configuration library will find just the right level of granularity for customization. Too coarse? Then an application ends up throwing out too much common configuration in order to customize a small part of it. Too fine? Then the configuration system is too verbose or complex and the application avoids using it.

Instead, a configuration system should establish clear patterns -- optimally, just one -- for how to apply customization.

The builder of the underlying configuration library has to consider the myriad situations that might face a library developer and distill those requirements to a common pattern.

The library developer needs to think about which parts an application might want to customize and think about how to expose them.

So if we already know all of that, then why does Quino need a new configuration library? Well...

History of Quino's Configuration Library

It's really easy to make things over-complicated and muddy. It's really easy to end up growing several different kinds of extension systems over the years. Quino ended up with a generics-heavy API that made declaring new configuration components very wordy.

The core of Quino is the metadata definition for an application domain. That part has barely changed at all since we first wrote it lo so many years ago. We declared it to be our core business -- the part that we are better than others at -- the part we wanted to have under our own control. Our first draft1 has held up remarkably well.

Many of the other components have undergone quite a bit of flux: changes in requirements and the components themselves as well as new development processes and patterns all contributed to change. Over time, various applications had different needs and made adjustments to a different iteration of the configuration library. We moved from supporting only single-threaded, single-user desktop applications to also supporting multi-user, multi-threaded services and web servers.

...we were left with an ugly configuration system that no-one wanted to extend or use -- so yet another would be invented.

For all of these different applications, we naturally wanted to maintain the common configuration where possible -- but customizations for new platforms stretched the capabilities of the configuration library.

Customization would be made to a new version of that library, but applications that couldn't be upgraded immediately forced backwards-compatibility and thus resulted in several different concurrent ways of configuring a particular facet of an application.

In order to keep things in one place, we ended up breaking the interface-separation rule. Dependencies started clumping drastically, but it was OK because nobody was trying to use one thing without the other ten. But it was hard to see what was going on; customization became a black box for all but one or two gurus. On and on it went, until we were left with an ugly configuration system that no-one wanted to extend or use -- so yet another would be invented, ad-hoc. And so it went.

Principles for Quino 2.0 Configuration

With Quino 2.0, we examined the existing system and came up with a list of principles.

Consistency: there should be only one way of customizing settings and components. When a developer asks how to change something, the answer should always be the same pattern. If not, there had better be a damned good reason (see "Configuration vs. Execution" below).

Opt-in configuration: No more magic methods or base classes that automatically add components and settings in black boxes. Even if the application has to call one or two more methods, it's better to be declarative than clever(tm).

Inversion of Control: Standardize configuration to use an IOC container or service locator wherever possible. Instead of clumping settings in configuration or application objects, create discrete settings and put them in the container. Make dependencies explicit (constructor parameters!) and resolved through the container wherever possible.

Configuration vs. Execution: Be very aware of the difference between the "configuration" phase and the "execution" phase. During configuration, the service locator is used in write-only mode; during execution, the service locator is in read-only mode. Code executed during configuration must rely only on explicit dependency-injection via constructor.

Common Usage: Establish a pattern for calling configuration methods, from least to most specific. E.g. call Quino's base configuration methods before any application-specific customization. Establish patterns for how to configure a single startup action or how to create settings for a larger component that could be further customized in subsequent phases.

In the next part, we'll take a look at some concrete examples and documentation for the new patterns.2

To be fair, it wasn't our first attempt at metadata. In one way or another, we'd been defining metadata structures for generic programming for more years than we'd be comfortable divulging. A h/t of course to Opus Software's Atlas libraries -- 1 and 2 -- where many of us contributed. Also, I had experience with cross-platform, generic libraries in C++ stretching all the way back to the late 90s as well as the generalized/meta elements of the earthli WebCore. So it was more like the fourth or fifth shot at it, if we're going to be honest -- but at least we got it right. :-)↩

In particular, I'll add more detail about "Common Usage" for those who might feel I've left them hanging a bit in the last bullet point. Sorry 'bout that. The day is only so long. See you next time...↩

Added support for RunInTransaction attribute. Specify the attribute on any IMetaTestFixture to wrap a test or every test in a fixture in a transaction. (QNO-4682)

Shared connection manager is now disposed when an application is disposed. (QNO-4752)

Breaking changes

Oh yeah. You betcha. This is a major release and we've knowingly decided not to maintain backwards-compatibility at all costs. The good news, though: the changes are relatively straightforward and easy to make if you've got a tool like ReSharper that can update using statements automatically.

Namespace changes

As we saw in part I and part II of the guide to using NDepend, Quino 2.0 has unsnarled quite a few dependency issues. A large number of classes and interfaces have been moved out of the Encodo.Tools namespace. Many have been moved to Encodo.Core but others have been scattered into more appropriate and more specific namespaces.

This is one part of the larger changes, easily addressed by using ReSharper to Alt + Enter your way through the compile errors.

Logging changes

Another large change is in renaming IMessageRecorder to IRecorder and IMessageStore to IInMemoryRecorder. Judicious use of search/replace or just a bit of elbow grease will get you through these as well.

Configuration changes

Finally, probably the most far-reaching change is in merging IConfiguration into IApplication. In previous versions of Quino, applications would create a configuration object and pass that to a platform-dependent Quino Run() method. Some configuration was provided by the application and some by the platform-specific method.

The example for Quino 1.13.0 below comes from the JobVortex Winform application.
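The original listing isn't reproduced here; the following reconstruction contrasts the two styles. Type names other than those mentioned in the text are assumptions.

```csharp
// Quino 1.x (reconstruction): create a configuration object and hand it
// to a magic, platform-specific Run method.
var configuration = new JobVortexConfiguration();
configuration.Integrate(new ReportingConfigurationPackage());
WinformDxMetaConfigurationTools.Run(configuration);

// Quino 2.x (reconstruction): create the application object directly and
// configure it with "Use" methods; the program calls Run itself.
var application = new JobVortexApplication();
application.UseMetaWinformDx();
application.UseReporting();
application.Run();
```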

As you can see, instead of creating a configuration, the program creates an application object. Instead of using configuration packages mixed with extension methods named "Integrate", "Configure" and so on, the new API uses "Use" everywhere. This should be comfortable for people familiar with the OWIN/Katana configuration pattern.

It does, however, mean that IConfiguration, ICoreConfiguration and IMetaConfiguration don't exist anymore. Instead, use IApplication, ICoreApplication and IMetaApplication. Again, a bit of elbow grease will be needed to get through these compile errors, but there's little to no risk or need for high-level decisions.

There are a lot of these prepackaged methods to help you create common kinds of applications:

UseCoreConsole() (a non-Quino application that uses the console)

UseMetaConsole() (a Quino application that uses the console)

UseCoreWinformDx() (a non-Quino application that uses Winform)

UseMetaWinformDx() (a Quino application that uses Winform)

UseReporting()

UseRemotingServer()

Etc.

I think you get the idea. Once we have a final release for Quino 2.0, we'll write more about how to use this new pattern.

Looking ahead to 2.0 Final

This is still just an internal beta of the 2.0 final version. More changes are on the way, including but not limited to:

Remove IConfigurationPackage and standardize the configuration API to be named "Use" everywhere (QNO-4771)

Microsoft has recently made a lot of their .NET code open-source. Not only is the code for many of the base libraries open-source but also the code for the runtime itself. On top of that, basic .NET development is now much more open to community involvement.

C# 6 Recap

You may be surprised at the version number "7" -- aren't we still waiting for C# 6 to be officially released? Yes, we are.

If you'll recall, the primary feature added to C# 5 was support for asynchronous operations through the async/await keywords. Most .NET programmers are only now getting around to using this rather far- and deep-reaching feature, to say nothing of the new C# 6 features that are almost officially available.

Auto-Property Initializers: initialize a property in the declaration rather than in the constructor or on an otherwise unnecessary local variable.

Out Parameter Declaration: An out parameter can now be declared inline with var or a specific type. This avoids the ugly variable declaration outside of a call to a Try* method.

Using Static Class: using can now be used with a static class as well as a namespace. Direct access to the methods and properties of a static class should clean up some code considerably.

String Interpolation: Instead of using string.Format() with numbered parameters, C# 6 allows expressions to be embedded directly in a string (à la PHP): e.g. $"{Name} logged in at {Time}"

nameof(): This language feature gets the name of the element passed to it; useful for data-binding, logging or anything that refers to variables or properties.

Null-conditional operator: This feature reduces conditional, null-checking cruft by returning null when the target of a call is null. E.g. company.People?[0]?.ContactInfo?.BusinessAddress.Street includes three null-checks.
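Several of these features can be seen together in a small snippet. (Out-parameter declaration ultimately slipped to C# 7, so it isn't shown here.)

```csharp
using System;
using static System.Console; // Using static: call WriteLine without the Console prefix.

public class Person
{
  // Auto-property initializer: set a default right in the declaration.
  public string Name { get; set; } = "unknown";
}

public static class Program
{
  public static void Main()
  {
    var person = new Person { Name = "Bob" };

    // String interpolation replaces string.Format() with numbered parameters.
    WriteLine($"{person.Name} logged in at {DateTime.Now}");

    // nameof() yields the identifier's name ("Name") and survives renames.
    WriteLine(nameof(person.Name));

    // Null-conditional operator: the whole expression is null if nobody is null.
    Person nobody = null;
    WriteLine(nobody?.Name ?? "(no person)");
  }
}
```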

Looking ahead to C# 7

If the idea of using await correctly or wrapping your head around the C# 6 features outlined above doesn't already make your poor head spin, then let's move on to language features that aren't even close to being implemented yet.

Pattern-matching: C# has been ogling its similarly named colleague F# for a while. One of the major ideas on the table for C# is improving the ability to represent as well as match against various types of pure data, with an emphasis on immutable data.

Metaprogramming: Another focus for C# is reducing boilerplate and capturing common code-generation patterns. They're thinking of delegation of interfaces through composition. Also welcome would be an improvement in the expressiveness of generic constraints.

Controlling Nullability: Another idea is to be able to declare reference types that can never be null at compile-time (where reasonable -- they do acknowledge that they may end up with a "less ambitious approach").

Readonly parameters and locals: Being able to express when change is allowed is a powerful form of expressiveness. C# 7 may include the ability to make local variables and parameters readonly. This will help avoid accidental side-effects.

Lambda capture lists: One of the issues with closures is that they currently just close over any referenced variables. The compiler just makes this happen and for the most part works as expected. When it doesn't work as expected, it creates subtle bugs that lead to leaks, race conditions and all sorts of hairy situations that are difficult to debug.

If you throw in the increased use of and nesting of lambda calls, you end up with subtle bugs buried in frameworks and libraries that are nearly impossible to tease out.

The idea of this feature is to allow a lambda to explicitly capture variables and perhaps even indicate whether the capture is read-only. Any additional capture would be flagged by the compiler or tools as an error.

Contracts(!): And, finally, this is the feature I'm most excited about, because I've been waiting for integrated language support for Design by Contract for literally decades1, ever since I first read Object-Oriented Software Construction 2 (OOSC2). The design document doesn't say much about it, but mentions that ".NET already has a contract system", the weaknesses of which I've written about before. Torgersen writes:

When you think about how much code is currently occupied with arguments and result checking, this certainly seems like an attractive way to reduce code bloat and improve readability.

...and expressiveness and provability!

There are a bunch of User Voice issues that I can't encourage you enough to vote for so we can finally get this feature:

* [Integrate Code Contracts more deeply in the .NET Framework](http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/2304022-integrate-code-contract-keywords-into-the-main-ne)
* [Integrate Code Contract Keywords into the main .Net Languages](http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/2304022-integrate-code-contract-keywords-into-the-main-ne)

With some or all of these improvements, C# 7 would move much closer to a provable language at compile-time, an improvement over being a safe language at run-time.

We can already indicate that instance data or properties are readonly. We can already mark methods as static to prevent the use of this. We can use ReSharper [NotNull] attributes to (kinda) enforce non-null references without using structs and incurring the debt of value-passing and -copying semantics.

I'm already quite happy with C# 5, but if you throw in some or all of the stuff outlined above, I'll be even happier. I'll still have stuff I can think of to increase expressiveness -- covariant return types for polymorphic methods or anchored types or relaxed contravariant type-conformance -- but this next set of features being discussed sounds really, really good.

I love the features of the language Eiffel, but haven't ever been able to use it for work. The tools and IDE are a bit stuck in the past (very dated on Windows; X11 required on OS X). The language is super-strong, with native support for contracts, anchored types, null-safe programming, contravariant type-conformance, covariant return types and probably much more that C# is slowly but surely including with each version. Unfair? I've been writing about this progress for years (from newest to oldest):

In part I of these series, we discussed applications, which provide the model and data provider, and sessions, which encapsulate high-level data context. In part II, we covered command types and inputs to the data pipeline.

In this article, we're going to take a look at the data pipeline itself.

Overview

The primary goal of the data pipeline is, of course, to correctly execute each query to retrieve data or command to store, delete or refresh data. The diagram to the right shows that the pipeline consists of several data handlers. Some of these refer to data sources, which can be anything: an SQL database or a remote service.1

The name "pipeline" is only somewhat appropriate: A command can jump out anywhere in the pipeline rather than just at the opposite end. A given command will be processed through the various data handlers until one of them pronounces the command to be "complete".

Command context: recap

In the previous parts, we learned that the input to the pipeline is an IDataCommandContext. To briefly recap, this object has the following properties:

Session: Defines the context within which to execute the command

Handler: Implements an abstraction for reading/writing values and flags to the objects (e.g. SetValue(IMetaProperty)); more detail on this later

Objects: The sequence of objects on which to operate (e.g. for save commands) or to return (e.g. for load commands)

ExecutableQuery: The query to execute when loading or deleting objects

MetaClass: The metadata that describes the root object in this command; more detail on this later as well
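The list above implies an interface along the following lines; member types are assumptions wherever the article doesn't name them.

```csharp
// Sketch of the command context as described above; not the literal API.
public interface IDataCommandContext
{
  // Context within which to execute the command.
  ISession Session { get; }

  // Abstraction for reading/writing values and flags on the objects.
  IObjectHandler Handler { get; }

  // Objects to operate on (save) or to return (load).
  IEnumerable<object> Objects { get; }

  // Query to execute when loading or deleting objects.
  IExecutableQuery ExecutableQuery { get; }

  // Metadata describing the root object in this command.
  IMetaClass MetaClass { get; }
}
```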

Handlers

Where the pipeline metaphor holds up is that the command context will always start at the same end. The ordering of data handlers is intended to reduce the amount of work and time invested in processing a given command.

Analyzers

The first stage of processing is to quickly analyze the command to handle cases where there is nothing to do. For example,

The command is to save or delete, but the sequence of Objects is empty

The command is to save or reload, but none of the objects in the sequence of Objects has changed

The command is to load data but the query restricts to a null value in the primary key or a foreign key that references a non-nullable, unique key.

It is useful to capture these checks in one or more analyzers for the following reasons:

All drivers share a common implementation for efficiency checks

Optimizations are applied independent of the data sources used

Driver code focuses on driver-specifics rather than general optimization
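An analyzer in this spirit might look like the following sketch; the member names are hypothetical, but the short-circuiting shape matches the description.

```csharp
// Hypothetical analyzer: a save command with no changed objects is
// complete before any cache or data source is ever contacted.
public class NoChangesAnalyzer : IDataHandler
{
  public void Handle(IDataCommandContext context)
  {
    if (context.CommandType == CommandType.Save &&
        !context.Objects.Any(o => o.HasChanged))
    {
      // Mark the command complete so later handlers are skipped.
      context.Complete();
    }
  }
}
```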

Caches

If the analyzer hasn't categorically handled the command and the command is to load data, the next step is to check caches. For the purposes of this article, there are two things that affect how long data is cached:

If the session is in a transacted state, then only immutable data, data that was loaded before the transaction began or data loaded within that transaction can be used. Data loaded/saved by other sessions -- possibly to global caches -- is not visible to a session in a transaction with an isolation level stricter than RepeatableRead.

The metadata associated with the objects can include configuration settings that control maximum caching lifetime as well as an access-timeout. The default settings are good for general use but can be tweaked for specific object types.

The ValueListDataHandler returns immutable data. Since the data is immutable, it can be used independent of the transaction-state of the session in which the command is executed.

The SessionCacheDataHandler returns data that's already been loaded or saved in this session, to avoid a call to a possibly high-latency back-end. This data is safe to use within the session with transactions because the cache is rolled back when a transaction is rolled back.

Data sources

If the analyzer and cache haven't handled a command, then we're finally at a point where we can no longer avoid a call to a data source. Data sources can be internal or external.

Databases

The most common type is an external database:

PostgreSQL 8.x and higher (PostgreSQL 9.x for schema migration)

SQL Server 2008 and higher (w/schema migration)

Mongo (no schema; no migration)

SQLite (not yet released)

Remoting

Another standard data source is the Quino remote application server, which provides a classic interface- and method-based service layer as well as mapping nearly the full power of Quino's generalized querying capabilities to an application server. That is, an application can switch smoothly from a direct connection to a database to using the remoting driver to call into a service layer instead.

The remoting driver supports both binary and JSON protocols. Further details are also beyond the scope of this article, but this driver has proven quite useful for scaling smaller client-heavy applications with a single database to thin clients talking to an application server.

Custom/Aspect-based

And finally, there is another way to easily include "mini" data drivers in an application. Any metaclass can include an IDataHandlerAspect that defines its own data driver as well as its capabilities. Most implementations use this technique to bind in immutable lists of data. But this technique has also been used to load/save data from/to external APIs, like REST services. We can take a look at some examples in more detail in another article.

The mini data driver created for use with an aspect can relatively easily be converted to a full-fledged data handler.

Local evaluation

The last step in a command is what Quino calls "local evaluation". Essentially, if a command cannot be handled entirely within the rest of the data pipeline -- either entirely by an analyzer, one or more caches or the data source for that type of object -- then the local analyzer completes the command.

What does this mean? Any orderings or restrictions in a query that cannot be mapped to the data source (e.g. a C# lambda too complex to translate to SQL) are evaluated on the client rather than the server. Therefore, any query that can be formulated in Quino can also be evaluated fully by the data pipeline -- the only question is how much of it can be executed on the server, where it would (usually) be more efficient to do so.