The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.

Highlights

Quino 2 is finally ready and will go out the door with a 2.1 rather than a 2.0 version number. The reason is that we released 2.0 internally and tested the hell out of it. 2.1 is the result of that testing. It includes a lot of bug fixes as well as API tweaks to make things easier for developers.

Addressed some performance regressions from 1.13 and added a suite of performance tests to keep better track of performance targets.

On top of that, I've gone through the backlog and found many issues that had either already been fixed, become obsolete or been inadequately specified. The Quino backlog dropped from 682 to 542 issues.

18 issues marked as won't fix and 46 issues marked as obsolete

Stopped supporting Glimpse, although there is a Quino.Web.Glimpse package for the support we do retain (QNO-4560)

Breaking changes

The following changes are marked with Obsolete attributes, so you'll get a hint as to how to fix the problem. Since these are changes from an unreleased version of Quino, they cause a compile error rather than just a warning.

UseMetaSchemaWinformDxFeedback() has been renamed to UseMetaschemaWinformDx()

UseSchemaMigrationSupport() has been renamed to UseIntegratedSchemaMigration()

MetaHttpApplicationBase.MetaApplication has been renamed to BaseApplication

The IServer.Run() extension method is no longer supported.

GetStandardFilters(), GetStandardFiltersForFormsAuthentication() and GetStandardFiltersForUnrestrictedAuthentication() are no longer supported. Instead, you should register filters in the IOC and use IWebFilterAttributeFactory.CreateFilters() to get the list of supported filters.

The ToolRequirementAttribute is no longer supported or used.

AssemblyExtensions.GetLoadableTypesWithInterface() is no longer supported

AssemblyTools.GetValidAssembly() has been replaced with AssemblyTools.GetApplicationAssembly(); GetExecutableName() and GetExecutablePath() have been removed.

All of the constant expressions on the MetaBuilderBase (e.g. EndOfTimeExpression) are obsolete. Use MetaBuilderBase.ExpressionFactory.Constants.EndOfTime instead.
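For example, a sketch of the constants change, assuming a metadata builder referenced as Builder:

```csharp
// Obsolete (Quino 2.0):
var endOfTime = Builder.EndOfTimeExpression;

// Quino 2.1:
var endOfTime = Builder.ExpressionFactory.Constants.EndOfTime;
```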

All of the global values on MetaObjectDescriptionExtensions are obsolete; instead, use the IMetaObjectFormatterSettings from the IOC to change settings on startup.

Similarly, the set of extension methods that included GetShortDescription() has been moved to the IMetaObjectFormatter. Obtain an instance from the IOC, as usual.


Highlights

In the beta1 and beta2 release notes, we read about changes to configuration, dependency reduction, the data driver architecture, DDL commands, security and access control in web applications and a new code-generation format.

In 2.0 final -- which was actually released internally on November 13th, 2015 (a Friday) -- we made the following additional improvements:

Moved the metadata table maintained for the schema-migrator to a proper Quino module. (QNO-4741)

Rebuilt the logging and messaging API and drastically simplified the implementation throughout (QNO-4688 w/sub-tasks, QNO-4954)

These notes are being published for completeness and documentation. The first publicly available release of Quino 2.x will be 2.1 or higher (release notes coming soon).

Breaking changes

As we've mentioned before, this release is absolutely merciless in regard to backwards compatibility. Old code is not retained as Obsolete. Instead, a project upgrading to 2.0 will encounter compile errors.

The following notes serve as an incomplete guide that will help you upgrade a Quino-based product.

As I wrote in the release notes for beta1 and beta2, if you arm yourself with a bit of time, ReSharper and the release notes (and possibly keep an Encodo employee on speed-dial), the upgrade is not difficult. It consists mainly of letting ReSharper update namespace references for you.

Global Search/Replace

Instead of going through the errors one by one, you can take care of a lot of them with the following search/replace pairs.

Standard Modules (e.g. Reporting, Security, etc.)

Because of how the module class-names have changed, the standard module ORM classes all have different names. The formula is that the ORM class-name no longer has its module name prepended.

ReportsReportDefinition => ReportDefinition

SecurityUser => User

And so on...

Furthermore, all modules have been converted to use the v2 code-generation format, which has the metadata separate from the ORM object. Therefore, instead of referencing metadata using the ORM class-name as the base, you use the module name as the base.
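A sketch of the difference might look like this. The identifier names below are purely illustrative -- the exact generated names depend on your model:

```csharp
// v1 format: metadata was referenced via the ORM class (hypothetical names)
var userClass = User.Metadata;

// v2 format: metadata is referenced via the module
var userClass = SecurityModule.User.Metadata;
```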

Using NuGet

If you're already using Visual Studio 2015, then the NuGet UI is a good choice for managing packages. If you're still on Visual Studio 2013, then the UI there is pretty flaky and we recommend using the console.

The examples below assume that you have configured a source called "Local Quino" (e.g. a local folder that holds the nupkg files for Quino).
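For instance, installing a package from that source in the Package Manager Console might look like this (the package id Quino.Meta is illustrative):

```powershell
Install-Package Quino.Meta -Source "Local Quino"
```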

Debugging Support

We recommend using Visual Studio 2015 if at all possible. Visual Studio 2013 is also supported, but we have all migrated to 2015 and our knowhow about 2013 and its debugging idiosyncrasies will deteriorate with time.

These are just brief points of interest to get you set up. As with the NuGet support, these instructions are subject to change as we gain more experience with debugging with packages as well.

Hook up to a working symbol-source server (e.g. TeamCity)

Get the local sources for your version

If you don't have a source server or it's flaky, then get the PDBs for the Quino version you're using (provided in Quino.zip as part of the package release)

Add the path to the PDBs to your list of symbol sources in the VS debugging options

Tell Visual Studio where the sources are when it asks during debugging

Tell R# how to map from the source folder (c:\BuildAgent\work\9a1bb0adebb73b1f for Quino 2.0.0-1765) to the location of your sources

Quino packages are no different than any other NuGet packages. We provide both standard packages as well as packages with symbols and sources. Any complications you encounter with them are due to the whole NuGet experience still being a bit in-flux in the .NET world.

An upcoming post will provide more detail and examples.

Creating Nuget Packages

We generally use our continuous integration server to create packages, but you can also create packages locally (it's up to you to make sure the version number makes sense, so be careful). These instructions are approximate and are subject to change. I provide them here to give you an idea of how packages are created. If they don't work, please contact Encodo for help.

Open PowerShell

Change to the %QUINO_ROOT%\src directory

Run nant build pack to build Quino and packages

Set up a local NuGet source named "Local Quino" pointing to %QUINO_ROOT%\nuget (one-time only)

Change to the directory where your Quino packages are installed for your solution.

Delete all of the Encodo/Quino packages

Execute nant nuget from your project directory to get the latest Quino build from your local folder
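Put together, a local package-creation session might look roughly like the following (a sketch only; paths and targets are as described in the steps above and may differ in your setup):

```powershell
cd $env:QUINO_ROOT\src
nant build pack   # build Quino and create the nupkg files

# one-time setup of the local source
nuget sources add -Name "Local Quino" -Source $env:QUINO_ROOT\nuget

# from your solution's packages directory: delete the old Encodo/Quino
# packages, then, from your project directory:
nant nuget        # pull the latest Quino build from the local folder
```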

If you're like us at Encodo, you moved to SSDs years ago...and never looked back. However, SSDs are generally smaller because the price (still) ramps up quickly as you increase size. We've almost standardized on 512GB, but some of us still have 256GB drives.

Unfortunately, the knowledge that we all have giant hard drives started a trend among manufacturers of just installing everything, just in case you might need it. This practice didn't really cause problems when we were still using terabyte-sized HDs. But now we are, once again, more sensitive to unnecessary installations.

If you're a Windows .NET developer, you'll feel the pinch more quickly as you've got a relatively heavyweight Visual Studio installation (or three...) as well as Windows 8.1 itself, which weighs in at about 60GB after all service packs have been installed.

Once you throw some customer data and projects and test databases on your drive, you might find that you need, once again, to free up some space on your drive.

System Cleanup is back

One additional tip I have is to use Win + S to search for "Free up disk space by deleting unnecessary files"1 and run that application in "clean up system files" mode: the latest version will throw out as much Windows Update detritus as it can, which can clean up gigabytes of space.

Remove Old Visual Studios

The other measure you can take is to remove programs that you don't use anymore: for .NET developers that means you should finally toss out Visual Studio 2010 -- and possibly even 2013, if you've made the move to the new and improved 2015 already.2 Removing these versions also has the added benefit that extensions and add-ons will no longer try to install themselves into these older Visual Studios anymore.

However, even if you do remove VS2010, for example, you might find that it just magically reappears again. Now, I'm not surprised when I see older runtimes and redistributables in my list of installed programs -- it makes sense to keep these for applications that rely on them -- but when I see the entire VS2010 SP1 has magically reappeared, I'm confused.

Imagine my surprise when I installed SQL Server Management Studio 2016 -- the November 2015 Preview -- and saw the following installation item:

However, if you do remove this item again, then SQL Server Management Studio will no longer run (no surprise there, now that we know that SSMS installed it). But if you're just doing cleanup and don't know about this dependency3, you might accidentally break tools. So be careful; if you're too aggressive, you'll end up having to re-install some stuff.4

The reason I write that "it's back" is that for a couple of versions of Windows, Microsoft made it an optional download/feature instead of installing it by default.↩

Be careful about removing Visual Studio 2013 if you have web projects that still rely on targets installed with VS2013 but not included in VS2015. I uninstalled 2013 on my laptop and noticed a warning about an MS target that the compiler could no longer find.↩

The fact that Windows still can't tell you about dependencies is a story for another day. We should have had a package manager on Windows years ago. And, no, while Choco is a lovely addition, it's not quite the full-fledged package manager that aptitude is on Ubuntu.↩

These days nobody who's anybody in the software-development world is writing software without tests. Just writing them doesn't help make the software better, though. You also need to be able to execute tests -- reliably and quickly and repeatably.

To do that, you'll have to get yourself a test runner, which is a different tool from the compiler or the runtime. That is, just because your tests compile (satisfy all of the language rules) and could be executed doesn't mean that you're done writing them yet.

Testing framework requirements

Every testing framework has its own rules for how the test runner selects methods for execution as tests. The standard configuration options are:

Which classes should be considered as test fixtures?

Which methods are considered tests?

Where do parameters for these methods come from?

Is there startup/teardown code to execute for the test or fixture?

Each testing framework will offer different ways of configuring your code so that the test runner can find and execute setup/test/teardown code. To write NUnit tests, you decorate classes, methods and parameters with C# attributes.

The standard scenario is relatively easy to execute -- run all methods with a Test attribute in a class with a TestFixture attribute on it.
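In its simplest form, an NUnit test looks something like this:

```csharp
using NUnit.Framework;

// The test runner finds this class via its TestFixture attribute...
[TestFixture]
public class CalculatorTests
{
    // ...and executes every method marked with a Test attribute.
    [Test]
    public void Add_TwoPlusTwo_ReturnsFour()
    {
        Assert.AreEqual(4, 2 + 2);
    }
}
```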

Test-runner Requirements

There are legitimate questions for which even the best specification does not provide answers.

When you consider multiple base classes and generic type arguments, each of which may also have NUnit attributes, things get a bit less clear. In that case, not only do you have to know what NUnit offers as possibilities but also whether the test runner that you're using also understands and implements the NUnit specification in the same way. Not only that, but there are legitimate questions for which even the best specification does not provide answers.

At Encodo, we use Visual Studio 2015 with ReSharper 9.2 and we use the ReSharper test runner. We're still looking into using the built-in VS test runner -- the continuous-testing integration in the editor is intriguing1 -- but it's quite weak when compared to the ReSharper one.

So, not only do we have to consider what the NUnit documentation says is possible, but we must also know how the R# test runner interprets the NUnit attributes and what is supported.

Getting More Complicated

Where is there room for misunderstanding? A few examples:

What if there's a TestFixture attribute on an abstract class?

How about a TestFixture attribute on a class with generic parameters?

Ok, how about a non-abstract class with Tests but no TestFixture attribute?

And, finally, a non-abstract class with Tests but no TestFixture attribute, but there are non-abstract descendants that do have a TestFixture attribute?
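To make the questions concrete, here are sketches of the constructs listed above:

```csharp
using NUnit.Framework;

[TestFixture]
public abstract class AbstractBaseTests { /* ... */ }  // fixture on an abstract class

[TestFixture]
public class GenericTests<T> { /* ... */ }             // fixture with a generic parameter

public class UnattributedTests                         // tests without a TestFixture attribute
{
    [Test]
    public void SomeTest() { }
}

[TestFixture]
public class ConcreteTests : UnattributedTests { }     // descendant that does have the attribute
```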

In our case, the answer to these questions depends on which version of R# you're using. Even though it feels like you configured everything correctly and it logically should work, the test runner sometimes disagrees.

Sometimes it shows your tests as expected, but refuses to run them (Inconclusive FTW!)

Or other times, it obstinately includes generic base classes that cannot be instantiated into the session, then complains that you didn't execute them. When you try to delete them, it brings them right back on the next build. When you try to run them -- perhaps not noticing that it's those damned base classes -- then it complains that it can't instantiate them. Look of disapproval.

Throw the TeamCity test runner into the mix -- which is ostensibly the same as that from R# but still subtly different -- and you'll have even more fun.

Improving Integration with the R# Test Runner

At any rate, now that you know the general issue, I'd like to share the ground rules we've come up with that avoid all of the issues described above. The text below comes from the issue I created for the impending release of Quino 2.

Environment

Windows 8.1 Enterprise

Visual Studio 2015

ReSharper 9.2

Expected behavior

Non-leaf-node base classes should never appear as nodes in test runners. A user should be able to run tests in descendants directly from a fixture or test in the base class.

Observed behavior

Non-leaf-node base classes are shown in the R# test runner in both versions 9 and 10. A user must navigate to the descendant to run a test. The user can no longer run all descendants or a single descendant directly from the test.

Analysis

Relatively recently, in order to better test a misbehaving test runner and accurately report issues to JetBrains, I standardized all tests to the same pattern:

Do not use abstract anywhere (the base classes don't technically need it)

Use the TestFixture attribute only on leaf nodes

This worked just fine with ReSharper 8.x but causes strange behavior in both R# 9.x and 10.x. We discovered recently that not only did the test runner act strangely (something that they might fix), but also that the unit-testing integration in the files themselves behaved differently when the base class is abstract (something JetBrains is unlikely to fix).

You can see that R# treats a non-abstract class with tests as a testable entity, even when it doesn't actually have a TestFixture attribute and even when it expects a generic type parameter in order to be instantiated.

Here it's not working well in either the source file or the test runner. In the source file, you can see that it offers to run tests in a category, but not the tests from actual descendants. If you try to run or debug anything from this menu, it shows the fixture with a question-mark icon and marks any tests it manages to display as inconclusive. This is not surprising, since the test fixture may not be abstract, but does require a type parameter in order to be instantiated.

Here it looks and acts correctly:

I've reported this issue to JetBrains, but our testing structure either isn't very common or it hasn't made it to their core test cases, because neither 9 nor 10 handles them as well as the 8.x runner did.

Now that we're also using TeamCity a lot more to not only execute tests but also to collect coverage results, we'll capitulate and just change our patterns to whatever makes R#/TeamCity the happiest.

Solution

Make all testing base classes that include at least one Test or Category attribute abstract. Base classes that do not have any testing attributes do not need to be made abstract.

Once more to recap our ground rules for making tests:

Include TestFixture only on leaf classes (classes with no descendants)

You can put Category or Test attributes anywhere in the hierarchy, but then you need to declare the class as abstract.

Base classes that have no testing attributes do not need to be abstract

If you feel you need to execute tests in both a base class and one of its descendants, then you're probably doing something wrong. Make two descendants of the base class instead.
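Applied to code, the rules look something like this (a minimal sketch):

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Has a Test attribute, so it must be abstract and must not have TestFixture.
public abstract class ListTestsBase
{
    [Test]
    public void NewList_IsEmpty()
    {
        Assert.AreEqual(0, CreateList().Count);
    }

    protected abstract IList<int> CreateList();
}

// TestFixture goes only on the leaf class.
[TestFixture]
public class StandardListTests : ListTestsBase
{
    protected override IList<int> CreateList() => new List<int>();
}
```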

When you make the change, you can see the improvement immediately.

ReSharper 10.0 also offers continuous integration, but our experiments with the EAP builds and the first RTM build left us underwhelmed and we downgraded to 9.2 until JetBrains manages to release a stable 10.x.↩

As part of the final release process for Quino 2, we've upgraded 5 solutions1 from Quino 1.13 to the latest API in order to shake out any remaining API inconsistencies or even just inelegant or clumsy calls or constructs. A lot of questions came up during these conversions, so I wrote the following blog to provide detail on the exact workings and execution order of a Quino application.

Stage 2

Let's tackle this one last because it requires a bit more explanation.

Stage 3

Technically, an application can add code to this stage by adding an IApplicationAction before the ServicesConfigured action. Use the Configure<TService>() extension method in stage 1 to configure individual services, as shown below.
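A sketch of such a call, using a hypothetical settings service (the pattern, not the service name, is what matters here):

```csharp
// IFileLogSettings is a hypothetical service registered in the IOC.
application.Configure<IFileLogSettings>(
  settings => settings.Path = @"C:\logs\app.log"
);
```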

If your application is a service, like a daemon or a web server or whatever, then you'll want to execute stages 1--3 and then let the framework send requests to your application's running services. When the framework sends the termination signal, execute stage 5 by disposing of the application. Instead of calling Run, you'll call CreateAndStartUp.
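A rough sketch of that service lifecycle; only Run, CreateAndStartUp and disposal are named in the text, so the exact signatures are assumptions:

```csharp
// Stages 1-3: configure, register and start up, but don't "run"
var application = manager.CreateAndStartUp();

// ... the framework now dispatches requests to the application's
// running services ...

// Stage 5: on the termination signal, dispose to shut down
application.Dispose();
```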

Stage 5

Every application has certain tasks to execute during shutdown. For example, an application will want to close down any open connections to external resources, close files (especially log files) and perhaps inform the user of shutdown.

Instead of exposing a specific "shutdown" method, a Quino 2.0 application can simply be disposed to shut it down.

If you use ApplicationManager.Run() as shown above, then you're already sorted -- the application will be disposed and the user will be informed in case of catastrophic failure; otherwise, you can shut down and get the final application transcript from the disposed object.

Stage 2 Redux

An IOC has two phases: in the first phase, the application registers services with the IOC; in the second phase, the application uses services from the IOC.

An application should use the IOC as much as possible, so Quino keeps stage 2 as short as possible. Because it can't use the IOC during the registration phase, code that runs in this stage shares objects via a poor-man's IOC built into the IApplication that allows modification and only supports singletons. Luckily, very little end-developer application code will ever need to run in this stage. It's nevertheless interesting to know how it works.

Obviously, any code in this stage that uses the IOC will cause it to switch from phase one to phase two and subsequent attempts to register services will fail. Therefore, while application code in stage 2 has to be careful, you don't have to worry about not knowing you've screwed up.

Why would we have this stage? Some advocates of using an IOC claim that everything should be configured in code. However, it's not uncommon for applications to want to run very differently based on command-line or other configuration parameters. The Quino startup handles this by placing the following actions in stage 2:

Parse and apply command-line

Import and apply external configuration (e.g. from file)

An application is free to insert more actions before the ServicesInitialized action, but they have to play by the rules outlined above.

"Single" objects

Code in stage 2 shares objects by calling SetSingle() and GetSingle(). There are only a few objects that fall into this category.
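For example (a sketch using the service names from this article):

```csharp
// Stage 2: share an object via the poor-man's IOC...
application.SetSingle<ILocationManager>(locationManager);

// ...and retrieve one that was shared earlier
var logger = application.GetSingle<ILogger>();
```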

The calls UseCore() and UseApplication() register most of the standard objects used in stage 2. Actually, while they're mostly used during stage 2, some of them are also added to the poor man's IOC in case of catastrophic failure, in which case the IOC cannot be assumed to be available. A good example is the IApplicationCrashReporter.

Executing Stages

Before listing all of the objects, let's take a rough look at how a standard application is started. The following steps outline what we consider to be a good minimum level of support for any application. Of course, the Quino configuration is modular, so you can take as much or as little as you like. You can use a naked Application -- which has absolutely nothing registered -- or call UseCore() to get a bit more -- it registers a handful of low-level services but no actions -- but we recommend calling at least UseApplication(), which adds most of the functionality outlined below.

Create application: This involves creating the IOC and most of the IOC registration as well as adding most of the application startup actions (stage 1)

Set debug mode: Get the final value of RunMode from the IRunSettings to determine if the application should catch all exceptions or let them go to the debugger. This involves getting the IRunSettings from the application and getting the final value using the IApplicationManagerPreRunFinalizer. This is commonly an implementation that allows setting the value of RunMode from the command line in debug builds. This further depends on the ICommandSetManager (which depends on the IValueTools) and possibly the ICommandLineSettings (to set the CommandLineConfigurationFilename if it was set by the user).

Process command line: Set the ICommandProcessingResult, possibly setting other values and adding other configuration steps to the list of startup actions (e.g. many command-line options are switches that are handled by calling Configure<TSettings>() where TSettings is the configuration object in the IOC to modify).

Read configuration file: Load the configuration data into the IConfigurationDataSettings, involving the ILocationManager to find configuration files and the ITextValueNodeReader to read them.

The ILogger is used throughout by various actions to log application behavior

If there is an unhandled error, the IApplicationCrashReporter uses the IFeedback or the ILogger to notify the user and log the error

The IInMemoryLogger is used to include all in-memory messages in the IApplicationTranscript

The next section provides detail to each of the individual objects referenced in the workflow above.

Available Objects

You can get any one of these objects from the IApplication in one of several ways: by calling GetSingle<TService>() (safe in all situations), by calling GetInstance<TService>() (safe only in stage 3 or later), or -- almost always -- via a method that starts with "Use" and ends in the service name.

The example below shows how to get the ICommandSetManager2 if you need it.
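A sketch of the three calls; the exact name of the "Use" method is an assumption based on the naming convention described above:

```csharp
var manager1 = application.GetSingle<ICommandSetManager>();    // safe in all stages
var manager2 = application.GetInstance<ICommandSetManager>();  // safe from stage 3 on
var manager3 = application.UseCommandSetManager();             // assumed "Use" convenience method
```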

All three calls return the exact same object, though. The first two from the poor-man's IOC; the last from the real IOC.

Only applications that need access to low-level objects or need to mess around in stage 2 need to know which objects are available where and when. Most applications don't care and will just always use GetInstance().

The objects in the poor-man's IOC are listed below.

Core

IValueTools: converts values; used by the command-line parser, mostly to translate enumerate values and flags

ILocationManager: an object that manages aliases for file-system locations, like "Configuration", from which configuration files should be loaded, or "UserConfiguration", where user-specific overlay configuration files are stored; used by the configuration loader

ILogger: a reference to the main logger for the application

IInMemoryLogger: a reference to an in-memory message store for the logger (used by the ApplicationManager to retrieve the message log from a crashed application)

IMessageFormatter: a reference to the object that formats messages for the logger

Command line

ICommandSetManager: sets the schema for a command line; used by the command-line parser

ICommandProcessingResult: contains the result of having processed the command line

ICommandLineSettings: defines the properties needed to process the command line (e.g. the Arguments and CommandLineConfigurationFilename, which indicates the optional filename to use for configuration in addition to the standard ones)

Configuration

IConfigurationDataSettings: defines the ConfigurationData which is the hierarchical representation of all configuration data for the application as well as the MainConfigurationFilename from which this data is read; used by the configuration-loader

ITextValueNodeReader: the object that knows how to read ConfigurationData from the file formats supported by the application3; used by the configuration-loader

Run

IRunSettings: an object that manages the RunMode ("release" or "debug"), which can be set from the command line and is used by the ApplicationManager to determine whether to use global exception-handling

IApplicationManagerPreRunFinalizer: a reference to an object that applies any options from the command line before the decision of whether to execute in release or debug mode is taken.

IApplicationCrashReporter: used by the ApplicationManager in the code surrounding the entire application execution and therefore not guaranteed to have a usable IOC available

IApplicationDescription: used together with the ILocationManager to set application-specific aliases to user-configuration folders (e.g. AppData\{CompanyTitle}\{ApplicationTitle})

IApplicationTranscript: an object that records the last result of having run the application; returned by the ApplicationManager after Run() has completed, but also available through the application object returned by CreateAndStartUp() to indicate the state of the application after startup.

Each of these objects has a very compact interface and has a single responsibility. An application can easily replace any of these objects by calling UseSingle() during stage 1 or 2. This call sets the object in both the poor-man's IOC as well as the real one. For those rare cases where a non-IOC singleton needs to be set after the IOC has been finalized, the application can call SetSingle(), which does not touch the IOC. This feature is currently used only to set the IApplicationTranscript, which needs to happen even after the IOC registration is complete.

Quino has long included support for connecting to an application server instead of connecting directly to databases or other sources. The application server uses the same model as the client and provides modeled services (application-specific) as well as CRUD for non-modeled data interactions.

We wrote the first version of the server in 2008. Since then, it's acquired better authentication and authorization capabilities as well as routing and state-handling. We've always based it on the .NET HttpListener.

Old and Busted

As late as Quino 2.0-beta2 (which we had deployed in production environments already), the server hierarchy looked like the screenshot below, pulled from issue QNO-4927:

This screenshot was captured after a few unneeded interfaces had already been removed. As you can see by the class names, we'd struggled heroically to deal with the complexity that arises when you use inheritance rather than composition.

The state-handling was welded onto an authentication-enabled server, and the base machinery for supporting authentication was spread across three hierarchy layers. The hierarchy only hints at composition in its naming: the "Stateful" part of the class name CoreStatefulHttpServerBase<TState> had already been moved to a state provider and a state creator in previous versions. That support is unchanged in the 2.0 version.

Implementation Layers

We mentioned above that implementation was "spread across three hierarchy layers". There's nothing wrong with that, in principle. In fact, it's a good idea to encapsulate higher-level patterns in a layer that doesn't introduce too many dependencies and to introduce dependencies only in other layers. This allows applications not only to use a common implementation without pulling in unwanted dependencies, but also to profit from the common tests that ensure the components work as advertised.

In Quino, the following three layers are present in many components:

Abstract: a basic encapsulation of a pattern with almost no dependencies (generally just Encodo.Core).

Standard: a functional implementation of the abstract pattern with dependencies on non-metadata assemblies (e.g. Encodo.Application, Encodo.Connections and so on)

Quino: an enhancement of the standard implementation that makes use of metadata to fill in implementation left abstract in the previous layer. Dependencies can include any of the Quino framework assemblies (e.g. Quino.Meta, Quino.Application and so on).

The hierarchy is now extremely flat. There is an IServer interface and a Server implementation, both generic in TListener, of type IServerListener. The server manages a single instance of an IServerListener.

The listener, in turn, has an IHttpServerRequestHandler, the main implementation of which uses an IHttpServerAuthenticator.

As mentioned above, the IServerStateProvider is included in this diagram, but is unchanged from Quino 2.0-beta3, except that it is now used by the request handler rather than directly by the server.

You can see how the abstract layer is enhanced by an HTTP-specific layer (the Encodo.Server.Http namespace) and the metadata-specific layer is nicely encapsulated in three classes in the Quino.Server assembly.

Server Components and Flow

This type hierarchy has decoupled the main elements of the workflow of handling requests for a server:

The server manages listeners (currently a single listener), created by a listener factory

The listener, in turn, dispatches requests to the request handler

The request handler uses the route handler to figure out where to direct the request

The route handler uses a registry to map requests to response items

The request handler asks the state provider for the state for the given request

The state provider checks its cache for the state (the default support uses persistent states to cache sessions for a limited time); if not found, it creates a new one

Finally, the request handler checks whether the user for the request is authenticated and/or authorized to execute the action and, if so, executes the response items

It is important to note that this behavior is unchanged from the previous version -- it's just that now each step is encapsulated in its own component. The components are small and easily replaced, with clear and concise interfaces.

Note also that the current implementation of the request handler is for HTTP servers only. Should the need arise, however, it would be relatively easy to abstract away the HttpListener dependency and generalize most of the logic in the request handler for any kind of server, regardless of protocol and networking implementation. Only the request handler is affected by the HTTP dependency, though: authentication, state-provision and listener-management can all be re-used as-is.

Also of note is that the only full-fledged implementation is for metadata-based applications. At the bottom of the diagram, you can see the metadata-specific implementations for the route registry, state provider and authenticator. This is reflected in the standard registration in the IOC.
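A registration along the following lines illustrates what is meant by the standard registration in the IOC. The method and type names below are assumptions for illustration, not the verbatim Quino registration:

```csharp
// Hypothetical sketch; the actual registration calls in Quino may differ.
application
  .RegisterSingle<IServer, Server<IServerListener>>()
  .RegisterSingle<IHttpServerRequestHandler, HttpServerRequestHandler>()
  .RegisterSingle<IHttpServerAuthenticator, MetaHttpServerAuthenticator>()
  .RegisterSingle<IServerRouteRegistry, MetaServerRouteRegistry>()
  .RegisterSingle<IServerStateProvider, MetaServerStateProvider>();
```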

As you can see, the registration is extremely fine-grained and allows very precise customization as well as easy mocking and testing.

Any Men in Black fans out there? Tommy Lee Jones was "old and busted" while Will Smith was "the new hotness"? No? Just me? All righty then...↩

This diagram brought to you by the diagramming and architecture tools in ReSharper 9.2. Just select the files or assemblies you want to diagram in the Solution Explorer and choose the option to show them in a diagram. You can right-click any type or assembly to show dependent or referenced modules or types. For type diagrams, you can easily control which relationships are to be shown (e.g. I hide aggregations to avoid clutter) and how the elements are to be grouped (e.g. I grouped by namespace to include the boxes in my diagram).↩

Please note that what follows is a description of how I have used the tool -- so far -- to get my very specific tasks accomplished. If you're looking to solve other problems or want to solve the same problems more efficiently, you should take a look at the official NDepend documentation.

What were we doing?

To recap briefly: we are reducing dependencies among top-level namespaces in two large assemblies, in order to be able to split them up into multiple assemblies. The resulting assemblies will have dependencies on each other, but the idea is to make at least some parts of the Encodo/Quino libraries opt-in.

The plan of attack

On a high-level, I tackled the task in the following loosely defined phases.

Remove direct, root-level dependencies

This is the big first step -- to get rid of the little black boxes. I made NDepend show only direct dependencies at first, to reduce clutter. More on specific techniques below.

Remove indirect dependencies

Crank up the magnification to show indirect dependencies as well. This will help you root out the remaining cycles, which can be trickier to find if you're not showing enough detail. Conversely, if you turn on indirect dependencies too soon, you'll be overwhelmed by darkness (see the depressing initial state of the Encodo assembly to the right).

Examine dependencies between root-level namespaces

Even once you've gotten rid of all cycles, you may still have unwanted dependencies that hinder splitting namespaces into the desired constellation of assemblies.

For example, the plan is to split all logging and message-recording into an assembly called Encodo.Logging. However, the IRecorder interface (with a single method, Log()) is used practically everywhere. It quickly becomes necessary to split interfaces and implementation -- with many more potential dependencies -- into two assemblies for some very central interfaces and support classes. In this specific case, I moved IRecorder to Encodo.Core.

Even after you've conquered the black hole, you might still have quite a bit of work to do. Never fear, though: NDepend is there to help root out those dependencies as well.

Examine cycles in non-root namespaces

Because we can split off smaller assemblies regardless, these dependencies are less important to clean up for our current purposes. However, once this code is packed into its own assembly, its namespaces become root namespaces of their own and -- voilà! -- you have more potentially nasty dependencies to deal with. Granted, the problem is less severe because you're dealing with a logically smaller component.

In Quino, we use non-root namespaces more for organization and less for defining components. Still, cycles are cycles and they're worth examining and at least plucking the low-hanging fruit.

Removing root-level namespace cycles

With the high-level plan described above in hand, I repeated the following steps for the many dependencies I had to untangle. Don't despair if it looks like your library has a ton of unwanted dependencies. If you're smart about the ones you untangle first, you can make excellent -- and, most importantly, rewarding -- progress relatively quickly.1

Show the dependency matrix

Choose the same assembly in the row and column

Choose a square that's black

Click the name of the namespace in the column to show sub-namespaces

Do the same in a row

Keep zooming until you can see where there are dependencies that you don't want

Refactor/compile/run NDepend analysis to show changes

GOTO 1

Once again, with pictures!

The high-level plan of attack sounded interesting, but might have left you cold with its abstraction. Then there was the promise of detail with a focus on root-level namespaces, but alas, you might still be left wondering just how exactly do you reduce these much-hated cycles?

I took some screenshots as I worked on Quino, to document my process and point out parts of NDepend I thought were eminently helpful.

Show only namespaces

I mentioned above that you should "[k]eep zooming in", but how do you do that? A good first step is to zoom all the way out and show only direct namespace dependencies. This focuses only on using references instead of the much-more frequent member accesses. In addition, I changed the default setting to show dependencies in only one direction -- when a column references a row (blue), but not vice versa (green).

As you can see, the diagrams are considerably less busy than the one shown above. Here, we can see a few black spots that indicate cycles, but it's not so many as to be overwhelming.2 You can hover over the offending squares to show more detail in a popup.

Show members

If you don't see any more cycles between namespaces, switch the detail level to "Members". Another very useful feature is to "Bind Matrix", which forces the columns and rows to be shown in the same order and concentrates the cycles in a smaller area of the matrix.

As you can see in the diagram, NDepend then highlights the offending area and you can even click the upper-left corner to focus the matrix only on that particular cycle.

Drill down to classes

Once you're looking at members, it isn't enough to know just the namespaces involved -- you need to know which types are referencing which types. The powerful matrix view lets you drill down through namespaces to show classes as well.

If your classes are large -- another no-no, but one thing at a time -- then you can drill down to show which method is calling which method to create the cycle. In the screenshot to the right, you can see where I had to do just that in order to finally figure out what was going on.

In that screenshot, you can also see something that I only discovered after using the tool for a while: the direction of usage is indicated with an arrow. You can turn off the tooltips -- which are informative, but can be distracting for this task -- and you don't have to remember which color (blue or green) corresponds to which direction of usage.

Indirect dependencies

Once you've drilled your way down from namespaces-only, to member dependencies, to classes and even individual members, your diagram should be shaping up quite well.

On the right, you'll see a diagram of all direct dependencies for the remaining area with a problem. You don't see any black boxes, which means that all direct dependencies are gone. So we have to turn up the power of our microscope further to show indirect dependencies.

On the left, you can see that the scary, scary black hole from the start of our journey has been whittled down to a small, black spot. And that's with all direct and indirect dependencies as well as both directions of usage turned on (i.e. the green boxes are back). This picture is much more pleasing, no?

Queries and graphs

For the last cluster of indirect dependencies shown above, I had to unpack another feature: NDepend queries. You can select any element and run a query to show using/used-by assemblies/namespaces.3 The results are shown in a panel, where you can edit the query and see live updates immediately.

Even with a highly zoomed-in view on the cycle, I still couldn't see the problem, so I took NDepend's suggestion and generated a graph of the final indirect dependency between Culture and Enums (through Expression). At this zoom level, the graph becomes more useful (for me) and illuminates problems that remain muddy in the matrix (see right).

Crossing the finish line

In order to finish the job efficiently, here are a handful of miscellaneous tips that are useful, but didn't fit into the guide above.

I set NDepend to automatically re-run an analysis on a successful build. The matrix updates automatically to reflect changes from the last analysis and won't lose your place.

If you have ReSharper, you'll generally be able to tell whether you've fixed the dependencies because the usings will be grayed out in the offending file. You can make several fixes at once before rebuilding and rerunning the analysis.

At higher zoom levels (e.g. having drilled down to methods), it is useful to toggle display of row dependencies back on because the dependency issue is only clear when you see the one green box in a sea of blue.

Though Matrix Binding is useful for localizing, remember to toggle it off when you want to drill down in the row independently of the namespace selected in the column.

And BOOM! just like that4, phase 1 (root namespaces) for Encodo was complete! Now, on to Quino.dll...

Conclusion

Depending on what shape your library is in, do not underestimate the work involved. Even with NDepend riding shotgun and barking out the course like a rally navigator, you still have to actually make the changes. That means lots of refactoring, lots of building, lots of analysis, lots of running tests and lots of reviews of at-times quite-sweeping changes to your code base. The destination is worth the journey, but do not embark on it lightly -- and don't forget to bring the right tools.5

This can be a bit distracting: you might get stuck trying to figure out which of all these offenders to fix first.↩

I'm also happy to report that my initial forays into maintaining a relatively clean library -- as opposed to cleaning it -- with NDepend have been quite efficient.↩

And much more: I don't think I've even scratched the surface of the analysis and reporting capabilities offered by this ability to directly query the dependency data.↩

In this case, in case it's not clear: NDepend for analysis and good ol' ReSharper for refactoring. And ReSharper's new(ish) architecture view is also quite good, though not even close to detailed enough to replace NDepend: it shows assembly-level dependencies only.↩

A lot of work has been put into Quino 2.0¹, with almost no stone left unturned. Almost every subsystem has been refactored and simplified, including but not limited to the data driver, the schema migration, generated code and metadata, model-building, security and authentication, service-application support and, of course, configuration and execution.

Two of the finishing touches before releasing 2.0 are to reorganize all of the code into a more coherent namespace structure and to reduce the size of the two monolithic assemblies: Encodo and Quino.

A Step Back

The first thing to establish is: why are we doing this? Why do we want to reduce dependencies and reduce the size of our assemblies? There are several reasons, but a major reason is to improve the discoverability of patterns and types in Quino. Two giant assemblies are not inviting -- they are, in fact, daunting. Replace these assemblies with dozens of smaller ones and users of your framework will be more likely to (A) find what they're looking for on their own and (B) build their own extensions with the correct dependencies and patterns. Neither of these is guaranteed, but smaller modules are a great start.

Another big reason is portability. The .NET Core was released as open-source software some time ago and more and more .NET source code is added to it each day. There are portable targets, non-Windows targets, Universal-build targets and much more. It makes sense to split code up into highly portable units with as few dependencies as possible. That is, the dependencies should be explicit and intended.

Not only that, but NuGet packaging has come to the fore more than ever. Quino was originally designed to keep third-party boundaries clear, but we wanted to make it as easy as possible to use Quino. Just include Encodo and Quino and off you went. However, with NuGet, you can now say you want to use Quino.Standard and you'll get Quino.Core, Encodo.Core, Encodo.Services.SimpleInjector, Quino.Services.SimpleInjector and other packages.

With so much interesting code in the Quino framework, we want to make it available as much as possible not only for our internal projects but also for customer projects where appropriate and, also, possibly for open-source distribution.

NDepend

I've used NDepend before2 to clean up dependencies. However, the last analysis I did about a year ago showed quite deep problems3 that needed to be addressed before any further dependency analysis could bear fruit at all. With that work finally out of the way, I'm ready to re-engage with NDepend and see where we stand with Quino.

As luck would have it, NDepend is in version 6, released at the start of summer 2015. As was the case last year, NDepend has generously provided me with an upgrade license to allow me to test and evaluate the new version with a sizable and real-world project.

I really, really like the depth of insight NDepend gives me into my code. I find myself thinking "SOLID" much more often when I have NDepend shaking its head sadly at me, tsk-tsking at all of the dependency snarls I've managed to build.

It's fast and super-reliable. I can work these checks into my workflow relatively easily.

I'm using the matrix view a lot more than the graphs because even NDepend recommends I don't use a graph for the number of namespaces/classes I'm usually looking at

Where the graph view is super-useful is for examining indirect dependencies, which are harder to decipher in the matrix

I've found so many silly mistakes/lazy decisions that would lead to confusion for developers new to my framework

I'm spending so much time with it and documenting my experiences because I want more people at my company to use it

I haven't even scratched the surface of the warnings/errors but want to get to that, as well (the Dashboard tells me of 71 rules violated; 9 critical; I'm afraid to look :-)

Use Cases

Before I get more in-depth with NDepend, please note that there are at least two main use cases for this tool4:

Clean up a project or solution that has never had a professional dependency checkup

Analyze and maintain separation and architectural layers in a project or solution

These two use cases are vastly different. The first is like cleaning a gas-station bathroom for the first time in years; the second is more like the weekly once-over you give your bathroom at home. The tools you'll need for the two jobs are similar, but quite different in scope and power. The same goes for NDepend: how you'll use it to claw your way back to architectural purity is different than how you'll use it to occasionally clean up an already mostly-clean project.

Quino is much better than it was the last time we peeked under the covers with NDepend, but we're still going to need a bucket of industrial cleaner before we're done.5

The first step is to make sure that you're analyzing the correct assemblies. Show the project properties to see which assemblies are included. You should remove all assemblies from consideration that don't currently interest you (especially if your library is not quite up to snuff, dependency-wise; afterwards, you can leave as many clean assemblies in the list as you like).6

Industrial-strength cleaner for Quino

Running an analysis with NDepend 6 generates a nice report, which includes the following initial dependency graph for the assemblies.

As you can see, Encodo and Quino depend only on system assemblies, but there are components that pull in other references where they might not be needed. The initial dependency matrices for Encodo and Quino both look much better than they did when I last generated one. The images below show what we have to work with in the Encodo and Quino assemblies.

It's not as terrible as I've made out, right? There is far less namespace-nesting, so it's much easier to see where the bidirectional dependencies are. There are only a handful of cyclic dependencies in each library, with Encodo edging out Quino because of (A) the nature of the code and (B) the fact that I'd put more effort into Encodo so far.

I'm not particularly surprised to see that this is relatively clean because we've put effort into keeping the external dependencies low. It's the internal dependencies in Encodo and Quino that we want to reduce.

Small and Focused Assemblies

The goal, as stated in the title of this article, is to split Encodo and Quino into separate assemblies. While removing cyclic dependencies is required for such an operation, it's not sufficient. Even without cycles, it's still possible that a given assembly is too dependent on other assemblies.

Before going any farther, I'm going to list the assemblies we'd like to have. By "like to have", I mean the list that we'd originally planned plus a few more that we added while doing the actual splitting.7 The images on the right show the assemblies in Encodo, Quino and a partial overview of the dependency graph (calculated with the ReSharper Architecture overview rather than with NDepend, just for variety).

Of these, the following assemblies and their dependencies are of particular interest[^8]:

This seems like a good spot to stop, before getting into the nitty-gritty detail of how we used NDepend in practice. In the next article, I'll discuss both the high-level and low-level workflows I used with NDepend to efficiently clear up these cycles. Stay tuned!

If you already see the correct assemblies in the list, you should still check that NDepend picked up the right paths. That is, if you haven't followed the advice in NDepend's white paper and still have a different `bin` folder for each assembly, you may see something like the following in the tooltip when you hover over the assembly name:

Several valid .NET assemblies with the name have been found. They all have the same version. the one with the biggest file has been chosen.

If NDepend has accidentally found an older copy of your assembly, you must delete that assembly. Even if you add an assembly directly, NDepend will not honor the path from which you added it. This isn't as bad as it sounds, since it's a very strange constellation of circumstances that led to this assembly hanging around anyway:

* The project is no longer included in the latest Quino but lingers in my workspace
* The version number is unfortunately the same, even though the assembly is wildly out of date

I only noticed because I knew I didn't have that many dependency cycles left in the Encodo assembly.

You can see a lot of the issues associated with these changes in the release notes for Quino 2.0-beta1 (mostly the first point in the "Highlights" section) and Quino 2.0-beta2 (pretty much all of the points in the "Highlights" section).↩

I'm sure there are more, but those are the ones I can think of that would apply to my project (for now).↩

Here I'm going to give you a tip that confused me for a while, but that I think was due to particularly bad luck and is actually quite a rare occurrence.↩

Especially for larger libraries like Quino, you'll find that your expectations about dependencies between modules will be largely correct, but will still have gossamer filaments connecting them that prevent a clean split. In those cases, we just created new assemblies to hold these common dependencies. Once an initial split is complete, we'll iterate and refactor to reduce some of these ad-hoc assemblies.↩

Screenshots, names and dependencies are based on a pre-release version of Quino, so while the likelihood is small, everything is subject to change.↩

Stay tuned for an upcoming post on the details of starting up an application, which is the support provided in Encodo.Application.↩

In this article, I'm going to continue the discussion started in Part I, where we laid some groundwork about the state machine that is the startup/execution/shutdown feature of Quino. As we discussed, this part of the API still suffers from "several places where generic TApplication parameters [are] cluttering the API". In this article, we'll take a closer look at different design approaches to this concrete example -- and see how we decided whether to use generic type parameters.

Consistency through Patterns and API

Any decision you take with a non-trivial API is going to involve several stakeholders and aspects. It's often not easy to decide which path is best for your stakeholders and your product.

For any API you design, consider how others are likely to extend it -- and whether your pattern is likely to deteriorate from neglect.

For any API you design, consider how others are likely to extend it -- and whether your pattern is likely to deteriorate from neglect. Even a very clever solution has to be balanced with simplicity and elegance if it is to have a hope in hell of being used and standing the test of time.

In Quino 2.0, the focus has been on ruthlessly eradicating properties on the IApplication interface as well as getting rid of the descendant interfaces, ICoreApplication and IMetaApplication. Because Quino now uses a pattern of placing sub-objects in the IOC associated with an IApplication, there is far less need for a generic TApplication parameter in the rest of the framework. See Encodo's configuration library for Quino: part I for more information and examples.

This focus raised an API-design question: if we no longer want descendant interfaces, should we eliminate parameters generic in that interface? Or should we continue to support generic parameters for applications so that the caller will always get back the type of application that was passed in?

Before getting too far into the weeds1, let's look at a few concrete examples to illustrate the issue.

Do Fluent APIs require generic return-parameters?

As discussed in detail in Encodo's configuration library for Quino: part III, Quino applications are configured with the "Use*" pattern, where the caller includes functionality in an application by calling methods like UseRemoteServer() or UseCommandLine(). The latest version of this API pattern in Quino recommends returning the application that was passed in, to allow chaining and fluent configuration.

For example, the following code chains the aforementioned methods together without creating a local variable or other clutter.

return new CodeGeneratorApplication().UseRemoteServer().UseCommandLine();

What should the return type of such standard configuration operations be? Taking a method above as an example, it could be defined as follows:
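A minimal sketch of such a non-generic signature -- the parameter names and body are illustrative, not the verbatim Quino code:

```csharp
// Sketch only; the actual Quino signature may differ.
public static IApplication UseCommandLine(this IApplication application, string[] args)
{
  // ...register command-line support with the application...
  return application;
}
```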

This seems like it would work fine, but the original type of the application that was passed in is lost, which is not exactly in keeping with the fluent style. In order to maintain the type, we could define the method as follows:
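A generic version might look like this (again, a sketch with illustrative names):

```csharp
// Sketch only; the actual Quino signature may differ.
public static TApplication UseCommandLine<TApplication>(
  this TApplication application, string[] args)
  where TApplication : IApplication
{
  // ...register command-line support with the application...
  return application;
}
```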

This style is not as succinct but has the advantage that the caller loses no type information. On the other hand, it's more work to define methods in this way and there is a strong likelihood that many such methods will simply be written in the style in the first example.

Generics definitely offer advantages, but it remains to be seen how much those advantages are worth.

Why would other coders do that? Because it's easier to write code without generics, and because the stronger result type is not needed in 99% of the cases. If every configuration method expects and returns an IApplication, then the stronger type will never come into play. If the compiler isn't going to complain, you can expect a higher rate of entropy in your API right out of the gate.

One way the more-derived type would come in handy is if the caller wanted to define the application-creation method with their own type as a result, as shown below:
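For example, a hypothetical caller might write the following (CreateApplication and the chained methods are illustrative):

```csharp
// Hypothetical caller code; this only compiles without a cast if the
// Use* extension methods are generic in TApplication.
private static CodeGeneratorApplication CreateApplication()
{
  return new CodeGeneratorApplication().UseRemoteServer().UseCommandLine();
}
```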

If the library methods expect and return IApplication values, the result of UseCommandLine() will be IApplication and requires a cast to be used as defined above. If the library methods are defined generic in TApplication, then everything works as written above.

This is definitely an advantage, in that the user gets the exact type back that they created. Generics definitely offer advantages, but it remains to be seen how much those advantages are worth.2

Another example: The IApplicationManager

Before we examine the pros and cons further, let's look at another example.

In Quino 1.x, applications were created directly by the client program and passed into the framework. In Quino 2.x, the IApplicationManager is responsible for creating and executing applications. A caller passes in two functions: one to create an application and another to execute an application.

A standard application startup looks like this:

new ApplicationManager().Run(CreateApplication, RunApplication);

Generic types can trigger an avalanche of generic parameters(tm) throughout your code.

The question is: what should the types of the two function parameters be? Does CreateApplication return an IApplication or a caller-specific derived type? What is the type of the application parameter passed to RunApplication? Also IApplication? Or the more derived type returned by CreateApplication?

As with the previous example, if the IApplicationManager is to return a derived type, then it must be generic in TApplication and both function parameters will be generically typed as well. These generic types will trigger an avalanche of generic parameters(tm) throughout the other extension methods, interfaces and classes involved in initializing and executing applications.

That sounds horrible. This sounds like a pretty easy decision. Why are we even considering the alternative? Well, because it can be very advantageous if the application can declare RunApplication with a strictly typed signature, as shown below.
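A sketch of what such strictly typed signatures might look like -- hypothetical names, reconstructed for illustration only:

```csharp
// The caller's strictly typed handler:
private static void RunApplication(CodeGeneratorApplication application)
{
  // ...application-specific execution...
}

// ...which forces the library API to be generic throughout (sketch only):
public interface IApplicationManager
{
  TApplication CreateAndStartUp<TApplication>(
    Func<IApplicationManager, TApplication> createApplication)
    where TApplication : IApplication;

  void Run<TApplication>(
    Func<IApplicationManager, TApplication> createApplication,
    Action<TApplication> runApplication)
    where TApplication : IApplication;
}
```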

These are quite messy -- and kinda scary -- signatures.3 If these core methods are already so complex, any other methods involved in startup and execution would have to be equally complex -- including helper methods created by calling applications.4

The advantage here is that the caller will always get back the type of application that was created. The compiler guarantees it. The caller is not obliged to cast an IApplication back up to the original type. The disadvantage is that all of the library code is infected by a generic parameter with its attendant IApplication generic constraint.5

Don't add Support for Conflicting Patterns

The title of this section seems pretty self-explanatory, but we as designers must remain vigilant against the siren call of what seems like a really elegant and strictly typed solution.

But aren't properties on an application exactly what we just worked so hard to eliminate?

The generics above establish a pattern that must be adhered to by subsequent extenders and implementors. And to what end? So that a caller can attach properties to an application and access those in a statically typed manner, i.e. without casting?

But aren't properties on an application exactly what we just worked so hard to eliminate? Isn't the recommended pattern to create a "settings" object and add it to the IOC instead? That is, as of Quino 2.0, you get an IApplication and obtain the desired settings from its IOC. Technically, the cast is still taking place in the IOC somewhere, but that seems somehow less bad than a direct cast.

If the framework recommends that users don't add properties to an application -- and ruthlessly eliminated all standard properties and descendants -- then why would the framework turn around and add support -- at considerable cost in maintenance, readability and extensibility -- for callers that expect a certain type of application?

Wrapping up

Let's take a look at the non-generic implementation and see what we lose or gain. The final version of the IApplicationManager API is shown below, which properly balances the concerns of all stakeholders and hopefully will stand the test of time (or at least last until the next major revision).
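A non-generic version along these lines captures the idea (a sketch with illustrative member names, not the verbatim Quino interface):

```csharp
// Sketch of the non-generic API; names are illustrative.
public interface IApplicationManager
{
  IApplication CreateAndStartUp(
    Func<IApplicationManager, IApplication> createApplication);

  void Run(
    Func<IApplicationManager, IApplication> createApplication,
    Action<IApplication> runApplication);
}
```

Callers that need their derived application type can still cast the result -- or, better, retrieve their settings objects from the application's IOC, as recommended above.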

These are the hard questions of API design: ensuring consistency, enforcing intent and balancing simplicity and cleanliness of code with expressiveness.

A predilection of mine, I'll admit, especially when writing about a topic about which I've thought quite a lot. In those cases, the instinct to just skip "the object" and move on to the esoteric details that stand in the way of an elegant, perfect solution, is very, very strong.↩

This more-realized typing was so attractive that we used it in many places in Quino without properly weighing the consequences. This article is the result of reconsidering that decision.↩

Yes, the C# compiler will allow you to elide generics for most method calls (so long as the compiler can determine the types of the parameters without it). However, generics cannot be removed from constructor calls. These must always specify all generic parameters, which makes for messier-looking, lengthy code in the caller, e.g. when creating the ApplicationManager, were it to have been defined with generic parameters. Yet another thing to consider when choosing how to define your API.↩

As already mentioned elsewhere (but it bears repeating): callers can, of course, eschew the generic types and use IApplication everywhere -- and most probably will, because the advantage offered by making everything generic is vanishingly small. If your API looks this scary, entropy will eat it alive before the end of the week, to say nothing of its surviving to the next major version.↩

A more subtle issue that arises is if you do end up -- even accidentally -- mixing generic and non-generic calls (i.e. using IApplication as the extended parameter in some cases and TApplication in others). This issue is in how the application object is registered in the IOC. During development, when the framework was still using generics everywhere (or almost everywhere), some parts of the code were retrieving a reference to the application using the most-derived type whereas the application had been registered in the container as a singleton using IApplication. The call to retrieve the most derived type returned a new instance of the application rather than the pre-registered singleton, which was a subtle and difficult bug to track down.↩

In this article, we're going to discuss a bit more about the configuration library in Quino 2.0.

Other entries on this topic have been the articles about Encodo's configuration library for Quino: part I, part II and part III.

The goal of this article is to discuss a concrete example of how we decided whether to use generic type parameters throughout the configuration part of Quino. The meat of that discussion will come in part 2, because we first have to lay some groundwork about the features we want. (Requirements!)

A Surfeit of Generics

As of Quino 2.0-beta2, the configuration library consisted of a central IApplication interface, which had a reference to an IOC container and a list of startup and shutdown actions.

As shown in part III, these actions no longer have a generic TApplication parameter. This makes it not only much easier to use the framework, but also easier to extend it. In this case, we were able to remove the generic parameter without sacrificing any expressiveness or type-safety.

As of beta2, there were still several places where generic TApplication parameters were cluttering the API. Could we perhaps optimize further? Throw out even more complexity without losing anything?

Starting up an application

One of these places is the actual engine that executes the startup and shutdown actions. This code is a bit trickier than just a simple loop because Quino supports execution in debug mode -- without exception-handling -- and release mode -- with global exception-handling and logging.
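The debug/release distinction described above might look something like the following. This is a hedged sketch, not the actual Quino implementation; `RunStartupActions` and `Logger` are assumed names:

```csharp
public void Execute(IApplication application)
{
  if (Debugger.IsAttached)
  {
    // Debug mode: no exception-handling, so the debugger breaks at the
    // original source of any error.
    RunStartupActions(application);
  }
  else
  {
    try
    {
      RunStartupActions(application);
    }
    catch (Exception exception)
    {
      // Release mode: log the failure and exit gracefully instead of
      // presenting the user with a raw stack trace.
      Logger.Error(exception);
    }
  }
}
```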

As with any application that uses an IOC container, there is a configuration phase, during which the container can be changed, and an execution phase, during which the container produces objects but can no longer be reconfigured.

Until 2.0-beta2, the execution engine was encapsulated in several extension methods called Run(), StartUp() and so on. These methods were generally generic in TApplication. I write "generally" because there were some inconsistencies with extension methods for custom application types like Winform or Console applications.

While extension methods can be really useful, this usage was not really appropriate as it violated the open/closed principle. For the final release of Quino, we wanted to move this logic into an IApplicationManager so that applications using Quino could (A) choose their own logic for starting an application and (B) add this startup class to a non-Quino IOC container if they wanted to.
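Such an interface might look roughly like this -- illustrative only; the real Quino signatures may differ:

```csharp
public interface IApplicationManager
{
  // The caller supplies a factory for the application and the logic to
  // execute once startup has completed successfully.
  void Run(
    Func<IApplication> createApplication,
    Action<IApplication> run);
}
```

Because this is an ordinary interface rather than a set of extension methods, an application can substitute its own implementation to customize startup logic (A), and the implementing class can be registered in any container, Quino or not (B).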

Application Execution Modes

So far, so good. Before we discuss how to rewrite the application manager/execution engine, we should quickly revisit what exactly this engine is supposed to do. As it turns out, not only do we want to make an architectural change to make the design more open for extension, but the basic algorithm for starting an application changed, as well.

What does it mean to run an application?

Quino has always acknowledged and kinda/sorta supported the idea that a single application can be run in different ways. Even an execution that results in immediate failure technically counts as an execution, as a traversal of the state machine defined by the application.

If we view an application as the state machine that it is, then every application has at least two terminal nodes: OK and Error.

But what does OK mean for an application? In Quino, it means that all startup actions were executed without error and the run() action passed in by the caller was also executed without error. Anything else results in an exception and is shunted to Error.

But is that true, really? Can you think of other ways in which an application could successfully execute without really having failed? For most applications, the answer is yes. Almost every application -- and certainly every Quino application -- supports a command line. One of the default options for the command line of a Quino application is -h, which shows a manual for the other command-line options.

If the application is running in a console, this manual is printed to the console; for a Winform application, a dialog box is shown; and so on.

This "help" mode is actually a successful execution of the application that did not result in the main event loop of the application being executed.

Thought of in this way, any command-line option that controls application execution could divert the application to another type of terminal node in the state machine. A good example is when an application provides support for importing or exporting data via the command line.

"Canceled" Terminal Nodes

A terminal node is also not necessarily only OK or Error. Almost any application will also need a Canceled state, which is a perfectly valid way to exit. For example,

If the application requires a login during execution (startup), but the user aborts authentication

If the application supports schema migration, but the user aborts without migrating the schema

These are two ways in which a standard Quino application could run to completion without crashing but without having accomplished any of its main tasks. It ran and it didn't crash, but it also didn't do anything useful.
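The terminal nodes discussed so far could be modeled as a simple enumeration. The names here are illustrative rather than actual Quino types:

```csharp
public enum ExitState
{
  Ok,       // startup and main logic completed without error
  Canceled, // the user aborted (login, schema migration, ...)
  Error     // an unhandled exception occurred
}
```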

Intermediate Nodes in the Application State Machine

This section title sounds a bit pretentious, but that's exactly what we want to discuss here. Instead of having just start and terminal nodes, the Quino startup supports cycles through intermediate nodes as well. What the hell does that mean? It means that some nodes may trigger Quino to restart in a different mode in order to handle a particular kind of error condition that could be repaired.1

A concrete example is desperately needed here, I think. The main use of this feature in Quino right now is to support on-the-fly schema-migration without forcing the user to restart the application. This feature has been in Quino from the very beginning and is used almost exclusively by developers during development. The use case to support is as follows:

Developer is running an application

Developer makes a change to the model (or pulls changes from the server)

Developer runs the application with a schema-change

Application displays migration tool; developer can easily migrate the schema and continue working

This workflow minimizes the amount of trouble that a developer has when either making changes or when integrating changes from other developers. In all cases in which the application model is different from the developer's database schema, it's very quick and easy to upgrade and continue working.

"Rescuing" an application in Quino 2.0

How does this work internally in Quino 2.0? The application starts up but somehow encounters an error that indicates that a schema migration might be required. This can happen in one of two ways:

The schema-verification step in the standard Quino startup detects a change in the application model vis-à-vis the data schema

Some other part of the startup accesses the database and runs into a DatabaseException that is indicative of a schema-mismatch

In either case, the running application throws an ApplicationRestartException, which the standard IApplicationManager implementation knows how to handle. It handles it by shutting down the running application instance and asking the caller to create a new application -- this time one that knows how to handle the situation that caused the exception. Concretely, the exception includes an IApplicationCreationSettings descendant that the caller can use to decide how to customize the application to handle that situation.

The manager then runs this new application to completion (or until a new ApplicationRestartException is thrown), shuts it down, and asks the caller to create the original application again, to give it another go.

In the example above, if the user has successfully migrated the schema, then the application will start on this second attempt. If not, then the manager enters the cycle again, attempting to repair the situation so that it can get to a terminal node. Naturally, the user can cancel the migration and the application also exits gracefully, with a Canceled state.

A few examples of possible application execution paths:

Standard => OK

Standard => Error

Standard => Canceled

Standard => Restart => Migrator => Standard => OK

Standard => Restart => Migrator => Canceled

The pattern is the same for interactive, client applications as for headless applications like test suites, which attempt migration once and abort if not successful. Applications like web servers or other services will generally only support the OK and Error states and fail when they encounter an ApplicationRestartException.

Still, it's nice to know that the pattern is there, should you need it. It fits relatively cleanly into the rest of the API without making it more complicated. The caller passes two functions to the IApplicationManager: one to create an application and one to run it.
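That contract can be sketched as a loop over the state machine described above. The exception and settings names follow the article, but the exact signatures, the `CreationSettings` property and the `ShutDown` helper are assumptions:

```csharp
public void Run(
  Func<IApplicationCreationSettings, IApplication> createApplication,
  Action<IApplication> run)
{
  IApplicationCreationSettings settings = null; // null = standard mode

  while (true)
  {
    var application = createApplication(settings);
    try
    {
      run(application);

      if (settings == null)
      {
        return; // standard run reached a terminal node: OK or Canceled
      }

      settings = null; // repair application finished; retry the original
    }
    catch (ApplicationRestartException exception)
    {
      // Intermediate node: restart with settings that describe how to
      // repair the situation (e.g. show the schema migrator).
      settings = exception.CreationSettings;
    }
    finally
    {
      ShutDown(application); // assumed helper; executes shutdown actions
    }
  }
}
```

Each pass through the loop is one node in the execution paths listed above; the cycle `Standard => Restart => Migrator => Standard => OK` corresponds to two iterations of the `while` loop.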

We'll see in the next post what the final API looks like and how we arrived at the final version of that API in Quino 2.0.

Or rescued, using the nomenclature from Eiffel exception-handling, which actually does something very similar. The exception handling in most languages lets you clean up and move on, but the intent isn't necessarily to re-run the code that failed. In Eiffel, this is exactly how exception-handling works: fix whatever was broken and re-run the original code. Quino now works very much like this as well.↩