
Monthly Archives: March 2015

Recently I’ve been working on adding metric reporting to an existing application using the excellent Metrics library from Coda Hale. Adding it to Dropwizard applications is extremely easy, but adding it to Play is trickier, so I’ve created a sample project to record how to do it.

Metrics are a vital tool for monitoring the health of your application, but they are often overlooked early in development. Without some way of seeing how your application behaves under use, you can end up relying on your users to tell you what’s going on, reacting to problems instead of proactively monitoring and taking steps to prevent them. Metrics can be as simple as the number of active operations, or as complex as JVM usage and a detailed breakdown of request results: anything you think will help monitor the health of your application.

Once you have some metrics being produced, you need a way to see them. In this example I’m using the open source Graphite for storing and graphing the metric data. Metrics has a reporter library which periodically sends the metric data to Graphite; once your data is in, you can create custom graphs that suit your monitoring needs. Heroku offers a free hosted Graphite instance (with usage limitations), so I’m using it in this application as an easy way to set up and try Graphite.

Detail

See the source for full instructions on running and deploying the application to Heroku.

I based the implementation on the metrics-play Play plugin, which is written in Scala. I wanted a clear Java Play implementation that gave me control over the metric names, but if you want to quickly add metrics to your Play application without fuss, this is a good plugin.

This example creates metrics registries for the JVM, Logback and request details by hooking into the Play application using the Global.java file, via its filters() and onStart() methods.
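As a sketch of that wiring (the registry and filter names here are illustrative, not necessarily those in the sample project), a Global.java hooking the metrics-jvm gauge sets into startup might look something like this:

```java
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.SharedMetricRegistries;
import com.codahale.metrics.jvm.GarbageCollectorMetricSet;
import com.codahale.metrics.jvm.MemoryUsageGaugeSet;
import com.codahale.metrics.jvm.ThreadStatesGaugeSet;
import play.Application;
import play.GlobalSettings;
import play.api.mvc.EssentialFilter;

public class Global extends GlobalSettings {

    @Override
    public void onStart(Application app) {
        // Register the JVM metric sets against a shared registry on startup
        MetricRegistry registry = SharedMetricRegistries.getOrCreate("play");
        registry.register("jvm.memory", new MemoryUsageGaugeSet());
        registry.register("jvm.gc", new GarbageCollectorMetricSet());
        registry.register("jvm.threads", new ThreadStatesGaugeSet());
        super.onStart(app);
    }

    @Override
    @SuppressWarnings("unchecked")
    public <T extends EssentialFilter> Class<T>[] filters() {
        // MetricsFilter is a hypothetical filter that times requests
        // and counts response codes against the shared registry
        return new Class[] { MetricsFilter.class };
    }
}
```

A GraphiteReporter can then be started in onStart() as well, pointed at the shared registry, to push the data out periodically.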

Customisation and improvements

This example gives basic metrics on the application, but for your own solution you will probably want specific metrics about controller actions. You can do this either by creating your own Play Filters and attaching them to the action methods, or by coding metrics directly into the actions. I used Dropwizard Metrics’ own style for reporting on requests (2xx-responses), but you may be interested in specific results or requests, and can use a Filter to intercept and report on these.

In a previous post I put up the sequence diagram below describing a design for implementing authentication and authorisation using microservices.

What I didn’t cover was the advantage of this approach when scaling your services. Authentication and authorisation are needed by most parts of your system, so they can easily become a performance bottleneck. Any service in your system which needs to authenticate a user or check their permissions will need to access a central data source holding this data. Outside a monolithic architecture (which has its own problems) this can be difficult, as a varying number of services will need to perform these functions, so the solution needs to scale with them.

This is one of the classic arguments for microservices: it’s easier to scale a small, focused service doing one thing than a large application with many dependencies and data sources.

Here’s the most basic architecture using the microservice authentication and authorisation design above:

This architecture can only scale vertically, by increasing the specification of the single web server. If just one of the services hosted on the box gets a lot of requests, such as the authorisation service dealing with permission checks from 10 business services, then the performance of the whole application is affected. Increasing the processor and memory only helps so much in this situation, and of course the system has multiple single points of failure.

Now here’s what is possible if you use load balancers and partition your microservices into separate servers:

This architecture can scale horizontally, by increasing the number of server instances for the specific services that are experiencing heavy load. This may seem overly complex, but if your application needs to scale well it is really the only practical way to do it. It can also save hosting costs: as well as being able to scale up (increase instances), you can scale down (reduce instances) when individual services are not under much load. A single high-spec server running all the time normally costs more than multiple tiny instances being turned on and off automatically.

The tools necessary to implement this architecture are now very mature (haproxy, Puppet, Docker, etc.) and Cloud IaaS providers are offering better tools for managing your instances automatically.
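To make the load-balanced tier above concrete, here is a minimal haproxy sketch (hostnames, ports and the health-check path are illustrative) spreading the authorisation service across two instances:

```
frontend authorisation_front
    bind *:8080
    default_backend authorisation_back

backend authorisation_back
    balance roundrobin
    # remove unhealthy instances from rotation automatically
    option httpchk GET /health
    server auth1 10.0.0.11:9000 check
    server auth2 10.0.0.12:9000 check
```

Adding capacity under load is then just a matter of provisioning another instance and adding a `server` line, which tools like Puppet can automate.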

While investigating how to handle complex business rules in a project, a colleague of mine came up with the idea for this, and I created this library as a proof of concept.

The problem it’s trying to solve is quite common:

An application needs to evaluate data against a large number of complex/simple business rules

The business rules are mostly concerned with a limited set of values within a single business domain

The business rules need to be maintained and are updated regularly (with mostly small changes)

The users who define and maintain the rules are non-technical and cannot code to implement rule changes

Normally a problem like this is solved by either custom code or adding a large Rules Engine product, but both of these have a number of downsides.

Custom code disadvantages:

Requires custom code for each business rule

Rules cannot be changed without code release

Rules cannot be maintained by non-technical users

Rules Engine disadvantages:

Requires installation and maintenance of a new complex product (e.g. Drools)

Requires developer up-skilling to use correctly

Rules cannot be maintained by non-technical users (in practice)

Bad experiences in the past with large Rule Engine products discouraged us from using one, and in practice we would not need anything like the full feature set they provide. Custom code would quickly become a maintenance nightmare, and would add barriers between our users and the implementation.

The rules themselves were normally defined in English in documents and spreadsheets, so why not use something closer to their “natural” state? The users aren’t idiots: they use Excel formulas to calculate all this manually, so why couldn’t we find a compromise closer to what they already understood?

Enter ANTLR, an open source Java-based parser generator. It’s used in a lot of places to convert one well-defined language to another, such as in Hibernate to generate SQL from HQL. You can use it to define a grammar, generate parsers and apply them to text, validating it against the grammar and building a tree structure that matches the elements of your grammar.

The idea was that we could use ANTLR to define a limited-domain English grammar for our business rules, covering everything we needed inside our small business domain. Users could then write rules in almost natural English, which we could parse and convert to executable business rules in code. This lets the users define the rules close to their normal way of working, and maintain them on the system whenever they need to be updated.

For example, in our grammar we define a specification, with a rule being one or more specifications, as something like:
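A hypothetical sketch of such a grammar in ANTLR notation (the project’s actual RuleSet.g4 will differ in its rule names and operators):

```antlr
grammar RuleSet;

// a rule is one or more specifications joined by 'and'
rule_set  : specification (AND specification)* ;
specification
          : field comparator value ;
comparator: 'is greater than' | 'is less than' | 'is equal to' ;
field     : IDENTIFIER ;
value     : NUMBER | STRING ;

AND       : 'and' ;
IDENTIFIER: [a-zA-Z_]+ ;
NUMBER    : [0-9]+ ('.' [0-9]+)? ;
STRING    : '\'' .*? '\'' ;
WS        : [ \t\r\n]+ -> skip ;
```

With a grammar in this shape, a string like `mileage is greater than 5000 and type is equal to 'car'` parses into a tree of specifications that code can walk and execute.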

As the rules are simply strings, they can be persisted and edited using a CRUD UI, web-based or otherwise. The UI can use knowledge of the grammar to help users when editing rules: validating against the grammar, testing against known data and auto-completing valid syntax. If necessary, rules can be versioned to maintain an audit trail, and published to control when they come into effect.

This approach has its own set of disadvantages:

Requires coding a business-specific grammar and rule specification logic covering the required rules

Grammar cannot cover all possible scenarios without excessive code

Requires users to learn the grammar and understand how it is applied to the data used in the system

I believe this approach is a good fit when the set of business rules you are dealing with is well known and applied to similar data sets, changes frequently in small repetitive ways, and there’s a requirement for users to be able to quickly test and apply changes. Giving the users who understand the rules best the ability to directly edit and test them is extremely useful, and avoids the need to write rule requirements documentation and go through long periods of testing every time the rules are updated.

Implementation details

I’d recommend reading up on ANTLR before diving into the code, as you need to understand the grammar and how it parses rules in order to understand how the tree builder constructs the expressions and applies data to them.

ANTLR4 is included in the project via sbt-antlr4. The ANTLR grammar file is located at src/main/antlr4/RuleSet.g4 and the generated ANTLR classes based on that grammar are in target/scala-2.11/classes/com/example/rules. The generated parser is used in the RuleSetCompiler, and a listener, RuleSetTreeBuilder, is attached to it to react to events when parsing rules.

RuleSetTreeBuilder has a number of methods that are fired when the parser enters and exits identified tokens and labelled elements from the grammar, such as enterRule_set and exitArithmeticExpressionPlus. The logic inside these methods builds the logical rule expressions that can be applied to the data. Classes for specifications are under the package com.example.rules.grammar.specification.
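The specification classes themselves aren’t reproduced here, but as a self-contained sketch (these class names are illustrative, not the project’s actual API), parsed rules typically compose into predicate objects following the composite pattern:

```java
import java.util.Map;

// Each parsed grammar element becomes a Specification the data is checked against
interface Specification {
    boolean isSatisfiedBy(Map<String, Object> data);
}

// Leaf: a single comparison, e.g. "mileage is greater than 5000"
class GreaterThan implements Specification {
    private final String field;
    private final double threshold;

    GreaterThan(String field, double threshold) {
        this.field = field;
        this.threshold = threshold;
    }

    @Override
    public boolean isSatisfiedBy(Map<String, Object> data) {
        Object v = data.get(field);
        return v instanceof Number && ((Number) v).doubleValue() > threshold;
    }
}

// Composite: built when the parser exits an 'and' element
class And implements Specification {
    private final Specification left, right;

    And(Specification left, Specification right) {
        this.left = left;
        this.right = right;
    }

    @Override
    public boolean isSatisfiedBy(Map<String, Object> data) {
        return left.isSatisfiedBy(data) && right.isSatisfiedBy(data);
    }
}

public class SpecificationDemo {
    public static void main(String[] args) {
        // "amount is greater than 100 and mileage is greater than 5000"
        Specification rule = new And(
                new GreaterThan("amount", 100),
                new GreaterThan("mileage", 5000));
        Map<String, Object> passing = Map.of("amount", 250.0, "mileage", 9000);
        Map<String, Object> failing = Map.of("amount", 50.0, "mileage", 9000);
        System.out.println(rule.isSatisfiedBy(passing)); // true
        System.out.println(rule.isSatisfiedBy(failing)); // false
    }
}
```

The tree builder’s exit methods effectively pop child specifications off a stack and wrap them in composites like And, leaving a single root Specification to evaluate against each data record.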

JsonPath, a JSON implementation of XPath, is used to allow complex queries of the JSON for cases where the data being evaluated isn’t simple.
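As a sketch of what such a query looks like with the Jayway JsonPath library (the data shape here is illustrative), the filter expression from the grammar example below can be evaluated directly against a JSON document:

```java
import com.jayway.jsonpath.JsonPath;
import java.util.List;

public class JsonPathExample {
    public static void main(String[] args) {
        String json = "{\"options\":[{\"code\":\"G1\",\"area\":12.5},"
                    + "{\"code\":\"P1\",\"area\":3.0},{\"code\":\"G2\",\"area\":7.5}]}";
        // Select the areas of all matching options using a filter expression
        List<Double> areas =
                JsonPath.read(json, "$.options[?(@.code=='G1' || @.code=='G2')].area");
        System.out.println(areas); // the areas of the G1 and G2 options
    }
}
```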

The grammar can be expanded to include specific business evaluations, rather than generic operations, based on knowledge of the business domain and data. This allows the grammar to read more like natural English instead of generic formulas. In the same way, custom expressions can be added to extract or process the data, e.g. GRASS options area instead of $.options[?(@.code=='G1' || @.code=='G2')].area.