Enterprise Architecture in practice

Saturday, June 6, 2015

A main success factor for us is that we realized that business needs a vision, not a technical design. All parts of your organization need to participate in the enterprise transformation, but only a small fraction has technical insight; the majority have other concerns. Therefore we have a "city" regulation plan for our systems landscape, just like city planning. Politicians look at the concepts and the overall quality of a regulation, with blocks, roads, and parks. They do not bother with the technical building drawings. This is where we think many EA projects fail: they tend to focus on the technical drawings, not on the quality of life for the organization (for a city, its population).

The IT regulation plan is a "city plan": a contemporary view of all major functional areas (systems) and their main functions. It has a color legend, so it also serves as a good transition-architecture illustration. It is maintained twice a year to reflect ongoing projects and decisions from the portfolio board. Furthermore, we have a roadmap for all existing systems (silos) and a modernization scenario, which is the overall plan for the major effort over the next 10 years. These are all used when describing the mandate of new projects, where the work effort is illustrated on the target architecture and the regulation plan.

The regulation plan has proven to be very good for communicating our business to all parts of the organization. Even the Finance Minister has an A0 copy in her office. Its power comes to life when business processes are explained as a "tour of the city". People like stories and understand the effects on the city: where maintenance must be conducted or new buildings have to be erected. It really brings business and IT to the same table.

So take a look at our "City Plan". Although it is in Norwegian, you get the idea of how a visualization of structure can form the basis for communication. The illustration is quite large, so it is presented at Prezi.com.

Saturday, March 7, 2015

We are now in production with our new scheme as of 2015: ongoing reporting of wages, live from all payroll systems around the country. My blog has been about the design ideas of 2010 and the demanding process of transforming our systems, knowledge and organization from silos to robust, scalable 24/7 operation.

We have managed a fundamental modernization based on a paradigm shift in software design and implementation.

We have had a smooth start with great results:

Of 223,842 deliveries, 97% get a response in under 15 seconds (end-to-end)

24/7, peaking at 3,190 deliveries/hour with constant response times (a delivery is all payroll from one employer)

"Micro services" architecture inspired by the "Reactive Manifesto"

Changes and fixes rolled out continuously in the daytime, in practice 2-3 times per week

All business logic running live, also with consistency against previously reported data

Proven architectural properties
The design from 2010 has proven abilities. Wise choices in 2010, broad organizational anchoring and continuous IT architecture and technology management, as well as trial and error has made this possible.

Few consecutive errors, which proves the ability to change. The system is understandable and we have control of it.

Saturday, October 4, 2014

This article fills in the last piece and completes the previous articles (linked in the text below) on how we structure Case Handling. It is the design of the "Process state and handling" part of the Continual Aggregate Hub article of 2010.

At the core
A central part of any typical Enterprise Application is the Case (or Dossier) and the process handling it: the information going into a Case, the business logic applied to it, and the subsequent business decision(s). It all has to be filed with accuracy. Case handling gets complex because information changes over time, business decisions are made, and both the business logic and the information going into it are complex. Just look at financial institutions and insurance systems, as well as government systems. These have a load of legislation and business rules - which change over time - and every business decision must comply with the rules and information that were valid at that point in time. Otherwise the decision does not have integrity.
Legislation states that filing in such systems must follow some sort of standard; in Norway it is called "NOARK", in the UK "RMS" and in the US "NARA" (and related).

Immutable xml-documents
In our systems design we have already stated that business decisions are immutable and are stored as XML documents with a set of metadata. The design of the CAH has proven to have excellent qualities with regard to integrity, ease of maintenance and ability to scale. This component is now in production (see BIG DATA - Definite content).

Searching for the answer
We spent three years looking for this design, and we started out in the BPM (Business Process Management) area. Now that we have the design, we realize that BPM is the wrong end of this challenge. There are a lot of BPM products out there, but they miss one main point: it is not the processing framework that is the main challenge, but the Case content (information and rules) and the state of the Case itself. That is why most of these efforts are hard to maintain, and often could be solved in a much simpler way. A key point for us is that any Case may be reopened for the next 14 years. We do not want to depend on any specific BPM product, as that is a vendor lock-in (enterprise wide process handling).

The missing piece
Let me introduce the CaseContainer, a generic container representing any Case in our domain. It is also an immutable XML document, and acts as a "binder" between the Case Type and its metadata, the information going into the case, the business logic applied to it, the process state the Case is in, the resulting business decision, and the access control.

It is basically divided into two parts: 1) a set of metadata describing the Case, which complies with the NOARK-5 standard, and 2) a set of references to the XML documents it consists of. The document references are divided into two groups: documents going into the case and documents representing the resulting business decision. The metadata makes the CaseContainer part of a hierarchy of cases, is described by the Case Type, and describes the case state, both general (e.g. Open, Pending, Closed, Reopened) and specific to the case type (e.g. the states of this specific case-handling process). The Document Archive would be the CAH with its typed documents, or an archive of binary documents such as PDF (for manual reading by a case handler). It is all implemented with REST and XML notation. You can't find anything more open and robust.

The Case Container described (main attributes, there are more):

UUID: Universal Unique ID for this document.

PartyId: The Id of the party this case concerns.

Timestamp: nil

Case number: The number of the case. Most often this has some formatting of case year, sequence number and version (e.g. 2014/45235/4).

Folder: Also called "File". It is structured as a flat "namespace" notation to contain "Fonds", "Series", "Class", and "File", and any hierarchy between them (e.g. no.nta.folder.tax.2014). The Case Type may be the whole or a subset of the Folder.

State: Structured as a flat "namespace" notation to contain any state in any process (e.g. no.nta.process.tax.2014.complaint).

Case responsible: The case handler or organization responsible for the decision (e.g. "no.nta.authority.username").

Date of invocation: The date for opening this Case version.

Date of decision: The date for closing this Case version.

Incoming documents: A list of references to all documents going into this case. We have a unique reference to every version of every XML document. Usually this is called "Registry entry type" or "Document object reference".

Business decision documents: Same as above, but this is the list of documents produced in handling this case. Often these are outgoing documents and can contain an invoice or a written statement.

Audit: A list of all changes made to this case since version 1.

[*) NTA is Norwegian Tax Authority]
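To make the structure concrete, here is a minimal sketch (hypothetical Java with invented names, not our production code) of the CaseContainer as an immutable value object carrying the NOARK-style metadata and the two groups of document references:

```java
import java.util.List;
import java.util.UUID;

// Hypothetical sketch of the CaseContainer: immutable metadata plus
// two groups of document references (incoming and business decision).
public final class CaseContainer {
    public final UUID uuid;                  // universal unique id of this document
    public final String partyId;             // the party this case concerns
    public final String caseNumber;          // e.g. "2014/45235/4" (year/sequence/version)
    public final String folder;              // flat namespace, e.g. "no.nta.folder.tax.2014"
    public final String state;               // flat namespace, e.g. "no.nta.process.tax.2014.complaint"
    public final String caseResponsible;     // e.g. "no.nta.authority.username"
    public final List<String> incomingDocuments;         // references to document versions going in
    public final List<String> businessDecisionDocuments; // references to produced decision documents

    public CaseContainer(UUID uuid, String partyId, String caseNumber, String folder,
                         String state, String caseResponsible,
                         List<String> incoming, List<String> decisions) {
        this.uuid = uuid;
        this.partyId = partyId;
        this.caseNumber = caseNumber;
        this.folder = folder;
        this.state = state;
        this.caseResponsible = caseResponsible;
        // defensive copies keep the container immutable
        this.incomingDocuments = List.copyOf(incoming);
        this.businessDecisionDocuments = List.copyOf(decisions);
    }
}
```

In the real system this would of course be serialized as an XML document; the point of the sketch is only the split between metadata and the two reference lists.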

Processing the Case
The processing framework handling this is also simple. A Case Type has a set of documents that the Case can consist of. When a new document comes into the CAH, it is pre-defined what type of Case should be opened. The Case Type and its State are linked to a processing URI that handles it (see Enterprise Wide process handling for details on how we cope with event-driven processing). In that way we connect the valid business rules to this Case Type. Connected to the event is a reference to the Case Container, and it is the Case Container that is transported to its Module for handling. The Application Layer (defined in DDD) then starts processing it. Any open Case will either be handled automatically or show up in the Task List for manual handling. It will be handled manually if the business logic states so (by some validation), or if some of the documents going into the case are not strongly typed and must be read by a human.
Processing may also be restarted just by creating a new version of the Case, for example when your code has been fixed and you need to re-run the business logic ad hoc. This will result in a new version of the Case Container and a new version of the business decision document. A Case where there is new information but no new decision is archived as a new version of the Case Container, referencing the new incoming information but keeping the last business decision. (See the Restaurant and the role of the "waiter": he is the one composing the Case Container when events happen, at a timed event, or at an ad-hoc need for running a process.)
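The versioning rule can be sketched like this (hypothetical helper, assuming a container carries a version number): re-running the business logic gives a new container version with a new decision document, while new information without a new decision keeps the last decision.

```java
import java.util.List;

// Hypothetical sketch of Case Container versioning. Containers are
// immutable; any change is expressed as a new version.
public final class CaseVersioning {
    public record Container(int version, List<String> incoming, List<String> decisions) {}

    // Re-run of business logic: new version, same incoming documents,
    // a new business decision document replaces the previous one.
    public static Container rerun(Container last, String newDecisionRef) {
        return new Container(last.version() + 1, last.incoming(), List.of(newDecisionRef));
    }

    // New information, but no new decision: new version referencing the
    // extra incoming document, while the last decision is kept.
    public static Container addInformation(Container last, String newIncomingRef) {
        List<String> in = new java.util.ArrayList<>(last.incoming());
        in.add(newIncomingRef);
        return new Container(last.version() + 1, List.copyOf(in), last.decisions());
    }
}
```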

Access control
Security and access handling has a better chance with this design. The user is linked to a set of Case Types and thereby to the valid Document Types. Now the XACML access control only needs to check whether the user has access to the Case Type or the Document Types. The metadata containing Case Types and Document Types is now a PIP (Policy Information Point) for the XACML PDP (Policy Decision Point). The access control has a high level of granularity, which makes it possible to understand and configure. That is otherwise a tough challenge in a service-oriented environment.
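A minimal sketch of the idea (hypothetical names; a real deployment would use a proper XACML engine): the case-type metadata acts as the Policy Information Point, and the decision reduces to a lookup of the user's permitted Case Types.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: case-type metadata as a Policy Information
// Point (PIP), with a trivial Policy Decision Point (PDP) on top.
public final class CaseAccess {
    // PIP: which document types belong to which case type
    private final Map<String, Set<String>> docTypesByCaseType;
    // user -> permitted case types
    private final Map<String, Set<String>> caseTypesByUser;

    public CaseAccess(Map<String, Set<String>> docTypesByCaseType,
                      Map<String, Set<String>> caseTypesByUser) {
        this.docTypesByCaseType = docTypesByCaseType;
        this.caseTypesByUser = caseTypesByUser;
    }

    // PDP: permit if the user holds the case type.
    public boolean permitCase(String user, String caseType) {
        return caseTypesByUser.getOrDefault(user, Set.of()).contains(caseType);
    }

    // PDP: permit if any of the user's case types contains the document type.
    public boolean permitDocument(String user, String docType) {
        return caseTypesByUser.getOrDefault(user, Set.of()).stream()
                .anyMatch(ct -> docTypesByCaseType.getOrDefault(ct, Set.of()).contains(docType));
    }
}
```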

Separating concerns
The CaseContainer is a design that separates the business information and logic from the decisions made over time. In most systems these are intermingled. Now they form two separate implementations, making the Case Handling System much easier to implement, understand and maintain. The Case Container runs in the Application layer, while the business logic works on the information in the Domain layer (as defined in Domain Driven Design).

Sunday, September 15, 2013

There is a lot of attention on Cloud platforms (IaaS, PaaS and SaaS), but not many present how they will actually move to them. This blog post is about prototyping core business on such a platform (e.g. Amazon Web Services' Beanstalk). It is also about utilizing Kolb's Learning Cycle in organizational development; an even harder problem, until you realize that your organization has to experience a common vision.

I have earlier presented results from our Proof of Concept (Tax Norway's PoC Results). That prototype was about designing software for the "cloud" (in-memory, Big Data): making it easier to maintain, cost less and scale linearly. (I gave a talk at QCon London 2013 about this.)
This time it's about how we can simplify our business and still make sure we comply with legislation. It's about using Domain Driven Design to establish the ubiquitous language and the aggregates. It's about engaging business and stakeholders.

Innovation emerges from the collaboration between disciplines. This is what Enterprise Architecture is all about: improving business through IT.

What business?

We need to clean up a very complex set of forms representing mandatory reporting from the population to the Tax Administration. Simplifying for small businesses is our focus, by addressing 13 forms (the total is 50 forms, but larger businesses and other tax domains are not addressed now). The main strategy for this simplification is to collect timely facts through a responsive and "smart" wizard made available to the public. There was already a business project working on this, involving 20-30 people. They had a concept ready, but were unsure how to proceed. And I had blueprints of our future systems.

Is the Public Sector able to innovate? Yes We Can!

Why Prototype?

Kolb's learning cycle

Let's experience something together! Ever noticed how people understand the same text, or the same specification, in different ways? That is one of the major challenges in Computer Science: expectations and requirements are most often not aligned. We used Kolb's Learning Cycle to achieve a collective understanding of our vision in this area, combining the individual skills of our organization. With this support we managed to combine deep knowledge to create something new and highly useful.

We prototyped to make this cycle come alive, making very different disciplines come together over a running application: people from business (lawyers, economists and accountants), user interaction (UI designers), IT, planning and sponsors. We have run demos every 14 days (Active Experimentation and Concrete Experience) and discussed calculations, concepts and terms (which will be part of the new and improved ubiquitous business language). After each demo the participants used their Reflective Observation and Abstract Conceptualization to write new requirements for the backlog. This is a "silver bullet" for a team fighting a complex domain. No Excel sheet, presentation or report would have made this possible.

In technical terms it is more than just a prototype. It is a full-blown information model, an implementation, and a full-fledged stack (although without a database). And we know it will perform at massive scale. We have now defined the Java classes and the XML for the document store (Continual Aggregate Store).

But for business it is a prototype, as we have not covered all areas or all details, but shown how the majority of complex issues must be tackled. The prototype will be used as a study within new areas. Also remember that for business, IT is just one piece. There is also timing, financing, customer scoping, migration, integration, processes and organisational issues.

The beautiful prototype

The domain has 13 forms with some 1500+ fields. The solution now consists of 6-7 aggregates and some 600+ fields altogether, but any user will only fill in a subset of these. We have utilised the latest in responsive design, to give as much as possible on a web platform.

[Screenshot: navigation at left, income facts at right]

Everything is saved as you work, and "backend" business logic is run for calculations and validations. Logic is not duplicated; the user is working against the same backend as our own case handlers. (There are no forms anymore, as the user works synchronously with the backend. Old legacy systems need an asynchronous front end, and get a costly duplication of the business logic.)

Previously there was a "back-and-forth" work process through 11 hard-to-comprehend steps; now the structure is sequential through 5 simple steps.
Previously the user had to do a lot of manual calculations and book-keeping; now the prototype collects facts and calculates for them.

Expert in your hands

Previously the user had to know which forms to use; now the prototype guides the user through it.

Making the impossible possible
At the end there is an adjustment GUI with sliders that solves what was previously hard to comprehend; only experts with many years of experience could do this. A goal for us is that ordinary people can report their own tax form in an optimal way. We have done user testing and got very positive feedback. It is much easier for a user to report on real-world things (assets, income, costs, debt) and let the application do the calculations. Now that we see the result, it looks so simple, sublime and beautiful.

The key artefact is the new set of fully unit-testable Java classes (information, logic and aggregates) of the core domain, ready to be deployed on some PaaS. This core is now much simpler, simplifying all aspects of systems development and integration. This is how we increase the ability to change and decrease maintenance cost.

The platform

We are using Amazon Web Services (AWS) and have deployed the prototype in Apache and PHP Beanstalk containers. This prototype continues where the last prototype stopped, and porting that code has proven to be simple. We are also using plain Java and HazelCast on the backend. The backend contains all business logic and information; there is very little duplication of code, and the backend is used all the time as the user works his way through the wizard.
The front end uses HTML5/CSS3, Javascript, JSON and REST.
AWS has been really simple (time-saving) and cheap. The bill is about $100 for the test environments after 5 months! :) We have also proven (again) that if you do your design right, you can quite easily move to another "cloud" platform (see deploy looking good).
Testing is now a dream compared to before, mainly because the aggregates are great units to test, and because business provides calculations as spreadsheets, which are subsequently used as tests.
(We are deploying our private cloud, and the production stack will be different.)
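The spreadsheet-to-test practice can be sketched like this (hypothetical rule and numbers, not real tax logic): each row of inputs and expected output from the business side becomes one assertion against a calculation in the domain model.

```java
// Hypothetical sketch: rows from a business-provided spreadsheet
// (income, deduction, expected tax) replayed as unit tests against
// a calculation in the domain model. The 28% flat rate is invented.
public final class SpreadsheetTests {
    // Stand-in for a calculation on an aggregate.
    public static long tax(long income, long deduction) {
        long base = Math.max(0, income - deduction);
        return base * 28 / 100; // illustrative flat rate, not real legislation
    }

    // Each row: { income, deduction, expectedTax } as exported from the sheet.
    public static final long[][] ROWS = {
        { 500_000,  90_000, 114_800 },
        { 100_000, 120_000,       0 },
        {       0,       0,       0 },
    };

    // Replay every spreadsheet row as an assertion.
    public static boolean allRowsPass() {
        for (long[] r : ROWS) {
            if (tax(r[0], r[1]) != r[2]) return false;
        }
        return true;
    }
}
```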

The experience
We teamed up in late March 2013 with a team of 5 from Inmeta.no. They were experienced in Java, Infinispan, GUI design, and web development, but had no experience with Tax Administration or accounting. The business had a concept and a plan: starting with simple cases and adding complexity demo by demo. And I had a rough design for the aggregates replacing the forms. In late August we finished. By that time we had covered a lot more than we anticipated, and had also worked out new concepts (which would have been impossible to foresee without the prototype).

The theory and practices of Kolb's Learning Cycle are helpful in Computer Science.

Prototyping is a silver bullet in many aspects

Use the prototype also on all other parts of your organization

Our modernised business can run with many tens of thousands of concurrent users

EA perspective and Organizational Development: business is engaged, drives change, and stakeholders are behind the modernisation

Our business processes can be run in new ways, e.g. being much more efficient or providing transparency for the public

A prototype will result in a mutual understanding of information model and business logic

Do not implement paper forms into xml schemas, but re-structure in aggregates

Your legacy systems should be moved to a “cloud” platform by using Domain Driven Design, Java, and a systems design approach that I talked about at QCon London 2013

If you understand HTML5/CSS3, Javascript, JSON and REST, it is not that important which framework you use on the client side

Java can be really verbose; you don't need a rule engine

Aggregate design (from Domain Driven Design) really rocks

PaaS really saves time and cost

This shows the innovation power of small multi-disciplinary teams with the right competence and ambition.

Tuesday, June 18, 2013

I urge you to start taking data quality seriously. Aggregate design (as defined in Domain Driven Design) and the technology supporting BIG DATA and NoSQL give new possibilities, also for your core business. So be warned: pure, definite business data at your fingertips.

Central to our new architecture here at Tax Norway is the Continual Aggregate Hub. One important feature of the CAH is keeping legislated versions of business data unchanged and available for "eternity". Business data is our most important asset, and its content must be definite. We must keep it as proof of procedure for more than 10 years, and its integrity must be protected from the wear and tear of both functional and technical upgrades in the software handling it.

Your current situation
I claim that the relational schema is too volatile to keep the integrity of the business data stored in it over time. The data is too fragmented, and functional enhancements to the schema will make the data deteriorate: will a join today give the same result as it did 5 years ago? Another major threat to the integrity of the data is the relations and other artifacts added to the schema (DDL) to support reporting or analytical concerns. This makes the definite (explicit) content hard to get to, because the real business data is vague in the relational schema.

The Continual Aggregate Hub
Here, business data is stored as XML documents, with a version for every legislated change, categorized by metadata. See my talk at QCon 2013 in London, where I present how we organize business data by a classification system not unlike what libraries use for books. Basically we use the header to describe the content, and the document itself contains an Aggregate. I also show that we can compose complex domains from these Aggregates, and that applications running these domains fit nicely into the deployment model of "the cloud" and in-memory architectures. (See the discussion on software design in the CAH.)

The Implementation
The excellent team that implemented the data store of the CAH constructed it as two parts, one called BOX and the other IRIS. BOX's sole purpose is to store aggregates (as versioned documents), enforce a common header (metadata for information classification), provide information retrieval (lookup based on metadata), and provide feeds (ATOM) of these documents to consumers. BOX does not care what is in the document. IRIS' sole purpose is to provide search, reporting, insight and (basic) analytics based on all document content. IRIS utilizes a search engine for this. We use Java, REST, ATOM feeds, XML, and Elastic Search. We still use the Oracle database, but will migrate to a document database in a couple of years. (See the blog for a discussion of deployment models.)
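The BOX idea can be sketched with an in-memory stand-in (hypothetical types; the real BOX sits on a database and serves ATOM feeds): the store enforces a common header, is agnostic to the body, and answers lookups on metadata only.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical in-memory stand-in for BOX: versioned documents with an
// enforced common header; the store never inspects the body.
public final class Box {
    public record Header(String key, String type, String legitimatePeriod, int version) {}
    public record Document(Header header, String body) {}

    private final List<Document> feed = new ArrayList<>(); // append-only, feed order

    public void store(Document d) {
        if (d.header() == null) throw new IllegalArgumentException("header is mandatory");
        feed.add(d); // immutable documents: never update, only append new versions
    }

    // Information retrieval is lookup on metadata only.
    public List<Document> byTypeAndPeriod(String type, String period) {
        return feed.stream()
                .filter(d -> d.header().type().equals(type)
                          && d.header().legitimatePeriod().equals(period))
                .toList();
    }

    // Consumers read the whole feed (ATOM in the real system).
    public List<Document> feed() { return List.copyOf(feed); }
}
```

IRIS would then be a separate consumer of this feed, indexing document content in a search engine without ever touching the store itself.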

Separation of Concern
This is now in production and we see the great effect of having separated the concerns of information storage and information usage. We can at any time exchange the search engine and use sexy new tools (for example Neo4J, SkyTree, and many others) without touching the schema of the business data or the technology supporting BOX. That is a true component-based approach to functionality. We can also change the schema of the business data over time without altering the old data or the analytics and search capabilities. The original and definite business content is untouched. The lifetime requirements of our data have never had such a good stand. The performance of these searches is also awesome. Expect to use about the same amount of space for IRIS as is used in BOX.

Insight into our business data has never been better. BIG DATA and NoSQL tools are giving us a fantastic opportunity. You should consider it.

Thursday, January 10, 2013

It has now been 3 years since the target architecture, and more specifically the CAH (Continual Aggregate Hub), was established. The main goal is a processing architecture that combines ease of maintenance, robustness and scale. We want it highly verbose, but at the same time flexible. In the meantime we have started our modernization program, and at present we have 50-70 people on projects, with many more supporting at different levels. We have gathered more detailed insight (we ran a successful PoC) and the IT landscape has matured. Right now we are live with our first in-memory application; it collects bank account information for all persons and businesses in Norway. I am quite content that our presumptions and design are seeing real traction in the IT landscape: Domain Driven Design, event-driven federated systems, Eventually Consistent, CQRS, ODS, HTTP, REST, HTML/CSS/JS, Java (container standardization), and XML for long-term data are still good choices.
My talk at QCon London 2013 is about how this highly complex domain is designed.

It is time for a little retrospective.

The Continual Aggregate Hub contains a repository (data storage) consisting of immutable documents containing aggregates. The repository is aggregate-agnostic: it does not impose any schema on the aggregates; it is up to the producer and the consumers to understand the content (and for them it must be verbose). The only thing the repository mandates is a common header for all documents. The header contains the key(s), type, legitimate period and a protocol. Also part of the CAH is a processing layer. This is where the business logic is, and all components here reside in-memory and are transient. Valid state only exists in the repository, all processing is eventually consistent, and everything must be handled idempotently. Components are fed by queues of documents, the aggregates in the documents are composed into a business model (things are very verbose here), and new documents are produced and put into the repository. Furthermore, all usage of information is retrieved from the repository.
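The idempotency requirement can be sketched as follows (hypothetical component, not our actual code): because queues may re-deliver, a component must produce the same result whether it sees a document once or several times; de-duplicating on the document's key and version makes re-processing harmless.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: an idempotent consumer of the document queue.
// Re-delivery of the same document version must not produce new output.
public final class IdempotentProcessor {
    private final Set<String> seen = new HashSet<>(); // key + version already handled
    private int documentsProduced = 0;

    // Returns true if this delivery actually caused processing.
    public boolean process(String key, int version) {
        if (!seen.add(key + "#" + version)) {
            return false; // duplicate delivery: safely ignored
        }
        documentsProduced++; // stand-in for producing a new document into the repository
        return true;
    }

    public int documentsProduced() { return documentsProduced; }
}
```

In a real deployment the "seen" state would itself live in the repository (valid state only exists there), but the contract is the same: processing twice equals processing once.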

Realization
With this way of structuring our systems, we can utilize in-memory or BIG data architectures. The success in utilizing these lies in understanding how your business domain may be implemented (Module and Aggregate design). The IT landscape within NoSQL is quickly expanding, in-memory products are pretty mature, PaaS looks more promising than ever, and BIG data certainly has a few well-proven candidates. I will not go into detail on these, but use some as examples of how we can utilize them.

This is in no way an exhaustive list. Products in this post are used as examples of different implementations or deployments of the CAH. This is not a product recommendation, nor does it represent what Skatteetaten might acquire.

NoSQL: It's all about storing data as they are best structured. Data is our most valuable asset. It brings out the best of algorithms and data structures (as you were taught in school). For us a document store is feasible, also because legislation defines formal information sets that should last for a long time. In this domain, example candidates are: CouchDB because of its document handling, Datomic because of immutability and the timeline, or maybe MarkLogic because of its XML support.

Scalable processing, where many candidates are possible; it depends on what is important.

In-memory: I would like to divide these into "Processing Grid" and "Data Grid". Either you have data in the processing Java VM, or you have data outside the Java VM (but on the same machine).

PaaS: An example is Heroku, because I think the maturity of the container is important. The container is where you put your business logic (our second most valuable asset), and we want it to run for a long time (10 years +). Maybe development and test should run at Heroku, while we run the production environment at our own site. Developers could BYOD. Anyway, Heroku is important because it tells a lot about how we should build systems that have the properties we are discussing here. And the CAH could be implemented there (I will talk about that at SW2013).

BIG data: We will handle large amounts of data live from "society". Our current data storage can't cope with the amounts of data that will be pouring in. This may be solved with Hadoop and its "flock" of supporting systems.

Conclusion
Our application and systems overhaul seems to fit many scalable deployment models, and that is good. Lifetime requirements are strict, and we need flexible sourcing.
We are doing Processing Grid now (using HazelCast), but will acquire some "in-memory" product during 2013 (either Processing or Data). Oracle is the document database: extremely simple, just a table with the header as relational columns and the aggregate as a CLOB. The database is "aggregate-agnostic".
Somewhere around 2015, when the large volumes really show up, you will probably see us with something like Hadoop, maybe in addition to the ones mentioned above. Since latency in sub-seconds is OK, and we will have a long tail of historic data, maybe just Hadoop? Who knows?

Friday, September 21, 2012

Use of Complex Event Processing (CEP) in the Continual Aggregate Hub (CAH)
With all these Aggregates being stored, it is necessary to understand which events (or combinations of them) should trigger some business activity. Some of these are already discussed in the CAH, "The Restaurant" and "Module and Aggregate design" posts. It is the hard job of the waiter in the "Restaurant" to execute CEP. CEP is already in use for the primary processing job of the CAH: to process the correct tax. This post is about processing secondary concerns, correlating events that should trigger actions following up on the primary task: Business Intelligence and Predictive Analysis. (This post is not about the analysis algorithms, but about the architecture and the tool-box.)
Just to summarize some constraints of the CAH (definitions from Domain Driven Design):

All Aggregates have a commonly defined Root-object.

Aggregates that are "Private" may change at the will of the Module producing it.

Aggregates that are "Private" are not known by other Modules.

When an Aggregate goes "Public" that is a business event and the Aggregate contains the data.

Aggregates that are "Public" will never change.

If the Module needs to change a "Public" Aggregate, a new version is published.

All Aggregates are two-sided: the creditor and the debtor (e.g. bank and account owner)
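The publish rule in the constraints above can be sketched like this (hypothetical types): a Module changes its private aggregate freely; going public is a business event, and a public aggregate never changes, so a change means publishing a new version.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the publish rule: public Aggregates are
// immutable; a change is expressed as a new published version, and
// each publication is itself the business event.
public final class AggregatePublisher {
    public record PublicAggregate(String rootId, int version, String content) {}

    private final List<PublicAggregate> published = new ArrayList<>(); // event log

    public PublicAggregate publish(String rootId, String content) {
        // next version = number of versions already published for this root + 1
        int next = 1 + (int) published.stream()
                .filter(a -> a.rootId().equals(rootId)).count();
        PublicAggregate a = new PublicAggregate(rootId, next, content);
        published.add(a); // the event: other Modules may now see it
        return a;
    }

    public List<PublicAggregate> events() { return List.copyOf(published); }
}
```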

Maintenance and robustness
The design presented below provides excellent isolation from the business logic, linear scaling, and the other features of the CAH. The design gives these "Business Intelligence" concerns a dedicated place in the architecture. The idempotent feature of the CAH is important because we can recalculate on historic events when we understand new patterns for fraud detection. (Any tax case up to 10 years old can be re-instantiated.)

Chain of events
Catching the business event is a valuable thing. In database systems many events occur, but since the database has a very fragmented view of the data, understanding which events are actually important drowns in the flood. It is in the business logic (mainly the Application Layer in DDD) that business events are known, and it is here that Aggregates are made Public.

The aggregates stored in the CAH represent a chain of events that can be reasoned upon. These events may be divided into the following groups, or lanes, of events:

Everything. Typically used for a stream to the data warehouse or others that have a holistic approach

By document type (or schema). This is a lane for subscribers that are interested in a particular set of data.

By Party. This is a lane for subscribers that reason about data concerning a Party.

A combination of document type and Party

Identifying a complex event
This part of the processing is about keeping an inferred state of the events over a certain period of time. In the CAH this would be the "waiter's" job, and a specialized set of such waiters: Event Monitors. The unique keys for event-monitoring a Party could be: Event Monitor Type (e.g. Payroll vs VAT balance), Concerns (the party Id) and Legitimate period (2012) (see the definition of the super-document). These event monitors only access the header of the document, for performance and for a clear separation of concerns.

Reasoning about a complex event
If the Event Monitor finds a prospect, it triggers an Event Reasoning Module (ERM). This module is capable of retrieving a subset of data from the CAH (data not part of the event itself), and the Module runs some logic on these data. There will certainly be re-use of services present in other Modules (which contain the business logic necessary to understand the content of an Aggregate). The ERMs will also use services on Party for segmentation parameters such as "scoring". These parameters are often set by analytics in the data warehouse that focus on Party behavior. There is a many-to-one relationship here; many Event Monitors may trigger the same Event Reasoning Module. The Module ends up with either a negative or a positive case. If there is a positive case, the Module produces an Aggregate and makes it Public in the CAH. These Aggregates are a special set of Aggregates containing data for the BI processing. They in turn can be used for trend analytics and form the basis for Predictive Analytics.
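An Event Monitor can be sketched like this (hypothetical types and a made-up pattern): it keys on Event Monitor Type, Concerns (the party) and Legitimate period, looks only at document headers, and signals when the pattern is complete so an Event Reasoning Module can be triggered.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical Event Monitor: correlates document headers for one
// party and period, and fires when both Payroll and VAT are seen.
public final class PayrollVsVatMonitor {
    public record HeaderEvent(String docType, String partyId, String period) {}

    // (partyId, period) -> document types seen so far; headers only,
    // for performance and clear separation of concerns.
    private final Map<String, Set<String>> seen = new HashMap<>();

    // Returns true when the complex event is complete and an
    // Event Reasoning Module should be triggered for this party/period.
    public boolean onEvent(HeaderEvent e) {
        String key = e.partyId() + "|" + e.period();
        Set<String> types = seen.computeIfAbsent(key, k -> new HashSet<>());
        types.add(e.docType());
        return types.contains("Payroll") && types.contains("VAT");
    }
}
```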

Responding to a complex event
Responding to a complex event would be to listen to the Aggregates that come from the ERMs. These Aggregates will be stored for a long period of time, so that trend analysis can be done on them. An Aggregate can be sent to any participating system in real time, or be used by other Event Monitors to form chains of CEP. These Aggregates are secondary products of your pipeline of tasks; the primary product is calculating tax correctly.

In the illustration, the blue Aggregates (the primary ones) emit events (below the CAH) that form an event stream. EM modules monitor events and trigger ERMs for further investigation. ERM 1 has a negative case and does not produce an Aggregate. ERM 2 produces an Aggregate (a green one, representing secondary products in the CAH), which also triggers an event. The Party Registry is also in action, supporting segment or scoring information for the ERMs.

Predictive analytics
Business Intelligence, complex validation, fraud detection and monitoring are done in the "live" environment and not in the data warehouse. (As I have discussed in other posts, the data warehouse has an important role, but not for these "real-time" tasks.) Fraud detection could be analyzing typical patterns and triggering action if they are out of bounds. (For example: a carpenter with certain characteristics, in Oslo, having revenue 25% less than the average, for the third month in a row; then do "something".) Patterns could also be a balance between data sets, or chains of real-world events that are linked. Predictive analytics will contribute to tackling things up front, or enable the Party (tax payer) to act in the right way. Also note that our Aggregates are two-sided (the debtor and the creditor); this helps us chain events.

Involving the Party as events occur in the real world
By catching events as they occur in the real world (marriage, death, bankruptcy, trades, liability, payroll, etc.), the system could also respond to the Party (the physical or legal entity) and make them acknowledge the event. The Party will then have more insight into its own tax case, it will validate the data we have, and we are treating our citizens in the correct way. Or the Party may understand that he is doing something wrong and act accordingly. It is better to tackle this up front.

Implementation
The Aggregates are stored as XML, and are Java object structures in-memory.
Modules are plain Java deployed in our linearly scalable processing architecture, and the Modules expose services via RMI, REST or WS.
Event streams are Atom feeds, or JMS in-memory.