The Application Automation Layer: Introduction And Design

This introduction lays the foundation for a framework that promotes a consistent design and coding style for large-scale, multi-developer projects and results in applications that are responsive to requirement changes, easily updated and easily debugged.

This introduction lays the foundation for the concepts behind the Application Automation Layer (AAL). Because this article is long enough just discussing concepts, there is no example code here--a considerable amount of code already exists in other articles that I've written. I am currently setting up a SourceForge site for public access. Future articles will discuss the AAL implementation itself and use a demonstration project (photo album organizer) to illustrate its use.

What Is Wrong With Object Oriented Development?

Objects are often interconnected. It makes for a pretty object model diagram, but class re-use is all but impossible because of the interdependencies between objects. This results in slow builds, cascading side effects when code is changed, and complex testing.

In many designs, objects insufficiently abstract the concepts that they are trying to represent. This introduces “rigidity” into an application. The less abstract an object is, the more difficult it becomes to change its behavior. Applications become inflexible to requirement changes and new technologies, and become unresponsive to market changes.

Why Do We Need An Application Automation Layer?

The design, development, and testing of large-scale applications are too expensive;

The longer a development effort takes, the more likely the requirements will have changed, whether as a result of customer requirement changes, technology changes, or competition;

Design and implementation methodologies vary greatly across a multi-team project, resulting in code sharing and maintenance problems. A framework that promotes a consistent design and implementation philosophy reduces this difficulty;

Does The AAL Benefit Small-Scale Application Development?

Many small-scale, single developer applications embrace multiple technologies. The AAL reduces the time it takes to hook these technologies together and to add a new technology that interfaces with existing ones.

By starting with a framework that promotes re-use, an application that appears to be small-scale can migrate to a large-scale effort without redesigning and rewriting code;

Rapid prototyping.

What Are The Advantages Of The Application Automation Layer?

Build projects with debugged and tested components;

Built-in instrumentation allows tracing of all events;

Component-based development;

Has This Concept Been Implemented And Tested In The Real World?

Yes. Here is a brief description of three projects (of many) that currently use the AAL as implemented in C++/MFC:

Automation Of Satellite Design: A large-scale, multiple-developer, three-year project to automate the design of communication relay satellites.

Boat Yard Workflow Management: A medium-scale, single developer, multi-year project to manage the workflow of boat yard operations—work orders, job tracking, inventory, payroll and customer billing. The AAL framework allows the support of custom data representation and workflow processes for different boat yards, while retaining the same code base.

Club Management: A medium-scale, single developer, multi-year project managing the income of adult entertainment clubs and entertainers. This system utilizes complex scheduling, threading and inter-process communication.

What Does The Application Automation Layer Do?

The AAL consists primarily of four technologies:

A Data Hub

A Process Manager

A Component Manager

An Instrumentation Package

Additional features of the AAL are considered “technology components” and are not discussed in this article. These include:

GUI Controls

Database Interface

State Manager

Additional technology component interfaces

Data Hub

The data hub provides a common data exchange mechanism between technologies that have different data representation schemes. The classical approach is illustrated in this figure:

Here, the application has an application-to-technology specific interface for data. Using the AAL, the data exchange occurs as follows:

For example, a typical application is responsible for reading a record set from a database and reformatting it for a particular GUI control. If instead the AAL concept is used, the application instructs the database technology to load the record set. This record set is translated, by the database technology, into a common data representation. The application then instructs the GUI control to display this information. The GUI control extracts the common data representation and formats it to the particular requirements of the control.
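To make the idea concrete, here is a minimal sketch in standard C++. The CdrRecord type, field names, and function names are my own illustration, not the AAL's actual classes:

```cpp
#include <map>
#include <string>
#include <vector>

// A minimal common data representation (CDR): each record is a set of named
// fields, and a record set is simply a vector of records.
using CdrRecord = std::map<std::string, std::string>;
using CdrRecordSet = std::vector<CdrRecord>;

// The "database technology" translates its native rows into the CDR.
CdrRecordSet LoadRecordSet() {
    // In a real plug-in this would execute a SQL statement; here we fake it.
    return {{{"id", "1"}, {"name", "Alice"}},
            {{"id", "2"}, {"name", "Bob"}}};
}

// The "GUI technology" extracts the CDR and formats it for its control.
std::vector<std::string> FormatForListControl(const CdrRecordSet& rs) {
    std::vector<std::string> rows;
    for (const auto& rec : rs)
        rows.push_back(rec.at("id") + ": " + rec.at("name"));
    return rows;
}
```

The key point is that neither side knows the other's native format; both only know the common representation.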

Furthermore, the concept of loading a record set has been sufficiently abstracted by the database technology that it can handle a generic record set specified by a SQL statement, for example. Many applications seem to embed record specific functionality. This makes the application inflexible to requirement changes. The point here is that while the programmer can still do it “the wrong way”, there is now a technology that promotes (if not enforces) a better approach.

The concept of a common data representation is also used by the C#, Visual Basic and other .NET compilers: all code is compiled into a common intermediate language.

Process Manager

The process manager decouples unrelated objects by using a meta-interface. This reduces build time dependencies and problems related to changes in object designs, as described above. The process manager is implemented as a script engine that directs the acquisition, exchange, manipulation and deposition of data.

Processes are typically initiated by events, and of those, typically GUI events such as clicking on a button or selecting a list. Therefore, to take full advantage of the process manager, the Forms plug-in technology interfaces with the process manager in response to GUI initiated events. And as illustrated in the above diagram, the Forms plug-in interfaces with the Data Hub. These two concepts create a powerful combination that decouples GUI driven processes from the specific GUI. This is one of the fundamental steps in designing a program to be more flexible to requirement changes. I’ve seen too many applications where the “processing” is part of the “OnClick” handler of a button!
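A rough sketch of a process manager along these lines, in standard C++ (the class, step, and process names are invented for illustration; the real AAL script engine is more elaborate):

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// A process is a named sequence of steps; each step is looked up by name,
// so the GUI never calls processing code directly.
using DataHub = std::map<std::string, std::string>;
using Step = std::function<void(DataHub&)>;

class ProcessManager {
public:
    void RegisterStep(const std::string& name, Step step) {
        steps_[name] = std::move(step);
    }
    void DefineProcess(const std::string& name,
                       std::vector<std::string> stepNames) {
        processes_[name] = std::move(stepNames);
    }
    // An event (e.g. a button click) triggers a process by name only;
    // the "OnClick" handler contains no processing logic at all.
    void Run(const std::string& process, DataHub& hub) {
        for (const auto& s : processes_.at(process))
            steps_.at(s)(hub);
    }
private:
    std::map<std::string, Step> steps_;
    std::map<std::string, std::vector<std::string>> processes_;
};
```

Because the button handler only names a process, the workflow can be re-scripted without touching the GUI code.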

For example, consider the typical process flow that an application takes to load a GUI control with a record set, as illustrated in this diagram:

Component Manager

The component manager promotes component-based development. The application itself is considered another technology which is merely another plug-in to the framework. The component manager is responsible for loading and unloading of technology components and the registration of the component’s public interface.
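A hedged sketch of what loading and registration might look like (the ITechnology interface and method names here are assumptions for illustration, not the AAL's published API):

```cpp
#include <map>
#include <memory>
#include <string>

// Every technology component exposes a public interface that the component
// manager registers under a name.
struct ITechnology {
    virtual ~ITechnology() = default;
    virtual std::string Name() const = 0;
};

class ComponentManager {
public:
    void Load(std::unique_ptr<ITechnology> tech) {
        std::string name = tech->Name();
        components_[name] = std::move(tech);
    }
    void Unload(const std::string& name) { components_.erase(name); }
    ITechnology* Find(const std::string& name) {
        auto it = components_.find(name);
        return it == components_.end() ? nullptr : it->second.get();
    }
private:
    std::map<std::string, std::unique_ptr<ITechnology>> components_;
};

// The application itself would be just another plug-in like this one.
struct DatabaseTech : ITechnology {
    std::string Name() const override { return "Database"; }
};
```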

Instrumentation

A fallout of this scheme is that an application is automatically instrumented—data exchanges, event invocation, object to object messaging all include instrumentation, so that it is very easy to trace an application. The AAL includes an instrumentation package (see my article on C# Debug And Trace Classes [^]).
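A minimal sketch of why the tracing comes for free: because every data exchange goes through a single choke point, instrumentation can be added there once (the Trace class and message format here are invented for illustration):

```cpp
#include <map>
#include <string>
#include <vector>

// A trivial trace sink; a real package would timestamp, filter, and route.
class Trace {
public:
    static void Log(const std::string& msg) { Entries().push_back(msg); }
    static std::vector<std::string>& Entries() {
        static std::vector<std::string> entries;
        return entries;
    }
};

std::map<std::string, std::string> g_dataHub;

// The one place where data enters the hub--so every exchange is traced
// automatically, without any effort from the application programmer.
void PutDatum(const std::string& key, const std::string& value) {
    Trace::Log("hub.put " + key);
    g_dataHub[key] = value;
}
```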

What About Agile Programming and eXtreme Programming?

The AAL is an implementation of a framework that supports these project development styles. AP and XP are excellent ideals that can only be achieved by implementing a framework such as the AAL. The AAL has been proven to be effective in its goals, even before the concepts of AP and XP existed.

Conclusion

In the next couple of articles, I will present the implementation for the four components I discussed above. I hope no one gets upset that there isn’t any code in this article. I intended this article to present the foundational design concepts without getting mired in specific implementation issues. As a side note, I am also planning to integrate the concepts that I presented in my Organic Programming article [^] with the concepts in the AAL.

As I mentioned before, I am currently in the process of setting up a SourceForge site to support this effort. However, if you want a sneak peek at some prototype code, see my recent article on Fractal Trees [^].

Given this overview, I am interested in feedback from the programming community. And anyone who is interested in working on this project, please let me know!

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.

Comments and Discussions

First of all, thank you for the effort to write up the articles, I find them very interesting. I am still going over the articles repetitively to learn more and more about the framework, but in the meantime I have a couple of remarks (or, perhaps, questions, since it is likely I have not understood the framework exactly, yet) that I thought I would pick your brain with.

1.) Data Hub Lifespan.

The existence of a Data Hub implies (?) that some framework-related data, apart from "global" application data (environment, command line parameters, etc.), has a lifespan independent of any one workflow (otherwise, it would just be part of the Common Data Representation load). I am having some trouble thinking up examples of such workflow-independent, yet "persistent", data. For example, if some kind of GUI control needs to load a record set, the set should be acquired by a database-related Technology, converted into CDR, handed off to a GUI-related Technology, loaded, and destroyed (all part of a specific "LoadThisAndThat" workflow). Should, say, caching be required, a cache could be a separate Technology, built into the data acquisition workflow, to decide when and how it needs to be updated, refreshed, etc. Under what circumstances, generally speaking, should some "workflow-less" data actually persist inside of the Data Hub?

2.) Data Hub Transparency.

The existence of a Data Hub as a Workflow Manager-independent Component implies (?) that a Data Hub is capable of running its own workflows, depending on changes in its internal state. I am guessing that a Workflow Manager would have to be hooked into the Data Hub through some kind of Observer paradigm for the framework to react to changes in Data Hub state. If my understanding is correct, wouldn't it be advantageous (component-model-wise) to define the Data Hub as a "passive" accumulation/aggregation/etc. Component that is driven by the Workflow Manager exclusively? I tend to see any framework-state-altering process as a workflow itself, which actually has the capability (sometimes even the responsibility) to launch sub-workflows. For example, to update some data in a Data Hub, a Workflow Manager-driven workflow might be required to authenticate through a Technology before proceeding (something, say, as simple as "don't modify anything while day-end batch processing is active"), etc.

3.) Data Hub Dependencies.

In light of (2.) above, do you think it would be advantageous (component-model-wise) to eliminate Technologies' capability to communicate with a Data Hub directly, and, instead, enforce Data Hub "routing" through Workflow Manager workflows (and, effectively, through the Component Manager)?

4.) Workflow Manager Complexity.

Since the Workflow Manager part of the framework has not been introduced in detail yet, I have been somewhat "confused" about its duality (that is most likely my fault, not the framework's). It seems to me that it might be simpler to work out some kind of a Workflow Interpreter, and specialize it into two independent Components in the framework: an Event Manager and a Workflow Manager. The EM would be responsible for launching Event-, Technology-, or Component- initiated workflows (and acting as a Mediator), and the WM would be responsible purely for executing requested workflows. I mean Workflow Interpreter in the sense of "workflow mediation and execution definition language" interpreter. I mean Event Manager as a Workflow Interpreter, specialized to mediate and drive workflows, through a Workflow Manager, using “Common Event Representation”. I mean Workflow Manager as a Workflow Interpreter, specialized to drive Technologies and Components, through a Component Manager, using “Common Data Representation”.

Configuration information might be one example. Data that a workflow needs to look at--information regarding establishing a connection to a database, timeout values, etc.

Also, I usually represent GUI data in a separate data store that is independent of the GUI presentation. A tree list is a great example, because I usually represent a tree's data in a flat representation rather than the native parent-child relationship, since it's the flat representation that is easier to persist to the database.

So, I have a flat representation that is manipulated by other GUI events--insert a leaf, move a child, delete a leaf, etc. There's a translator that figures out how to convert the flat representation to the tree view. But the flat representation persists in the data hub after the various workflows complete.
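A minimal sketch of that flat representation and its translator (the types and field names are invented for illustration):

```cpp
#include <map>
#include <string>
#include <vector>

// Flat tree representation: each node records its own id and its parent's
// id (-1 for roots), which is trivial to persist to a database table.
struct FlatNode {
    int id;
    int parentId;
    std::string label;
};

// The "translator": converts the flat form into the parent->children
// mapping that a tree control needs in order to render the hierarchy.
std::map<int, std::vector<int>> BuildChildren(
        const std::vector<FlatNode>& flat) {
    std::map<int, std::vector<int>> children;
    for (const auto& n : flat)
        children[n.parentId].push_back(n.id);
    return children;
}
```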

gniemcew wrote: wouldn't it be advantageous (component-model-wise) to define the Data Hub as a "passive" accumulation/aggregation/etc. Component that is driven by the Workflow Manager exclusively?

I'm not sure.

gniemcew wrote: do you think it would be advantageous (component-model-wise) to eliminate Technologies' capability to communicate with a Data Hub directly,

Hmmm. I think this is a viable alternative, but I'd be reluctant to implement it because I think it muddies the waters between data management and workflow management. I'm not sure the additional layer is of any advantage. Keep in mind that the data hub is supposed to be a repository for publicly known data formats, and disconnected from workflows. I'm not sure if that addresses your question though.

gniemcew wrote: It seems to me that it might be simpler to work out some kind of a Workflow Interpreter, and specialize it into two independent Components in the framework: an Event Manager and a Workflow Manager.

You are absolutely correct. In fact, I typically extend the implementation to also include a message manager and state manager. The workflow manager should be strictly a, as you said, "'workflow mediation and execution definition language' interpreter".

I never did continue after the last installment of these articles as I ended up reworking the whole concept into what became MyXaml (see sig).

Keep in mind that the data hub is supposed to be a repository for publicly known data formats, and disconnected from workflows.

That takes care of my questions. I was mistaken in my understanding that the Data Hub was some kind of workflow-driven "central" data and event holding object. It turns out that the Data Hub is, effectively... (gasp!) a data hub. Now I only need to marry this kind of framework to a good GUI system abstraction pattern, and I'm good to go.

Soon. To be honest, I'm having a bit of trouble disentangling the concept of a workflow manager from the scripting/parsing side of things, so I've been holding back until I get the concept and design fleshed out.

I really enjoyed this article. I will head over to SourceForge to see how the project is coming.

In the meantime, I am curious to hear your comments on the book "Software Development on a Leash" (ISBN: 1893115917, http://www.amazon.com/exec/obidos/tg/detail/-/1893115917/102-5776258-1108912). I have mixed feelings about this book. From an architectural view, it seems to go overboard with the concept of horizontal interfaces (which, no doubt, are a good idea when correctly applied and are in alignment with all the principles you discuss in your article above).

I think (hope) your article (and project) could end up being a more rational approach to the goals expressed in "Software Development on a Leash".

I would like to hear more discussion comparing/contrasting the AAL and the vMach framework from "Software Development on a Leash". At the very least, a review of this book might lead to discussions that could benefit the development of the AAL.

The author describes separating the application from the architecture, and while I've done this before in software on several levels, I never considered removing the application completely from the compiled software and driving application behavior from what the author calls "external structural and behavioral metadata". Using the same binary program to support multiple applications without recompiling a single line of code? Perhaps without recompiling on each and every application release, which makes perfect sense. However, he avoids rebuild for complex screen-level changes, too (even cosmetics). Not too shabby.

Sounds like what I've achieved with the MFC version, but my approach is different--I don't use metadata and object builders. I must confess I haven't read the book, so my response here is limited to reading the reviews. I'll have to get a copy of it though, it looks interesting.

Regarding your comment in the 4th article--several people so far have pointed me to the Sharp Develop project. I really need to contact these people!

Thanks for the feedback and the references--it's really helpful to network with other people.

As to the SourceForge site, it's an area I've completely ignored after the initial burn of reading through all the Unix-style documentation and finally managing to get something up and running on CVS. If you'll pardon the metaphor, it's a satellite in LEO that's awaiting the next burn to the transfer orbit.

I do have an MSI installation with the latest and greatest though, that I could email you if you were interested. Know anyone that can help me set up the CVS stuff on SourceForge?

Thanks!

Marc

Help! I'm an AI running around in someone's f*cked up universe simulator.
Sensitivity and ethnic diversity means celebrating difference, not hiding from it. - Christian Graus
Every line of code is a liability - Taka Muraoka
Microsoft deliberately adds arbitrary layers of complexity to make it difficult to deliver Windows features on non-Windows platforms - Microsoft's "Halloween files"

I posted my initial comment after reading just the first installment of this series. (I also read a couple of your other related articles -- was one the 4th of this series? I thought there were just 3 in this series.) Anyway, I have now read the 2nd installment. These are great articles. What interests me are 1) the architectural approach, and 2) the implementation decisions. It's all very interesting and relevant to modern application development -- I can't wait to read the 3rd installment later tonight. BTW, I personally prefer your writing style over that of the author of "Software Development on a Leash." If you read the book, I'm sure you'll see it contains a lot of hype. But anything about separating the application from the architecture is interesting reading for me.

Regarding SharpDevelop, I think you should open a dialog with them. Their Add-In tree is pretty impressive.

I have to make some architectural decisions for an upcoming application. I'm 100% in favor of the AAL and component-oriented concepts, but I am concerned about performance in C#. The SharpDevelop UI is quite a bit slower than common Windows apps (Excel, Word, etc.) and the other apps I've written in C#. I love the flexibility and code maintenance benefits, but I personally want my app to be more responsive. This is a real dilemma...

It's hiding under a different section. It references the other three. You've seen it already though, as one of your messages was posted on it.

MtnBiknGuy wrote: but I am concerned about performance in C#

I agree. The next installment illustrates using XML to specify GUIs and has a demonstration app that coordinates some data with the GUIs. One of the GUI controls is a tree view directory structure, which I lifted from someone else's code on CP. There's all this Shell stuff that gets the tree structure and icons, and it's slow as molasses. I haven't yet looked at what can be optimized in the .NET stuff--everything the author wrote is basically going through a managed interface to shell32.dll.

Well, if you want a huge code base, I'm happy to send you the AAL implementation in C++/MFC--it's a lot more developed (it has been in use for about 5 years now and is still actively used), but of course I've made some different design decisions in the C#/.NET version, such as using XML for a lot of the specification stuff.

Marc


I've worked alongside the author of SOAL on two major projects, and it does pay off if you approach it in the context of .NET and true inheritance. The SOAL approach was born in the original VS 6 environment, but the .NET form gives it a lot more power. For example, I can define a SOALtreeview as a simple user control and have it inherit the MS TreeView, the Infragistics TreeView, or the ComponentOne TreeView, as examples. This creates an extended user control and allows me to provide additional "harnessing" of the underpinning control's API within the custom control's implementation (the adapter, so to speak). If I expose common functions such as Load, Fill, Layout, etc., this allows an implementor to use the common functions rather than the specific API, and allows for the interchange/swap-out of components without affecting the behavioral code external to the control. It also provides a means to define interactive relationships such as cross-component communication, observation, and other abstracted behaviors, because the adapters can implicitly know how to talk to one another and bind the otherwise disparate controls into a common framework.

In another example, I can create a typed data set that is actually an extended dataset of the ADO.NET dataset. This extended dataset inherits the ADO.NET dataset, then further harnesses the various classes commonly interacting with the dataset. In our case, we provided a dataset.Fill that mimics the xxxDataAdapter.Fill method. However, it has the capacity to determine automatically from the connection whether it needs to use an OleDbDataAdapter or a SqlDataAdapter, and calls the appropriate Fill() function on the adapter. Why do this? It eliminates the application level's need to instantiate multiple resources to perform highly repetitive operations, and it consolidates the repetitive operations into a one-stop shop. I get tired of having to present parameters to a command object to support a stored proc, when I could call the extended dataset's Fill method and have it determine from metadata what the parameters should be (the Fill has a paramarray for the parms), dress up the command object, execute, and return the result. This is an example of consolidating structural and behavioral patterns to eliminate the entanglements at the highest application level.

It is impossible to describe how agile this approach makes our application logic, and how solid it makes the underpinning capability logic--more so each time we use it. I think the SOAL approach got obfuscated by the author's having to support both the original VS 6 version and .NET in the examples, since the book was published on the boundary of .NET's initial release. I would hope that he would consider a more concise second edition to expand on .NET's capabilities.
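In rough, compilable C++ terms (the vendor control, class, and method names below are invented stand-ins, not the actual SOAL or vendor APIs), the adapter idea looks something like this:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Stand-in for a vendor tree control with its own specific API.
struct VendorTreeA {
    void AddItem(const std::string& s) { items.push_back(s); }
    std::vector<std::string> items;
};

// The "harnessing" adapter: callers use the common Fill function and never
// touch the vendor-specific API, so the underlying control can be swapped
// without changing any behavioral code external to the control.
class SoalTreeView {
public:
    void Fill(const std::vector<std::string>& rows) {
        for (const auto& r : rows)
            impl_.AddItem(r);
    }
    std::size_t Count() const { return impl_.items.size(); }
private:
    VendorTreeA impl_;  // swap in a different vendor type here
};
```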

It's good to hear about your experiences. It is unfortunate, IMHO, that SOAL is not a better written book. The ideas and concepts are exciting. But I find I learn more about these concepts from sources such as Marc's articles and a couple good books on software architecture (that give a much deeper and simultaneously better balanced presentation of the concepts than SOAL does). Anyway, the SOAL concepts are interesting and it's good to hear your feedback.

I would like to solve most of my programming tasks using a visual editor, like in any good WorkFlow diagram.

I don't want to worry about the database design; that is the task for a persistence framework, able to receive dynamic requests for data and metadata updates.

I would love to use inheritance and to use classes for modelling my domain. So, my objects or documents will need a description similar to UML classes and their relations.

In addition, if my application is to run like an MDA, I will have to define actors, permissions, scenes, actions--some concepts which allow the core engine to make my design a real application.

So, I found your articles very interesting, because the use of XML, its transformations and visualizations, can complement the idea of "no programming" (or programming in a simpler way).

Today I was surfing the net and found something related to this topic, http://ki.cs.tu-berlin.de/~stauch/Diplom/ (Implementation of Workspaces). I found it supports the idea of a "workflow programming paradigm".

In the olden days of COBOL programming, we had only 2 layers (in fact we didn't have layers at all, because we never thought of them). We had libraries and programs which called these library-based functions. That was the extent of code re-use. There were no patterns, no objects, no relational databases.

We had big systems running even then, as is proved by the mainframe COBOL systems which survive even today in large corporations. There were no lost pointers or memory drains, and the cost to the customer was limited to the Unix box and the consultant's fees. We nowadays have to pay for the development tools used by the consultant (or higher fees in lieu of them), an RDBMS package, and expensive maintenance contracts, because the programmer who debugs a modern system has to know the 'architecture' used by the developer. (When I use the word 'architecture' with some of my colleagues, they look at me as if I have traded professions, but in the golden olden days, people who built stylish buildings did architecture--programmers designed systems.) There are now layers upon layers of software, placing the user on top of a suite of software layers, each as fluffy and rich as a good pillow.

I sometimes wonder, what is the leap-frog that we have achieved from the customer's perspective? The end-user of the IT system, who is a functional manager, wants information either in a report or on screen, and he continues to get it the same way. The screen may be a little more colorful and attractive, but when I am fighting to keep my inventory under control and my profits high enough to pay dividends, the color of the screen is my lowest priority. I want an IT system which can be a black box, which just makes things easier for me, because it is intelligent and fast and maintainable. Period. I have a strange feeling that the inventory and financial systems we developed in the eighties were as intelligent, fast and maintainable as the systems developed currently, if not more so.

Though I agree that adding a lot of computer science concepts like OO, design patterns, UML and use cases makes the process more scientific, I feel the complexity added in terms of layering and multiple approaches has made software maintenance an esoteric art form. Tell you what: if Maruti India (which is a Suzuki version in India) can claim to have a service station everywhere, even on a remote mountain top, it is because the mechanic can go under the hood without any lights on and tell which component is where without even seeing it, and can even make out its state by its sound. That was achieved by simple standardization. If I were let loose today in an OO-centric, n-tiered system and asked to fix a bug, I would delve into the documentation and code to see what the design approach was before I even set a finger on the code. If, in my olden days, I was let loose on a system to debug, all I had to do was figure out which directory had the latest version of the software and data--I was then almost as comfortable as a Maruti mechanic.

Software is complex by nature. No two pieces of software are the same--just like no two grains of sand are the same. Instead of making it simpler, why do we make it more complex by adding so much technology to it?

Umm... have you lately tried to sell some software that was written back in your old Cobol days?

Users have become much more demanding, and not just in the number of colors. "Back in the old days" you could say "this application requires a 24-needle Epson printer, does not run on DR-DOS, and you'll spend about an hour fiddling with config.sys until everything runs fine." Try that today!
Systems now need to interface with components written by someone else.
And software is now rarely written by a single person--it's written by a team--which forces a qualitative jump in complexity on us.

I understand your point, and yes, instead of adding complexity we sometimes should use the simpler approach - but there are reasons for the complexity!

Italian is a beautiful language. amare means to love, and amara bitter.

I think there are some good ideas here, but I do not see the approach as outside object-oriented programming. This is just a specific design concept which would and should be implemented as classes. I have used some of the same approaches and implemented them as classes in an object-oriented program.

You are correct--the approach is NOT outside OOP. I hadn't intended to imply that it should be--merely that OOP is mis-applied in many cases.

This is just a specific design concept which would and should be implemented as classes.

Yes, that is absolutely correct. However, I seem to get a lot of blank stares when I try to explain this concept to most people. Mind you, I fully think that it is my inability to get the concept across. Of course, what I've noticed is that once people start using this design concept, they can't explain it either! (but they really like it)

I have used some of the same approaches and implemented them as classes in an object oriented program.

Excellent!

Thank you for the feedback. I'd like to hear more from you as I post more of the guts of this thing.

Marc, I will try to elaborate a little. I have used a concept I call Info classes. A group of Info classes would be somewhat similar to your data hub. The concept of the Info class originated by separating all the external data of a class into a separate class. This seems like a natural structure, and I thought of it as an organic cell analogy. People do a somewhat similar thing by feeding a filled-in structure to a class. And some would argue that an Info class should be a structure, because all the data would most likely be public. However, the Info class initializes all its data to reasonable values, and has error checking, data conversion, and persistence handling. A process class is given a copy of, or a pointer to, an Info class. Thus several process classes can look at the same data at the same time, and several process classes can use the same Info class. Info classes can use single or multiple inheritance. A process class only needs to use a subset of an Info class. This concept has worked well in computer-aided design applications where many processes use subsets of the same data.
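If I've understood the idea, a minimal sketch of an Info class and a process class that consumes a copy of it might look like this (all names here are invented for illustration):

```cpp
// The Info class: holds the external data of a concept, initializes it to
// reasonable values, and performs its own error checking.
class CircleInfo {
public:
    CircleInfo() : radius_(1.0) {}   // reasonable default value
    bool SetRadius(double r) {       // built-in error checking
        if (r <= 0)
            return false;
        radius_ = r;
        return true;
    }
    double Radius() const { return radius_; }
private:
    double radius_;
};

// A process class is handed a copy of the Info class; several independent
// process classes can consume the same Info object at the same time.
class AreaProcess {
public:
    explicit AreaProcess(const CircleInfo& info) : info_(info) {}
    double Run() const {
        return 3.14159265358979 * info_.Radius() * info_.Radius();
    }
private:
    CircleInfo info_;  // copy: a safe, decoupled snapshot of the data
};
```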

I have found that when writing Multi-Document Interface (MDI) MFC applications it is best to put a minimum amount of code in the view and document classes. This minimizes coupling and maximizes reuse. Most of the code is handled by what I call manager classes, somewhat similar to the process manager. The manager classes are members of the application class, which I suppose has a function similar to the component manager.
Ron

1. Do you have to deal with processes overwriting data in the Info class, and if so do you have a built-in semaphore mechanism?

2. Do you allow multithreaded processes, where several processes are looking at the same data simultaneously?

3. Do you incorporate workflows? I can see an architecture where one process creates an Info class that another process or processes consume, creating new Info classes, etc.

4. Do you get into dependency problems with the Info class. Say, process 1 uses ...(continued)

I was pointing out that the Info class gives you a lot of choices. Normally the process object would get a copy of an Info object. A process class could be designed to work with an Info object copy or an Info object pointer. You would want to use a pointer to maintain concurrency in clearly defined situations. I would try to avoid the dependency problems you can get into by using pointers to Info objects. In practice I have mostly just used copies of Info objects. In a multithreaded application I used an Info object to communicate data between a GUI and a worker thread. Info object updates were protected by a critical section. In a graphical computer-aided design program, copying is a workflow issue and has to be managed carefully. I used the manager class to do this. The manager class is a way to split up process expertise.
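A minimal sketch of that critical-section-protected Info object, using std::mutex in place of an MFC CCriticalSection (the class and member names are invented for illustration):

```cpp
#include <mutex>
#include <string>

// A shared Info object whose updates are serialized, so a GUI thread and a
// worker thread can safely communicate through it.
class SharedInfo {
public:
    void Set(const std::string& s) {
        std::lock_guard<std::mutex> guard(m_);  // critical section begins
        value_ = s;
    }
    std::string Get() const {
        std::lock_guard<std::mutex> guard(m_);
        return value_;  // returns a copy, so no lock is held by the caller
    }
private:
    mutable std::mutex m_;  // mutable: Get() is logically const
    std::string value_;
};
```

A worker thread would call Set when its results are ready, and the GUI thread would poll or be signaled to call Get.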

Furthermore, you can have a master Info class that presents more than one interface Info class to different process classes. A single process class only has to know its Info interface class, but all the data conversion can be concentrated in the master Info class.

I found the article on the Linda language interesting. I see some similarities, although Linda is a much more abstract approach.
http://iamwww.unibe.ch/~scg/Research/ComponentModels/linda.html

I am currently working on a C++ COM used by Visual Basic apps. The COM provides the GUI and database interface.

I very much like your concept of an Info object and a process class. If you don't mind, I'd like to incorporate this concept in some form in the AAL, giving due credit to you, of course, for bringing your design to my attention! Would that be OK with you?