I am back home from San Diego now. About 3 more hours of jet-lag to work on. This will be a very busy two weeks until I make a little excursion to the Pakistan Developer Conference in Karachi and then have another week to do the final preparations for TechEd Europe.

One of the three really cool talks I'll do at TechEd Europe is called "Building Proseware" and explains the scenario, architecture, and core implementation techniques of Proseware, an industrial-strength, robust, service-oriented example application that newtelligence has designed and implemented for Microsoft over the past two months.

The second talk is one that I have been looking forward to for a long time: Rafal Lukawiecki and I are going to co-present a session. And if that weren't enough: the moderator of our little on-stage banter about services is none other than Pat Helland.

And lastly, I'll likely sign off on the first public version of the FABRIQ later this week (we had been waiting for WSE 2.0 to come out), which means that Arvindra Sehmi and I can not only repeat our FABRIQ talk in Amsterdam but have shipping bits to show this time. There will even be a hands-on lab on FABRIQ led by newtelligence instructors Achim Oellers and Jörg Freiberger. The plan is to launch the bits before the show, so watch this space for "when and where".

Overall, and as much as I like meeting all my friends in the U.S. and appreciate the efforts of the TechEd team over there, I think that for the last four years TechEd Europe has consistently been, and will again be, the better of the two TechEd events from a developer perspective. In Europe, we have TechEd and IT Forum, where TechEd is more developer-focused and IT Forum is for the operations side of the house. Hence, TechEd Europe can go, and does go, a lot deeper into developer topics than TechEd US.

There's a lot of work ahead so don't be surprised if the blog falls silent again until I unleash the information avalanche on Proseware and FABRIQ.

Only this week here at TechEd it became really apparent to me how many people read the things I write here. I've had dozens of "strangers" walking up to me saying "Clemens, I read your blog. Thank you for the things you write.". It's great to meet the real people behind the numbers (I get an insane amount of hits each day for what is effectively a personal opinion outlet) and it's absolutely fantastic to hear when people tell me that I am helping them to do their job better. So what I wanted to say is ... "Thank you for stopping by every once in a while and for helping me to do my job well"

All the wonderful loose coupling on the service boundary doesn't help you the least bit if you tightly couple a set of services on a common store. The temptation is simply too great that some developer will go and make a database join across the "data domains" of services, causing a co-location dependency of data and schema dependencies between services. If you share data stores, you break the autonomy rule and you simply don't have a service.

Separating out data stores means at least that every service has its own "tablespace" or "database" and that in-store joins between those stores are absolutely forbidden. If you have a service managing customers and a service managing invoices, the invoice service must go through the service front for anything that has to do with customer data.

If you want to do reporting across data owned by several services, you must have a reporting service that pulls the data through service interfaces, consolidates it and creates the reports from there.
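As a sketch of that idea (in Python for brevity; the service clients and method names are made up for illustration, not a real API), a reporting service consolidates data only by calling the other services' fronts, never by joining their stores:

```python
# Hypothetical service fronts; in a real system these would be
# remote service proxies, not in-process classes.
class CustomerService:
    def get_customers(self):
        return [{"id": "C1", "name": "Contoso"}]

class InvoiceService:
    def get_invoices(self):
        return [{"customer_id": "C1", "total": 120.0},
                {"customer_id": "C1", "total": 80.0}]

class ReportingService:
    """Pulls data through the service interfaces and consolidates it."""
    def __init__(self, customers, invoices):
        self.customers = customers
        self.invoices = invoices

    def revenue_by_customer(self):
        # consolidate invoice data first ...
        totals = {}
        for inv in self.invoices.get_invoices():
            cid = inv["customer_id"]
            totals[cid] = totals.get(cid, 0.0) + inv["total"]
        # ... then resolve names via the customer service front
        names = {c["id"]: c["name"] for c in self.customers.get_customers()}
        return {names[cid]: total for cid, total in totals.items()}
```

The "join" happens inside the reporting service, against data it pulled through public interfaces, so neither owning service's store is touched directly.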

Will this all be a bit slower than coupling in the store? Sure. It will make your architecture infinitely more agile, though, and allows you to implement a lot of clustering scalability patterns. In that way, autonomy is not about making everything a Porsche 911; it's about making the roads wider so that nobody (including the Porsche) ends up in a traffic jam all the time. It's also about paving roads that not only get you from A to B in one stretch, but also have something useful called "exits" that let you get off or on that road at any place between those two points.

If you decide to throw out your own customer service and replace it with a wrapper around Siebel, your invoice service will never learn about that change. If the invoice service were reaching over into co-located tables owned by the (former) customer service, you'd have a lot of work to do to untangle things. You don't need that untangling and all that complication. As an architect, you should keep things separate from the start and make it insanely difficult for developers to break those rules. Having different databases and, better yet, scattering them over several machines, at least at development time, makes it hard enough to keep the discipline.

Omar has already posted the announced fix for version 1.6 and has updated all the downloadable files. Go here to get the updated versions. If you run 1.6, get the hotfix, otherwise just get one of the full archives. We should now be stable again. Thanks to Omar and Erv Walter for providing the fix and the repacking so quickly (while I am busy in San Diego at TechEd).

We'll have a fixpack for dasBlog 1.6 within the next two days that will roll back a few internal changes that had been made to improve performance, but unfortunately caused significant instability. The code is already checked into our tree and we're going to have the fix packaged up for download very soon. If you don't have 1.6 installed yet, wait until we have the fix. Within a week we are going to replace the 1.6 version available from the Gotdotnet workspace with a version that incorporates the fix.

Now that we're getting close to the dasBlog engine's 1st birthday, I'd like to know how people use it. I am seeing quite a few blogs out there that run the software, but it's just as interesting to know how the engine is used on corporate intranets and whether you use it as a tool to help coordinate projects, share knowledge about certain topics or .... how would I know?

If you use dasBlog, it'd be great if you could share with me how you use it, how you like it, and what you don't like. If you've warped the engine into something totally different or if you have some really cool design but it lives hidden inside the corporate firewall, I would appreciate getting a screenshot (blur out the secrets). None of the information will be published unless you allow me to do that.

I am also interested to know whether and how you've used snippets from the blog code for your own projects and/or products. Knowing what pieces are valuable to you would allow me to isolate them and put them into some isolated "goodies" library down the road.

Scott Hanselman ran into a critical bug in dasBlog 1.6 that has to do with the new caching logic that the folks in the GDN workspace came up with (I didn't do it, I didn't do it!). We've both sent email to those who know about this issue and will see who will look at it and when. Apparently Scott posted something using an external tool, a couple of things were happening in parallel around that same time, and that got the caching mechanism confused. If you get inexplicable errors and all you see is the "error page", go ahead and delete the files entryCache.xml, categoryCache.xml, and blogdata.xml; then open and save (touch) web.config. That should get the blog back on its feet.

If you are on version 1.5 or earlier, stick to it while this is being checked. If you are on 1.6, have some tea or a lightly alcoholic beverage and don't panic.

It's rare that I give "must have" tool recommendations, but here is one: If you do any regular expressions work with the .NET Framework, go and get Roy Osherove's Regulator. Roy consolidated a lot of the best things from various free regex tools and added his own wizardry into a pretty cool "RegEx IDE".

The four fundamental transaction principles are nicely grouped into the acronym "ACID", which is simple to remember, so I was looking for something that does the same for the SOA tenets and sort of represents what the service idea has done to the distributed platform wars:

This reminds me of the box that's quietly humming in my home office and serves as my domain controller, firewall, RAS and DSL gateway. I upgraded the machine (a rather old 400 MHz Compaq) to Windows Server 2003 the day before I flew to TechEd Malaysia last year (August 23rd, 2003). I configured it to auto-update from Windows Update and reboot at 3:00 AM in case updates had been applied.

Guess what: I got back home from that trip (which included 4 days touring the Angkor temples in Cambodia and another 10 days hanging out at the beach on Thailand's Ko Samui island) and realized that I had forgotten the Administrator password. Tried to get in, to no avail. I've got rebuilding the box on my task list, but there's no rush. I haven't really touched or switched off the machine since. It keeps patching itself every once in a while and otherwise simply does its job.

I am not a “smart client” programmer and probably not even a smart client programmer, and this trick has probably been around for ages, but …

For someone who’s been doing WPARAM and LPARAM acrobatics for years and still vividly recalls what (not necessarily good) you can do with WM_NCPAINT and WM_NCMOUSEMOVE (all that before I discovered the blessings of the server side), it’s pretty annoying that Windows Forms doesn’t bubble events – mouse events specifically. It is actually hard to believe that this wouldn’t work. But I’ve read somewhere that bubbling events is “new in Whidbey”, so it is probably not my ignorance. Anyways … include the following snippet in your form (add MouseDown, MouseUp, … variants at your leisure), bind the respective events of all labels, panels and all the other “dead stuff” to this very handler (yes, all the controls share that handler) and that’ll have the events bubble up to your form in case you need them. I am just implementing custom resizing and repositioning for some user controls in a little tool, and that’s how I got trapped into this. Voilà. Keep it.
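The idea behind the snippet – one shared handler on all the “dead” child controls that simply forwards the event to the parent – can be sketched roughly like this (in Python for brevity; the whole Control/event plumbing here is hypothetical stand-in code, not Windows Forms):

```python
class Control:
    """Minimal stand-in for a UI control with a MouseDown-style event."""
    def __init__(self, parent=None):
        self.parent = parent
        self._handlers = []

    def on_mouse_down(self, handler):
        # subscribe a handler; mirrors wiring an event in the designer
        self._handlers.append(handler)

    def mouse_down(self, event):
        # fire the event to all subscribed handlers
        for handler in self._handlers:
            handler(self, event)

def bubble_to_parent(sender, event):
    """The one shared handler: re-raise the event on the parent,
    so it bubbles up to the form no matter which control was hit."""
    if sender.parent is not None:
        sender.parent.mouse_down(event)
```

Every label and panel gets `bubble_to_parent` as its handler, and the form subscribes its own handler to receive whatever bubbles up.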

Omar Shahine, who took on the role of "Program Manager" for dasBlog 1.6, added a new macro feature (I am actually not really sure who added it; someone correct me if I am wrong; at least Omar OK'd the feature) that totally rocks and puts us on par with MovableType in terms of easy access to older entries:

<%newtelligence.drawarchivemonths()%>

The macro creates a list of links for all months that have blog entries and if you look at my site (not at the RSS feed), you'll see it on the left-hand side of the page just under the "What's News" section. Thanks! Now I can find my old stuff again.

If you haven't seen it already: Omar's comments about the 1.6 drop, along with links to release notes and binaries/source, are on his blog.

One of the reasons why I run Windows Server 2003 on my notebook is that "Services without Components" (managed incarnation is System.EnterpriseServices.ServiceDomain) didn't work on XP. If you just touch the ServiceConfig or ServiceDomain classes on XP, you get rewarded with a PlatformNotSupportedException, because the unmanaged implementation of that feature was present, but not quite-as-perfect-as-it-should-be on XP. That will soon be history. Windows XP SP2 and the COM+ 1.5 Rollup Package 6 will fix that and will bring COM+ 1.5 pretty much on par with Windows Server 2003.

Ralf Westphal responded to this, and there are really just two sentences that I’d like to pick out from Ralf’s response, because that allows me to go quite a bit deeper into the data services idea and might help to further clarify what I understand as a service-oriented approach to data and resource management. Ralf says: There is no necessity to put data access into a service and deploy it pretty far away from its clients. Sometimes it might make sense, sometimes it doesn’t.

I like patterns that eliminate that sort of doubt and allow one to say “data services always make sense”.

Co-locating data acquisition and storage with business rules inside a service makes absolute sense if all accessed data can be assumed to be co-located on the same store and has similar characteristics with regard to the timely accuracy the data must have. In all other cases, it’s very beneficial to move data access into a separate, autonomous data service, and as I’ll explain here, the design can be made so flexible that the data service consumer won’t even notice radical architectural changes to how data is stored. I will show three quite large scenarios to help illustrate what I mean: a federated warehouse system, a partitioned customer data storage system and a master/replica catalog system.

The central question that I want to answer is: Why would you want to delegate data acquisition and storage to dedicated services? The short answer is: Because data doesn’t always live in a single place and not all data is alike.

Here’s the long answer:

The Warehouse

The Warehouse Inventory Service (WIS) holds data about all the goods/items that are stored in the warehouse. It’s a data service in the sense that it manages the records (quantity in stock, reorder levels, items on back order) for the individual goods and performs some simplistic accounting-like work to allocate pools of items to orders, but it doesn’t really contain any sophisticated business rules. The services implementing the supply order process and the order fulfillment process for customer orders implement such business rules – the warehouse service just keeps data records.

The public interface [“Explicit Boundary” SOA tenet] for this service is governed by one (or a set of) WSDL portType(s), which define(s) a set of actions and message exchanges that the service implements and understands [“Shared Contract” SOA tenet]. Complementary is a deployment-dependent policy definition for the service, which describes several assertions about the security and QoS requirements the service makes [“Policy” SOA tenet].

The WIS controls its own, isolated store over which it has exclusive control, and the only way that others can get at the content of that data store is through actions available on the public interface of the service [“Autonomy” SOA tenet].

Now let’s say the company running the system is a bit bigger, has a central website (of which replicas might be hosted in several locations) and has multiple warehouses from where items can be delivered. So now we are putting a total of four instances of WIS into our data centers at the warehouses in New Jersey, Houston, Atlanta and Seattle. The services need to live there, because only the people on site can effectively manage the “shelf/database relationship”. So how does that impact the order fulfillment system that used to talk to the “simple” WIS? It doesn’t, because we can build a dispatcher service implementing the very same portType that accepts order information, looks at the order’s shipping address and routes the allocation requests to the warehouse closest to the shipping destination. In fact, the formerly “dumb” WIS can now be outfitted with some more sophisticated rules that allow it to split or shift the allocation of items to orders across or between warehouses to limit freight cost or to ensure the earliest possible delivery in case the preferred warehouse is out of stock for a certain item. Still, from the perspective of the service consumer, the WIS implementation is just a data service. All that additional complexity is hidden in the underlying “service tree”.
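A rough sketch of the dispatcher idea (Python for brevity; the class and method names are illustrative, not a real WIS API): both the plain warehouse service and the dispatcher expose the same allocate operation, so a consumer cannot tell which one it is talking to:

```python
class WarehouseInventoryService:
    """A plain WIS: keeps the item records for one warehouse."""
    def __init__(self, location):
        self.location = location

    def allocate(self, order):
        # simplistic allocation; a real WIS would check and adjust
        # stock levels, back orders, reorder points, etc.
        return {"item": order["item"], "warehouse": self.location}

class DispatchingWIS:
    """Implements the very same contract, but routes each allocation
    request to the warehouse closest to the shipping destination."""
    def __init__(self, warehouses):
        self.warehouses = warehouses  # region -> WarehouseInventoryService

    def allocate(self, order):
        # a stand-in "closest warehouse" rule keyed on the order's region
        closest = self.warehouses[order["ship_to_region"]]
        return closest.allocate(order)
```

Because the dispatcher implements the same portType, swapping it in underneath the order fulfillment system is invisible to that system.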

While all the services implement the very same portType, their service policies may differ significantly. Authentication may require certificates for one warehouse and some other token for another warehouse. The connection to some warehouses might be made through a typically rock-solid, reliable direct leased line, while another is reached through a less-than-optimal Internet tunnel, which impacts the application-level demand for reliable messaging assurances. All these aspects are deployment specific and hence made an external, deployment-time choice. That’s why WS-Policy exists.

The Customer Data Storage

This scenario for the Customer Data Storage Service (CDS) starts as simply as the Warehouse Inventory scenario, with a single service. The design principles are the same.

Now let’s assume we’re running a quite sophisticated e-commerce site where customers can customize quite a few aspects of the site, can store and reload shopping carts, make personal annotations on items, and can review their own order history. Let’s also assume that we’re pretty aggressively tracking what they look at, what their search keywords are and also what items they put into any shopping cart, so that we can show them a very personalized selection of goods that precisely matches their interest profile. Let’s say that, all in all, we need about 2 MB of storage for the cumulative profile/tracking data of each customer. And we happen to have 2 million customers. Even in the gigabyte age, some 4 million MB (4 TB) is quite a bit of data payload to manage in a read/write access database that should be reasonably responsive.

So the solution is to partition the customer data across an array of smaller (cheaper!) machines, each of which holds a bucket of customer records. With that, we’re also eliminating the co-location assumption.

As in the warehouse case, we put a dispatcher service implementing the very same CDS portType on top of the partitioned data service array and thereby hide the storage strategy re-architecture from the service consumers entirely. With this application-level partitioning strategy (and a set of auxiliary services to manage partitions that I am not discussing here), we could scale this up to 2 billion customers and still have an appropriate architecture. Mind that we can have any number of dispatcher instances as long as they implement the same rules for how to route requests to partitions. Strategies for this are a direct partition reference in the customer identifier or a locator service sitting on a customer/machine lookup dictionary.

Now you might say, “my database engine does this for me”. Yes, so-called “shared-nothing” clustering techniques have existed at the database level for a while now, but the following addition to the scenario mandates putting more logic into the dispatching and allocation service than – for instance – SQL Server’s “distributed partitioned views” are ready to deal with.

What I am adding to the picture is the European Union’s Data Privacy Directive. Very much simplified: under the EU’s directives and regulations, it is illegal to permanently store personal data of EU citizens outside EU territory, unless the storage operator and the legislation governing the operator comply with the respective “Safe Harbor” regulations spelled out in these EU rules.

So let’s say we’re a tiny little bit evil and want to treat EU data according to EU rules, but be more “relaxed” about data privacy for the rest of the world. Hence, we permanently store all EU customer data in a data center near Dublin, Ireland, and the data for the rest of the world in a data center in Dallas, TX (not making any implications here).

In that case, we’re adding yet another service on top of the unaltered partitioning architecture that implements the same CDS contract and internally implements the appropriate data routing and service access rules. Those rules will most likely be based on some location code embedded in the customer identifier (“E1223344” vs. “U1223344”). Based on these rules, requests are dispatched to the right data center. To improve performance and avoid having data travel along the complete path repeatedly or in small chunks during an interactive session with the customer (while the customer is logged into the web site), the dispatcher service might choose to keep a temporary, non-permanent cache of customer data that is filled with a single request and allows quicker, repeated access to customer data. Changes to the customer’s data that result from the interactive session can later be replicated out to the remote permanent storage.
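A sketch of that routing-plus-cache layer (Python for brevity; the “E”/“U” prefix rule is from the scenario above, while the class and method names are illustrative):

```python
class RegionalStore:
    """Stand-in for the permanent store in one data center."""
    def __init__(self):
        self.data = {}

    def get_profile(self, customer_id):
        return self.data[customer_id]

    def update_profile(self, customer_id, profile):
        self.data[customer_id] = profile

class GeoDispatcher:
    """Same CDS contract again; routes on the location code leading the
    customer id ("E..." -> EU data center, "U..." -> US data center)
    and keeps a transient session cache so an interactive session
    doesn't repeatedly cross the long path to the permanent store."""
    def __init__(self, eu_service, us_service):
        self.backends = {"E": eu_service, "U": us_service}
        self.session_cache = {}

    def get_profile(self, customer_id):
        if customer_id not in self.session_cache:
            backend = self.backends[customer_id[0]]
            self.session_cache[customer_id] = backend.get_profile(customer_id)
        return self.session_cache[customer_id]

    def update_profile(self, customer_id, profile):
        self.session_cache[customer_id] = profile
        # replicate out to the permanent store in the right region
        # (done synchronously here; could be deferred in practice)
        self.backends[customer_id[0]].update_profile(customer_id, profile)
```
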

Again, the service consumer doesn’t really need to know about these massive architectural changes in the underlying data services tree. It only talks to a service that understands a well-known contract.

The Catalog System

Same picture to boot with and the same rules: here we have a simple service fronting a catalog database. If you have millions of catalog items with customer reviews, pictures, audio and/or video clips, you might choose to partition this just like we did with the customer data.

If you have different catalogs depending on the markets you are selling into (for instance, German-language books for Austria, Switzerland and Germany), you might want to partition by location just as in the warehouse scenario.

One thing that’s very special about catalog data is that very much of it rarely ever changes. Reviews are added, media might be added, but except for corrections, the title, author, ISBN and content summary for a book really don’t ever change as long as the book is kept in the catalog. Such data is essentially “insert once, change never”. It’s read-only for all practical purposes.

What’s wonderful about read-only data is that you can replicate it, cache it, move it close to the consumer and pre-index it. You’re expecting that a lot of people will search for items with “Christmas” in the item description come November? Instead of running a full-text search every time, run that query once, save the result set in an extra table and have the stored procedure running the “QueryByItemDescription” activity simply return the entire table if it sees that keyword. Read-only data is optimization heaven.
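The pre-computation trick, sketched in Python rather than as a stored procedure (class and method names are illustrative):

```python
class CatalogService:
    """Read-only catalog front: hot queries are run once and their
    result sets kept, analogous to saving them in an extra table."""
    def __init__(self, items):
        self.items = items            # list of {"title": ..., "description": ...}
        self._precomputed = {}

    def precompute(self, keyword):
        # run the expensive full-text search once, off-line
        self._precomputed[keyword] = [
            item for item in self.items if keyword in item["description"]]

    def query_by_item_description(self, keyword):
        if keyword in self._precomputed:
            # no search at request time; just hand back the saved set
            return self._precomputed[keyword]
        # fall back to the real search for cold keywords
        return [item for item in self.items if keyword in item["description"]]
```
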

Also, for catalog data, timeliness is not a great concern. If a customer review or a correction isn’t immediately reflected on the presentation surface, but only 30 minutes or 3 hours after it has been added to the master catalog, it doesn’t do any harm as long as the individual adding such information is sufficiently informed of such a delay.

So what we can do with the catalog is to periodically (every few hours or even just twice a week) consolidate, pre-index and then propagate the master catalog data to distributed read-only replicas. The data services fronting the replicas will satisfy all read operations from the local store and will delegate all write operations directly (pass-through) to the master catalog service. They might choose to update their local replica to reflect those changes immediately, but that would bypass editorial or validation rules that might be enforced by the master catalog service.

So there you have it. What I’ve described here is the net effect of sticking to SOA rules.

· Shared Contract: Any number of services can implement the same contract (although the concrete implementation, purpose and hence their type differ). Layering contract-compatible services with gradually increasing levels of abstraction and refining rules over existing services creates very clear and simple designs that help you scale and distribute data very well.

· Autonomy allows for data partitioning and data access optimization and avoids “tight coupling in the backend”.

· Policy: Separating out policy from the service/message contract allows flexible deployment of the compatible services across a variety of security and trust scenarios and also allows for dynamic adaptation to “far” or “near” communication paths by mandating certain QoS properties such as reliable messaging.

Service orientation is most useful if you don’t consider it as just another technique or tool, but embrace it as a paradigm. And very little of this thinking has to do with SOAP or XML. SOAP and XML are indeed just tools.

I didn't spend much time on anything except writing, coding, traveling, speaking and being at geek parties in the past weeks. Hence, I am sure I am the last one to notice, but I find it absolutely revolutionary that the Microsoft Visual C++ 2003 command-line compiler (Microsoft C/C++ Version 13.1) is now a freebie.

Rebecca Dias from Microsoft asked us to do a bit of work for her team and write a demo app for TechEd 2004. As things happen, and being the serious German engineers we are, it just turned out a little too serious and a little too big to be useful as a “and now here’s a bit of code!” demo app for TechEd (U.S.).

What we’ve built is a very serious service-oriented application, and your feedback will contribute to the final decision about how Microsoft is going to make the application and code available to you. What’s already clear is that I will do a TechEd Europe talk covering the most important architecture and technology choices made for the application. Unfortunately, the decision to have such a talk came too late to squeeze it into the TechEd U.S. agenda. Come to Amsterdam; TechEd Europe isn’t sold out yet.

Comment on Rebecca’s blog entry here and let her know whether you’d rather have little samples like Duwamish or a full-blown SOA system that you can stick your head into for a week.

[This might be more a “note to self” than anything else and might not be immediately clear. If this one goes over your head on the first pass – read it once more.]

Fellow German RD Ralf Westphal is figuring out layers and data access. The “onion” he has in a recent article on his blog resembles the notation that Steve Swartz and I introduced for the Scalable Applications Tour 2003. (See picture; get the full layers deck from Microsoft’s download site if you don’t have it already.)

What Ralf describes with his “high level resource access” public interface encapsulation is in fact a “data service” as per our definition. To boot, we consider literally every unit in a program (function, class, module, service, application, system) as having three layers: the outermost layer is the publicly accessible interface, the inner layer is the hidden internal implementation and the innermost layer hides and abstracts services and resource access. The concrete implementation of this model depends on the type of unit you are dealing with. A class has public methods as public interface, protected/private methods as internal implementation and uses “new” or a factory indirection to construct references to its resource providers. A SQL database has stored procedures and views as public interface, tables and indexes as internal implementation and the resource access is the database engine itself. It goes much further than that, but I don’t want to get carried away here.

A data service is a layered unit that specializes in acquiring, storing, caching or otherwise dealing with data as appropriate to a certain scope of data items. By autonomy rules, data services do not only hide the data access methods, but also any of these characteristics. The service consumer can walk up to a data service and make a call to acquire some data and it is the data service’s responsibility to decide how that task is best fulfilled. Data might be returned from a cache, aggregated from a set of downstream services or directly acquired from a resource. Delegating resource access to autonomous services instead of “just” encapsulating it with a layer abstraction allows for several implementations of the same data service contract. One of the alternate implementations might live close to the master data copy, another might be sitting on a replica with remote update capability and yet another one may implement data partitioning across a cluster of storage services. Which variant of such a choice of data services is used for a specific environment then becomes a matter of the deployment-time wiring of the system.
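The “several implementations of the same data service contract, chosen at deployment time” idea can be sketched like this (Python for brevity; the contract, classes and wiring function are all illustrative assumptions):

```python
from abc import ABC, abstractmethod

class CustomerDataContract(ABC):
    """The shared data-service contract that consumers program against."""
    @abstractmethod
    def get_customer(self, customer_id):
        ...

class MasterStoreService(CustomerDataContract):
    """Implementation living close to the master data copy."""
    def __init__(self, master):
        self.master = master

    def get_customer(self, customer_id):
        return self.master[customer_id]

class ReplicaService(CustomerDataContract):
    """Reads from a local replica; same contract, different innards."""
    def __init__(self, replica):
        self.replica = replica

    def get_customer(self, customer_id):
        return self.replica[customer_id]

def wire_data_service(deployment, master, replica):
    # the deployment-time wiring decides which implementation
    # consumers get; consumers only ever see the contract
    if deployment == "near-master":
        return MasterStoreService(master)
    return ReplicaService(replica)
```
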

Data services are the resource access layer of the “onion” model on the next higher level of abstraction. The public interface consists of presentation services (which render external data presentations of all sorts, not only human interaction), the internal implementation is made up of business services that implement the core of the application and the resource access are said data services. On the next higher level of abstraction, presentation services may very well play the role of data services to other services. And so it all repeats.

Now … Ralf says he thinks that the abstraction model works wonderfully for pulling chunks of data from underlying layers, but he’s very concerned about streaming data and large data sets – and uses reporting as a concrete example.

Now, I consider data consolidation (which reporting is) an inherent functionality of the data store technology and hence I am not at all agreeing with any part of the “read millions of records into Crystal Reports” story. A reporting rendering tool shall get pre-consolidated, pre-calculated data and turn that into a funky document; it should not consolidate data. Also, Ralf’s proposed delivery of data to a reporting engine in chunks doesn’t avoid that you’ll likely end up having to co-locate all received data into memory or onto disk to actually run the consolidation and reporting job --- in which case you end up where you started. But that’s not the point here.

Ralf says that for very large amounts of data or data streams, pull must change to push and the resource access layer must spoon-feed the business implementation (reporting service in his case) chunks of data at a time. Yes! Right on!

What Ralf leaves a bit in the fog is really how the reporting engine learns of a new reporting job, where and how results of the engine are being delivered and how he plans to deal with concurrency. Unfortunately, Ralf doesn’t mention context and how it is established and also doesn’t loop his solution back to the layering model he found. Also, the reporting service he’s describing doesn’t seem very flexible as it cannot perform autonomous data acquisition, but is absolutely dependent on being fed by the app – which might create an undesirable tightly coupled dependency between the feeder and a concrete report target.

The reporting service shall be autonomous and must be able to do its own data acquisition. It must be able to “pull” in the sense that it must be able to proactively initiate requests to data providers. At the same time, Ralf is right that the request result should be pushed to the reporting service, especially if the result set is very large.

Is that a contradiction? Does that require a different architecture? I’d say that we can’t allow very large data sets to break the fundamental layering model or that we should have to rethink the overall architectural structure in their presence. What’s needed is simply a message/call exchange pattern between the reporting service and the underlying data service that is not request/response but duplex, and which allows the callee to incrementally bubble up results to the caller. Duplex is the service-oriented equivalent of a callback interface, with the difference that it’s not based on a (marshaled) interface pointer but rather on a more abstract context or correlation identifier (which might or might not be a session cookie).

The requestor invokes the data service and provides a “reply-to” endpoint reference referencing itself (wsa:ReplyTo/wsa:Address), containing a correlation cookie identifying the originator context (wsa:ReplyTo/wsa:ReferenceProperties), and identifying an implemented port-type (wsa:ReplyTo/wsa:PortType) for which the data service knows how to feed back results. The port-type definition is essential, because the data service might know quite a few different port-types it can feed data to – in the given case it might be a port-type defined and exposed by the reporting service. [WS-Addressing is Total Goodness™] What’s noteworthy regarding the mapping of duplex communication to the presented layering model is that the request originates from within the resource access layer, but the results for the requests are always delivered at the public interface.

The second fundamental difference from callbacks is that the request and all replies are typically delivered as one-way messages and hence don’t block any resources (threads) on the respective caller’s end.

For chunked data delivery, the callee makes repeated calls/sends messages to the “reply-to” destination and sends an appropriate termination message or makes a termination call when the request has been fulfilled. For streaming data delivery, the callee opens a streaming channel to the “reply-to” destination (something like a plain socket, TCP/DIME or – in the future -- Indigo’s TCP streaming transport) and just pumps a very long, continuous message.

Bottom line: Sometimes pull is good, sometimes push is good and duplex fits it all back into a consistent model.

People often ask me what I did before Bart, Achim and I started newtelligence together with Jörg. So where do we come from? Typically, we have given somewhat “foggy” answers to those kinds of questions, but Achim and I talked about that yesterday and started to ask ourselves “why do we do that?”

In fact, Achim, Bart and I had been working together for a long time before we started newtelligence. We used to work for a banking software company called ABIT Software GmbH, which then merged with two other sibling companies by the same owners to form today’s ABIT AG. We’ve only reluctantly communicated that fact publicly, because the formation of our company didn’t really get much applause from our former employer – quite the contrary was true and hence we’ve been quite cautious.

For us it was always quite frustrating that ABIT was sitting on heaps of very cool technology that my colleagues and I developed over the years (including patented things) and never chose to capitalize on the technology itself. Here are some randomly selected milestones:

We had our own SOAP 0.9 stack running in 1999, which was part of a framework that had a working and fully transparent object-relational mapping system based on COM along with an abstract, XML-based UI description language (people call those things XUL or XAML nowadays).

In 1998 we forced (with some help from our customer’s wallet) IBM into a 6-month avalanche of weekly patches for the database engine and client software that turned SQL/400 (the SQL processor for DB/400 on AS/400) from a not-quite-perfect product into a rather acceptable SQL database.

In 1996 we fixed well over 500 bugs and added tons of features to Borland’s OWL for OS/2, which gave us what must have been a pretty unique framework setup in which cross-platform Windows 3.x, Windows NT and OS/2 development actually worked on top of that shared class library.

In 1994 we already had what could be considered a precursor to a service-oriented architecture, with collaborating, (mostly) autonomous services. The framework underlying that architecture had an ODBC C++ class library well over 6 months before Microsoft came out with their first MFC wrapper for it, and had an MVC model centered around the idea of “value-holders” that we borrowed from Smalltalk, which spoke, amongst other things, a text-validation protocol that allowed a single “TextBox” control to be bound against arbitrary value holders that would handle all the text-input syntax rules as per their data type (or subtype). All of this was fully based on the nascent COM model, which was then still buried in three documentation pages of OLE 2.0. I didn’t care much about linking and embedding (although I wrote my own in-place editing client from scratch), but I cared a lot about IUnknown as soon as I got my hands on it in late 1993. And all applications (and services) built on that framework supported thorough OLE Automation with Visual Basic 3.0, to a degree that you could fill out any form and press any button programmatically – functionality that was vital for the application’s workflow engine.

And of course, during all that time, we were actively involved in project and product development for huge financial applications with literally millions of lines of production code.

None of the technology work (except the final products) was ever shared or available to anyone for licensing. We were at a solutions company that supported great visions internally, but never figured out that the technology would be valuable by itself.

newtelligence AG exists because of that pain. Years back, we had already designed and implemented variations of many of the technologies that are now state of the art or (in the case of XAML) not even shipping yet. At the same time, we continue to develop our vision and that’s how we can stay on top of things. So it’s not that we aren’t learning like crazy and going through constant paradigm shifts – we’re lucky that we can accumulate knowledge on top of the vast experience that we have and adjust and modernize our thinking. However, what’s different now is that we can share the essence of what we figure out with the world. That’s a fantastic thing if you’ve spent most of your professional life “hidden and locked away”, unable to share things with peers.

So every time you’ll see a “Flashback” title here on my blog, I’ll dig into my recollection and try to remember some of the architectural thinking we had back in those times. We’ve made some horrible mistakes and had some exuberant and not necessarily wise ideas (such as the belief that persistent object-identity and object-relational mapping are good things); but we also had quite a few really bright innovative ideas. The things that really bring you forward are the grand successes and the most devastating defeats. We’ve got plenty of those under our belt and even though some of these insights date back 10 years, they are surprisingly current and the “lessons learned” very much apply to the current technology stacks and architectural patterns.

So – if you’ve ever thought that we’re “all theory” authors and “sample app” developers – nothing could be further from the truth. Also: Although I fill an “architecture consultant” role more than anything else now, I probably write more code on a monthly basis than some full-time application developers – what finally surfaces in talks and workshops is usually just the tip of that iceberg and often very much simplified to explain the essence of what we find.

At the SDC conference in Arnhem (NL), Chris Anderson entered the following task into his Pocket Outlook: "Turn Clemens from a server developer into a smart client developer within the next 15 months." We'll see how that goes. Chris: For that to happen, you'll have to give me something that I can seriously fall in love with.

It's interesting that I get far more than 10,000 unique page views on the site daily, along with a similar number of aggregator views, without even posting much. At least that was true for the last couple of months. Today is my "get back to blogging day". At the same time, the number of tracked direct referrals that I get when someone navigates to an entry via a link on another site is relatively low and accounts for less than 3% of the daily traffic.

I am sure I am the last to realize this phenomenon, but: I conclude that I must have a "root blog". That means that the overwhelming majority of readers don't find me via links; instead, I am on their daily reading list or in their RSS aggregator. I don't really get many on-topic inbound links, but I give links. Other great examples for "higher order" root blogs are those of Robert Scoble and Don Box, because once they link to me, the number of direct referrals rises significantly.

When I started blogging (when blogosphere was much smaller), I had a "leaf blog" that wouldn't get many reads except through other people's links. It's interesting to observe how those things change.

The two biggest conferences in Microsoft space (save PDC) are coming up and I am already looking forward to being in San Diego in two weeks and in Amsterdam four weeks later. Those two events are always very special because they are big, because they are really well organized, and because I get to meet and party with very many good friends whom I see regularly at some place somewhere on earth, but who are all together only once a year.

As much as I value the technical education aspect of events like that (yes, I do attend sessions, too), the primary reason for me to go to TechEd is to meet friends and make new friends. And the “networking” on the professional level that goes on at TechEd is very important as well: there’s nothing in this industry as valuable as learning from other people.

What I am also looking forward to is some time off when TechEd Amsterdam is over. By that time, I will have been to 25 countries since January of this year (several of them twice or even more often), and I would have to do some serious analysis of my calendar to assess how many events that was. My friend Lester Madden made the best comment on that sort of traveling lifestyle some time back in February. We boarded one of those planes together and he threw himself into the seat, grinning sarcastically: “Ah! Home, sweet home.”

So with the somewhat slow summer time ahead, I’d like to say “Thank you for all the beer”, because Microsoft (most, but not all, events were hosted by them) certainly knows how to throw great parties. So here are my “Feierabend Awards” for the first half of 2004 and before the “big two” events:

My “Winter/Spring 2004 Best Organized-After-Work-Activity Award” goes – hands down – to Microsoft Finland and their Architecture Bootcamp in Ruka, where we did a 25km snowmobile ride in beautiful northern Finland and afterwards had a very Finnish “now let’s get naked with all the customers and go to Sauna” experience. Runner-up is a great evening hosted by Microsoft Turkey at Galata Tower in Istanbul. The restaurant up there is an absolute tourist trap, but we had a fun night and the views from up there can’t be beat.

My “Winter/Spring 2004 Best Beer Award” must of course go to Dublin. Not much (except our local beer in and around Düsseldorf) beats a fresh Guinness. Along with that goes the sub-award for “most inappropriate workplace discussion”, about how cleavage (Def. 6) is most effectively used in business.

The “Winter/Spring 2004 Best Restaurant Award” goes to the Vilamoura Restaurant (Portuguese) at the Intercontinental Hotel in Sandton/Johannesburg for absolutely awesome shellfish. Runner-up is another Portuguese restaurant: the Doca Peixe in Lisbon/Portugal. The special Best Homefood Award goes to Malek’s mother. The “Winter/Spring 2004 Best Nightclub Award” goes to the Amstrong (sic!) Jazz Club (which it really isn’t) in Casablanca, Morocco.

The “Winter/Spring 2004 Gorgeous Event Hostesses Recruiting Award” (sorry, but while that’s not strictly “after work”, that’s a category that I can’t leave out) has to be evenly split between four winners: Morocco’s North Africa Developer Conference 2004 (just ask Mr. Forte), Slovenija’s NT Konferenca 2004 (reliable winner each year), the Longhorn Developer Preview event in Budapest/Hungary and the MS EMEA Architect Forum event in Milan, Italy. Israel already won the best party event and that should speak pretty much for itself. Therefore they’re runner-up in this category.

The “Winter/Spring 2004 Best Travel Buddy Award” goes to Arvindra Sehmi for the EMEA Architect Tour, and Lester Madden, Nigel Watling, Hans Verbeeck, and David Chappell for the Longhorn Developer Preview Tour.

Finally, the “Winter/Spring 2004 Best Host Award” goes to my great friend Malek Kemmou from Morocco, whose house became “Speaker’s HQ” before, during and after the NDC conference and who took us all around the country to experience Morocco – and refused to let any of us pay for anything.

I talked about transactions at several events in the last few weeks, and the sample that I use to illustrate that transactions are more than just a database technology is the little tile puzzle that I wrote a while back. For those interested who can't find it, here's the link again. The WorkSet class that is included in the puzzle is a fully functional, lightweight, in-memory two-phase-commit transaction manager that's free for you to use.
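The idea behind such an in-memory transaction manager can be sketched in a few lines. This is not the WorkSet code itself, just a hypothetical, minimal illustration of the two-phase-commit protocol it implements; the interface and class names are made up:

```csharp
using System;
using System.Collections.Generic;

// A participant votes in phase one and is told the outcome in phase two.
public interface IParticipant
{
    bool Prepare();   // phase 1: vote "yes" (ready to commit) or "no"
    void Commit();    // phase 2: make the work visible
    void Rollback();  // phase 2: undo the work
}

public class MiniTransaction
{
    private readonly List<IParticipant> participants = new List<IParticipant>();

    public void Enlist(IParticipant p) { participants.Add(p); }

    // Returns true if the transaction committed, false if it rolled back.
    public bool Complete()
    {
        // Phase 1: ask every participant to prepare; a single "no" aborts all.
        foreach (IParticipant p in participants)
        {
            if (!p.Prepare())
            {
                foreach (IParticipant q in participants) q.Rollback();
                return false;
            }
        }
        // Phase 2: everyone voted "yes", so commit everywhere.
        foreach (IParticipant p in participants) p.Commit();
        return true;
    }
}
```

The point of the puzzle demo is exactly this: nothing in the protocol requires a database. Any piece of work that can promise "I can commit" in phase one can be a participant, including a tile move in a GUI.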

Sometimes you’re trying to fix a problem for ages (months in our case) and while the solution is really simple, you only find it by complete accident and while looking for something completely different.

(And yes, I do think that we need to finally get a network admin to take care of those things)

For several months, our Exchange server “randomly” refused to communicate with several of our partners’ mail servers. Several of our partners were unable to send us email – their messages would always bounce – although we could communicate wonderfully with the rest of the world. What was stunning is that there wasn’t any apparent commonality between the denied senders, and the problem came and went: sometimes it would work and sometimes it wouldn’t.

First we thought that something was broken about our DNS entries and specifically about our MX record and how it was mapped to the actual server host record. So we reconfigured that – to no avail. Then we thought it’d be some problem with the SMTP filters in the firewall and spent time analyzing that. When that didn’t go anywhere, we suspected something was fishy about the network routing – it wasn’t any of that either. I literally spent hours looking at network traces trying to figure out what the problem was – nothing.

Yesterday, while looking for something totally different, I found the issue. Some time ago, during one of the email worm floods, we put in an explicit “deny” access control entry into the SMTP service for one Korean and one Japanese SMTP server that were sending us hundreds of messages per minute. The error that we made was to deny access by the server DNS name and not by their concrete IP address.

What happened was that because of this setting our SMTP server would turn around and try to resolve every sender’s IP address back to a host name to perform that check and that’s independent of the “Perform reverse DNS lookup on incoming messages” setting in the “Delivery”/“Advanced Delivery” dialog. It would then simply deny access to all those servers for which it could not find a host name by reverse lookup. I removed those two entries and now it all works again.
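The failure mode is easy to simulate. This is a hypothetical sketch, not Exchange’s actual code: when the deny list holds host names, every check forces a reverse lookup, and a sender whose IP address has no reverse entry gets bounced even though it was never on the list.

```csharp
using System;
using System.Collections.Generic;

public class SmtpAccessCheck
{
    private readonly List<string> deniedHosts;
    // Reverse-lookup function injected so the behavior can be simulated;
    // returns null when the IP address has no reverse DNS entry.
    private readonly Func<string, string> reverseLookup;

    public SmtpAccessCheck(List<string> deniedHosts, Func<string, string> reverseLookup)
    {
        this.deniedHosts = deniedHosts;
        this.reverseLookup = reverseLookup;
    }

    // Mimics deny-by-host-name: checking the list requires resolving the
    // sender first, and an unresolvable sender is rejected outright.
    public bool IsDenied(string senderIp)
    {
        string host = reverseLookup(senderIp);
        if (host == null) return true;        // no reverse entry -> bounced!
        return deniedHosts.Contains(host);
    }
}
```

A deny list keyed by IP address would never need the lookup at all – which is exactly the fix described above.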

Of course, the error wasn’t really ours, but the problem was. What’s broken is that the whole reverse DNS lookup story seems to be (is) really hard to set up, and quite a few mail servers simply don’t reverse-resolve into any host name. DNS sucks.

On one of those flights last week I read a short article about Enterprise Services in a developer magazine (name withheld to protect the innocent). The “teaser” introduction of the article said: “Enterprise Services serviced components are scalable, because they are stateless.” That statement is of course an echo of the same claim found in many other articles about the topic and also in many write-ups about Web services. So, is it true? Unfortunately, it’s not.

public class AmIStateless
{
    public int MyMethod()
    {
        // do some work
        return 0;
    }
}

“Stateless” in the sense that it is being used in that article and many others describes the static structure of a class. Unfortunately, that does not help us much to figure out how well instances of that class help us to scale by limiting the amount of server resources they consume. More precisely: if you look at a component and find that it doesn’t have any fields to hold data across calls (see the code snippet) and furthermore does not hold any data across calls in some secondary store (such as a “session object”), the component can be thought of as being stateless with regard to its callers – but what about its relationship with the components and services that are called from it?

But before I continue: Why do we say that “stateless” scales well?

A component (or service) that does not hold any state across invocations has many benefits with regards to scalability. First, it lends itself well to load balancing. If you run the same component/service on a cluster of several machines and the client needs to make several calls, the client can walk up to any of the cluster machines in any order. That way, you can add any number of machines to the cluster and scale linearly. Second, components that don’t hold state across invocations can be discarded by the server at will or can be pooled and reused for any other client. This saves activation (construction) cost if you choose to pool and limits the amount of resources (memory, etc.) that instances consume on the server-end if you choose to discard components after each call. Pooling saves CPU cycles. Discarding saves memory and other resources. Both choices allow the server to serve more clients.
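The pooling half of that trade-off can be sketched generically. This is a hypothetical minimal pool, not the Enterprise Services implementation; the class name and counter are made up for illustration:

```csharp
using System;
using System.Collections.Generic;

// A minimal object pool: construction cost is paid once per pooled
// instance instead of once per call.
public class SimplePool<T> where T : new()
{
    private readonly Stack<T> free = new Stack<T>();

    // How many instances were actually constructed (the "activation cost").
    public int ConstructedCount { get; private set; }

    public T Acquire()
    {
        if (free.Count > 0) return free.Pop();  // reuse an idle instance
        ConstructedCount++;                     // construction cost paid here only
        return new T();
    }

    public void Release(T instance)
    {
        free.Push(instance);                    // hand it to the next caller
    }
}
```

Ten sequential calls through such a pool construct the worker exactly once; discarding after each call would construct ten times but hold no memory between calls – the CPU-versus-memory trade mentioned above.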

However, looking at the “edge” of a service isn’t enough, and that’s where the problem lies.

The AmIStateless service that I am illustrating here does not stand alone. And even though it doesn’t keep any instance state in fields, as you can see from the code snippet, it is absolutely not stateless. In fact, it may be a horrible resource hog. When the client makes a call to a method of the service (or otherwise sends a message to it), the service does its work by employing the components X and Y. Y in turn delegates work to an external service named ILiveElsewhere. All of a sudden, the oh-so-stateless AmIStateless service might turn into a significant resource consumer and limit scalability.

First observation: While no state is held in fields, the service does hold state on the stack while it runs. All local variables that are kept on the call stack in the invoked service method, in X, and in Y are consuming resources, and depending on what you do, that may not be little. Also, that memory will remain consumed until the next garbage collector run.

Second observation: If any of the secondary components takes a long time for processing (especially ILiveElsewhere), the service consumes and blocks a thread for a long time. Depending on how you invoke ILiveElsewhere, you might indeed consume more than just the thread you run on.

Third observation: If AmIStateless is the root of a transaction, you consume significant resources (locks) in all backend resource managers until the transaction completes – which may be much later than when the call returns. If you happen to run into an unfortunate situation, the transaction may take a significant time (minutes) to resolve.
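The first two observations fit into one snippet. This is a hypothetical variant of the class above; the buffer size and the sleep stand in for real working data and a slow downstream call:

```csharp
using System;
using System.Threading;

public class AmIReallyStateless
{
    // No fields at all: statically "stateless". But while a call is in flight...
    public int MyMethod()
    {
        // ...this local working set is held for the whole duration of the
        // call (observation 1: state lives on the stack and heap while running)
        byte[] workingSet = new byte[4 * 1024 * 1024];
        workingSet[0] = 1;

        // ...and a slow downstream dependency blocks this thread the whole
        // time (observation 2: a long-running ILiveElsewhere-style call).
        SimulateCallToILiveElsewhere();

        return workingSet[0];
    }

    private void SimulateCallToILiveElsewhere()
    {
        Thread.Sleep(100); // stands in for a long-running external service
    }
}
```

Run a few hundred of these concurrently and the "stateless" server is suddenly holding gigabytes of working sets and hundreds of blocked threads – none of which is visible in the class declaration.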

Conclusion: Since the whole purpose of what we usually do is data processing and we need to pass that data on between components, nothing is ever stateless while it runs. “Stateless” is a purely static view on code and only describes the immediate relationship between one provider and one consumer with regards to how much information is kept across calls. “Stateless” says nothing about what happens during a call.

Consequence: The scalability recipe isn’t to try achieving static statelessness by avoiding holding state across calls. Using this as a pattern certainly helps the naïve, but the actual goal is rather to keep sessions (interaction sequence duration) as short as possible and therefore limit the resource consumption of a single activity. A component that holds state across calls but for which the call sequence takes only a very short time, or which does not block a lot of resources during the sequence, may turn out to aid scalability much more than a component that seems “stateless” when you look at it, but which takes a long time for processing or consumes a lot of resources while processing the call. One way to get there is to avoid accumulating state on call stacks. How? Stay tuned.

I am slowly getting out of a very, very long period of "working too much". In the last 3 1/2 weeks I worked pretty much 18 hours every day in order to get a fairly large service-oriented application done (sharing the workload with my newtelligence partner Achim Oellers). The stats: 13 services, about 20 portTypes, 1.6 MB of C# code, 10 SQL Server databases (autonomy!), countless stored procedures. We have duplex (one-way with reply path), simplex (one-way) and request/response communication paths; use Object Pooling, Just-In-Time Activation, Role-Based Security, Compensating Resource Managers, Process Initialization, Automatic Transactions, Service Domains, Run-As-Service, and Loosely Coupled Events from Enterprise Services; use several features from ASP.NET Web Services; use quite a bit of the Web Services Enhancements tools; have full instrumentation with Eventlog support and Performance Counters; have deployment tools that create domain accounts, elevate their privileges and configure all the security settings to run a service in "locked down" mode; and use SQL Server Replication. The core services were supposed to ship yesterday and we made that date.

Now I need to work on the backlog. I am late on delivering some PowerPoint decks. I have a 12 hour travel day today. That means writing PPTs on the plane.