Dennis van der Stelt
http://dennis.bloggingabout.net
"You can learn great things from your mistakes when you aren't busy denying them."

SDN Presentations on batch jobs and microservices
Fri, 02 Sep 2016 19:37:36 +0000
http://dennis.bloggingabout.net/2016/09/02/presentations-batchjobs-microservices/

Today I gave two presentations at an SDN Event in Zeist. An engineer (not the software kind) made a mistake somewhere, and five minutes before the end of my first session on batch jobs and NServiceBus sagas, the alarm in the building went off and everyone was directed outside. So I could not finish my presentation, even though I only needed a few more minutes. If you were there and want to know more about replacing batch jobs with sagas, make sure to read this blog post about it.

Death of the batch job – NServiceBus Sagas
Thu, 17 Mar 2016 08:57:09 +0000
http://dennis.bloggingabout.net/2016/03/17/nservicebus-sagas-presentation/

Yesterday I presented on NServiceBus sagas in a presentation for Blaak Selectie and Betabit: how they can help you build your (long-running) business processes better, and how to get rid of batch jobs as well.

#NoDeadlines
Mon, 14 Mar 2016 16:05:40 +0000
http://dennis.bloggingabout.net/2016/03/14/no-deadlines/

Delivering an immutable set of features, within a fixed deadline, with a fixed number of developers: that was not unusual for the projects I worked on for a long period of my career. For quite some time I wasn't even aware that it could be done differently, until I learned about the trade-off triangle, or trade-off matrix. Read on to see how you can work without deadlines and focus on quality instead.

Back in the day

It was even some sort of game back then. My team and I would estimate a project at around 100 hours; in the proposal for the customer the hours were cut down to 75, if we were lucky. So the next time we'd estimate an additional 20%, because we were quick learners. But so were the managers, who now cut our estimates by 30%. Until we came to an unwritten and unspoken agreement, in which we'd pad our estimate to 140 hours and they'd cut it in half, so the customer would be offered 70 hours for what we believed was 100 hours of work. For some reason this seemed like a proper negotiation between developers and managers, whereas you can clearly see it's a total loss for development. And as always, it was quality that suffered most.

There are probably many to blame for this type of software development. Our project managers, because they force us to drop quality and work overtime over and over again. Our manager or boss, because he allows this behavior. And we ourselves, simply because we accept it. Maybe we struggled for better planning and better quality software; we tried to convince project managers to reserve proper time for unit testing. Eventually we start to believe we can never win the fight, and we simply accept it as fact, or quit our job to find an employer that actually values its developers and its own products under development. Either way, we took the easy way out.

But is there a simple answer to the problem? Can we change the way people think about software development? Maybe not. I do believe, however, that we can create awareness. And awareness might lead to the light side, where there is an understanding of how schedule, resources and features are bound together. To help create that awareness, I find the trade-off matrix invaluable.

Trade-off matrix

The first time I came into contact with the trade-off matrix was while learning about the Microsoft Solutions Framework. It was presented as an actual matrix, where the team and the customer would have to agree on the default priorities for making trade-off decisions early in the project.

In projects there is a well-known relationship between the project variables of resources, schedule and features, also known as money & people, time and scope. The customer is allowed to put one checkmark in every row, but is not allowed to put two checkmarks in the same column. Many times I've seen the first reaction be to put everything on fixed. After explaining the rules again and trying a second time, it suddenly becomes very hard to make a good decision.

Sometimes the decision is obvious. Most of us will remember the millennium bug, where a lot of computers would crash and civilization would end in global chaos and panic, unless companies paid IT consultants a lot of money to fix the problem. On those projects, schedule would probably be fixed: there's no way to change the moment in time when the next millennium starts. We must then choose to put either resources or features on optimize, meaning we do everything in our power to keep that one under control. The remaining one goes on accept, which basically means we accept whatever happens to it. Not without thought, but when we need to choose, the matrix guides our decisions.

With security devices used to log in to systems, buildings, etc., the features will most likely be fixed, as we can't compromise on security. The trade-off matrix typically used by Microsoft product teams looks different again: teams consist of a fixed set of people, and deadlines do exist but are a bit flexible. When features aren't finished on time, however, they might very well never make it into the release. Remember WinFS, the transactional file system? This very matrix is what caused a lot of grief for developers and customers, who saw promising features being dropped just before the release of a new version of Windows. Perhaps now we can better understand the reasoning behind it.

Instead of a matrix you can also use a triangle; on the internet there's much more information to be found on the trade-off triangle. The idea is the same: there are three interconnected elements, and constraining or enhancing one of them requires trade-offs on the others. Most of the time I use the triangle in communication with developers and the matrix in communication with managers. The matrix is stricter and gives less room for discussion. And there will be discussions: I can't count the number of times managers wanted to add a fourth element, quality. Which is ridiculous, because you should never compromise on quality. And that's why Scrum is so great, because with Scrum we get autonomous teams who never compromise on quality. Right?

Working without deadlines

Then why do so many of our projects lack proper test coverage? Or, much worse, overflow with spaghetti code, simply because we didn't have the time to refactor? Even with Scrum, developers will occasionally suggest rewriting a certain part of the system, because a rewrite will solve everything: bugs will be gone, spaghetti code will vanish and it will be cheap to add new features again. Which is never true. Ever. Even worse, from the business perspective you failed at your job. Why is there spaghetti code? Why did you not write clean code? The business blames developers for this and can't understand why they're demanding rewrites of code they wrote themselves.

The root cause of this is deadlines. Everything we do is deadline driven. Deadlines might slide, but rarely to achieve a higher level of quality in our code. We might think quality is a vital but hidden part of the trade-off matrix, but it's not. Worse yet, the business has a huge influence on quality without it being visible; they make trade-offs on quality whenever they juggle the other elements.

So imagine a world where deadlines don't exist and we are no longer bound by them, but focus on quality instead. Instead of deadline driven, we're quality driven, and quality is the highest goal for the product, its features and the way it is built. We live in an IT world where this is almost unthinkable. But it isn't; it is, however, a decision a company has to make. Do you want cheap products that break easily, or do you want to build the highest quality products? We all know companies that take the latter approach and deliver the highest quality, with the added benefit that everyone knows them for it. They have a name to uphold. Apple, Tesla, Toyota, Cisco, Gillette, Coca-Cola, Lego: they don't compromise on quality. And we love them! So why can't our software be loved?

When quality driven, we define what we do by looking at scope instead of deadlines. Of course we want to deliver a new product or system as fast as possible. So what do we build first that provides the most value? Our product backlog is likely full of features, so with the current resources assigned, what should we pick up first to make our customers happy? And when something new comes up, we should ask ourselves if this is really what we need right now. Is it the most important feature for the next release? How many customers will benefit from it? Semantic Versioning is a really good push-back mechanism here: if a feature would single-handedly bump a major or minor version, we really need to consider whether it belongs in the version we're currently building.

And with every feature we deliver or issue we fix, we focus on quality. That way we won't end up with a product, a system or code that we don't fully support and aren't proud of. Because lack of quality in our product is a disease that will eventually hurt us. Hard. It's the technical debt so many of us speak about, but all have to live with. And if developers recommend rebuilding a component from scratch, you know you've seriously compromised on quality.

NDC London 2016 Distributed System Principles
Wed, 02 Mar 2016 10:07:09 +0000
http://dennis.bloggingabout.net/2016/03/02/ndc-london-2016-distributed-system-principles/

After my NDC London session on distributed system principles, an attendee asked: "In the eight fallacies, there's one that says the topology never changes. Isn't that covered by the other fallacies? Because we kind of try to use the network in a way that usually abstracts us from the topology. So we don't even know what the topology is in the first place. We usually don't care about the topology of the network."

I didn't fully understand the question, so I deferred it to after the session, to avoid a boring back and forth and to be able to understand it better. During the session I did reply that with current cloud options, topology changes might become less of an issue. I want to get back to the initial question and the comment I gave. After the session I had the impression I had answered the question to his satisfaction. However…

The 8th fallacy in the cloud

Looking at the question again, I still don't fully understand its intention. How can the other fallacies 'cover' for the eighth fallacy, on topology? The other remark, that 'the network is abstracted away', I don't fully grasp either. The cloud abstracts certain parts of the network away, but the same can be said about on-premise networks, or even about things like ADO.NET, where I can execute queries against SQL Server without ever knowing how the SqlConnection was set up, let alone how my application can magically connect to some SQL Server, tell it to execute a task and receive a response containing rows of data.
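To illustrate the point (this is a sketch; the connection string and the Products table are made up for the example), plain ADO.NET code reveals nothing about the network path between the application and the server:

```csharp
using System;
using System.Data.SqlClient;

class Demo
{
    static void Main()
    {
        // Where is this server? One hop away? Behind three firewalls?
        // The code neither knows nor cares; the topology is invisible.
        var connectionString = "Server=.;Database=Shop;Integrated Security=true";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Id, Name FROM Products", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0} {1}", reader.GetGuid(0), reader.GetString(1));
            }
        }
    }
}
```

And exactly because the topology is invisible here, nothing in this code protects you when it does change.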

But abstracted away doesn't mean the topology never changes. To put it bluntly, your virtual machine might be taken down from under you without you knowing or even noticing. Hardware can fail, your virtual machine might be on shared hardware and need to move, or it may be decided that it needs to be closer to your other services. Your cloud provider can move virtual machines around at its own discretion. The topology also changes when you change transports, enhance security, make routing changes, and so on. Microsoft Azure even has some of its services marked 'classic', which makes you wonder what the differences are between the 'classic' and newer versions. And when they abandon the 'classic' version, will your system be deployable again without any change?

Believing there's just one administrator is another fallacy. The other administrator can make some smart changes to the firewall without anyone else ever knowing, with your system no longer working correctly as a result. I've stated before that the worst coupling is hidden in your database; the same applies to the settings of your topology. There might be some setting specifically for your system that gets changed because nobody remembers the exact reason it's there. These things result in hard-to-find issues.

Some examples

In the past I worked at a company that had a system with a lot of technical debt. Several developers were working on fixing performance issues and whatnot, until one day a large number of boxes was delivered to the front door by a hardware vendor. As most developers there were geeks who love good hardware, we were curious what it was for. The operations team told us they had decided to build a new infrastructure from scratch. I can go on forever about their lack of communication with the development team, and about how rebuilding anything rarely helps, but that's not the point: they changed almost every aspect of the topology without us knowing. One of the funnier incidents was a request to add a second connection string to our applications, so that one connection could read and the other could write to the database. Obviously, adding a random string to the configuration isn't going to change the behavior of the system. Even more surprising was that the DBA was unaware of the transactional and concurrency issues we would run into.

A more subtle example: the operations team changed the firewall, and suddenly various HTTP headers that the development team heavily relied on were missing.

In my #NoDeadlines article, I explained how Microsoft worked with the trade-off matrix in the past. They changed this strategy with Azure: suddenly Microsoft did not announce features in advance, they just delivered them and then announced they were available. This is great for people who like experimenting with what's new, but it also has an impact on your system. New features can be released that you wish you'd had access to when you started building your system. But the rapid change in Azure can also mean changes to APIs you rely on.

Conclusion

Microsoft Azure and other cloud providers make it easier to develop highly available and scalable applications. They abstract away a lot of technical details we're not interested in dealing with, especially with the services they offer. Still, the fallacies of distributed computing apply, and they are maybe even more important than before.

NServiceBus Publish Subscribe tutorial
Wed, 28 Oct 2015 14:26:46 +0000
http://dennis.bloggingabout.net/2015/10/28/nservicebus-publish-subscribe-tutorial/

In this tutorial I'll try to explain how publish/subscribe (pub/sub) works and how to set it up using NServiceBus.

Publish Subscribe pattern

The pattern itself is, I think, easiest to understand by looking at the button click event in Windows Forms. The button was developed in the .NET Framework and has, among others, a Click event. Usually you'll have a single method subscribed to the Click event. But it's possible to have multiple methods subscribe to the same event, or one method subscribed to the Click events of multiple buttons, or any variation of this.

It's the same with messaging. You can have some code publish an event without knowing who the subscribers are. Because an event describes something that already happened, it's always named in the past tense: when you cancel an order, you publish an OrderCancelled event.

With NServiceBus you can subscribe to these events, and the subscription will be stored by NServiceBus. Subscriptions can be stored as MSMQ messages, but SQL Server or some other data storage mechanism is usually the more logical choice. For transports that support publish/subscribe natively, no persistence is required at all. Read more about this here.

Tutorial

So enough with the theoretical side of things; there are already way too many resources that describe it. Let's get to the code. Here's what we're going to do:

- Create a class library that will contain our messages
- Focus on the pub/sub side first and create the publisher:
  - Create a class library
  - Add the NServiceBus NuGet packages
  - Configure the service
  - Write a handler that will publish the event
- Add the subscriber:
  - Add another class library
  - Add the NServiceBus NuGet packages
  - Configure the service
  - Write the handler that will subscribe to the event
  - Configure NServiceBus to send the subscription
- Add a console application that will initiate the message calls

Create the messages

First we'll create the solution and the class library that will contain the messages. The messages need to go into a separate assembly because they will be shared among different projects. So in Visual Studio, create a new Class Library project. We'll call the project Messages and name the solution PubSubDemo.

Visual Studio automatically adds Class1, but we need two different classes: CancelOrderCommand and OrderCancelledEvent. Both have just one property, of type Guid, named Id. Here's the code for these.
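The original listing didn't survive in this copy of the post; based on the description above (two classes, each with a single Guid property named Id, living in the Messages namespace so the naming conventions pick them up), a minimal sketch is:

```csharp
using System;

namespace Messages
{
    // Command asking the publisher to cancel an order.
    public class CancelOrderCommand
    {
        public Guid Id { get; set; }
    }

    // Event published after the order was cancelled; past tense,
    // because it describes something that already happened.
    public class OrderCancelledEvent
    {
        public Guid Id { get; set; }
    }
}
```

Note that these are plain classes; the conventions configured later tell NServiceBus which types are commands and which are events.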

Create the publisher

Now we’ll create the project that actually publishes the messages as this is the focus of this article. Add another Class Library to your solution with the name Publisher. Now you should have two projects in your solution.

We'll add the NServiceBus.Host NuGet package. I always use the Package Manager Console (View -> Other Windows -> Package Manager Console) instead of the NuGet GUI, but that's just my thing. This article was written against NServiceBus 5.2.9 with NServiceBus.Host 6.0, but it should work with older and newer 5.x versions as well. To be 100% sure this works, use the following commands in the Package Manager Console. When NServiceBus 6.0 is released, significant changes to the API will be made and these demos won't work out of the box anymore; it should be easy to upgrade, however.
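The exact commands were lost from this copy of the post; pinning the versions mentioned above would look roughly like this (the exact patch numbers are an assumption based on the text):

```powershell
Install-Package NServiceBus -Version 5.2.9 -ProjectName Publisher
Install-Package NServiceBus.Host -Version 6.0.0 -ProjectName Publisher
```

Installing NServiceBus.Host alone would normally pull in NServiceBus as a dependency; pinning both just makes the versions explicit.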

A couple of things happened: the packages were installed, references were added, and an additional file called EndpointConfig.cs was added to your project. We'll look at that file later. First, right-click the Publisher project and take a look at its properties. Under "Debug" you'll find that "Start external program" is set to NServiceBus.Host.exe. This executable will host your service: during debugging the host is executed, instead of your Class Library, which cannot be executed at all. In production it should be properly installed and run as a Windows Service. What the host does is scan class libraries (.dll files) for implementations of certain interfaces, for example the IConfigureThisEndpoint interface in our EndpointConfig.cs file.
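The listing for EndpointConfig.cs is missing from this copy of the post; against the NServiceBus 5 API it might look roughly like this sketch, with in-memory persistence (as used later in this demo) and the message-naming conventions described below:

```csharp
using NServiceBus;

namespace Publisher
{
    // Found by NServiceBus.Host when it scans the assembly.
    public class EndpointConfig : IConfigureThisEndpoint, AsA_Publisher
    {
        public void Customize(BusConfiguration configuration)
        {
            // Subscriptions are kept in memory for this demo; they are
            // lost when the host restarts.
            configuration.UsePersistence<InMemoryPersistence>();

            // Conventions: messages live in the "Messages" namespace,
            // commands end with "Command" and events end with "Event".
            configuration.Conventions()
                .DefiningCommandsAs(t => t.Namespace == "Messages" && t.Name.EndsWith("Command"))
                .DefiningEventsAs(t => t.Namespace == "Messages" && t.Name.EndsWith("Event"));
        }
    }
}
```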

Remember our messages in the Class Library called "Messages"? The convention specifies that all messages are in this namespace, that types for commands end with "Command" and that events end with "Event". When we've configured that, let's have a look at the App.config. There's a lot of green (comments), but if we remove it, we end up with this.
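The stripped-down App.config didn't survive here either; for an NServiceBus 5 host, the minimal publisher configuration is roughly this sketch:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="MessageForwardingInCaseOfFaultConfig"
             type="NServiceBus.Config.MessageForwardingInCaseOfFaultConfig, NServiceBus.Core" />
  </configSections>
  <!-- Messages that fail repeatedly end up in the error queue. -->
  <MessageForwardingInCaseOfFaultConfig ErrorQueue="error" />
</configuration>
```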

For now this is enough; we'll get back to it later. Next we'll turn the file Class1.cs into a message handler. If you haven't done so, add a reference to the Messages project, then make sure Class1.cs looks like this. And don't forget to rename the file to CancelOrderHandler.cs, to avoid confusing names.

We've created a message handler for the CancelOrderCommand. This is standard NServiceBus and you can read about it here. Now what we want to do is publish a message. To do this we need a reference to the bus, which we'll have injected via the constructor. Then we can publish a new event, based on the incoming message. This makes our class look like this.
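The handler listing is missing from this copy of the post; following the description above (NServiceBus 5 API, bus injected via the constructor, event published from the incoming command), a sketch would be:

```csharp
using NServiceBus;
using Messages;

namespace Publisher
{
    public class CancelOrderHandler : IHandleMessages<CancelOrderCommand>
    {
        private readonly IBus bus;

        // NServiceBus injects the bus via the constructor.
        public CancelOrderHandler(IBus bus)
        {
            this.bus = bus;
        }

        public void Handle(CancelOrderCommand message)
        {
            // Publish the fact that the order was cancelled,
            // carrying over the id from the incoming command.
            bus.Publish<OrderCancelledEvent>(e => e.Id = message.Id);
        }
    }
}
```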

I've omitted everything like logging for clarity; the solution on GitHub has a little bit more. This handler couldn't be simpler. We are now done with this project, and we'll need to add two more: the subscriber, and a console application that initiates the process by sending the initial CancelOrderCommand over the bus. Later on we'll also make this project a subscriber of its own events.

Create the subscriber

Add another Class Library called "Subscriber" and run the two NuGet package commands again, now for this project.

Now the important part is the configuration file. Here we specify that there is an endpoint with messages we're interested in. In the following configuration we subscribe to everything, but we could of course be more specific about what we're interested in; we'll keep it as simple as possible in this demo, so it always works. Line 12 was added compared to the configuration of the publisher. With that line we tell the endpoint called "Publisher" that we're interested in the events being published that are in the assembly called "Messages". When we've done this, we're also done with this project, so let's speed up and create our console application.
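The configuration listing itself is missing from this copy of the post; the essential part is the MessageEndpointMappings section, which for an NServiceBus 5 endpoint looks roughly like this sketch:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="UnicastBusConfig"
             type="NServiceBus.Config.UnicastBusConfig, NServiceBus.Core" />
  </configSections>
  <UnicastBusConfig>
    <MessageEndpointMappings>
      <!-- Subscribe at the "Publisher" endpoint to all messages
           in the "Messages" assembly. -->
      <add Assembly="Messages" Endpoint="Publisher" />
    </MessageEndpointMappings>
  </UnicastBusConfig>
</configuration>
```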

Create the console application

We could immediately publish a message from the publisher using the IWantToRunWhenBusStartsAndStops interface. But since we're using in-memory persistence for our subscriptions, this could turn out ugly, because the subscription isn't registered yet when we start sending the message. It gets too complicated to make that work, so we'll use a separate console application to send the first command.

Add a new console application called “ConsoleApplication”, add a reference to the Messages project and install NServiceBus (not the host) via NuGet.

We do almost everything the same, except that we create the bus ourselves using the CreateSendOnly method, which makes initialization of the bus extremely fast. Then we send the initial command. But first we wait for a keypress. Why is that?
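The console application's code is missing from this copy of the post; against the NServiceBus 5 API it might look like this sketch (the conventions are repeated here because a self-hosted endpoint doesn't share the host's EndpointConfig):

```csharp
using System;
using Messages;
using NServiceBus;

class Program
{
    static void Main()
    {
        var configuration = new BusConfiguration();
        configuration.UsePersistence<InMemoryPersistence>();
        configuration.Conventions()
            .DefiningCommandsAs(t => t.Namespace == "Messages" && t.Name.EndsWith("Command"))
            .DefiningEventsAs(t => t.Namespace == "Messages" && t.Name.EndsWith("Event"));

        // A send-only bus cannot receive messages, which makes start-up very fast.
        using (var bus = Bus.CreateSendOnly(configuration))
        {
            // Give the subscriber time to register with the publisher first.
            Console.WriteLine("Press a key to send the CancelOrderCommand...");
            Console.ReadKey();

            // Routed to the "Publisher" endpoint via the MessageEndpointMappings
            // in this application's App.config.
            bus.Send(new CancelOrderCommand { Id = Guid.NewGuid() });
        }
    }
}
```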

When the subscriber starts, it immediately sends a subscription message to the publisher via MSMQ. The publisher has to pick up this message and store the subscription; in memory, of course, because that's what we specified. If we send the initial command before the subscription is stored, no event will be sent out. So it's imperative that we first see a "NServiceBus.SubscriptionReceiverBehavior" message coming in on the console window of the publisher service. Waiting about 5 seconds should be more than enough.

Press a key and the command should be sent, immediately followed by the event being published and picked up by the subscriber. Put in breakpoints to actually see it happen, or add some output to the console. A friendly reminder: writing to the console doesn't help when your service runs as a Windows Service, but for testing purposes it's fine.

Making your publisher a subscriber

So how about adding another subscriber: our own publisher? Open up the configuration of the publisher and change the MessageEndpointMappings so it also subscribes.

Run everything again and you'll see that you now have two subscribers: your Subscriber service and your Publisher service.

Conclusion

The console application, the publisher and the subscriber projects now all have the same MessageEndpointMappings in their configuration. In the console application, this means you want to send messages from the "Messages" assembly to the "Publisher" endpoint. In the subscriber service it means the same, but you also want to receive the events in that assembly. And the same configuration in the publisher service means roughly the same again.

This gives a lot of flexibility and hopefully made the publish/subscribe pattern used with NServiceBus a lot clearer.

Business Components as mini systems
Tue, 27 Oct 2015 21:05:25 +0000
http://dennis.bloggingabout.net/2015/10/27/business-components-as-mini-systems/

In my High Availability article I mentioned business components, and questions were asked in the comments about what these business components actually are. This post is a follow-up to my previous post about the autonomy of those business components. I'll specifically try to answer the question in the first comment.

That question was asked because I said that I could change a business component "knowing that the chance of breaking code somewhere else is practically nonexistent". When you start to understand this, you'll see the benefit of true real-time autonomy of your services, achieved by adding events to your Service Oriented Architecture. Since the beginning of software development, a lot of thought has gone into modularity and into getting rid of the big ball of mud your software can become. What I initially didn't understand is that most coupling is practically invisible, inside your database. But it also crawls into your layers, creating more and more coupling, not just at design time, but especially at run time! That's why you need to separate your components, and in such a way that it's the most logical choice for everyone involved in the software being built. Not just developers, but also engineers, DBAs and especially the business!

On the Particular website there's a great post about layering. It's actually about microservices, but I love the part about loose coupling and high cohesion in your application. For some reason we've all learned to split our systems into layers. The article argues this is like splitting the atom, which will likely result in a big mess. So instead of splitting layers horizontally, we can identify vertical slices and end up with the business components I've blogged about previously.

The reason I mention this is the original question about what a business component owns. If we don't split into horizontal layers, but instead identify vertical slices and separate data among those verticals, how can we create a user interface where products, prices, shopping carts, invoices, etc. are joined together? If Business Component A within the Sales Bounded Context cannot share data with Business Component B within the Finance Bounded Context, how can we visualize a product on screen, show its price and also show popular products in the same category? The person asking the question was right: the business component does own everything, from the database up to the user interface. And in the user interface, everything is joined together visually.

UI Composition

When we want to visualize data on screen, we want to bring data together in a composite user interface. This means we're composing a screen out of multiple smaller parts. We could call these small parts micro views.

Micro views

When we create microservices, we can also create micro views. A single business component can have one or more micro views that it uses to display information to the end user. In ASP.NET MVC this is possible with multiple views and controllers, using the RenderAction method and providing an action and controller. This separate controller can render HTML completely independently from the calling view. It's also possible to render the output using multiple HTTP modules. But these days you can defer this to the client's browser and use JavaScript UI frameworks like AngularJS, Knockout or Aurelia to display data in a highly dynamic way. It is insane what some JavaScript developers can achieve, and I've met developers who love the added challenge of retrieving and displaying data like this.
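As a sketch of the ASP.NET MVC approach (all controller, action and view names here are hypothetical), each business component gets its own controller that renders its micro view independently, and the composing page stitches them together:

```csharp
using System;
using System.Web.Mvc;

// Owned by the Finance business component: renders only the price part.
public class FinanceProductController : Controller
{
    [ChildActionOnly] // can only be rendered from another view, not via a URL
    public ActionResult Price(Guid productId)
    {
        // Look up the price inside the Finance bounded context only.
        return PartialView(model: 9.99m);
    }
}

// The composing product page's Razor view then calls, for example:
//   @{ Html.RenderAction("Details", "SalesProduct", new { productId }); }
//   @{ Html.RenderAction("Price", "FinanceProduct", new { productId }); }
```

The composing page never touches another component's data; it only decides where on the screen each micro view appears.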

Dependencies in data

When you're displaying a product, you likely know which product to display because the url contains some reference to it.

- When displaying the details of the product, supply the business component of the Sales Bounded Context with the product identifier to retrieve those details.

- When displaying the price of the product, supply the business component of the Finance Bounded Context with the product identifier to retrieve the price, possibly including discounts.

- When displaying products in the same category, have another business component (of perhaps another Bounded Context) provide a list of products in the same category. When a list of identifiers is retrieved, use the same method as described above to retrieve the data of every single product. Of course it'd be great if these calls could be batched to reduce the number of HTTP calls, but that's another topic.

Read models

The result might be a bit chatty towards your business components. If you need to retrieve data extremely fast, you might want read models. Instead of querying a relational schema with a lot of references, creating a lot of locks while reading and updating the data, you can also copy and prepare data just for a specific micro view. This is achieved quite simply by publishing an event and having several subscribers that exist only to update their counterpart of a micro view. It doesn’t mean you can’t use relational data and have to do everything with some NoSQL solution. You might use relational data as regular storage and keep some NoSQL alternative for read models. You can also keep read models in a distributed cache like Redis, for example.

When you have high contention on some data in your relational database, you might want to transform the data after querying and then store the result in memory, in Redis, or somewhere else. When so many resources access only a small amount of your data, there might be no need to prepare the data ahead of time in a read model; you might just transform and store it after initial retrieval. For quite some time, it’s perfectly fine to have this as a read model in some fast but temporary storage. And keep in mind that caching implies eventual consistency. What’s important is to have options and select the best possible solution for every issue you’re working on.
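The publish/subscribe mechanism behind read models can be sketched like this. A tiny in-memory dispatcher stands in for a message bus such as NServiceBus, and plain `Map`s stand in for Redis or another fast store; all names and events here are hypothetical:

```typescript
interface PriceChanged { productId: string; newPrice: number; }

// Minimal in-memory pub/sub, standing in for a real message bus.
type Handler = (event: PriceChanged) => void;
const subscribers: Handler[] = [];
const publish = (event: PriceChanged) =>
  subscribers.forEach((handler) => handler(event));

// Read model for the product-page micro view: denormalized, no joins.
const productPageModel = new Map<string, { price: number }>();
subscribers.push((e) =>
  productPageModel.set(e.productId, { price: e.newPrice }));

// A second subscriber keeps its own copy for another micro view.
const popularProductsModel = new Map<string, number>();
subscribers.push((e) =>
  popularProductsModel.set(e.productId, e.newPrice));

// Publishing once updates every read model. Readers never touch the
// relational schema, at the cost of eventual consistency.
publish({ productId: "p-1", newPrice: 99 });
```

Each subscriber exists only to keep its own micro view’s data fresh, which is the essence of the pattern described above.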

Thanks to all this, the data isn’t (as) coupled inside the database as before, nor in your code. This makes it extremely easy to change code in your Business Components and change your database schema, without having to worry about breaking code in an entirely different Bounded Context or even Business Component. I’ve experienced this first hand, and it’s as if an entire new world opens up once you start realizing the possibilities and extensibility of this way of designing your system.

Building reliable applications with messaging (Fri, 16 Oct 2015)

Yesterday I presented my messaging presentation again, organized by Blaak Selectie and Betabit. Above all I had a really good time with a great audience. They asked some really interesting questions, showing insight into real-world problems and sometimes really challenging me. I like to think I managed!

Below you can find the slides, and here you can download the code presented. There was so much more to show and so much I did not have time to cover, but perhaps we’ll organize another event or something different. If you want to get in touch for a presentation at your company, or even advice or coaching, don’t hesitate to contact me. I’d love to share information and thoughts together!

Upcoming sessions end 2015 (Mon, 31 Aug 2015)

Software Development Network – September 11th, 2015

The first two sessions I’ll be presenting are at Software Development Network (SDN) at the Achmea Conference Center in Zeist. More information can be found here at SDN.

Another view on microservices (slidedeck)

In this talk we’ll take a different perspective on microservices than a lot of other folks in our industry do. Not because it’s the best way or even a silver bullet, but just because the other way isn’t the right way. Should be interesting.

Distributed Systems Principles (slidedeck)

In this talk we’ll look at principles that are very common in distributed systems, yet overlooked by many. Not by you, of course, but it’s still a good reminder of the complexity we get our kicks from solving.

Blaak Selectie – October 15th, 2015

After a session with a great audience at Blaak Selectie last time in Rotterdam, I will present another session with them on reliable messaging. I’ll start by explaining the basics, then dive a little deeper into the subject, showing a fair amount of demos and talking about some best practices and difficult subjects. Hopefully you’ll go home with loads of great ideas that you’re eager to implement in your own systems. As an extra, Particular Software, creators of the NServiceBus platform, provided me with a few books to give away. Be sure to ask a good question for a chance to receive a copy of David Boike‘s Learning NServiceBus. I’ve also got codes for everyone at the meeting, good for viewing two days of Udi Dahan’s Advanced Distributed Systems Design training online.

The venue is the Betabit headquarters in Rotterdam, right under the famous Van Brienenoord bridge. The event is free, but if you’d like to join, signing up is mandatory due to a hard limit on seating. More info here at Blaak Selectie or here at Betabit.

NDC Oslo talk on duplicating vs replicating data in microservices (Wed, 22 Jul 2015)

Just my luck: my session at NDC Oslo 2015 wasn’t recorded properly. During the session the microphone went dead and the replacement was apparently never recorded.

The idea of my session was to get the point across that microservices can be small, but should not by default communicate via REST, nor should they by default be a single deployable unit. For some reason these are mandatory to a lot of folks I talk to, and that’s also what I heard a lot at NDC Oslo. What I do feel is important is that you don’t duplicate data; you can, however, replicate it. I am preparing blog posts to get these points across, especially since the session isn’t online. You can, however, watch two presentations by Udi Dahan, “CQRS but different” and “Business logic, a different perspective”, in which I feel he mentions a few of the points I tried to make.

Besides this, the conference was awesome. I loved the venue and the people, and I can’t remember laughing as hard as I did at James Mickens’s presentation. I’ve also never seen developers party as hard as they did at NDC Oslo 2015.

Unit test code snippet for Visual Studio (Fri, 10 Apr 2015)

Every single time I reinstall Visual Studio I have to search for my unit test code snippet again. So for my own reference, here it is as an attachment, which you’ll have to put in the following folder: