To have a look into the generic hosting models, we should also have a look into the different application models we have in ASP.NET Core. In this and the next post I'm going to write about Blazor, which is a new member of the ASP.NET Core family. To be more precise, Blazor is actually two members of the ASP.NET Core family: on the one hand we have Blazor Server Side, which actually is ASP.NET Core running on the server, and on the other hand we have Blazor Client Side, which looks like ASP.NET Core but runs in the browser on WebAssembly. Both frameworks share the same view framework, which is Razor Components, and both may share the same view logic and business logic. Both are single page application (SPA) frameworks, so there is no page reload from the server visible while browsing the application. And both frameworks look pretty similar, starting with the Program.cs.

Under the hood, both frameworks are hosted completely differently. Blazor Client Side runs completely on the client, so there is no web server needed. Blazor Server Side, on the other hand, runs on a web server and uses WebSockets and a generic JavaScript client to provide the same SPA behavior as Blazor Client Side.

Hosting and Startup

In this post I'm going to compare Blazor Server Side to the already known ASP.NET Core frameworks like MVC and Web API.

First let's create a new Blazor Server Side project using the .NET Core 3 Preview 7 SDK:

In the ConfigureServices method, Razor Pages is added to the IoC container. Razor Pages is used to provide the page that hosts the Blazor application; in this case it is the _Host.cshtml in the Pages directory. Every single page application (SPA) has at least one almost static page which hosts the actual application running in the browser. React, Vue, Angular and so on have the same thing: an index.html that loads all the JavaScript and hosts the JavaScript application. In the case of Blazor there is also a generic JavaScript file running on the hosting page. This JavaScript connects to a SignalR WebSocket that is running on the server side.

In addition to Razor Pages, the services needed for Blazor Server Side are added to the IoC container. These services are needed by the Blazor Hub, which actually is the SignalR Hub that provides the WebSocket endpoint.

The Configure method also looks similar to the other ASP.NET Core frameworks. The only differences are in the last lines, where the Blazor Hub and the fallback page get added. This fallback page actually is the hosting Razor Page mentioned before. Since the SPA supports deep links and creates URLs for the different views on the client, the application needs to route to a fallback page in case the user directly navigates to a client-side route that doesn't exist on the server. The server will just provide the hosting page, and the client will load the right views depending on the URL in the browser afterwards.
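To give a rough idea, here are the relevant parts of the Startup class. This is a sketch based on the Blazor Server Side template of the Preview 7 SDK, so details may differ:

public void ConfigureServices(IServiceCollection services)
{
    services.AddRazorPages();       // provides the hosting page (_Host.cshtml)
    services.AddServerSideBlazor(); // services needed by the Blazor Hub
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // ...
    app.UseStaticFiles();
    app.UseRouting();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapBlazorHub();              // the SignalR WebSocket endpoint
        endpoints.MapFallbackToPage("/_Host"); // the hosting Razor Page as fallback
    });
}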

Blazor

The key feature of Blazor is the Razor-based components, which get interpreted by a runtime that understands C# and Razor and are rendered on the client. With Blazor Client Side it is the Mono runtime running inside the WebAssembly; in the Server Side version it is the .NET Core runtime running on the server. That means the Razor components get interpreted and rendered on the server. After that they get pushed to the client using SignalR and placed in the right spot inside the hosting page by the generic JavaScript that is connected to SignalR.

So we have a server side rendered single page application, without any visible roundtrip to the server.

The Razor components are also placed in the Pages folder, but have the file extension .razor, except the App.razor, which sits directly in the project directory. Those are the actual view components, which contain the logic of the application.

If you have a more detailed look into the components, you'll see some similarities to React or Angular, in case you know those frameworks. I mentioned the App.razor, which is the root component; Angular and React also have this kind of root component. Inside the Shared directory there is a MainLayout.razor, which is the layout component. (This kind of component is also available in React and Angular.) All the other components in the Pages directory use this layout implicitly, because it is set as the default layout in the _Imports.razor. Those components also define a route that is used to navigate to the component. Reusable components without a specific route are placed inside the Shared directory.
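To give an impression, a minimal routable component looks roughly like this (a sketch resembling the Counter component from the template):

@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    private int currentCount = 0;

    private void IncrementCount()
    {
        currentCount++;
    }
}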

Conclusion

This is just a small introduction to and overview of Blazor Server Side, because I only want to quickly show the new ASP.NET Core 3.0 frameworks for creating web applications. This is the last kind of normal server application I want to show. In the next part, I'm going to show Blazor Client Side, which uses a completely different hosting model.

By the way, Blazor Server Side is the new replacement for ASP.NET WebForms for creating stateful web applications using C#. WebForms won't be migrated to ASP.NET Core. It will be supported in the same way as the full .NET Framework will be supported in the future, which means there will be no new versions and no new features. With this in mind, it absolutely makes sense to have a more detailed look into Blazor Server Side.

The problem

Let’s say you have an ASP.NET Core application without the bundled ASP.NET Core runtime (e.g. to keep the download as small as possible) and you want to run your ASP.NET Core application on a Windows Server hosted by IIS.

General approach

Each .NET Core runtime (and there are quite a bunch of them) is backward compatible (at least within the 2.X runtimes), so if you have 2.2.6 installed, your app (created against the .NET Core runtime 2.2.1) still runs.

Why check the minimum version?

Well… in theory the app itself (at least for .NET Core 2.X applications) may run under any newer runtime version, but each version might fix something, and to keep things safe it is a good idea to enforce security updates.

Check for minimum requirement

I stumbled upon this Stackoverflow question/answer and enhanced the script, because that version only tells you “ASP.NET Core seems to be installed”. My enhanced version searches for a minimum required version and, if this is not installed, exits the script.
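The core idea of the check looks roughly like this. This is a minimal sketch, not the actual script, and it assumes the dotnet CLI is installed and on the PATH:

$minVersion = [version]"2.2.6"

# 'dotnet --list-runtimes' prints lines like:
# Microsoft.AspNetCore.App 2.2.6 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
$installed = dotnet --list-runtimes |
    Where-Object { $_ -match '^Microsoft\.AspNetCore\.App (\d+\.\d+\.\d+)' } |
    ForEach-Object { [version]$Matches[1] }

if (-not ($installed | Where-Object { $_ -ge $minVersion })) {
    Write-Error "ASP.NET Core runtime $minVersion or higher is not installed."
    exit 1
}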

The last two posts were just a quick look into the Program.cs and the Startup.cs. This time I want to have a little deeper look into the new endpoint routing.

Wait!

Sometimes I have an idea about a specific topic to write about and start writing. While writing I remember that I maybe already wrote about it. Then I take a look into the blog archive and there it is:

In the last post, I took a quick look into the Program.cs of ASP.NET Core 3.0 and quickly explored the Generic Hosting Model. But the Startup class also has something new in it. We will see some small but important changes.

Just one thing I forgot to mention in the last post: ASP.NET Core 2.1 code in the Program.cs and the Startup.cs should just work in ASP.NET Core 3.0, as long as there is little or no customizing. The IWebHostBuilder is still there and can be used the 2.1 way, and the default 2.1 Startup.cs should also run in ASP.NET Core 3.0. You may only need to make some small changes.

The next snippet is the Startup class of a newly created empty web project:

public class Startup
{
    // This method gets called by the runtime. Use this method to add services to the container.
    // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
    public void ConfigureServices(IServiceCollection services)
    {
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseRouting();

        app.UseEndpoints(endpoints =>
        {
            endpoints.MapGet("/", async context =>
            {
                await context.Response.WriteAsync("Hello World!");
            });
        });
    }
}

The empty web project is an ASP.NET Core project without any ASP.NET Core UI feature. This is why the ConfigureServices method is empty: there is no additional service added to the dependency injection container.

The new stuff is in the Configure method. The first lines look familiar: depending on the hosting environment, the developer exception page will be shown.

app.UseRouting() is new. This is a middleware that enables the new endpoint routing. The new thing is that routing is decoupled from the specific ASP.NET feature. In previous versions every feature (MVC, Razor Pages, SignalR, etc.) had its own endpoint implementation. Now the endpoint and routing configuration can be done independently. The Middlewares that need to handle a specific endpoint are now mapped to that endpoint or route, so the Middlewares don't need to handle the routes themselves anymore.

If you wrote a Middleware in the past which needs to work on a specific endpoint, you added the logic to check the endpoint inside the middleware or you used the MapWhen() extension method on the IApplicationBuilder to add the Middleware to a specific endpoint.

Now you create a new pipeline (using IApplicationBuilder) per endpoint and Map the Middleware to the specific new pipeline.
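A sketch of what that explicit mapping can look like in ASP.NET Core 3.0 (MyEndpointMiddleware is a hypothetical middleware class):

app.UseRouting();

app.UseEndpoints(endpoints =>
{
    // Build a separate pipeline that only serves this endpoint
    var pipeline = endpoints.CreateApplicationBuilder()
        .UseMiddleware<MyEndpointMiddleware>() // hypothetical middleware
        .Build();

    // Map the pipeline to a specific route
    endpoints.Map("/status", pipeline);
});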

The MapGet() method above does this implicitly. It creates a new endpoint "/" and maps the delegate Middleware to the new pipeline that was created internally.

That was a simple snippet. Now let's have a look into the Startup.cs of a new full-blown web application using individual authentication, created by using this .NET CLI command:

dotnet new mvc --auth Individual

Overall this also looks pretty familiar if you already know the previous versions:

This is an MVC application, but did you see the lines where MVC is added? I'm sure you did. It is no longer called MVC, even though the MVC pattern is used, because the old naming was a little bit confusing with Web API.

To add MVC you now need to add AddControllersWithViews(). If you want Web API only, you just need to add AddControllers(). I think this is a small but useful change. This way you can be more specific when adding ASP.NET Core features. In this case Razor Pages was also added to the project. It is absolutely no problem to mix ASP.NET Core features, as the next snippet shows.
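The registrations look roughly like this (a sketch; pick the ones you need):

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();          // Web API only
    services.AddControllersWithViews(); // MVC (controllers + views)
    services.AddRazorPages();           // Razor Pages
}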

AddMvc() still exists and still works in ASP.NET Core 3.0.

The Configure method doesn't really change, except for the new endpoint routing part. There are two endpoints configured: one for controller routes (which covers Web API and MVC) and one for Razor Pages.
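The endpoint configuration looks roughly like what the 3.0 MVC template generates:

app.UseEndpoints(endpoints =>
{
    endpoints.MapControllerRoute(
        name: "default",
        pattern: "{controller=Home}/{action=Index}/{id?}"); // MVC and Web API
    endpoints.MapRazorPages();                              // Razor Pages
});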

Conclusion

This was also just a quick look into the Startup.cs, with some small but useful changes.

In the next post I'm going to take a more detailed look into the new endpoint routing. While working on the GraphQL endpoint for ASP.NET Core, I learned a lot about endpoint routing. This feature makes a lot of sense to me, even if it means rethinking some things when you build and provide a Middleware.

In ASP.NET Core 3.0 the hosting environment changes to become more generic. Hosting is no longer bound to Kestrel and no longer bound to ASP.NET Core. This means you are able to create a host that doesn't start the Kestrel web server and doesn't need to use the ASP.NET Core framework.

This is a small introduction post about the Generic Hosting Environment in ASP.NET Core 3.0. During the next posts I'm going to write more about it and what you can do with it in combination with some more ASP.NET Core 3.0 features.

In the next posts we will see a lot more details about why this makes sense. For the short term: there are different hosting models. One is the already known web hosting. Another model is running a worker service without a web server and without ASP.NET Core. Blazor also uses a different hosting model inside the WebAssembly.

What does it look like in ASP.NET Core 3.0?

First let's recap how it looks in previous versions. This is an ASP.NET Core 2.2 Program.cs that creates an IWebHostBuilder to start up Kestrel and to bootstrap ASP.NET Core using the Startup class:
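From memory of the 2.2 template, it looks like this:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
}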

Now an IHostBuilder is created and configured first. When the default host builder is created, an IWebHostBuilder is created internally to use the configured Startup class.
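In 3.0 the template looks roughly like this (a sketch; details changed between the previews):

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}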

The typical .NET Core App features like configuration, logging and dependency injection are configured on the level of the IHostBuilder. All the ASP.NET specific features like authentication, Middlewares, ActionFilters, Formatters, etc. are configured on the level of the IWebHostBuilder.

Conclusion

This makes the Hosting environment a lot more generic and flexible.

I'm going to write about specific scenarios during the next posts about the new ASP.NET Core 3.0 features. But first I will have a look into Startup.cs to see what is new in ASP.NET Core 3.0.

This post will introduce you to the Azure AD Access Review feature. With the introduction of modern collaboration through Microsoft 365, with Microsoft Teams being the main tool, it is important to manage who is a member of the underlying Office 365 Group (Azure AD Group).

Microsoft has great resources to get started on a technical level. The feature enables a set of people to review another set of people. Azure AD is leveraging this capability (all under the bigger umbrella called Identity Governance) on two assets: Azure AD Groups and Azure AD Apps. Microsoft Teams as a hub for collaboration is built on top of Office 365 Groups, so we will have a closer look at the Access Review part for Azure AD Groups.

Each Office 365 Group (each Team) is built from a set of owners and members. With the open nature of Office 365, members can be employees, contractors, or people outside of the organization.

In our modern collaboration (Teams, SharePoint, …) implementations we strongly recommend leveraging the full self-service group creation that is already built into the system. With this setup everyone is able to create and manage/own a group. Permanent user education is needed for everyone to understand the concept behind modern groups. Many organizations also have a strong set of internal rules that force a so-called information owner (which could be equal to the owner of a group) to review who has access to their data. Most organizations rely on people fulfilling their duties as demanded, but let's face it: owners are just human beings that need to do their “real” job. With the introduction of Azure AD Access Review we can support these owner duties and make the process documented and easy to execute.

AAD Access Review can do the following to support an up-to-date group membership:

Set up an Access Review for an Azure AD Group

Specify the duration (start date, recurrence, duration, …)

Specify who will do the review (owner, self, specific people, …)

Specify who will be reviewed (all members, guests, …)

Specify what will happen if the review is not executed (remove members, …)

Before we start we need to talk about licensing. It is obvious that M365 E5 is the best SKU to start with ;) but if you are not that lucky, you need at least an Azure AD P2 license. It is not a very common license, as it was only part of the EMS E5 SKU, but some time ago Microsoft started offering really attractive license bundles. Many orgs with strong security requirements will at some point hit a license SKU that includes AAD P2. For your trusty lab tenants, start an EMS E5 trial to test these features today. To be precise, only the accounts reviewing (executing the Access Review) need the license; at least this is my understanding, and as always with licensing, ask your usual licensing people to get the definitive answer.

The setup of an Access Review (if not automated through MS Graph Beta) is done in the Azure Portal in the Identity Governance blade of AAD. To create our first Access Review we need to on-board to this feature.

Please note we are looking at Access Review in the context of modern collaboration (groups created by Teams, SharePoint, Outlook, …). Access Review can be used to review any AAD group that you use to grant access to a specific resource or to keep a list of trusted users for an infrastructure piece of tech in Azure. The following information might not always be valid for your scenario!

This is the first half of the screen we need to fill-out for a new Access Review:

Review name: This is a really important piece! The review name will be the “only” visible clue for the reviewers once they get the email about the outstanding review. With a self-service setup and with the nature of how people name their groups, we need to ensure people understand what they are reviewing. We try to automate the creation of the reviews, so we put the review timing, the group name and the group's object ID in the review name. The ID helps during support: if you send out 4000 Access Reviews and people ask why they got this email, they can provide you with the ID and things get easier. For example: 2019-Q1 GRP New Order (af01a33c-df0b-4a97-a7de-c6954bd569ef)

Frequency: Also very important! You have to understand that an Access Review is somehow static. You can do a recurring review, but some information will get out of sync. For example, the group could be renamed, but the title will not be updated, and people might get confused by misleading information in the email that is sent out. If you choose to let the owners of a group do the review, the owners will be “copied” into the Access Review config and not updated for future reviews. Technically this could be fixed by Microsoft, but as of now we ran into problems in the context of modern collaboration.

Users: “Members of a group” is our choice for collaboration. The other option is “Assigned to an application” and not our focus. For a group we have the option to do a guest-only review or to review everybody who is a member of the group. Based on organizational needs and information like confidentiality, we can make a decision. As a starting point it could be a good option to go with guests only, because guests are not very well controlled in most environments. An employee at least has a contract, and the general trust level should be higher.

Group: Select a group the review should apply to. The latest changes to the Access Review feature allow selecting multiple groups at once. From a collaboration perspective I would avoid this, because at the end of the creation process each group will have its own Access Review instance, and the settings are no longer shared. Once again, from a collab point of view we need some kind of automation, because it is not feasible to create these reviews as a manual task for the foreseeable future.

Reviewers: The natural choice for an Office 365 Group (Team) is to go with the “Group owners” option, especially if we automate the process and don’t have an extra database to look up who the information owner is. For static groups or highly confidential groups, the option “Selected users” could make sense. An interesting option is also the last one, called “Members (self)”. This option will “force” each member to take a decision on whether they are still part of this project, team or group. We at Glück & Kanja are currently thinking about doing this for some of our internal client teams. Most of our groups are public and accessible by most of the employees, but membership documents some kind of current involvement with the client represented by the group. This could also naturally reduce the number of teams that show up in your Microsoft Teams client app. As mentioned earlier, at the moment it seems that the option “Group owners” is resolved once the Access Review starts and the instance of the review is then fixed, so any owner change might not be reflected in future instances of recurring reviews. Hopefully this will be fixed by Microsoft.

Program: This is a logical grouping of access reviews. For example, we could add all collaboration-related reviews to one program, versus administration reviews with a more static nature.

More advanced settings are collapsed, but they should definitely be reviewed.

Upon completion settings: Allows the review results to be applied automatically. I would suggest trying this setting, because it will not only document the review but also take the required action on the membership. If group owners are not aware what these Access Review emails are, then we are talking about a potential loss of access for members who weren't reviewed, but in the end that is what we want. People need to take this part of identity governance seriously and take care of their data. Any change by the system is documented (in the audit log of the group) and can be reversed manually. If the system is not executing the results of the review, someone must look up the results regularly and then ensure the users are removed based on the outcome. If you go for Access Review, I strongly recommend automatically applying the results (after your own internal tests).

Let's take a look at the created Access Review.

Azure Portal: This is an overview for the admin (non recurring access review).

Email: As you can see the prominent Review name is what is standing out to the user. The group name (also highlighted red) is buried within all other text.

Click on “Start Review” in the email: The user can now take action based on recommendations (missing in my lab tenant due to the inactivity of my lab users).

Take Review: Accept 6 users.

Review Summary: This is the summary if the owner has taken all actions.

Azure Portal: Audit log information for the group.

After the user completed the review, the system didn't immediately make a change to the group. Even if the configuration says actions should be automatically applied, the results are applied at the end of the review period! Until then the owners can change their mind. Once the review period is over, the system will apply the needed changes.

I really love this feature in the context of modern collaboration. The process of keeping a current list of involved members in a team is a big benefit for productivity and security. The “need to know” principle is supported by a technical implementation “free of cost” (as mentioned, everyone should have AAD P2 through some SKU 😎).

Our GK O365 Lifecycle tool was extended to allow the creation of Access Reviews through the Microsoft Graph based on the Group/Team classification. Once customers read about or get a demo of this feature and own the license, we immediately start a POC implementation. If our tool is already in place, it is only a matter of some JSON configuration to be up and running.

The problem

“Cannot connect to sql\instance. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)”

Let’s say we have a system with a running SQL Server (Express or Standard Edition - doesn’t matter) and want to connect to this database from another machine. The chances are high that you will see the above error message.

Be aware: You can customize more or less anything, so this blog post only covers a very “common” installation.

I struggled with this problem last week and learned that this is a pretty “old” issue. To enlighten my dear readers I made the following checklist:

UDP port 1434 is used to query the real TCP port for the named instance.

Now the most important part: SQL Server will use a (kind of) random dynamic port for the named instance. To avoid this behavior (which is really a killer for firewall settings) you can set a fixed port in the SQL Server Configuration Manager.

When executing a program, there is always the possibility of an unexpected runtime error occurring. These occur when a program tries to perform an illegal operation. This kind of scenario can be triggered by events such as division by 0 or a pointer which tries to reference an invalid memory address. We can significantly improve the way these exceptions are handled by using the keywords __TRY and __CATCH.

The list of possible causes for runtime errors is endless. What all these errors have in common is that they cause the program to crash. Ideally, there should at least be an error message with details of the runtime error:

Because this leaves the program in an undefined state, runtime errors cause the system to halt. This is indicated by the yellow TwinCAT icon:

For an operational system, an uncontrolled stop is not always the optimal response. In addition, the error message does not provide enough information about where in the program the error occurred. This makes improving the software a tricky task.

To help track down errors more quickly, you can add check functions to your program.

Check functions are called whenever the relevant operation is executed. The best known is probably CheckBounds(). Each time an array element is accessed, this function is implicitly called beforehand. The parameters passed to this function are the array bounds and the index of the element being accessed. This function can be configured to automatically correct attempts to access elements which are out of bounds. This approach does, however, have some disadvantages.

CheckBounds() is not able to determine which array is being accessed, so error correction has to be the same for all arrays.

Because CheckBounds() is called whenever an array element is accessed, it can significantly slow down program execution.

It’s a similar story with other check functions.

It is not unusual for check functions to be used during development only. Breakpoints can be set inside the check functions to stop the program when an operation throws up an error. The call stack can then be used to determine where in the program the error occurred.

The ‘try/catch’ statement

Runtime errors in general are also known as exceptions. IEC 61131-3 includes __TRY, __CATCH and __ENDTRY statements for detecting and handling these exceptions:

The TRY block (the statements between __TRY and __CATCH) contains the code with the potential to throw up an exception. Assuming that no exception occurs, all of the statements in the TRY block will be executed as normal. The program will then continue from the line immediately following the __ENDTRY statement. If, however, one of the statements within the TRY block causes an exception, the program will jump straight to the CATCH block (the statements between __CATCH and __ENDTRY). All subsequent statements within the TRY block will be skipped.

The CATCH block is only executed if an exception occurs; it contains the error handling code. After processing the CATCH block, the program continues from the statement immediately following __ENDTRY.

The __CATCH statement takes the form of the keyword __CATCH followed, in brackets, by a variable of type __SYSTEM.ExceptionCode. The __SYSTEM.ExceptionCode data type contains a list of all possible exceptions. If an exception occurs, causing the CATCH block to be called, this variable can be used to query the cause of the exception.

The following example divides two elements of an array by each other. The array is passed to the function using a pointer. If the return value is negative, an error has occurred. The negative return value provides additional information on the cause of the exception:
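Since the original listing isn't reproduced here, the following Structured Text sketch shows the idea; the exact __SYSTEM.ExceptionCode member names are assumptions and may differ between runtime versions:

FUNCTION F_Calc : DINT
VAR_INPUT
    pData : POINTER TO ARRAY [0..9] OF DINT;
    nA    : INT;
    nB    : INT;
END_VAR
VAR
    exc   : __SYSTEM.ExceptionCode;
END_VAR

__TRY
    // Both the pointer dereference and the division can raise an exception
    F_Calc := pData^[nA] / pData^[nB];
__CATCH(exc)
    IF exc = __SYSTEM.ExceptionCode.RTSEXCPT_DIVIDEBYZERO THEN
        F_Calc := -1; // division by zero
    ELSE
        F_Calc := -2; // any other exception, e.g. an invalid pointer access
    END_IF
__ENDTRY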

The ‘finally’ statement

The optional __FINALLY statement can be used to define a block of code that will always be called whether or not an exception has occurred. There’s only one condition: the program must step into the TRY block.

We’re going to extend our example so that a value of one is added to the result of the calculation. We’re going to do this whether or not an error has occurred.

The statement in the FINALLY block (line 24) will always be executed whether or not an exception has occurred.

If no exception occurs within the TRY block, the FINALLY block will be called straight after the TRY block.

If an exception does occur, the CATCH block will be executed first, followed by the FINALLY block. Only then will the program exit the function.
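Extended with __FINALLY, the sketch from above becomes (same declarations assumed):

__TRY
    F_Calc := pData^[nA] / pData^[nB];
__CATCH(exc)
    F_Calc := -1; // error handling
__FINALLY
    // Always executed, whether or not an exception occurred
    F_Calc := F_Calc + 1;
__ENDTRY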

__FINALLY therefore enables you to perform various operations irrespective of whether or not an exception has occurred. This generally involves releasing resources, for example closing a file or dropping a network connection.

Extra care should be taken in implementing the CATCH and FINALLY blocks. If an exception occurs within these blocks, it will give rise to an unexpected runtime error, resulting in an immediate uncontrolled program stop.

The sample program runs under 32-bit TwinCAT 3.1.4024 or higher. 64-bit systems are not currently supported.

Depending on the task, it may be necessary for function blocks to require parameters that are only used once for initialization tasks. One possible way to pass them elegantly is to use the FB_init() method.

Before TwinCAT 3, initialisation parameters were very often transferred via input variables.

This had the disadvantage that the function blocks became unnecessarily large in the graphic display modes. It was also not possible to prevent changing the parameters at runtime.

The method FB_init() is very helpful here. This method is implicitly executed once before the PLC task is started and can be used to perform initialization tasks.

The dialog for adding methods offers a finished template for this purpose.

The method contains two input variables that provide information about the conditions under which the method is executed. The variables may not be deleted or changed. However, FB_init() can be supplemented with further input variables.

Example

An example is a block for communication via a serial interface (FB_SerialCommunication). This block should also initialize the serial interface with the necessary parameters. For this reason, three variables are added to FB_init():
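The declaration part could look roughly like this (a sketch; E_Parity is assumed to be an enumeration defined elsewhere, and the internal variable names follow the naming used further below):

METHOD FB_init : BOOL
VAR_INPUT
    bInitRetains : BOOL; // TRUE: the retain variables are initialized
    bInCopyCode  : BOOL; // TRUE: the instance will be copied (online change)
    nDatabits    : BYTE;
    eParity      : E_Parity;
    nStopbits    : BYTE;
END_VAR

// Copy the parameters into local variables of the function block
nInternalDatabits := nDatabits;
eInternalParity   := eParity;
nInternalStopbits := nStopbits;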

It is also possible to override FB_init(). In this case, the same input variables must exist in the same order and be of the same data type as in the basic FB (FB_SerialCommunication). However, further input variables can be added, so that the derived function block (FB_SerialCommunicationRS232) receives additional parameters:

In the method FB_init() of FB_SerialCommunicationRS232, only the copying of the new parameter (nBaudrate) is necessary. Because FB_SerialCommunicationRS232 inherits from FB_SerialCommunication, the FB_init() of FB_SerialCommunication is also executed implicitly before the PLC task is started. Both FB_init() methods, that of FB_SerialCommunication and that of FB_SerialCommunicationRS232, are always called implicitly. With inheritance, FB_init() is always called from ‘bottom’ to ‘top’: first FB_SerialCommunication, then FB_SerialCommunicationRS232.

Forward parameters

The function block (FB_SerialCommunicationCluster) is used as an example, in which several instances of FB_SerialCommunication are declared:

However, there are some things to take into consideration here. The call sequence of FB_init() is not clearly defined in this case. In my test environment the calls are made from ‘inside’ to ‘outside’: first fbSerialCommunication01.FB_init() and fbSerialCommunication02.FB_init() are called, then fbSerialCommunicationCluster.FB_init(). It is not possible to pass the parameters from ‘outside’ to ‘inside’; the parameters are therefore not available in the two inner instances of FB_SerialCommunication.

The sequence of the calls changes as soon as FB_SerialCommunication and FB_SerialCommunicationRS232 are derived from the same basic FB. In this case FB_init() is called from ‘outside’ to ‘inside’. This approach cannot always be implemented for two reasons:

If FB_SerialCommunication is located in a library, the inheritance cannot be changed just offhand.

The call sequence of FB_init() is not further defined with nesting, so it cannot be ruled out that it will change in future versions.

One way to solve the problem is to explicitly call FB_SerialCommunication.FB_init() from FB_SerialCommunicationCluster.FB_init().

All parameters, including bInitRetains and bInCopyCode, are passed on directly.
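A sketch of such an explicit call inside FB_SerialCommunicationCluster.FB_init(); the concrete parameter values are made up for illustration:

// Forward the implicit parameters unchanged and pass our own values
fbSerialCommunication01.FB_init(bInitRetains := bInitRetains,
                                bInCopyCode  := bInCopyCode,
                                nDatabits    := 8,
                                eParity      := E_Parity.None,
                                nStopbits    := 1);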

Attention: Calling FB_init() always initializes all local variables of the instance. This must be considered as soon as FB_init() is explicitly called from the PLC task instead of implicitly before the PLC task.

Access via properties

By passing the parameters via FB_init(), they can neither be read from outside nor changed at runtime. The only exception would be an explicit call of FB_init() from the PLC task. However, this should generally be avoided, since all local variables of the instance will be reinitialized in this case.

If, however, access should still be possible, appropriate properties can be created for the parameters:

The setter and getter of the respective properties access the corresponding local variables in the function block (nInternalDatabits, eInternalParity and nInternalStopbits). Thus, the parameters can be specified in the declaration as well as at runtime.

By removing the setter, you can prevent the parameters from being changed at runtime. If the setter is available, FB_init() can be omitted. Properties can also be initialized directly when declaring an instance.

In this case, the initialization values of the properties have priority. Transferring parameters both by property and by FB_init() has the disadvantage that the declaration of the function block becomes unnecessarily long, and implementing both does not seem necessary to me either. If all parameters can also be written via properties, the initialization via FB_init() can be omitted. Conclusion: if parameters must not be changeable at runtime, the use of FB_init() should be considered. If write access is needed, properties are another option.

Another year later, again it was July 1st and I got the email from the Global MVP Administrator that I had been waiting for :-)

Yes, this is kind of a yearly series of posts. But I'm really excited that I got re-awarded to be an MVP for the fifth year in a row. This is absolutely amazing and makes me really proud.

Even though some folks reduce the MVP award to just a marketing instrument of Microsoft and say MVPs are just selling Microsoft to the rest of the world, the award tells me that my spare-time work is important for some people out there. And these folks are right anyway: sure, I'm selling Microsoft to the rest of the world, but this is my hobby. I don't sell it explicitly; I'm just telling other people about the stuff I work with, the stuff I use to get things done and to earn money in the end. It is about .NET and ASP.NET as well as about software development and the developer community. It is also about stuff I just learned while looking into new technology.

Selling Microsoft is just a side effect with no additional effort and it doesn't feel wrong.

I'm not sure whether I put a lot more effort into my hobby since I became an MVP or not. I think it was a bit more, because being an MVP makes me proud, makes me feel successful and tells me that my work is important for some folks. Who cares :-)

As long as some folks are reading my blog, attending the user group meetings or watching my live streams, I will continue doing that kind of work.

As already written, I'm proud of it and proud to get the fifth ring on my MVP award trophy, which will be blue this time.

And I'm feeling lucky that I'm able to attend the Global MVP Summit for the fifth time next year in March and to see all the MVP friends again. I'm really looking forward to that event and to being in the nice and always sunny Seattle area. (Yes, it is always sunny in Seattle when I'm there.)

I'm also happy to see that almost all MVP friends got re-awarded.

Congratulations to all awarded and re-awarded MVPs!

Many thanks to the developer community for letting me be a part of it. And many thanks for the amazing feedback I get as a result of my work. It is a lot of fun to help and to contribute to this awesome community :-)

While I was writing the Customizing ASP.NET Core series, a reader asked me to bundle all the posts into a book. I was thinking about it for a while. I had also tried to write a book in the past, together with a colleague at the YOO, but publishing a book with a publisher behind it turned out to be stressful. Since we both have families with small kids and jobs where we work on different projects, the book never had priority one. The publisher didn't see that fact. Fortunately the publisher quit the contract because we weren't able to deliver a chapter per week.

This is the planned cover for the bundled series:

(I took that photo at the Tschentenalp above Adelboden in Switzerland. It is the view towards the Lohner mountains.)

Leanpub

In the past I had already looked into different self-publishing platforms like Leanpub, which looks pretty easy and modern. But it also has a downside:

Leanpub gives me 80% of the royalties, but we need to do the publishing and the marketing ourselves to sell the book.

A publisher only gives me 20%, but does professional publishing and marketing and will sell a lot more books.

In the end you cannot get rich by publishing a book like this, but it is still nice to get some money out of your effort. Amazon also provides a way to publish a book by yourself, which looks nice for self-publishers. I'm going to try this as well.

In the past Leanpub also provided print on demand, but this seems to have been stopped; I couldn't find any information about it now. Anyway, it is good enough to publish in various eBook formats.

So I decided to go with Leanpub to try the self-publishing way.

Writing

Even if most of the content is already written for the blog, I decided to go over all the parts and update everything to ASP.NET Core 3.0. I also decided to keep the ASP.NET Core 2.2 information, because it will stay valid for a while. So the chapters will cover both 3.0 and 2.2.

Writing for Leanpub also works with GitHub and Markdown files, which reduces the effort. I'm able to bind a GitHub repository to Leanpub and push Markdown files into it. I need to structure and order the different files in a book.txt file; every Markdown file is a chapter in the book.

Currently I have 13 chapters, a preface, an about-me chapter, a chapter describing the technical requirements for this book, and a small postface. All in all, about 80 pages.

Rewriting

Sometimes it was hard to rewrite the demos and contents for ASP.NET Core 3.0. If you are writing about customizing that goes deeply into the APIs, you will definitely face some significant changes. It wasn't that easy to get a custom DI container running in ASP.NET Core 3.0, for example. Also, adding Middlewares using a custom route changed from 2.2 to 3.0 Preview 3 and changed again from Preview 3 to Preview 6. Even though I already had some experience with 3.0, there were some changes between the different previews.

But luckily I also have some chapters without any differences between 2.2 and 3.0.

Updating the blog posts

I'm not yet sure whether I need to update the blog posts or not. My current idea is to create new posts and to mention the new posts in the old ones.

There is definitely enough stuff for a lot of new posts about ASP.NET Core. One example is the new framework reference, which was a pain in the ass during a live stream where I tried to update a Preview 3 solution to Preview 6.

Publishing

Currently I'm not sure when I will be able to publish this book. At the moment it is in review, with two people doing the non-technical review and one guy doing the technical review.

I think I'm going to publish this book during the summer.

Contributing

If you want to help make this book better, feel free to go to the repositories, fork them and create PRs.

It would also be helpful to propose a price you would pay for such a book. So far I have received some proposals, but they seem pretty high from my perspective. It seems some folks are really willing to pay around 25 EUR: https://leanpub.com/customizing-aspnetcore/. What do you think?

If you ever dreamed of using JavaScript in your .NET application, there is a simple way: use Jint.

Jint implements the ECMA 5.1 spec and can be used from any .NET implementation (Xamarin, .NET Framework, .NET Core). Just use the NuGet package; it has no dependencies on other stuff - it’s a single .dll and you are done!

Why should I integrate JavaScript in my application?

In our product “OneOffixx” we use Javascript as a scripting language with some “OneOffixx” specific objects.

The pro arguments for JavaScript:

It’s a well known language (even with all the brainfuck in it)

You can sandbox it quite easily

With a library like Jint it is super simple to integrate

I highly recommend checking out the GitHub page, but here are some simple examples which should show how to use it:

Example 1: Simple start

After installing the NuGet package you can use the following code to see one of the most basic implementations:
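A minimal sketch using the classic Jint API:

using System;
using Jint;

class Program
{
    static void Main()
    {
        var engine = new Engine();

        // Execute a simple script and read its completion value
        var result = engine.Execute("1 + 2 * 3").GetCompletionValue();

        Console.WriteLine(result); // 7
    }
}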

We create a new “Engine”, execute some simple JavaScript and return the completion value - easy as that!

Example 2: Use C# function from Javascript

Let’s say we want to provide a scripting environment where the script can access some C#-based functions. This “bridge” is created via the “Engine” object: we register a value which points to our C# implementation.
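A sketch of such a bridge; the function name “log” is just an example:

using System;
using Jint;

class Program
{
    static void Main()
    {
        var engine = new Engine();

        // Expose a C# delegate to the script under the name "log"
        engine.SetValue("log", new Action<string>(message => Console.WriteLine(message)));

        engine.Execute("log('Hello from JavaScript!');");
    }
}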

Since the rise of Docker on Windows we have also invested some time into it and packaged our OneOffixx server-side stack in a Docker image.

Windows Server 2016 situation:

We rely on Windows Docker images, because we still have some “legacy” parts that require the full .NET Framework. That's why we are using this base image:

FROM microsoft/aspnet:4.7.2-windowsservercore-ltsc2016

As you can already guess: this is based on Windows Server 2016, and besides the “legacy” parts of our application, we need to support Windows Server 2016 because Windows Server 2019 is currently not available on our customers' systems.

In our build pipeline we could easily invoke Docker and build our images based on the LTSC 2016 base image and everything was “fine”.

The product has the same core features as TFS 2018, but with a new UI and other improvements. For a full list you should read the Release Notes.

*Be aware: This is the on-prem solution, even with the slightly misleading name “Azure DevOps Server”. If you are looking for the cloud solution you should read the Migration-Guide.

“Updating” a TFS 2018 installation

Our setup is quite simple: One server for the “Application Tier” and another SQL database server for the “Data Tier”.
The “Data Tier” was already running with SQL Server 2016 (or above), so we only needed to touch the “Application Tier”.

Application Tier Update

In our TFS 2018 world the “Application Tier” was running on a Windows Server 2016, but we decided to create a new (clean) server with Windows Server 2019 and doing a “clean” Azure DevOps Server install, but pointing to the existing “Data Tier”.

In theory it is quite possible to update the actual TFS 2018 installation, but because “new is always better”, we also switched the underlying OS.

Update process

The actual update was really easy. We did a “test run” with a copy of the database and everything worked as expected, so we reinstalled the Azure DevOps Server and ran the update on the production data.

Steps:

Summary

If you are running a TFS installation, don’t be afraid to do an update. The update itself was done in 10-15 minutes on our 30GB-ish database.

Just download the setup from the Azure DevOps Server site (“Free trial…”) and you should be ready to go!

In this 12th part of this series, I'm going to write about how to customize hosting in ASP.NET Core. We will look into the hosting options, the different kinds of hosting, and take a quick look at hosting on IIS. While writing, this post again seems to be getting long.

This will change in ASP.NET Core 3.0. I decided to write this post about ASP.NET Core 2.2 anyway, because it will still take some time until ASP.NET Core 3.0 is released.

This post is just an overview of the different kinds of application hosting. It is surely possible to go a lot deeper into each topic, but that would increase the size of this post a lot, and I need some more topics for future blog posts ;-)

WebHostBuilder

Like in the last post, we will focus on the Program.cs. The WebHostBuilder is our friend. This is where we configure and create the web host. The next snippet is the default configuration of every new ASP.NET Core web application we create using File => New => Project in Visual Studio or dotnet new with the .NET CLI:

As we already know from the previous posts, the default builder has all the needed stuff pre-configured. Everything you need to run an application successfully on Azure or on an on-premise IIS is configured for you.

But you are able to override almost all of these default configurations, including the hosting configuration.

Kestrel

After the WebHostBuilder is created, we can use various methods to configure it. Here we already see one of them, which specifies the Startup class that should be used. In the last post we saw the UseKestrel method to configure the Kestrel options:

.UseKestrel((host, options) =>
{
    // ...
})

Reminder: Kestrel is one option to host your application. Kestrel is a web server built in .NET, based on .NET socket implementations. Previously it was built on top of libuv, the same I/O library that is used by NodeJS. Microsoft removed the dependency on libuv and created its own web server implementation based on .NET sockets.

The first argument is a WebHostBuilderContext to access already configured hosting settings or the configuration itself. The second argument is an object to configure Kestrel. This snippet shows what we did in the last post to configure the socket endpoints the host needs to listen on:
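It looked roughly like this (a sketch; IPAddress comes from System.Net, and the certificate file name and password are placeholders):

.UseKestrel((host, options) =>
{
    options.Listen(IPAddress.Loopback, 5000);
    options.Listen(IPAddress.Loopback, 5001, listenOptions =>
    {
        listenOptions.UseHttps("certificate.pfx", "topsecret");
    });
})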

This overrides the default configuration, where you are able to pass in URLs, e.g. using the applicationUrl property of the launchSettings.json or an environment variable.

HTTP.sys

Did you know that there is another hosting option? A different web server implementation? It is HTTP.sys. This is a pretty mature library deep within Windows that can be used to host your ASP.NET Core application.

.UseHttpSys(options =>
{
    // ...
})

HTTP.sys is different from Kestrel. It cannot be used in IIS because it is not compatible with the ASP.NET Core Module for IIS.

The main reason to use HTTP.sys instead of Kestrel is Windows Authentication, which cannot be used with Kestrel. Another reason is if you need to expose the application to the internet without IIS.

IIS has also been running on top of HTTP.sys for years, which means UseHttpSys() and IIS use the same web server implementation. To learn more about HTTP.sys, please read the docs.

Hosting on IIS

An ASP.NET Core application shouldn't be directly exposed to the internet, even though both Kestrel and HTTP.sys support it. It would be best to have something like a reverse proxy in between, or at least a service that watches the hosting process. For ASP.NET Core, IIS isn't only a reverse proxy: it also takes care of the hosting process in case it breaks because of an error or whatever, and restarts it in that case. Nginx may be used as a reverse proxy on Linux and also takes care of the hosting process.

To host an ASP.NET Core application on IIS or on Azure, you need to publish it first. Publishing doesn't only compile the project; it also prepares it to be hosted on IIS, on Azure, or on a web server on Linux like Nginx.

dotnet publish -o ..\published -r win-x64

This produces an output that can be mapped in the IIS. It also creates a web.config to add settings for the IIS or Azure. It contains the compiled web application as a DLL.

If you publish a self-contained application, it also contains the runtime itself. A self-contained application brings its own .NET Core runtime, but the size of the delivery increases a lot.

And on the IIS? Just create a new website and map it to the folder where you placed the published output:

It gets a little more complicated if you need to change the security settings, if you have some database connections, and so on. That would be a topic for a separate blog post. But in this small sample it simply works:

This is the output of the small Middleware in the Startup.cs of the demo project:

Nginx

Unfortunately I cannot write about Nginx, because I currently don't have a running Linux machine to play around with. This is one of the many future projects I have. So far I have just gotten ASP.NET Core running on Linux using the Kestrel web server.

Conclusion

ASP.NET Core and the .NET CLI already contain all the tools to get applications running on various platforms and to set them up for Azure and IIS, as well as Nginx. This is super easy and well described in the docs.

BTW: What do you think about the new docs experience compared to the old MSDN documentation?

I'll definitely go deeper into some of the topics and in ASP.NET Core there are some pretty cool hosting features that make it a lot more flexible to host your application:

Currently we have the WebHostBuilder that creates the hosting environment of the applications. In 3.0 we get the HostBuilder that is able to create a hosting environment that is completely independent from any web context. I'm going to write about the HostBuilder in one of the next blog posts.

On my Twitch stream I planned to show how to migrate a legacy ASP.NET application to ASP.NET Core, to start a completely new ASP.NET Core project, and to share some news about the .NET developer community. When I did the first stream and introduced the plans to the audience, it somehow turned toward migrating the legacy application. So I chose the old Sharpcms project to show the migration, which is maybe not the best choice, because this CMS doesn't use the common ASP.NET patterns.

About the sharpcms

Initially the Sharpcms was built by a Danish developer. When he stopped maintaining it, my friend Thomas Huber and I asked him whether we could take over the project and continue maintaining it. He said yes, and since then we have been the main contributors and coordinators of this project.

This is where my Twitter handle was born. Initially I planned to use this Twitter account to promote the sharpcms, but I used it off-topic: I promoted blog posts and community events with this account, and had some interesting discussions on Twitter. I used it too much, it got linked everywhere, and it didn't make sense to change it anymore.
Anyway, the priorities changed. The sharpcms wasn't my main hobby project anymore, but I still used this Twitter handle. It still kind of makes sense to me, because I work with CSharp and I'm kind of a CMS expert. (I developed on two different ones for years and used a lot more.)

We had huge plans for this project, but as always, plans and priorities change with new family members and new jobs. We haven't done anything on that CMS for years. Actually, I'm not sure whether this CMS is still used or not.

Anyway, this is one of the best CMS systems from my perspective: easy to set up, lightweight, fast to run, and easy to use for users without a technical background. Creating templates for this CMS needs a good knowledge of XML and XSLT, because XML is the base of this CMS and XSLT is used for the templates. This was super fast with the .NET Framework; caching wasn't really needed for the sharpcms.

Juergen.IO.Stream

In the first show on Twitch I introduced the two plans: migrating the sharpcms and starting a plain new ASP.NET Core project. It turned out that the audience wanted to see the migration project. I introduced the sharpcms, showed the original sources, and started to create .NET Standard libraries to show the difficulties.

I wasn't as pessimistic as the audience, because I still knew this CMS. There weren't too many dependencies on the classic ASP.NET and System.Web stuff. And as expected, it wasn't that hard.

The rendering of the output in the sharpcms is completely based on XML and XSLT. The sharpcms creates an XML structure that gets interpreted and rendered using XSLT templates.

XSLT is an XML-based programming language that navigates through XML data and creates any kind of output. It actually is a programming language: you are able to create decision statements, loops, functions and variables. It is limited, but just like Razor, ASP or PHP, you mix the programming language with the output you want to create, which makes it easy and intuitive to use.

This means there is no rendering logic in the C# code. All the C# code does is work on the request and create the XML data containing the data to show. At the end it transforms the XML using the XSLT templates.
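In .NET, that final step boils down to something like this (a minimal sketch; the file names are placeholders):

using System.Xml;
using System.Xml.Xsl;

// Load the XSLT template and transform the page XML into HTML
var transform = new XslCompiledTransform();
transform.Load("templates/default.xsl");

using (var writer = XmlWriter.Create("output.html"))
{
    transform.Transform("page.xml", writer);
}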

The main work I needed to do to get the Sharpcms running was to wrap the ASP.NET Core request context into a request context that looks similar to the System.Web version used inside the Sharpcms, because it heavily uses the ASP.NET WebForms Page object and its properties.

The migration strategy was to get it running, even if it's kind of hacky, and to clean it up later on. Now we are in this state: the old Sharpcms sources are working on ASP.NET Core using .NET Standard libraries.

Performance

Albert Weinert (a community guy, former MVP and a Twitch streamer as well) told me during the first stream that XSLT isn't that fast in .NET Core. Unfortunately he was right: the transformation speed and the speed of reading the XML data aren't that great.

We'll need to have a look into the performance and find a way to speed it up. Maybe we'll create an alternative view engine to replace the XML/XSLT-based one at some point. It would also be possible to have multiple view engines: Razor, Handlebars or Liquid would be options. All of these already have .NET implementations which could be used here.

Next steps

Even though the CMS is now running on ASP.NET Core, there's still a lot to do. Here are the next issues I need to work on:

Map the Middleware as a routed one, like it should work in ASP.NET Core 3.0

Join me

If you'd like to join me in the stream to work together on the Sharpcms.Core, feel free to tell me. I would be super happy to do a pair programming session to work on a specific problem. It would be great to have experts on these topics in the stream:

Razor or Handlebars to create an alternative view engine

Security and Encryption to make this CMS more secure

DevOps to create a build and release pipeline

Summary

Migrating the old Sharpcms to ASP.NET Core was fun, but it's not yet done. There is a lot more to do. I'll continue working on it on my stream, but will also do some other stuff in the streams.

If you'd like to work on the Sharpcms, to help me solve some issues or to start creating modern documentation, feel free. It would help a lot.

In the first part I described why I think continuous delivery is important for an adequate developer experience, and in the second part I drew a rough picture of how we implemented it in a product development effort with five teams. Now it is time to discuss the big impact – and the biggest benefits – regarding the development of the product itself.

Why do more and more companies, technical and non-technical people, want to change towards an agile organisation? Maybe because the decision makers have understood that waterfall is rarely purposeful? There are a lot of motives – besides the rather dumb one, “because everybody else does this” – and I think there are two intertwined reasons: the speed at which the digital world changes and the ever increasing complexity of the businesses we try to automate.

Companies and people have finally started to accept that they don’t know what their customers need. They have started to feel that the customer – also the market – has become more and more demanding regarding the quality of the solutions they get. This means that until Skynet is born (sorry, I couldn’t resist) we software developers, product owners, UX designers, etc. have to decide which solution is best to solve the problems in a specific business, and we have to decide fast.

We have to deliver fast, get feedback fast, learn and adapt the consequences even faster. We have to do all this without down times, without breaking the existing features and – for most of us very important: without getting a heart attack every time we deploy to production.

IMHO these are the most important reasons why every product development team should invest in CI/CD.

The last missing piece of the jigsaw which allows us to deliver features fast (respectively continuously), without disturbing anybody and without losing control over how and when features are released, is called feature toggle.

A feature toggle[1] (also feature switch, feature flag, feature flipper, conditional feature, etc.) is a technique in software development that attempts to provide an alternative to maintaining multiple source-code branches (known as feature branches), such that a feature can be tested even before it is completed and ready for release. Feature toggle is used to hide, enable or disable the feature during run time. For example, during the development process, a developer can enable the feature for testing and disable it for other users.[2]

Wikipedia

The concept is really simple: a feature should stay hidden until somebody or something decides that it is allowed to be used.
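A minimal sketch of that idea in C#; the IFeatureToggles service and the "NewCheckout" feature name are just assumptions for illustration, not taken from a specific library:

public interface IFeatureToggles
{
    bool IsEnabled(string featureName);
}

public class CheckoutPage
{
    private readonly IFeatureToggles _toggles;

    public CheckoutPage(IFeatureToggles toggles)
    {
        _toggles = toggles;
    }

    public string Render()
    {
        // the feature stays hidden until the toggle decides otherwise
        if (_toggles.IsEnabled("NewCheckout"))
        {
            return "page with the new checkout entry point";
        }

        return "page without the new checkout";
    }
}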

As you see, implementing feature toggles is really that simple. Adopting this concept will need some effort though:

Strive for only one toggle (one if) per feature. At the beginning it will be hard or even impossible to achieve this, but it is very important to define it as a middle-term goal. Having only one toggle per feature means the code is highly decoupled and very well structured.

Place this (main) toggle at the entry point (a button, a new form, a new API endpoint), the first interaction point with the user (person or machine); in the disabled state it should hide this entry point.

The enabled state of the toggle should lead to new services (in a microservice world), new arguments or new functions, all of them implementing the behavior for feature.enabled == true (see the sketch below). This will lead to code duplication: yes, and that is totally ok. I look at it as a very careful refactoring without changing the initial code. Implementing a new feature should not break or eliminate existing features. The tests (all kinds of them) should be organized similarly: in different files, duplicated versions, implemented for each state.

the different states of the toggle lead to clearly separated paths
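Reusing the hypothetical IFeatureToggles from the sketch above, such separated paths could look like this; all names are illustrative:

public interface ICheckoutService
{
    string Checkout();
}

// the old behavior stays untouched (the feature.enabled == false path)
public class CheckoutService : ICheckoutService
{
    public string Checkout() => "old checkout flow";
}

// the new behavior lives in its own class (the feature.enabled == true path),
// even if that duplicates some code for a while
public class NewCheckoutService : ICheckoutService
{
    public string Checkout() => "new checkout flow";
}

// the single toggle decides at the entry point which path gets wired up
public static class CheckoutServiceFactory
{
    public static ICheckoutService Resolve(IFeatureToggles toggles) =>
        toggles.IsEnabled("NewCheckout")
            ? new NewCheckoutService()
            : (ICheckoutService)new CheckoutService();
}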

Through the toggle you gain real freedom to make mistakes or even to build the wrong feature. At the same time you can always enable the feature and show it to the product owner or the stakeholders. This means the feedback loop is reduced to a minimum.

This freedom has a price of course: after the feature is implemented, the feedback is collected and the decision to enable the feature is made, the source code must be cleaned up: all code for feature.enabled == false must be removed. This is why it is so important to create cleanly separated paths, so that the risk of introducing a bug while removing them is virtually zero. We want to reduce workload, not increase it.

Toggles don’t have to be temporary; business toggles (i.e. some premium features or a “maintenance mode”) can stay forever. It is important to define beforehand what kind of toggle will be needed, because the business toggles will always be part of your source code. The default value for this kind of toggle should be false.

The default value for the temporary toggles should be true; they should be deactivated in production and activated during development.

One advice regarding the tooling: start small. A config map in Kubernetes, a database table or a JSON file somewhere will suffice. Later on new requirements will appear, like notifying the client UI when a toggle changes or allowing the product owner to decide when a feature will be enabled. That will be the moment to think about the next steps, but for now it is more important to adopt this workflow, adopt this mindset of discipline to keep the source code clean, learn the techniques for organizing the code base and ENJOY HAVING THE CONTROL over the impact of deployments, feature decisions, stress!
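Starting small could be as simple as this hypothetical JSON file, with "NewCheckout" as a temporary toggle (deactivated for production) and "MaintenanceMode" as a business toggle that stays forever:

{
  "featureToggles": {
    "NewCheckout": false,
    "MaintenanceMode": false
  }
}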

That’s it, I shared all of my thoughts regarding this subject: your journey of delivering continuously can start (or continue) now.

p.s. It is time for the one sentence about feature branches: feature toggles will never work with feature branches. Period. This means you have to decide: move to trunk-based development or forget continuous delivery.

p.p.s. For most languages there are feature toggle libraries, frameworks, even platforms; it is not necessary to write a new one. There are libraries for different complexities of how the state can be calculated (like account state, persons, roles, time settings), so just pick one.

If you have a Middleware that needs to work on a specific path, you should implement it by mapping it to a route in ASP.NET Core 3.0, instead of just checking the path names. This post doesn't handle regular Middlewares, which need to work on all requests, or on all requests inside a Map or MapWhen branch.

At the Global MVP Summit 2019 in Redmond I attended the hackathon where I worked on my GraphQL Middlewares for ASP.NET Core. I asked Glen Condron for a review of the API and the way the Middleware gets configured. He told me that we did it all right. We followed the proposed way to provide and configure an ASP.NET Core Middleware. But he also told me that there is a new way in ASP.NET Core 3.0 to use this kind of Middlewares.

Glen asked James Newton-King, who works on the new Endpoint Routing, to show me how this needs to be done in ASP.NET Core 3.0. James pointed me to the ASP.NET Core Health Checks and explained to me the new way to go.

BTW: That kinda closes the loop: four summits ago Damien Bowden and I were working on the initial drafts of the ASP.NET Core Health Checks together with Glen Condron. Awesome that this is now in production ;-)

How this should be done now

In ASP.NET Core 3.0 this kind of mapping, where you listen on a specific endpoint, should be done using the IEndpointRouteBuilder. If you create a new ASP.NET Core 3.0 web application, MVC is now added to the Startup.cs a little differently than before:
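Roughly like this, reproduced from memory from the 3.0 template (details may differ):

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // enables the new Endpoint Routing
    app.UseRouting();

    app.UseEndpoints(endpoints =>
    {
        // controller based MVC and Web API are mapped as endpoints now
        endpoints.MapControllerRoute(
            name: "default",
            pattern: "{controller=Home}/{action=Index}/{id?}");

        // the Health Checks get their own endpoint the same way
        endpoints.MapHealthChecks("/health");
    });
}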

The method MapControllerRoute() adds the controller based MVC and Web API. The new ASP.NET Core Health Checks, which also provide their own endpoint, get added the same way. This means we now have Map() methods as extension methods on the IEndpointRouteBuilder instead of Use() methods on the IApplicationBuilder. It is still possible to use the Use() methods.

Based on the current IEndpointRouteBuilder a new IApplicationBuilder is created, where we Use the GraphQL Middleware as before. We pass the ISchemaProvider and the GraphQlMiddlewareOptions as arguments to the Middleware. The result is a RequestDelegate stored in the pipeline variable.

The configured endpoint pattern and the pipeline then get mapped to the IEndpointRouteBuilder. The small extension method WithDisplayName() sets the configured display name on the endpoint.
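Put together, the mapping could look like this sketch. GraphQlMiddleware, ISchemaProvider and GraphQlMiddlewareOptions are the types from my project; the actual implementation differs in details:

public static class GraphQlEndpointRouteBuilderExtensions
{
    public static IEndpointConventionBuilder MapGraphQl(
        this IEndpointRouteBuilder endpoints,
        string pattern,
        ISchemaProvider schemaProvider,
        GraphQlMiddlewareOptions options)
    {
        // create a new IApplicationBuilder based on the route builder
        // and Use the Middleware as before; Build() returns a RequestDelegate
        var pipeline = endpoints.CreateApplicationBuilder()
            .UseMiddleware<GraphQlMiddleware>(schemaProvider, options)
            .Build();

        // map the endpoint pattern to the pipeline and set the display name
        return endpoints.Map(pattern, pipeline)
            .WithDisplayName("GraphQL");
    }
}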

I needed to copy this extension method from the ASP.NET Core repository to my code base, because the current development build of ASP.NET Core didn't contain this method two weeks ago. I need to check the latest version ASAP.

In ASP.NET Core 3.0 the GraphQL and the GraphiQL Middleware can now be added like this:
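A sketch, using the hypothetical MapGraphQl extension method from above (and an analogous MapGraphiQl) and assuming a schemaProvider instance is at hand:

app.UseEndpoints(endpoints =>
{
    endpoints.MapGraphQl("/graphql", schemaProvider, new GraphQlMiddlewareOptions());
    endpoints.MapGraphiQl("/graphiql");
});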

Conclusion

This approach feels a bit different. In my opinion it clutters the Startup.cs a little bit. Previously we added one Middleware after another, line by line, to the IApplicationBuilder. With this approach we have some Middlewares still registered on the IApplicationBuilder and some others on the IEndpointRouteBuilder, inside a lambda expression on a new IApplicationBuilder.

The other thing is that the order isn't really clear anymore. When will the Middlewares inside UseRouting() be executed, and in which direction? I will dig deeper into this in the next months.

Also this year I was invited to attend the yearly Global MVP Summit in Redmond and Bellevue. It ran from Sunday until Thursday last week. As last year, I added two days before and after the summit to get some time to explore Seattle. This is a small summary of the eight days in the Seattle area.

Just two weeks before the summit there was the so-called #snowmageddon2019 in the north west of the US: cold and a lot of snow, at least from the US perspective. But I was sure that when I arrived in Seattle it would be sunny and warm. And it was. I never had a rainy day in Seattle; in Bellevue and Redmond I did, but never in Seattle. Also last year I stayed two nights before and two nights after the summit in downtown Seattle and it was sunny then, but rainy while staying in Bellevue. Anyway, Seattle is always sunny, and the people are happy and friendly because of that.

Pre-Summit days in Seattle

As last year, I stayed the first two nights in the Green Tortoise Hostel in downtown Seattle near the Pike Place. This is a cheap hostel where you need to share the room with six to eight other people, but it is impressive anyway. The weekend when I arrived it was ComiCon in Seattle as well as Saint Patrick's Day. So the hostel was full of ComiCon attendees, people wearing green things, backpackers, and some MVPs.

In this hostel I again met the South Korean Azure MVP from last year, who gave me a sticker of his Korean Azure user group. I also met him the two nights after the summit in the same hostel, as well as during the summit.

Even if the hostel is cheap compared with the hotels in Seattle, the location is absolutely awesome. If you leave the hostel, you will stumble into the only Starbucks restaurant that serves the Pike Place Special Reserve outside the Pike Place. Leaving the restaurant, you will stumble into the public market of the Pike Place, where you can grab some pastries for breakfast. Then you leave the Pike Place to have breakfast in the sun in Victor Steinbrueck Park.

I arrived on Friday and took the Light Rail to downtown Seattle, checked in to the Green Tortoise, went for a walk through the Pike Place, and had the first awesome burger at Lowell's Restaurant while enjoying the nice view of the Puget Sound. Saturday started slowly with the breakfast described in the last paragraph. Later on I joined some MVPs for the guided Market Experience tour, where I learned a lot about the market.

Did you know that the first Starbucks isn't really the first one, but the oldest one? Did you know that you need to found your business on the Pike Place to get a spot to sell your stuff? All you want to sell on the market needs to be produced by yourself (except meat, sausage and fish, I think).

Later I joined some MVP friends for lunch and for a walk to the Space Needle. Before that we had lunch at the Pike Place Brewery, where I found sausages, sauerkraut and mashed potatoes on the menu: beer-braised sausages with fine apple sauerkraut. Seattle meets Bavaria. I needed to try it, and it was really yummy.

In the evening we had free beer at the hostel. With free beer and my laptop I started to merge almost all of the pull requests to the ASP.NET Core GraphQL Middlewares, answered almost all open issues and updated the dependencies of the project.

The Summit days in Bellevue and Redmond

The Sunday also started slowly, before I took the express bus to Redmond where the summit hotels are located. I checked in to the Marriott Bellevue, where I shared the room with the famous Alex Witkowski. This room was awesome, with a great view of the Space Needle and a super modern, stylish sliding door to the bathroom that couldn't be locked and never really closed. It felt strange while sitting on the toilet, but that must be super modern for a $599 room ;-)

Sunday is the day when most of the MVPs register for the summit at the biggest summit hotel. Some soft skill talks were held there too. The first parties organized by MVPs or tool vendors were on Saturday, so we joined them and met the first Microsofties and other famous MVPs. It got late, and the Monday got hard. Anyway, the actual summit starts on Monday with a lot of technical sessions.

From Monday to Wednesday there were a lot of interesting technical sessions. Many of them really had a lot of value. Some others didn't contain new information for me, because most of the stuff in my area is openly discussed on GitHub, but they clarified some rumors anyway.

I really got into Razor Components, which is not about Blazor as I initially thought. Scott Hanselman also did a clarification post about it. [link] Razor Components is component based development using Razor. It looks similar to React, even if it may be rendered on the server side as well as on the client side using Blazor. Awesome stuff.

Thursday also was a highlight for me. Thursday is hackathon day. I joined Jeff Fritz, who showed us his mobile streaming setup. I got a chance to talk to Jeff and to other Twitch streamers, like Emanuele Bartolesi. Besides that, I worked on the ASP.NET Core GraphQL Middlewares and had a chance to get a review by Glen Condron. He also told me that the way a Middleware is created changed in 3.0 for Middlewares that handle a specific path. I'll write about it in one of the next posts. Glen and James Newton-King, who works on the new ASP.NET Core routing, supported me in getting it running on ASP.NET Core 3.0.

Post-Summit days in Seattle

On Thursday after the hackathon I moved back to Seattle into the Green Tortoise and again met the South Korean Azure MVP at the check-in. I used the night to work on the ASP.NET Core GraphQL Middleware to finish the GraphQL Middleware registration using the route mapping.

Friday was shopping day. My wife always needs some pants from her favorite store in Seattle, and I needed to buy some souvenirs for the kids (usually some t-shirts). After this was done I decided to explore the International District and Chinatown, where I also had a quick lunch in one of the Asian restaurants. Chinatown was less colorful than expected, but nice anyway. An awesome detail: you know you are in Chinatown if the street names are printed in two languages.

I left Chinatown and unexpectedly stumbled into the old part of Seattle. The Pioneer Square was surprisingly nice: old houses, small shops and pubs. One of the pubs sells a German stout beer, "Köstritzer", as well as "Biers" and "Brats".

I also found the "Berliner" döner and kebab restaurant, which is (as far as I know) the very first and the only real döner restaurant in the US:

In the evening I decided to go to the Hard Rock Cafe across the street to have dinner. I was there for the first time, and I don't get why this is such a popular place: pretty loud, uncomfortable, and the food is good but not really special. Anyway, I continued to get the GraphiQL Middleware (the GraphQL UI) running using the new route mapping and cleaned up all the changes. Free beer at the Green Tortoise and coding match pretty well.

Saturday was the day to fly back home. The morning started with the annual JustCommunity summit at Lowell's Restaurant in the public market area of the Pike Place. Kostia and I had breakfast and talked about the plans of INETA Germany and JustCommunity. Our goal: to have a strategy for JustCommunity by the end of the year. We also need to align the INETA tasks with the community support of the .NET Foundation.

Leaving Seattle

This was the fifth time in Seattle, which is one of the most impressive cities I know: pretty diverse, fascinating and pretty different from any other city in the US I've been to (not that many, unfortunately).

Leaving Seattle is a little bit like leaving home. In the last years I didn't know why. Now I'm pretty sure it is because I always meet friends, community members and many other nice people at the summit. The summit is a little bit like an annual family meetup.

But one week without the family is hard as well and it is time to go home to my lovely wife and the three boys :-)

The people who know me also know that I'm a huge fan of consoles and CLIs. I use the dotnet CLI as well as the Angular CLI and the create-react-app CLI. Yeoman is also a tool I like. I own a Mac but cannot really work with the Mac UI; I really prefer the terminal on the Mac. Git is also used in the console most of the time. The only situation where I don't use Git in the console is while resolving merge conflicts; I configured KDiff3 as the merge tool. I don't really need a graphical user interface for all the other Git tasks.

The same goes for working with the Git Flow process.

About Git Flow

In general, Git Flow is a branching concept on top of Git. It is pretty clear and intuitive, but following this concept manually in Git is a bit hard and needs some time. Git Flow is now implemented in many graphical user interfaces like SourceTree, which reduces the overhead.

Git Flow is mainly about merging and branching. It defines two main branches: "master" as the production/release branch and "develop" as the working branch. The actual work is done in different types of supporting branches:

"feature" a branch created based on "develop" to implement new featues

Git Flow is also a tool provided as a Git extension. It reduces branching, merging, releasing and tagging to just one single command and does all the needed tasks in the background for you. This CLI makes it super easy to follow Git Flow.

Install Git Flow as Git Extension

The installation is a bit annoying, because it needs some additional tools and some more steps for just a small Git extension.

To install it you need Cygwin, which is a console that gives you Linux-like tools on Windows. The easiest way to install Cygwin is to use Chocolatey, which is a package manager for Windows (like apt-get for Windows). You can also install it manually by running the installer, but then you need to ensure you also install cyg-get, wget and util-linux, which is much easier using Chocolatey.
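A sketch of the Chocolatey based installation; the exact package names and the installer invocation may differ slightly:

choco install cygwin
choco install cyg-get
cyg-get wget util-linux
# then, inside the cygwin bash, run the git flow installer script
# (see the gitflow-avh repository for the exact wget command)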

Once this is done, exit the bash by typing exit and close the console the same way. Closing the consoles and opening them again ensures that all the needed environment variables are available.

Open a new console and type git flow. You should now see the Git Flow CLI help like this:

Every time you check out or create a new repository, you need to run git flow init to enable Git Flow.

Using this command you set up Git Flow on an existing repository by configuring the different branch prefixes and specifying the two main branches. I would propose to choose the default prefixes and names:
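Accepting the defaults, the init dialog looks roughly like this (reproduced from memory):

$ git flow init
Branch name for production releases: [master]
Branch name for "next release" development: [develop]
How to name your supporting branch prefixes?
Feature branches? [feature/]
Release branches? [release/]
Hotfix branches? [hotfix/]
Support branches? [support/]
Version tag prefix? []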

Working with Git Flow

Using Git Flow is pretty easy with this CLI. Let's assume we need to start working on a feature called "Implement validation". We could now write a command like this:

git flow feature start implement-validation

This will work as expected:

Since most of us are using a planning tool like Jira or TFS, it would make more sense to use the ticket number as the feature name here. In case you use TFS, I would propose to add the work item type to the number:

Jira: PROJ-101

TFS: Task-34212
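Starting a feature branch for a Jira ticket would then look like this (the ticket number is just an example):

git flow feature start PROJ-101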

This helps to keep the branch names clean, and you don't start messing around with long or wrong branch names. Git Flow usually deletes the feature branch after merging it back, so the list of branches will never get too long. Anyway, I learned in the past few years that it is much easier to follow ticket numbers than weirdly named branches, because we talk about the current tickets every day in the daily scrum meeting.

All the commands that are not related to branches can be done using the regular Git CLI. That means commands to commit, to push and so on.

Git Flow will merge the branches when you finish them. It doesn't work with rebase or other approaches, which means it'll take over the entire history of the feature branch. Because of this I would also propose to add the ticket number to the commit messages, like this: "PROJ-101: adds validation to the form". This makes it easy to follow the history in case it is needed.

To finish a feature you should first merge the latest changes of the develop branch in:

git fetch --all
# merge the fetched state of develop, not a possibly stale local branch
git merge origin/develop
git flow feature finish

If you don't add the feature name to the git flow feature finish command, Git Flow will try to close the current feature branch and will write out a message in case the current branch is not a feature branch.

I would propose to always merge the latest changes of develop into the current feature branch, to solve possible conflicts within the feature branch instead of in the develop branch. This way the merge to develop will almost never have a conflict.

I showed how to work with Git Flow using a feature branch, but it works the same way with the other branch types, except for the release and the hotfix branches, where you set the tag name instead of a feature name. This should be the version number of the release or the hotfix.
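A release, for example, would look like this (the version number is just an example):

git flow release start 1.2.0
git flow release finish 1.2.0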

While finishing these two branch types, Git Flow will ask you for a tag message. After finishing, you need to push both the master and the develop branch, as well as the tags:
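For example like this, assuming "origin" as the remote:

git push origin master develop
git push origin --tags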

Conclusion

I really love the CLI help of this tool. It is not only descriptive but also explanatory, the same way the Git CLI explains things. It also provides proposals in case a command is misspelled.

Git Flow helps me to speed up the branching and merging flows and to follow the Git Flow process. I proposed using Git Flow in the company and it works pretty well there. And I learned a lot about how this process works in production.

As written some time in the past, it also helps me to write my blog. I really use Git Flow to organize the posts I'm working on. I create a feature per post and a hotfix in case I need to fix a post or something else on the blog. I use SemVer to version my releases and hotfixes: every post increases the feature number and every hotfix increases the patch number. The feature number is also the number of posts in my blog, and the number of open features is the number of posts I'm working on. This way I can work on many posts separately and release them separately.

Scenario

We have a pretty simple scenario: a table with a simple Id + ParentId schema and some demo data in it. I have seen this design quite a lot in the past, and in the relational database world this is the obvious choice.
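A minimal sketch of such a schema with some demo data; the table and column names are my assumptions:

CREATE TABLE dbo.Tree
(
    [Id] INT NOT NULL PRIMARY KEY,
    [ParentId] INT NULL,     -- NULL marks a root entry
    [Name] NVARCHAR(100) NOT NULL
);

INSERT INTO dbo.Tree ([Id], [ParentId], [Name])
VALUES (1, NULL, N'root'),
       (2, 1, N'child A'),
       (3, 1, N'child B'),
       (7, 3, N'grandchild of B');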

Problem

Each data entry is really simple to load or manipulate: just load the target element and change the ParentId for a move action, etc. A more complex problem is how to load a whole “data tree”.
Let’s say I want to load all children or parents of a given Id. You could load everything, but if your dataset is large enough, this operation will perform poorly and might kill your database.

Another naive way would be to query this with code from a client application, but if your “tree” is big enough, it will consume lots of resources, because for each “level” you open a new connection, etc.

Recursive Common Table Expressions!

Our goal is to load the data in one go, as efficiently as possible - without using Stored Procedures(!). In the Microsoft SQL Server world we have this handy feature called “common table expressions (CTE)”.
A common table expression can be seen as a function inside a SQL statement. This function can invoke itself, and then we call it a “recursive common table expression”.

The syntax itself is a bit odd, but works well and you can enhance it with JOINs from other tables.
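Reconstructed from the description that follows, the statement could look like this sketch (using the assumed dbo.Tree table from above):

WITH RCTE AS
(
    -- anchor: load everything from the target element, "Lvl" starts at 1
    SELECT anchor.[Id], anchor.[ParentId], anchor.[Name], 1 AS [Lvl]
    FROM dbo.Tree anchor
    WHERE anchor.[Id] = 7

    UNION ALL

    -- recursion: join the parent of the current row and increase "Lvl"
    SELECT parent.[Id], parent.[ParentId], parent.[Name], RCTE.[Lvl] + 1
    FROM dbo.Tree parent
    INNER JOIN RCTE ON parent.[Id] = RCTE.[ParentId]
)
SELECT * FROM RCTE;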

The anchor.[Id] = 7 is our starting point and should be given as a SQL parameter. The WITH statement starts our function description, which we called “RCTE”.
In the first SELECT we just load everything from the target element.
Note that we add a “Lvl” property, which starts at 1.
The UNION ALL is needed (at least we were not 100% sure if there are other options).
In the next lines we do a join based on the Id = ParentId schema and increase the “Lvl” property for each level.
The last line inside the common table expression uses the “recursive” feature.

Now we are done and can use the CTE like a normal table in our final statement.

Result:

We now only load the “path” from the child entry up to the root entry.

If you ask why we introduced the “Lvl” column: with this column it is really easy to see each “step”, and it might come in handy in your client application.

Scenario B: From parent to all descendants

With a small change we can go the other way around: loading all descendants of a given Id.

The logic itself is more or less identical; we changed only the INNER JOIN RCTE ON …
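With the names from the sketch above, the changed recursive part would presumably be:

    -- recursion: now join the children of the current row instead of the parent
    SELECT child.[Id], child.[ParentId], child.[Name], RCTE.[Lvl] + 1
    FROM dbo.Tree child
    INNER JOIN RCTE ON child.[ParentId] = RCTE.[Id]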

In this example we only load all children of a given Id. If you point this to the “root”, you will get everything except the “alternative root” entry.

Conclusion

Working with trees in a relational database might not “feel” as good as in a document database, but that doesn’t mean that such scenarios need to perform badly. We use this code at work for some bigger datasets and it works really well for us.

After describing the context a little bit in part one, it is time to look at the single steps the source code must pass in order to be delivered to the customers. (I’m sorry, but it is quite a long part.)

The very first step starts with pushing all the current commits to master (if you work with feature branches you will probably encounter a new level of self-made complexity, which I don’t intend to discuss here).

I think, if you agree on having CD this way (commit -> … -> production), then you have implicitly enforced trunk-based development.

“This scenario triggered a totally new view on what we could achieve – good and bad – and made the responsibility on our shoulders palpable.” — Krisztina Hirth (@YellowBrickC) March 11, 2019

This action triggers the first checks and quality gates, like licence validation and unit tests. If all checks are “green”, the new version of the software will be saved to the repository manager and tagged as “latest”.

Successful push leads to a new version of my service/pkg/docker image

At this moment the continuous integration is done, but the features are far from being used by any customer. I have a first feedback that I didn’t break any tests or other basic constraints, but that’s all, because nobody can use the features; they are not deployed anywhere yet.

Well, let Jenkins execute the next step: deployment to the Kubernetes environment called integration (a.k.a. development).

Continuous delivery to the first environment including the execution of first acceptance tests

At this moment all my changes are tested to see whether they work together with the currently integrated features developed by my colleagues, and whether the new features are evolving in the right direction (or are done and ready for acceptance).

This is not bad, but what if I want to be sure that I didn’t break the “platform”? What if I don’t want to disturb everybody else working on the same product because I made some mistakes – but I still want to be a human, ergo be able to make mistakes? This means that the behavioral and structural changes introduced by my commits should be tested before they land on integration.

These obviously must be a different set of tests. They should test whether the whole system (composed of a few microservices, each having its own data persistence, and one or more UI apps) is working as expected, is resilient, is secure, etc.

At this point the power of Kubernetes (k8s) and ksonnet came as a huge help. Having k8s in place (and having the infrastructure as code), it is almost a no-brainer to set up a new environment to wire up the single systems in isolation and execute the system tests against it. This needs not only the k8s part as code, but also the resources deployed and running on it. With ksonnet every service, deployment, ingress configuration (which manages external access to the services in a cluster) or config map can be defined and configured as code. ksonnet not only supports deploying to different environments but also offers the possibility to compare them. There are a lot of tools offering these possibilities, not only ksonnet. It is important to choose a fitting tool, and it is even more important to invest the time and effort to configure everything as code. This is a must-have in order to achieve real automation and continuous deployment!

I will not include any ksonnet examples here; they have a great documentation. What is important to realize is the opportunity offered by such an approach: if everything is code, then every change can be checked in. Everything checked in can be observed/monitored, can trigger pipelines and/or events, can be reverted, can be commented on – and, the feature that helped us in our solution, can be tagged.

What happens in a continuous delivery? Some change in the VCS triggers a pipeline, the fitting version of the source code is loaded (either as source code, like ksonnet files, or as a package or docker image), the configured quality gate checks are verified (the runtime environment is wired up, the specs with the referenced version are executed) and in case of success the artifact is tagged as “thumbs up” and promoted to the next environment. We started doing this manually to gather enough experience to automate the process.

Deploy manually the latest resources from integration to the review stage

If you have all this working, you have finished the part with the biggest effort. Now it is time to automate and generalize the single steps. After the continuous integration the only changes will occur in the ksonnet repo (all other source code changes are done before), which here is called the deployment repo.

Roll out, test and eventually roll back the system ready for review

I think this post is already too long. In the next part (I think it will be the last one) I would like to write about the last essential method, how to deploy to production without annoying anybody (no secret here, this is what feature toggles were invented for), and about some open questions and decisions we encountered on our journey.

Last year my colleagues and I had the pleasure of spending two days with @hamvocke and @diegopeleteiro from @thoughtworks, reviewing the platform we created. One essential part of our discussions was about CI/CD, described like this: “Think about continuous delivery as a journey. Imagine every git push lands on production. This is your target; this is what your CD should enable.”

Even if (or maybe because) this thought scared the hell out of us, it became our vision for the next few months, because we saw the great opportunities we would gain if we were able to work this way.

Boundaries (as in Domain Driven Design) defined based on the business we were in.

Each team having full ownership and full accountability for their part of the business (represented by the SCS).

Basic heuristics regarding source code organisation: “share nothing” about business logic; “share everything” about utility functions (in OSS manner), about the experiences you made, the lessons you learned, and the errors you made.

Ensuring the code quality and the software quality is 100% team responsibility.

You build it, you run it.

One platform-as-a-service team to enable these business teams to deliver features fast.

The architecture we chose was meant to support our organisation: independent teams able to work and deliver features fast and independently. They should decide themselves when and what they deploy. In order to achieve this we defined a few rules regarding inter-system communication. The most important ones are:

Even though we set some guards regarding the overall architecture, the teams still had the ownership of the internal architecture decisions. As we didn’t have continuous delivery in place at the beginning, every team alone was responsible for deploying its systems. Due to the missing automation we were not only predestined to make human errors, but we were also blind to the couplings between our services. (And of course we spent a lot of time doing stuff manually instead of letting Jenkins or GitLab or some other tool do this stuff for us.)

One example: every one of our systems had at least one React app and a GraphQL API as the main communication (read/write/subscribe) channel. One of the best things about GraphQL is the possibility to include the GraphQL schema in the React app, this way having the API interface definition included in the client application.

Is this not cool? It can be. Or it can lead to some very smelly behavior, to really tight coupling, and to the inability to deploy the app and the API independently. And just like my friend @etiennedi says: “If two services cannot be deployed independently, they aren’t two services!”

This was the first lesson we learned on this journey: if you don’t have a CD pipeline, you will most probably hide the flaws of your design.

One can surely ask “what is the problem with manual deployment?” – nothing, if you only have a few services to handle, if everyone in your team knows about these couplings and dependencies and is able to execute the very precise deployment steps to minimize the downtime. But otherwise? This method doesn’t scale, this method is not very professional – and the biggest problem: this method ignores the possibilities offered by Kubernetes to safely roll out, take down, or scale everything you have built.

Having an automated, standardized CD pipeline as described at the beginning – with the goal that every commit will land on production in a few seconds – forces everyone to think about the consequences of his or her commits, to write backwards compatible code, and to become a more considerate developer.