DotNetGerman Bloggers: All blogs from DotNetGerman.com (Copyright 2004-2014 DotNetGerman.com)
<h1>New in ASP.NET Core 3.0 - Blazor Server Side (Jürgen Gutsch)</h1>
<p>https://asp.net-hacker.rocks/2019/09/10/aspnetcore30-blazor-serverside.html (Tue, 10 Sep 2019)</p>
<p>To have a look into the generic hosting models, we should also look into the different application models we have in ASP.NET Core. In this and the next post I'm going to write about Blazor, which is a new member of the ASP.NET Core family. To be more precise, Blazor is actually two members of the ASP.NET Core family: on the one hand we have Blazor Server Side, which actually is ASP.NET Core running on the server, and on the other hand we have Blazor Client Side, which looks like ASP.NET Core but runs in the browser on a WebAssembly. Both frameworks share the same view framework: Razor Components. Both frameworks may share the same view logic and business logic. Both frameworks are single page application (SPA) frameworks, so there is no visible page reload from the server while browsing the application. And both frameworks look pretty similar, starting from the <code>Program.cs</code>.</p>
<p>Under the hood, both frameworks are hosted completely differently. Blazor Client Side runs completely on the client; no web server is needed. Blazor Server Side, on the other hand, runs on a web server and uses WebSockets and a generic JavaScript client to simulate the same SPA behavior as Blazor Client Side.</p>
<h2>Hosting and Startup</h2>
<p>Within this post I'm trying to compare Blazor Server Side to the already known ASP.NET Core frameworks like MVC and Web API.</p>
<p>First let's create a new Blazor Server Side project using the .NET Core 3 Preview 7 SDK:</p>
<pre><code class="language-shell">dotnet new blazorserverside -n BlazorServerSideDemo -o BlazorServerSideDemo
cd BlazorServerSideDemo
code .
</code></pre>
<p>The second and third lines change the current directory to the project directory and open it in Visual Studio Code, if it is installed.</p>
<p>The first thing I usually do is to have a short glimpse into the <code>Program.cs</code>, but in this case the class looks exactly the same as in the other project types. There is absolutely no difference:</p>
<pre><code class="language-csharp">public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =&gt;
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =&gt;
            {
                webBuilder.UseStartup&lt;Startup&gt;();
            });
}
</code></pre>
<p>At first a default <code>IHostBuilder</code> is created, and on top of it an <code>IWebHostBuilder</code> is created to spin up a Kestrel web server and to host a default ASP.NET Core application. Nothing spectacular here.</p>
<p>The <code>Startup.cs</code> is a little more special.</p>
<p>Actually it looks like a common ASP.NET Core <code>Startup</code> class, except that different services are registered and different middlewares are used:</p>
<pre><code class="language-csharp">public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddRazorPages();
        services.AddServerSideBlazor();
        services.AddSingleton&lt;WeatherForecastService&gt;();
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
        else
        {
            app.UseExceptionHandler(&quot;/Home/Error&quot;);
            app.UseHsts();
        }

        app.UseHttpsRedirection();
        app.UseStaticFiles();
        app.UseRouting();

        app.UseEndpoints(endpoints =&gt;
        {
            endpoints.MapBlazorHub();
            endpoints.MapFallbackToPage(&quot;/_Host&quot;);
        });
    }
}
</code></pre>
<p>In the <code>ConfigureServices</code> method, Razor Pages is added to the IoC container. Razor Pages is used to provide the page that hosts the Blazor application; in this case it is the <code>_Host.cshtml</code> in the Pages directory. Every single page application (SPA) has at least one almost static page which hosts the actual application running in the browser. React, Vue, Angular and so on have the same thing: an index.html that loads all the JavaScript and hosts the JavaScript application. In the case of Blazor, there is also a generic JavaScript file running on the hosting page. This JavaScript connects to a SignalR WebSocket running on the server side.</p>
<p>In addition to Razor Pages, the services needed for Blazor Server Side are added to the IoC container. These services are needed by the Blazor Hub, which actually is the SignalR hub that provides the WebSocket endpoint.</p>
<p>The <code>Configure</code> method also looks similar to the other ASP.NET Core frameworks. The only differences are in the last lines, where the Blazor Hub and the fallback page get added. This fallback page actually is the hosting Razor Page mentioned before. Since the SPA supports deep links and creates URLs for the different views on the client, the application needs to route to a fallback page in case the user directly navigates to a client-side route that does not exist on the server. The server will then just provide the hosting page, and the client will load the right views depending on the URL in the browser afterwards.</p>
<h2>Blazor</h2>
<p>The key feature of Blazor is the Razor-based components, which get interpreted by a runtime that understands C# and Razor and are rendered on the client. With Blazor Client Side it is the Mono runtime running inside the WebAssembly, and in the Server Side version it is the .NET Core runtime running on the server. That means the Razor components get interpreted and rendered on the server. After that, they get pushed to the client using SignalR and placed in the right spot inside the hosting page by the generic JavaScript that is connected to the SignalR hub.</p>
<p>So we have a server side rendered single page application, without any visible roundtrip to the server.</p>
<p>The Razor components are also placed in the Pages folder, but have the file extension <code>.razor</code>, except the <code>App.razor</code>, which is directly in the project directory. Those are the actual view components, which contain the logic of the application.</p>
<p>If you have a more detailed look into the components, you'll see some similarities to React or Angular, in case you know those frameworks. I mentioned the <code>App.razor</code>, which is the root component. Angular and React also have this kind of root component. Inside the Shared directory there is a <code>MainLayout.razor</code>, which is the layout component. (This kind of component is also available in React and Angular.) All the other components in the Pages directory use this layout implicitly, because it is set as the default layout in the <code>_Imports.razor</code>. Those components also define a route that is used to navigate to them. Reusable components without a specific route are placed inside the Shared directory.</p>
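<p>A routable Razor component looks roughly like the following sketch (based on the Counter component from the project template; the exact markup may differ between preview versions):</p>
<pre><code class="language-csharp">@page &quot;/counter&quot;

&lt;h1&gt;Counter&lt;/h1&gt;

&lt;p&gt;Current count: @currentCount&lt;/p&gt;

&lt;button class=&quot;btn btn-primary&quot; @onclick=&quot;IncrementCount&quot;&gt;Click me&lt;/button&gt;

@code {
    private int currentCount = 0;

    // The handler runs on the server; the resulting UI update is
    // pushed to the browser over the SignalR connection.
    private void IncrementCount()
    {
        currentCount++;
    }
}
</code></pre>
<p>The <code>@page</code> directive is what makes a component navigable; components without it can only be used from other components.</p>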
<h2>Conclusion</h2>
<p>This is just a small introduction and overview of Blazor Server Side, because I only want to quickly show the new ASP.NET Core 3.0 frameworks to create web applications. This is the last kind of normal server application I want to show. In the next part, I'm going to show Blazor Client Side, which uses a completely different hosting model.</p>
<p>Blazor Server Side, by the way, is the new replacement for ASP.NET WebForms to create stateful web applications using C#. WebForms won't be migrated to ASP.NET Core. It will be supported in the same way as the full .NET Framework will be supported in the future, which means there will be no new versions and no new features. With this in mind, it absolutely makes sense to have a more detailed look into Blazor Server Side.</p>
<h1>Word automation in a scheduled task on Windows Server (Holger Schwichtenberg)</h1>
<p>http://heise.de/-4509474 (Mon, 09 Sep 2019)</p>
<p>How to solve the problems when starting Word automation objects in a background process.</p>
<h1>A plea for open and tolerant communication in IT company culture (Golo Roden)</h1>
<p>http://heise.de/-4493541 (Tue, 03 Sep 2019)</p>
<p>IT professionals are often considered technically competent but socially incompetent. This prejudice can be remedied with the right communication culture.</p>
<h1>Check installed version for ASP.NET Core on Windows IIS with Powershell (Code-Inside Blog)</h1>
<p>https://blog.codeinside.eu/2019/08/31/check-installed-version-for-aspnetcore-on-windows-iis-with-powershell (Sat, 31 Aug 2019)</p>
<h1 id="the-problem">The problem</h1>
<p>Let’s say you have an ASP.NET Core application <strong>without</strong> the bundled ASP.NET Core runtime (e.g. to keep the download as small as possible) and you want to run your ASP.NET Core application on a Windows Server hosted by IIS.</p>
<h1 id="general-approach">General approach</h1>
<p>The general approach is the following: Install the <a href="https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/iis/index?view=aspnetcore-2.2#install-the-net-core-windows-server-hosting-bundle">.NET Core hosting bundle</a> and you are done.</p>
<p>Each .NET Core Runtime (and there are quite a bunch of <a href="https://dotnet.microsoft.com/download/dotnet-core/2.2">them</a>) is backward compatible (at least the 2.X runtimes), so if you have installed 2.2.6, your app (created while using the .NET runtime 2.2.1) still runs.</p>
<h1 id="why-check-the-minimum-version">Why check the minimum version?</h1>
<p>Well… in theory the app itself (at least for .NET Core 2.X applications) may run under any of these runtime versions, but each version might fix something, and to keep things safe it is a good idea to enforce security updates.</p>
<h1 id="check-for-minimum-requirement">Check for minimum requirement</h1>
<p>I stumbled upon this <a href="https://stackoverflow.com/questions/38567796/how-to-determine-if-asp-net-core-has-been-installed-on-a-windows-server">Stackoverflow question/answer</a> and enhanced the script, because that version only tells you “ASP.NET Core seems to be installed”. My enhanced version searches for a minimum required version and, if it is not installed, exits the script.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$DotNetCoreMinimumRuntimeVersion = [System.Version]::Parse("2.2.5.0")
$DotNETCoreUpdatesPath = "Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Updates\.NET Core"
$DotNetCoreItems = Get-Item -ErrorAction Stop -Path $DotNETCoreUpdatesPath
$MinimumDotNetCoreRuntimeInstalled = $False

$DotNetCoreItems.GetSubKeyNames() | Where { $_ -Match "Microsoft .NET Core.*Windows Server Hosting" } | ForEach-Object {
    $registryKeyPath = Get-Item -Path "$DotNETCoreUpdatesPath\$_"
    $dotNetCoreRuntimeVersion = $registryKeyPath.GetValue("PackageVersion")
    $dotNetCoreRuntimeVersionCompare = [System.Version]::Parse($dotNetCoreRuntimeVersion)

    if ($dotNetCoreRuntimeVersionCompare -ge $DotNetCoreMinimumRuntimeVersion) {
        Write-Host "The host has installed the following .NET Core Runtime: $_ (MinimumVersion requirement: $DotNetCoreMinimumRuntimeVersion)"
        $MinimumDotNetCoreRuntimeInstalled = $True
    }
}

if ($MinimumDotNetCoreRuntimeInstalled -eq $False) {
    Write-Host ".NET Core Runtime (MinimumVersion $DotNetCoreMinimumRuntimeVersion) is required." -ForegroundColor Red
    exit
}
</code></pre></div></div>
<p>The “most” interesting part is the first line, where we set the minimum required version.</p>
<p>If you have installed a version of the .NET Core runtime on Windows, this information will end up in the registry like this:</p>
<p><img src="https://blog.codeinside.eu/assets/md-images/2019-08-31/registry.png" alt="x" title="Registry view" /></p>
<p>Now we just need to compare the installed version with the required version, and we know if we are good to go.</p>
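<p>For illustration, the same comparison logic expressed as a standalone C# sketch (<code>System.Version</code> in C# behaves like <code>[System.Version]::Parse</code> combined with <code>-ge</code> in the PowerShell script above):</p>
<pre><code class="language-csharp">using System;

class VersionCheckDemo
{
    static void Main()
    {
        // Parse supports two- to four-part version strings.
        Version minimum = Version.Parse("2.2.5.0");
        Version installed = Version.Parse("2.2.6.0");

        // -ge in PowerShell corresponds to >= here.
        Console.WriteLine(installed >= minimum); // True
    }
}
</code></pre>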
<p>Hope this helps!</p>
<h1>Assembly metadata (AssemblyInfo.cs) in .NET Core (Holger Schwichtenberg)</h1>
<p>http://heise.de/-4509446 (Thu, 29 Aug 2019)</p>
<p>In .NET Core projects the metadata is stored in the project file by default. An AssemblyInfo.cs as in classic .NET is still possible, though.</p>
<h1>New series: Götz &amp; Golo (Golo Roden)</h1>
<p>http://heise.de/-4496312 (Mon, 19 Aug 2019)</p>
<p>On 3 September 2019 the time will come: the new series "Götz &amp; Golo" starts on this blog. A short outlook on what this series will be about and what the concept behind it will be.</p>
<h1>ASP.NET Core 3.0: Endpoint Routing (Jürgen Gutsch)</h1>
<p>https://asp.net-hacker.rocks/2019/08/16/aspnetcore30-endpoint-routing.html (Fri, 16 Aug 2019)</p>
<p>The last two posts were just a quick look into the <code>Program.cs</code> and the <code>Startup.cs</code>. This time I want to have a little deeper look into the new endpoint routing.</p>
<p><strong>Wait!</strong></p>
<p>Sometimes I have an idea about a specific topic and start writing. While writing, I remember that I maybe already wrote about it. Then I take a look into the blog archive, and there it is:</p>
<p><a href="https://asp.net-hacker.rocks/2019/04/10/routed-middlewares.html">Implement Middlewares using Endpoint Routing in ASP.NET Core 3.0</a></p>
<p>Maybe I get old now... ;-)</p>
<p>This is why I just link to the already existing post.</p>
<p>Anyways. The next two posts are a quick glimpse into Blazor Server Side and Blazor Client Side.</p>
<p><strong>Why?</strong> Because I also want to focus on the different hosting models, and Blazor Client Side uses a different one.</p>
<h1>New in ASP.NET Core 3.0 - Taking a quick look into the Startup.cs (Jürgen Gutsch)</h1>
<p>https://asp.net-hacker.rocks/2019/08/12/aspnetcore30-look-into-startup.html (Mon, 12 Aug 2019)</p>
<p>In the <a href="https://asp.net-hacker.rocks/2019/08/05/aspnetcore30-generic-hosting-environment.html">last post</a>, I took a quick look into the <code>Program.cs</code> of ASP.NET Core 3.0 and quickly explored the Generic Hosting Model. But the <code>Startup</code> class also has something new in it. We will see some small but important changes.</p>
<blockquote>
<p>Just one thing I forgot to mention in the last post: ASP.NET Core 2.1 code of the <code>Program.cs</code> and the <code>Startup.cs</code> should just work in ASP.NET Core 3.0 if there is little or no customizing. The <code>IWebHostBuilder</code> is still there and can be used the 2.1 way, and the default 2.1 <code>Startup.cs</code> should also run in ASP.NET Core 3.0. It may be that you only need to do some small changes there.</p>
</blockquote>
<p>The next snippet is the <code>Startup</code> class of a newly created empty web project:</p>
<pre><code class="language-csharp">public class Startup
{
    // This method gets called by the runtime. Use this method to add services to the container.
    // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
    public void ConfigureServices(IServiceCollection services)
    {
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseRouting();

        app.UseEndpoints(endpoints =&gt;
        {
            endpoints.MapGet(&quot;/&quot;, async context =&gt;
            {
                await context.Response.WriteAsync(&quot;Hello World!&quot;);
            });
        });
    }
}
</code></pre>
<p>The empty web project is an ASP.NET Core project without any ASP.NET Core UI feature. This is why the <code>ConfigureServices</code> method is empty; there is no additional service added to the dependency injection container.</p>
<p>The new stuff is in the <code>Configure</code> method. The first lines look familiar: depending on the hosting environment, the developer exception page will be shown.</p>
<p><code>app.UseRouting()</code> is new. This is a middleware that enables the new endpoint routing. The new thing is that routing is decoupled from the specific ASP.NET Core feature. In previous versions, every feature (MVC, Razor Pages, SignalR, etc.) had its own endpoint implementation. Now the endpoint and routing configuration can be done independently. The middlewares that need to handle a specific endpoint will now be mapped to a specific endpoint or route, so the middlewares don't need to handle the routes anymore.</p>
<p>If you wrote a middleware in the past that needed to work on a specific endpoint, you added the logic to check the endpoint inside the middleware, or you used the <code>MapWhen()</code> extension method on the <code>IApplicationBuilder</code> to add the middleware to a specific endpoint.</p>
<p>Now you create a new pipeline (using an <code>IApplicationBuilder</code>) per endpoint and map the middleware to the specific new pipeline.</p>
<p>The <code>MapGet()</code> method above does this implicitly. It creates a new endpoint &quot;/&quot; and maps the delegate middleware to the new pipeline that was created internally.</p>
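<p>For illustration, mapping a classic middleware to its own endpoint pipeline explicitly might look like the following sketch (<code>MyMiddleware</code> is a made-up name; <code>CreateApplicationBuilder()</code> and <code>Map()</code> are the endpoint routing APIs used for this):</p>
<pre><code class="language-csharp">app.UseEndpoints(endpoints =&gt;
{
    // Build a dedicated pipeline for one route instead of checking
    // the request path inside the middleware itself.
    var pipeline = endpoints.CreateApplicationBuilder()
        .UseMiddleware&lt;MyMiddleware&gt;()
        .Build();

    endpoints.Map(&quot;/my-endpoint&quot;, pipeline);
});
</code></pre>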
<p>That was a simple snippet. Now let's have a look into the <code>Startup.cs</code> of a new full blown web application using individual authentication. Created by using this .NET CLI command:</p>
<pre><code class="language-shell">dotnet new mvc --auth Individual
</code></pre>
<p>Overall this also looks pretty familiar if you already know the previous versions:</p>
<pre><code class="language-csharp">public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddDbContext&lt;ApplicationDbContext&gt;(options =&gt;
            options.UseSqlite(
                Configuration.GetConnectionString(&quot;DefaultConnection&quot;)));

        services.AddDefaultIdentity&lt;IdentityUser&gt;(options =&gt; options.SignIn.RequireConfirmedAccount = true)
            .AddEntityFrameworkStores&lt;ApplicationDbContext&gt;();

        services.AddControllersWithViews();
        services.AddRazorPages();
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
            app.UseDatabaseErrorPage();
        }
        else
        {
            app.UseExceptionHandler(&quot;/Home/Error&quot;);
            // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
            app.UseHsts();
        }

        app.UseHttpsRedirection();
        app.UseStaticFiles();

        app.UseRouting();

        app.UseAuthentication();
        app.UseAuthorization();

        app.UseEndpoints(endpoints =&gt;
        {
            endpoints.MapControllerRoute(
                name: &quot;default&quot;,
                pattern: &quot;{controller=Home}/{action=Index}/{id?}&quot;);
            endpoints.MapRazorPages();
        });
    }
}
</code></pre>
<p>This is an MVC application, but did you see the lines where MVC is added? I'm sure you did. It is no longer called MVC, even if it is the MVC pattern that is used, because the old naming was a little bit confusing in combination with Web API.</p>
<p>To add MVC you now need to add <code>AddControllersWithViews()</code>. If you want to add Web API only, you just need to add <code>AddControllers()</code>. I think this is a small but useful change. This way you can be more specific when adding ASP.NET Core features. In this case, Razor Pages were also added to the project. It is absolutely no problem to mix ASP.NET Core features.</p>
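<p>A quick sketch of the registration variants side by side (in a real project you would combine only what the application actually serves):</p>
<pre><code class="language-csharp">public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();          // Web API controllers only
    services.AddControllersWithViews(); // MVC controllers plus Razor views
    services.AddRazorPages();           // Razor Pages
}
</code></pre>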
<blockquote>
<p><code>AddMvc()</code> still exists and is still working in ASP.NET Core</p>
</blockquote>
<p>The <code>Configure</code> method doesn't really change, except for the new endpoint routing part. There are two endpoints configured: one for controller routes (which covers Web API and MVC) and one for Razor Pages.</p>
<h2>Conclusion</h2>
<p>This is also just a quick look into the <code>Startup.cs</code> with just some small but useful changes.</p>
<p>In the next post I'm going to take a little more detailed look into the new endpoint routing. While working on the <a href="https://github.com/JuergenGutsch/graphql-aspnetcore">GraphQL endpoint for ASP.NET Core</a>, I learned a lot about endpoint routing. This feature makes a lot of sense to me, even if it means rethinking some things when you build and provide a middleware.</p>
<h1>Functional programming with objects (Golo Roden)</h1>
<p>http://heise.de/-4485160 (Mon, 05 Aug 2019)</p>
<p>JavaScript has several methods for functional programming, for example map, reduce and filter. However, they are only available for arrays, not for objects. With ECMAScript 2019 this can be changed in an elegant way.</p>
<h1>New in ASP.NET Core 3.0 - Generic Hosting Environment (Jürgen Gutsch)</h1>
<p>https://asp.net-hacker.rocks/2019/08/05/aspnetcore30-generic-hosting-environment.html (Mon, 05 Aug 2019)</p>
<p>In ASP.NET Core 3.0 the hosting environment becomes more generic. Hosting is no longer bound to Kestrel and no longer bound to ASP.NET Core. This means you are able to create a host that doesn't start the Kestrel web server and doesn't need to use the ASP.NET Core framework.</p>
<p>This is a small introduction post about the Generic Hosting Environment in ASP.NET Core 3.0. During the next posts I'm going to write more about it and what you can do with it in combination with some more ASP.NET Core 3.0 features.</p>
<p>In the next posts we will see a lot more details about why this makes sense. For the short term: there are different hosting models. One is the already known web hosting. Another model is running a worker service without a web server and without ASP.NET Core. Blazor also uses a different hosting model inside the WebAssembly.</p>
<p>What does it look like in ASP.NET Core 3.0?</p>
<p>First let's recap how it looks in previous versions. This is an ASP.NET Core 2.2 <code>Program.cs</code> that creates an <code>IWebHostBuilder</code> to start up Kestrel and to bootstrap ASP.NET Core using the <code>Startup</code> class:</p>
<pre><code class="language-csharp">public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =&gt;
        WebHost.CreateDefaultBuilder(args)
            .UseStartup&lt;Startup&gt;();
}
</code></pre>
<p>The next snippet shows the <code>Program.cs</code> of a new ASP.NET Core 3.0 web project:</p>
<pre><code class="language-csharp">public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =&gt;
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =&gt;
            {
                webBuilder.UseStartup&lt;Startup&gt;();
            });
}
</code></pre>
<p>Now an <code>IHostBuilder</code> will be created and configured first. When the default host builder is created, an <code>IWebHostBuilder</code> is created to use the configured <code>Startup</code> class.</p>
<p>The typical .NET Core app features like configuration, logging and dependency injection are configured at the level of the <code>IHostBuilder</code>. All the ASP.NET Core specific features like authentication, middlewares, action filters, formatters, etc. are configured at the level of the <code>IWebHostBuilder</code>.</p>
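<p>To illustrate how generic this is, a host that uses only the generic level and no web server at all might look like the following sketch (the <code>Worker</code> class is a made-up <code>IHostedService</code> implementation, not part of the template shown here):</p>
<pre><code class="language-csharp">public static IHostBuilder CreateHostBuilder(string[] args) =&gt;
    Host.CreateDefaultBuilder(args)
        .ConfigureServices(services =&gt;
        {
            // No ConfigureWebHostDefaults, so no Kestrel and no Startup:
            // the generic host just runs the registered background service.
            services.AddHostedService&lt;Worker&gt;();
        });
</code></pre>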
<h2>Conclusion</h2>
<p>This makes the Hosting environment a lot more generic and flexible.</p>
<p>I'm going to write about specific scenarios in the next posts about the new ASP.NET Core 3.0 features. But first I will have a look into the <code>Startup.cs</code> to see what is new in ASP.NET Core 3.0.</p>
<h1>Manage Microsoft Teams membership with Azure AD Access Review (Marco Scheel)</h1>
<p>https://marcoscheel.de/post/186728523052</p>
<p>This post will introduce you to the Azure AD Access Review feature. With the introduction of modern collaboration through Microsoft 365, with Microsoft Teams being the main tool, it is important to manage who is a member of the underlying Office 365 Group (Azure AD Group).</p>
<p>&lt;DE&gt;For greater reach, this post is published in English. It is about introducing Access Reviews (Azure AD) in combination with Microsoft Teams. Managing the membership of a team is supported by this feature and keeps the owners at the center. If there is strong interest in a completely German version, please let me know.&lt;/DE&gt;</p>
<p>Microsoft has great <a href="https://docs.microsoft.com/en-us/azure/active-directory/governance/access-reviews-overview" target="_blank">resources</a> to get started on a technical level. The feature enables one set of people to review another set of people. Azure AD leverages this capability (all under the bigger umbrella called <a href="https://docs.microsoft.com/en-us/azure/active-directory/governance/identity-governance-overview" target="_blank">Identity Governance</a>) for two assets: Azure AD Groups and Azure AD Apps.
Microsoft Teams as a hub for collaboration is built on top of Office 365 Groups, so we will have a closer look at the Access Review part for Azure AD Groups.</p><p>Each Office 365 Group (each Team) is built from a set of owners and members. With the open nature of Office 365, members can be employees, contractors, or people outside of the organization.</p><figure data-orig-width="2057" data-orig-height="1518" class="tmblr-full"><img src="https://66.media.tumblr.com/427fda55ddd245c289b70f2de9b9e84c/fa6d5741988a71ae-9d/s540x810/3833d9d3ec6dbc8f327692b06dc6ded445dac0b6.png" alt="image" data-orig-width="2057" data-orig-height="1518"/></figure><p>In our modern collaboration (Teams, SharePoint, &hellip;) implementation we strongly recommend leveraging the full self-service group creation that is already built into the system. With this setup, everyone is able to create and manage/own a group. Permanent user education is needed so that everyone understands the concept behind modern groups. Many organizations also have a strong set of internal rules that force a so-called information owner (which could be equal to the owner of a group) to review who has access to their data. Most organizations rely on the fact that people fulfill their duties as demanded, but let's face it: owners are just human beings who need to do their “real” job. With the introduction of Azure AD Access Review we can support these owner duties and make the process documented and easy to execute.</p><p>AAD Access Review can do the following to support an up-to-date group membership:</p><ul><li>Setup an Access Review for an Azure AD Group</li><li>Specify the duration (start date, recurrence, duration, &hellip;)</li><li>Specify who will do the review (owner, self, specific people, &hellip;)</li><li>Specify who will be reviewed (all members, guests, &hellip;)</li><li>Specify what will happen if the review is not executed (remove members, &hellip;)</li></ul><p>Before we start we need to talk about licensing.
It is obvious that M365 E5 is the best SKU to start with ;) but if you are not that lucky, you need at least an Azure AD P2 license. It is not a very common license, as it was only part of the EMS E5 SKU, but some time ago Microsoft introduced <a href="https://www.microsoft.com/en-us/microsoft-365/blog/2019/01/02/introducing-new-advanced-security-and-compliance-offerings-for-microsoft-365/" target="_blank">really attractive license bundles</a>. Many orgs with strong security requirements will at some point hit a license SKU that includes AAD P2. For your trusty lab tenants, start an EMS E5 trial to test these features today. To be precise, only the accounts executing the Access Review need the license; at least this is my understanding, and as always with licensing, ask your usual licensing people for the definitive answer.</p><p>The setup of an Access Review (if not automated through the <a href="https://docs.microsoft.com/en-us/graph/api/resources/accessreviews-root?view=graph-rest-beta" target="_blank">MS Graph</a> beta) is done in the Azure Portal in the <a href="https://docs.microsoft.com/en-us/azure/active-directory/governance/access-reviews-overview#onboard-access-reviews" target="_blank">identity governance blade</a> of AAD. To create our first Access Review we need to onboard to this feature.</p><figure data-orig-width="1262" data-orig-height="987" class="tmblr-full"><img src="https://66.media.tumblr.com/4950f0f0f196d7b8905e892873b9b0f9/fa6d5741988a71ae-39/s540x810/131bf1a2e37b7224171047bdadcdd48db93f2ca9.png" alt="image" data-orig-width="1262" data-orig-height="987"/></figure><p>Please note we are looking at Access Review in the context of modern collaboration (groups created by Teams, SharePoint, Outlook, &hellip;). Access Review can be used to review any AAD group that you use to grant access to a specific resource, or to keep a list of trusted users for an infrastructure piece of tech in Azure.
The following information might not always be valid for your scenario!</p><p>This is the first half of the screen we need to fill out for a new Access Review:</p><figure data-orig-width="1072" data-orig-height="834" class="tmblr-full"><img src="https://66.media.tumblr.com/436e902bef405b859036a465e7c6f59b/fa6d5741988a71ae-54/s540x810/dd3f4449c97075beb4f44dbe4b041f57f10472e2.png" alt="image" data-orig-width="1072" data-orig-height="834"/></figure><p><b>Review name</b>: This is a really important piece! The review name will be the “only” visible clue for the reviewer once they get the email about the outstanding review. With self-service setup, and given how people name their groups, we need to ensure people understand what they are reviewing. We try to automate the creation of the reviews, so we put the review timing, the group name and the group's object ID in the review name. The ID helps during support: if you send out 4000 Access Reviews and people ask why they got this email, they can provide you with the ID and things get easier. For example: 2019-Q1 GRP New Order (af01a33c-df0b-4a97-a7de-c6954bd569ef)</p><p><b>Frequency</b>: Also very important! You have to understand that an Access Review is somewhat static. You can do a recurring review, but some information will get out of sync. For example, the group could be renamed, but the title will not be updated, and people might get confused by misleading information in the email that is sent out. If you choose to let the owner of a group do the review, the owners will be “copied” into the Access Review config and not updated for future reviews.
Technically this could be fixed by Microsoft, but so far we have run into problems in the context of modern collaboration.</p><figure data-orig-width="1072" data-orig-height="796" class="tmblr-full"><img src="https://66.media.tumblr.com/e072c8b4245996a391dda545c453e818/fa6d5741988a71ae-78/s540x810/39ad247fabc0461bd4e86f9d1ff5571358ed7c7a.png" alt="image" data-orig-width="1072" data-orig-height="796"/></figure><p><b>Users</b>: “Members of a group” is our choice for collaboration. The other option is “Assigned to an application”, which is not our focus. For a group we have the option to do a guests-only review or to review everybody who is a member of the group. Based on organizational needs and information like confidentiality, we can make a decision. As a starting point it could be a good option to go with guests only, because guests are not very well controlled in most environments. An employee at least has a contract, and the general trust level should be higher.</p><p><b>Group</b>: Select the group the review should apply to. The latest changes to the Access Review feature allow selecting multiple groups at once. From a collaboration perspective I would avoid it, because at the end of the creation process each group will have its own Access Review instance and the settings are no longer shared. Once again, from a collab point of view we need some kind of automation, because it is not feasible to create these reviews as a manual task for the foreseeable future.</p><p><b>Reviewers</b>: The natural choice for an Office 365 Group (Team) is to go with the “Group owners” option, especially if we automate the process and don’t have an extra database to look up who the information owner is. For static groups or highly confidential groups the option “Selected users” could make sense. An interesting option is also the last one, called “Members (self)”. This option will “force” each member to decide whether they are still part of this project, team or group. 
We at <a href="https://glueckkanja.com" target="_blank">Glück &amp; Kanja</a> are currently thinking about doing this for some of our internal client teams. Most of our groups are public and accessible by most employees, but membership documents some kind of current involvement with the client represented by the group. This could also naturally reduce the number of teams that show up in your Microsoft Teams client app. As mentioned earlier, at the moment it seems that the option “Group owners” is resolved once the Access Review starts, and the instance of the review is then fixed. So any owner change might not be reflected in future instances of recurring reviews. Hopefully this will be fixed by Microsoft.</p><p><b>Program</b>: This is a logical grouping of Access Reviews. For example, we could add all collaboration-related reviews to one program and administration reviews with a more static route to another.</p><figure data-orig-width="1051" data-orig-height="749" class="tmblr-full"><img src="https://66.media.tumblr.com/3a0680f9ed18d3fb273c4727eadbfe35/fa6d5741988a71ae-82/s540x810/cb8e404eb60560d318fefa47c1138ec68dd01289.png" alt="image" data-orig-width="1051" data-orig-height="749"/></figure><p>More advanced settings are collapsed by default, but should definitely be reviewed.</p><p><b>Upon completion settings</b>: Allows you to automatically apply the review results. I would suggest trying this setting, because it will not only document the review but also take the required action on the membership. If group owners are not aware of what these Access Review emails are, we are talking about potential loss of access for members who were not reviewed, but in the end that is what we want. People need to take this part of identity governance seriously and take care of their data. Any change by the system is documented (audit log of the group) and can be reversed manually. 
If the system does not execute the results of the review, someone must look up the results regularly and then ensure the users are removed based on the outcome. If you go for Access Reviews, I strongly recommend automatically applying the results (after your own internal tests).</p><p>Let’s take a look at the created Access Review.</p><figure data-orig-width="1628" data-orig-height="1193" class="tmblr-full"><img src="https://66.media.tumblr.com/2619834dc34ed12dd62964eca9d061e1/fa6d5741988a71ae-1d/s540x810/a4e8eec1feb6b6f7eb05128c11edb482b1cdf403.png" alt="image" data-orig-width="1628" data-orig-height="1193"/></figure><p><br/></p><p><b>Azure Portal</b>: This is an overview for the admin (non-recurring Access Review).</p><figure data-orig-width="1496" data-orig-height="1802" class="tmblr-full"><img src="https://66.media.tumblr.com/b3f79eab3c00650d3fe63e87f556e97d/fa6d5741988a71ae-3b/s540x810/84162819d943b877402c2115cd86f61ea094f4a6.png" alt="image" data-orig-width="1496" data-orig-height="1802"/></figure><p><br/></p><p><b>Email</b>: As you can see, the prominent review name is what stands out to the user. 
The group name (also highlighted in red) is buried within all the other text.</p><figure data-orig-width="1630" data-orig-height="1766" class="tmblr-full"><img src="https://66.media.tumblr.com/9abef3bb6ad7a1e8937284baf2ce79a2/fa6d5741988a71ae-b4/s540x810/601f8f99c17eb56c639b08ff6dff8a83f180ff26.png" alt="image" data-orig-width="1630" data-orig-height="1766"/></figure><p><br/></p><p><b>Click on “Start Review” from the email</b>: The user can now take action based on recommendations (missing in my lab tenant due to inactivity of my lab users).</p><figure data-orig-width="1206" data-orig-height="972" class="tmblr-full"><img src="https://66.media.tumblr.com/2417a368874ab0559d1fbbd05b1f6415/fa6d5741988a71ae-e9/s540x810/0831c4c9704f2e0ed48e75078717141d3b4ed030.png" alt="image" data-orig-width="1206" data-orig-height="972"/></figure><p><b>Take Review</b>: Accept 6 users.<br/></p><figure data-orig-width="1630" data-orig-height="1766" class="tmblr-full"><img src="https://66.media.tumblr.com/2bec0d76e7db725b1bf740e70876bce4/fa6d5741988a71ae-3e/s540x810/73539444402823155cb061fbff4625ad82d1e3da.png" alt="image" data-orig-width="1630" data-orig-height="1766"/></figure><p><b>Review Summary</b>: This is the summary once the owner has taken all actions.</p><figure data-orig-width="1132" data-orig-height="1571" class="tmblr-full"><img src="https://66.media.tumblr.com/d0429db7fd70dc35512fcb42f5e0e298/fa6d5741988a71ae-73/s540x810/9d2e7362e3743ed3954c060d230ceb3bb2645afb.png" alt="image" data-orig-width="1132" data-orig-height="1571"/></figure><p><b>Azure Portal</b>: Audit log information for the group.</p><p>After the user completed the review, the system didn’t make a change to the group. Even if the configuration says that actions should be applied automatically, the results are applied at the end of the review process! Until then the owners can change their mind. 
Once the review period is over, the system will apply the needed changes.</p><p>I really love this feature in the context of modern collaboration. The process of keeping a current list of involved members in a team is a big benefit for productivity and security. The “need to know” principle is supported by a technical implementation “free of cost” (as mentioned, everyone should have AAD P2 through some SKU 😎).</p><p>Our GK O365 Lifecycle tool was extended to allow the creation of Access Reviews through the Microsoft Graph based on the group/team classification. Once customers read about or get a demo of this feature and own the license, we immediately start a POC implementation. If our tool is already in place, it is only a matter of some JSON configuration to be up and running.</p><div class="feedflare">
<a href="http://feeds.marcoscheel.de/~ff/marcoscheel?a=BK1tfwJa-Kw:xR95Z5ohx2g:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/marcoscheel?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.marcoscheel.de/~ff/marcoscheel?a=BK1tfwJa-Kw:xR95Z5ohx2g:7Q72WNTAKBA"><img src="http://feeds.feedburner.com/~ff/marcoscheel?d=7Q72WNTAKBA" border="0"></img></a> <a href="http://feeds.marcoscheel.de/~ff/marcoscheel?a=BK1tfwJa-Kw:xR95Z5ohx2g:qj6IDK7rITs"><img src="http://feeds.feedburner.com/~ff/marcoscheel?d=qj6IDK7rITs" border="0"></img></a>
</div>Fri, 02 Aug 2019 21:41:30 +0200Marco Scheelhttps://blog.codeinside.eu/2019/07/31/sql-server-named-instances-and-the-windows-firewallhttp://feedproxy.google.com/~r/Code-insideBlog/~3/n3vcO2EQdJs/sql-server-named-instances-and-the-windows-firewallCode-Inside BlogSQL Server, Named Instances & the Windows Firewall<h1 id="the-problem">The problem</h1>
<p><em>“Cannot connect to sql\instance. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)”</em></p>
<p>Let’s say we have a system with a running SQL Server (Express or Standard Edition - doesn’t matter) and want to connect to this database from another machine. The chances are high that you will see the above error message.</p>
<p><strong>Be aware:</strong> You can customize more or less anything, so this blog post only covers a very “common” installation.</p>
<p>I struggled with this problem last week and learned that it is a pretty “old” issue. To enlighten my dear readers I made the following checklist:</p>
<h1 id="checklist">Checklist:</h1>
<ul>
<li>Does the SQL Server allow remote connections?</li>
<li>Does the SQL Server allow your authentication schema of choice (Windows or SQL Authentication)?</li>
<li>Check the “SQL Server Configuration Manager” if the needed TCP/IP protocol is enabled for your SQL Instance.</li>
<li>Check your Windows Firewall (see details below!)</li>
</ul>
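Before going through the server-side settings, it can help to verify the raw TCP path first. The following is a small, hypothetical Python probe (not from the original post, names are my own); it only checks that a TCP connection can be opened, which says nothing about SQL authentication:

```python
# Hypothetical helper: checks whether a TCP connection to the given
# host/port can be opened at all. This verifies the network/firewall
# path only -- it does not verify SQL Server authentication.
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns `False` for your SQL port, the firewall items below are the most likely culprits.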
<h2 id="windows-firewall-settings">Windows Firewall settings:</h2>
<p>By default, SQL Server uses TCP port 1433. Opening this port is the minimum requirement for a setup without any special needs - use this command:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>netsh advfirewall firewall add rule name = SQLPort dir = in protocol = tcp action = allow localport = 1433 remoteip = localsubnet profile = DOMAIN,PRIVATE,PUBLIC
</code></pre></div></div>
<p>If you use <strong>named instances</strong>, you need (at least) two additional ports:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>netsh advfirewall firewall add rule name = SQLPortUDP dir = in protocol = udp action = allow localport = 1434 remoteip = localsubnet profile = DOMAIN,PRIVATE,PUBLIC
</code></pre></div></div>
<p>This UDP Port 1434 is used to query the real TCP port for the named instance.</p>
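This lookup against the SQL Server Browser service can be sketched in Python. The packet layout below follows my reading of the SSRP (MS-SQLR) protocol and the helper names are invented for illustration; verify against a real server before relying on it:

```python
# Sketch of the lookup a client performs against the SQL Server Browser
# service on UDP 1434 to resolve a named instance to its TCP port.
# Packet layout per my reading of SSRP (MS-SQLR) -- treat as illustrative.
import socket

def build_instance_request(instance: str) -> bytes:
    # CLNT_UCAST_INST: opcode 0x04 followed by the instance name, NUL-terminated
    return b"\x04" + instance.encode("ascii") + b"\x00"

def parse_port(response: bytes) -> int:
    # SVR_RESP: opcode 0x05, 2-byte little-endian payload length, then a
    # semicolon-separated key/value string such as
    # b"ServerName;SQL01;InstanceName;SQLEXPRESS;...;tcp;1435;;"
    fields = response[3:].rstrip(b";\x00").decode("ascii").split(";")
    pairs = dict(zip(fields[0::2], fields[1::2]))
    return int(pairs["tcp"])

def query_instance_port(host: str, instance: str, timeout: float = 2.0) -> int:
    # Ask UDP 1434 which TCP port the named instance is listening on.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_instance_request(instance), (host, 1434))
        data, _ = s.recvfrom(4096)
        return parse_port(data)
```

This also makes clear why blocking UDP 1434 breaks named-instance connections even when the instance's own TCP port is open.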
<p>Now the most important part: The SQL Server will use a (kind of) random dynamic port for the named instance. To avoid this behavior (which is really a killer for Firewall settings) you can set a fixed port in the <strong>SQL Server Configuration Manager</strong>.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>SQL Server Configuration Manager -&gt; Instance -&gt; TCP/IP Protocol (make sure this is "enabled") -&gt; *Details via double click* -&gt; Under IPAll set a fixed port under "TCP Port", e.g. 1435
</code></pre></div></div>
<p>After this configuration, allow this port to communicate to the world with this command:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>netsh advfirewall firewall add rule name = SQLPortInstance dir = in protocol = tcp action = allow localport = 1435 remoteip = localsubnet profile = DOMAIN,PRIVATE,PUBLIC
</code></pre></div></div>
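If you script this setup, a small helper can keep the three rules consistent. This is my own sketch (names and defaults are hypothetical); it only builds the command strings shown above and does not execute anything:

```python
# Hypothetical helper: assembles the three netsh rules from above --
# default port 1433, SQL Browser on UDP 1434, and the fixed TCP port of
# the named instance. Builds strings only; it does not run netsh.
def firewall_rules(instance_port: int, profiles: str = "DOMAIN,PRIVATE,PUBLIC"):
    base = ("netsh advfirewall firewall add rule name = {name} dir = in "
            "protocol = {proto} action = allow localport = {port} "
            "remoteip = localsubnet profile = {profiles}")
    return [
        base.format(name="SQLPort", proto="tcp", port=1433, profiles=profiles),
        base.format(name="SQLPortUDP", proto="udp", port=1434, profiles=profiles),
        base.format(name="SQLPortInstance", proto="tcp", port=instance_port,
                    profiles=profiles),
    ]
```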
<p>(Thanks <a href="https://dba.stackexchange.com/a/107766">Stackoverflow</a>!)</p>
<p>Check the <a href="https://docs.microsoft.com/en-us/sql/sql-server/install/configure-the-windows-firewall-to-allow-sql-server-access">official Microsoft Docs</a> for further information on this topic, but these commands helped me to connect to my SQL Server.</p>
<p>The “dynamic” port was my main problem - after some hours of Googling I found the answer on Stackoverflow and I could establish a connection to my SQL Server with the SQL Server Management Studio.</p>
<p>Hope this helps!</p>Wed, 31 Jul 2019 23:45:00 ZCode-Inside Bloghttp://www.developa.org/?p=2003http://www.developa.org/Frankfurt/kuenstliche-intelligenz-fuer-net-anwendungen/Kazim BaharArtificial Intelligence for .NET ApplicationsWith the new ML.NET framework from Microsoft, existing .NET applications can be&#46;&#46;&#46;Wed, 31 Jul 2019 23:38:27 ZKazim Baharhttp://stefanhenneken.wordpress.com/?p=1528https://stefanhenneken.wordpress.com/2019/07/29/iec-61131-3-exception-handling-with-__try-__catch/Stefan HennekenIEC 61131-3: Exception Handling with __TRY/__CATCHWhen executing a program, there is always the possibility of an unexpected runtime error occurring. These occur when a program tries to perform an illegal operation. This kind of scenario can be triggered by events such as division by 0 or a pointer which tries to reference an invalid memory address. We can significantly improve [&#8230;]Mon, 29 Jul 2019 19:32:00 ZStefan Henneken<p>When executing a program, there is always the possibility of an unexpected runtime error occurring. These occur when a program tries to perform an illegal operation. This kind of scenario can be triggered by events such as division by 0 or a pointer which tries to reference an invalid memory address. We can significantly improve the way these exceptions are handled by using the keywords <font face="Courier New">__TRY</font> and <font face="Courier New">__CATCH</font>.</p>
<p><span id="more-1528"></span></p>
<p>The list of possible causes for runtime errors is endless. What all these errors have in common is that they cause the program to crash. Ideally, there should at least be an error message with details of the runtime error: </p>
<p><a href="https://stefanhenneken.files.wordpress.com/2019/07/pic01-2.png"><img title="Pic01" style="border-top:0;border-right:0;background-image:none;border-bottom:0;padding-top:0;padding-left:0;border-left:0;display:inline;padding-right:0;" border="0" alt="Pic01" src="https://stefanhenneken.files.wordpress.com/2019/07/pic01_thumb-2.png?w=562&#038;h=260" width="562" height="260"></a> </p>
<p>Because this leaves the program in an undefined state, runtime errors cause the system to halt. This is indicated by the yellow TwinCAT icon:</p>
<p><a href="https://stefanhenneken.files.wordpress.com/2019/07/pic02-2.png"><img title="Pic02" style="border-top:0;border-right:0;background-image:none;border-bottom:0;padding-top:0;padding-left:0;border-left:0;display:inline;padding-right:0;" border="0" alt="Pic02" src="https://stefanhenneken.files.wordpress.com/2019/07/pic02_thumb-2.png?w=212&#038;h=77" width="212" height="77"></a> </p>
<p>For an operational system, an uncontrolled stop is not always the optimal response. In addition, the error message does not provide enough information about where in the program the error occurred. This makes improving the software a tricky task. </p>
<p>To help track down errors more quickly, you can add check functions to your program. </p>
<p><a href="https://stefanhenneken.files.wordpress.com/2019/07/pic03-2.png"><img title="Pic03" style="border-top:0;border-right:0;background-image:none;border-bottom:0;padding-top:0;padding-left:0;border-left:0;display:inline;padding-right:0;" border="0" alt="Pic03" src="https://stefanhenneken.files.wordpress.com/2019/07/pic03_thumb-2.png?w=477&#038;h=469" width="477" height="469"></a> </p>
<p>Check functions are called whenever the relevant operation is executed. The best known is probably <font face="Courier New">CheckBounds()</font>. Each time an array element is accessed, this function is implicitly called beforehand. The parameters passed to this function are the array bounds and the index of the element being accessed. This function can be configured to automatically correct attempts to access elements which are out of bounds. This approach does, however, have some disadvantages. </p>
<ol>
<li><font face="Courier New">CheckBounds()</font> is not able to determine which array is being accessed, so error correction has to be the same for all arrays. </li>
<li>Because <font face="Courier New">CheckBounds()</font> is called whenever an array element is accessed, it can significantly slow down program execution.</li>
</ol>
<p>It’s a similar story with other check functions. </p>
<p>It is not unusual for check functions to be used during development only. Breakpoints can be set inside the check functions to halt the program as soon as a faulty operation is executed. The call stack can then be used to determine where in the program the error occurred. </p>
<h1>The &#8216;try/catch&#8217; statement</h1>
<p>Runtime errors in general are also known as exceptions. IEC 61131-3 includes <font face="Courier New">__TRY</font>, <font face="Courier New">__CATCH</font> and <font face="Courier New">__ENDTRY</font> statements for detecting and handling these exceptions:</p>
<pre class="brush: plain; pad-line-numbers: true; title: ; notranslate">
__TRY
// statements
__CATCH (exception type)
// statements
__ENDTRY
// statements
</pre>
<p>The <font face="Courier New">TRY</font> block (the statements between <font face="Courier New">__TRY</font> and <font face="Courier New">__CATCH</font>) contains the code with the potential to throw an exception. Assuming that no exception occurs, all of the statements in the <font face="Courier New">TRY</font> block will be executed as normal. The program will then continue from the line immediately following the <font face="Courier New">__ENDTRY</font> statement. If, however, one of the statements within the <font face="Courier New">TRY</font> block causes an exception, the program will jump straight to the <font face="Courier New">CATCH</font> block (the statements between <font face="Courier New">__CATCH</font> and <font face="Courier New">__ENDTRY</font>). All subsequent statements within the <font face="Courier New">TRY</font> block will be skipped. </p>
<p>The <font face="Courier New">CATCH</font> block is only executed if an exception occurs; it contains the error handling code. After processing the <font face="Courier New">CATCH</font> block, the program continues from the statement immediately following <font face="Courier New">__ENDTRY</font>. </p>
<p>The <font face="Courier New">__CATCH</font> statement takes the form of the keyword <font face="Courier New">__CATCH</font> followed, in brackets, by a variable of type <font face="Courier New">__SYSTEM.ExceptionCode</font>. The <font face="Courier New">__SYSTEM.ExceptionCode</font> data type contains a list of all possible exceptions. If an exception occurs, causing the <font face="Courier New">CATCH</font> block to be called, this variable can be used to query the cause of the exception. </p>
<p>The following example divides two elements of an array by each other. The array is passed to the function using a pointer. If the return value is negative, an error has occurred. The negative return value provides additional information on the cause of the exception:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
FUNCTION F_Calc : LREAL
VAR_INPUT
pData : POINTER TO ARRAY [0..9] OF LREAL;
nElementA : INT;
nElementB : INT;
END_VAR
VAR
exc : __SYSTEM.ExceptionCode;
END_VAR
__TRY
F_Calc := pData^[nElementA] / pData^[nElementB];
__CATCH (exc)
IF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ARRAYBOUNDS) THEN
F_Calc := -1;
ELSIF ((exc = __SYSTEM.ExceptionCode.RTSEXCPT_FPU_DIVIDEBYZERO) OR
(exc = __SYSTEM.ExceptionCode.RTSEXCPT_DIVIDEBYZERO)) THEN
F_Calc := -2;
ELSIF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ACCESS_VIOLATION) THEN
F_Calc := -3;
ELSE
F_Calc := -4;
END_IF
__ENDTRY
</pre>
<h1>The ‘finally’ statement</h1>
<p>The optional <font face="Courier New">__FINALLY</font> statement can be used to define a block of code that will always be called whether or not an exception has occurred. There’s only one condition: the program must step into the <font face="Courier New">TRY</font> block. </p>
<p>We&#8217;re going to extend our example so that a value of one is added to the result of the calculation. We&#8217;re going to do this whether or not an error has occurred.</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
FUNCTION F_Calc : LREAL
VAR_INPUT
pData : POINTER TO ARRAY [0..9] OF LREAL;
nElementA : INT;
nElementB : INT;
END_VAR
VAR
exc : __SYSTEM.ExceptionCode;
END_VAR
__TRY
F_Calc := pData^[nElementA] / pData^[nElementB];
__CATCH (exc)
IF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ARRAYBOUNDS) THEN
F_Calc := -1;
ELSIF ((exc = __SYSTEM.ExceptionCode.RTSEXCPT_FPU_DIVIDEBYZERO) OR
(exc = __SYSTEM.ExceptionCode.RTSEXCPT_DIVIDEBYZERO)) THEN
F_Calc := -2;
ELSIF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ACCESS_VIOLATION) THEN
F_Calc := -3;
ELSE
F_Calc := -4;
END_IF
__FINALLY
F_Calc := F_Calc + 1;
__ENDTRY
</pre>
<p><a href="https://github.com/StefanHenneken/Blog-2019-06-IEC61131-TryCatch-Sample01" target="_blank">Sample 1 (TwinCAT 3.1.4024 / 32 Bit) on GitHub</a></p>
<p>The statement in the <font face="Courier New">FINALLY</font> block (line 24) will always be executed whether or not an exception has occurred. </p>
<p>If no exception occurs within the <font face="Courier New">TRY</font> block, the <font face="Courier New">FINALLY</font> block will be called straight after the <font face="Courier New">TRY</font> block. </p>
<p>If an exception does occur, the <font face="Courier New">CATCH</font> block will be executed first, followed by the <font face="Courier New">FINALLY</font> block. Only then will the program exit the function. </p>
<p><font face="Courier New">__FINALLY</font> therefore enables you to perform various operations irrespective of whether or not an exception has occurred. This generally involves releasing resources, for example closing a file or dropping a network connection. </p>
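For readers more at home in general-purpose languages, the execution order of the F_Calc example can be mirrored in Python, whose try/except/finally behaves analogously. The mapping of IEC exception codes to Python exception types below is only an approximation:

```python
# Rough Python analogue of the F_Calc example: the matching handler runs
# first, the finally block always runs afterwards, and only then does the
# function return -- the same ordering as __TRY/__CATCH/__FINALLY.
def f_calc(data, a, b):
    try:
        result = data[a] / data[b]
    except IndexError:           # ~ RTSEXCPT_ARRAYBOUNDS
        result = -1.0
    except ZeroDivisionError:    # ~ RTSEXCPT_DIVIDEBYZERO
        result = -2.0
    except Exception:            # ~ any other exception code
        result = -4.0
    finally:
        result += 1.0            # ~ the __FINALLY block: always executed
    return result
```

Calling `f_calc([1.0, 0.0], 0, 1)` runs the divide-by-zero handler (result -2) and then the finally step (+1), returning -1, just like the structured text version.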
<p>Extra care should be taken in implementing the <font face="Courier New">CATCH</font> and <font face="Courier New">FINALLY</font> blocks. If an exception occurs within these blocks, it will give rise to an unexpected runtime error, resulting in an immediate uncontrolled program stop. </p>
<p><strong>The sample program runs under 32-bit TwinCAT 3.1.4024 or higher. 64-bit systems are not currently supported.</strong></p>
Stefan Hennekenhttp://stefanhenneken.wordpress.com/?p=1517https://stefanhenneken.wordpress.com/2019/07/29/iec-61131-3-ausnahmebehandlung-mit-__try-__catch/Stefan HennekenIEC 61131-3: Exception Handling with __TRY/__CATCHWhen executing a PLC program, unexpected runtime errors can occur. These occur as soon as the PLC program tries to perform an illegal operation. Such scenarios can be triggered, for example, by a division by 0 or by a pointer referencing an invalid memory area. With the keywords __TRY and __CATCH these exceptions can be handled much better than [&#8230;]Mon, 29 Jul 2019 17:24:00 ZStefan Henneken<p>When executing a PLC program, unexpected runtime errors can occur. These occur as soon as the PLC program tries to perform an illegal operation. Such scenarios can be triggered, for example, by a division by 0 or by a pointer referencing an invalid memory area. With the keywords <font face="Courier New">__TRY</font> and <font face="Courier New">__CATCH</font> these exceptions can be handled much better than before.</p>
<p><span id="more-1517"></span></p>
<p>The list of possible causes of runtime errors can be extended endlessly. What all these errors have in common is that they cause the program to crash. At best, a message points out the runtime error: </p>
<p><a href="https://stefanhenneken.files.wordpress.com/2019/07/pic01-1.png"><img title="Pic01" style="background-image:none;padding-top:0;padding-left:0;display:inline;padding-right:0;border-width:0;" border="0" alt="Pic01" src="https://stefanhenneken.files.wordpress.com/2019/07/pic01_thumb-1.png?w=562&#038;h=260" width="562" height="260"></a> </p>
<p>Because the PLC program is then in an undefined state, the system is stopped. This is indicated by the yellow TwinCAT icon in the Windows taskbar: </p>
<p><a href="https://stefanhenneken.files.wordpress.com/2019/07/pic02-1.png"><img title="Pic02" style="background-image:none;padding-top:0;padding-left:0;display:inline;padding-right:0;border-width:0;" border="0" alt="Pic02" src="https://stefanhenneken.files.wordpress.com/2019/07/pic02_thumb-1.png?w=212&#038;h=77" width="212" height="77"></a> </p>
<p>For plants in operation, an uncontrolled stop is not always the optimal response. In addition, the message gives only insufficient information about where exactly in the PLC program the error occurred. This makes improving the software difficult. </p>
<p>To track down errors more quickly, check functions can be inserted into the PLC program.</p>
<p><a href="https://stefanhenneken.files.wordpress.com/2019/07/pic03-1.png"><img title="Pic03" style="background-image:none;padding-top:0;padding-left:0;display:inline;padding-right:0;border-width:0;" border="0" alt="Pic03" src="https://stefanhenneken.files.wordpress.com/2019/07/pic03_thumb-1.png?w=477&#038;h=469" width="477" height="469"></a> </p>
<p>Check functions are called every time the corresponding operation is executed. The best known is probably the function <font face="Courier New">CheckBounds()</font>. Each time an array element is accessed, this function is implicitly called beforehand. As parameters, the function receives the array bounds and the index of the element to be accessed. The function can be adapted so that accesses outside the array bounds are corrected. However, this approach has some disadvantages: </p>
<ol>
<li>In <font face="Courier New">CheckBounds()</font> it is not possible to determine which array is being accessed. Thus, only the same error correction can be implemented for all arrays.</li>
<li>Since the check function is called on every array access, the runtime of the program can degrade considerably.</li>
</ol>
<p>The situation is similar with the other check functions. </p>
<p>It is not unusual for check functions to be used during the development phase only. Breakpoints are activated in the functions which halt the PLC program as soon as a faulty operation is executed. The corresponding location in the PLC program can then be determined via the call stack.</p>
<h1>The ‘try/catch’ statement</h1>
<p>Allgemein werden Laufzeitfehler als Ausnahmen (Exceptions) bezeichnet. Für das Erkennen und Bearbeiten von Exceptions gibt es in der IEC 61131-3 die Anweisungen <font face="Courier New">__TRY</font>, <font face="Courier New">__CATCH</font> und <font face="Courier New">__ENDTRY</font>:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
__TRY
// statements
__CATCH (exception type)
// statements
__ENDTRY
// statements
</pre>
<p>The <font face="Courier New">TRY</font> block (the statements between <font face="Courier New">__TRY</font> and <font face="Courier New">__CATCH</font>) contains the statements that can potentially cause an exception. If no exception occurs, all statements in the <font face="Courier New">TRY</font> block are executed. Afterwards, the PLC program continues its work after <font face="Courier New">__ENDTRY</font>. If, however, one of the statements within the <font face="Courier New">TRY</font> block causes an exception, the program flow continues immediately in the <font face="Courier New">CATCH</font> block (the statements between <font face="Courier New">__CATCH</font> and <font face="Courier New">__ENDTRY</font>). All remaining statements within the <font face="Courier New">TRY</font> block are skipped. </p>
<p>The <font face="Courier New">CATCH</font> block is executed only in case of an exception and contains the desired error handling. After the <font face="Courier New">CATCH</font> block has been processed, the PLC program continues with the statements after <font face="Courier New">__ENDTRY</font>. </p>
<p>After the <font face="Courier New">__CATCH</font> statement, a variable of type <font face="Courier New">__SYSTEM.ExceptionCode</font> is specified in round brackets. The data type <font face="Courier New">__SYSTEM.ExceptionCode</font> contains a list of all possible exceptions. If the <font face="Courier New">CATCH</font> block is invoked by an exception, the cause of the exception can be queried via this variable. </p>
<p>In the following example, two elements of an array are divided. The array is passed to the function via a pointer. If the return value of the function is negative, an error occurred during execution. The negative return value provides more detailed information about the cause of the exception:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
FUNCTION F_Calc : LREAL
VAR_INPUT
pData : POINTER TO ARRAY [0..9] OF LREAL;
nElementA : INT;
nElementB : INT;
END_VAR
VAR
exc : __SYSTEM.ExceptionCode;
END_VAR
__TRY
F_Calc := pData^[nElementA] / pData^[nElementB];
__CATCH (exc)
IF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ARRAYBOUNDS) THEN
F_Calc := -1;
ELSIF ((exc = __SYSTEM.ExceptionCode.RTSEXCPT_FPU_DIVIDEBYZERO) OR
(exc = __SYSTEM.ExceptionCode.RTSEXCPT_DIVIDEBYZERO)) THEN
F_Calc := -2;
ELSIF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ACCESS_VIOLATION) THEN
F_Calc := -3;
ELSE
F_Calc := -4;
END_IF
__ENDTRY
</pre>
<h1>The ‘finally’ statement</h1>
<p>With <font face="Courier New">__FINALLY</font>, a code block can optionally be defined that is always called, regardless of whether an exception occurred or not. There is only one precondition: the PLC program must at least enter the <font face="Courier New">TRY</font> block. </p>
<p>The example is to be extended so that the result of the calculation is additionally increased by one. This should happen regardless of whether an error occurred or not.</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
FUNCTION F_Calc : LREAL
VAR_INPUT
pData : POINTER TO ARRAY [0..9] OF LREAL;
nElementA : INT;
nElementB : INT;
END_VAR
VAR
exc : __SYSTEM.ExceptionCode;
END_VAR
__TRY
F_Calc := pData^[nElementA] / pData^[nElementB];
__CATCH (exc)
IF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ARRAYBOUNDS) THEN
F_Calc := -1;
ELSIF ((exc = __SYSTEM.ExceptionCode.RTSEXCPT_FPU_DIVIDEBYZERO) OR
(exc = __SYSTEM.ExceptionCode.RTSEXCPT_DIVIDEBYZERO)) THEN
F_Calc := -2;
ELSIF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ACCESS_VIOLATION) THEN
F_Calc := -3;
ELSE
F_Calc := -4;
END_IF
__FINALLY
F_Calc := F_Calc + 1;
__ENDTRY
</pre>
<p><a href="https://github.com/StefanHenneken/Blog-2019-06-IEC61131-TryCatch-Sample01" target="_blank">Sample 1 (TwinCAT 3.1.4024 / 32 bit) on GitHub</a> </p>
<p>The statement in the <font face="Courier New">FINALLY</font> block (line 24) is always called, regardless of whether an exception is raised or not. </p>
<p>If no exception is raised in the <font face="Courier New">TRY</font> block, the <font face="Courier New">FINALLY</font> block is called directly after the <font face="Courier New">TRY</font> block. </p>
<p>Tritt eine Exception auf, so wird erst der <font face="Courier New">CATCH</font>-Block ausgeführt und anschließend auch der <font face="Courier New">FINALLY</font>-Block. Erst danach wird die Funktion verlassen. </p>
<p><font face="Courier New">__FINALLY</font> gestattet es somit, diverse Operationen unabhängig davon auszuführen, ob eine Exception aufgetreten ist oder nicht. Dabei handelt es sich in der Regel um die Freigabe von Ressourcen, wie z.B. das Schließen einer Datei oder das Beenden einer Netzwerkverbindung. </p>
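<p>This cleanup pattern can be sketched as follows. Note that this is only an illustration: the helpers <font face="Courier New">ProcessData()</font> and <font face="Courier New">ReleaseResources()</font> are hypothetical placeholders and not part of the original sample.</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
FUNCTION F_ProcessResource : INT
VAR
    exc : __SYSTEM.ExceptionCode;
END_VAR
__TRY
    F_ProcessResource := ProcessData(); // may throw an exception
__CATCH (exc)
    F_ProcessResource := -1; // map the exception to an error code
__FINALLY
    ReleaseResources(); // always executed, e.g. closing a file
__ENDTRY
</pre>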
<p>The implementation of the <font face="Courier New">CATCH</font> and <font face="Courier New">FINALLY</font> blocks should be done with particular care. If an exception occurs in one of these code blocks, it triggers an unexpected runtime error, with the result that the PLC program is stopped immediately. </p>
<p>At this point I would also like to mention the blog of Matthias Gehring. One of his posts (<a href="https://www.codesys-blog.com/tipps/exceptionhandling-in-iec-applikationen-mit-codesys">https://www.codesys-blog.com/tipps/exceptionhandling-in-iec-applikationen-mit-codesys</a>) also covers the topic of exception handling. </p>
<p><strong>The sample program runs on 32-bit systems from TwinCAT 3.1.4024 on. 64-bit systems are not yet supported.</strong></p>
Stefan HennekenStefan Hennekenhttp://stefanhenneken.wordpress.com/?p=1499https://stefanhenneken.wordpress.com/2019/07/26/iec-61131-3-parameter-transfer-via-fb_init/Stefan HennekenIEC 61131-3: Parameter transfer via FB_initDepending on the task, it may be necessary for function blocks to require parameters that are only used once for initialization tasks. One possible way to pass them elegantly is to use the FB_init() method. Before TwinCAT 3, initialisation parameters were very often transferred via input variables. This had the disadvantage that the function blocks [&#8230;]Fri, 26 Jul 2019 15:38:00 ZStefan Henneken<p>Depending on the task, it may be necessary for function blocks to require parameters that are only used once for initialization tasks. One possible way to pass them elegantly is to use the <font face="Courier New">FB_init()</font> method.</p>
<p><span id="more-1499"></span></p>
<p>Before TwinCAT 3, initialisation parameters were very often transferred via input variables.</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
(* TwinCAT 2 *)
FUNCTION_BLOCK FB_SerialCommunication
VAR_INPUT
nDatabits : BYTE(7..8);
eParity : E_Parity;
nStopbits : BYTE(1..2);
END_VAR
</pre>
<p>This had the disadvantage that the function blocks became unnecessarily large in the graphic display modes. It was also not possible to prevent changing the parameters at runtime. </p>
<p>The method <font face="Courier New">FB_init()</font> is very helpful here. It is implicitly executed once before the PLC task is started and can be used to perform initialization tasks. </p>
<p>The dialog for adding methods offers a ready-made template for this purpose. </p>
<p><a href="https://stefanhenneken.files.wordpress.com/2019/07/pic01.png"><img title="Pic01" style="background-image:none;padding-top:0;padding-left:0;display:inline;padding-right:0;border-width:0;" border="0" alt="Pic01" src="https://stefanhenneken.files.wordpress.com/2019/07/pic01_thumb.png?w=477&#038;h=404" width="477" height="404"></a> </p>
<p>The method contains two input variables that provide information about the conditions under which the method is executed. These variables must not be deleted or changed. However, <font face="Courier New">FB_init()</font> can be supplemented with further input variables. </p>
<h1>Example</h1>
<p>An example is a block for communication via a serial interface (<font face="Courier New">FB_SerialCommunication</font>). This block should also initialize the serial interface with the necessary parameters. For this reason, three variables are added to <font face="Courier New">FB_init()</font>:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
METHOD FB_init : BOOL
VAR_INPUT
bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
bInCopyCode : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
nDatabits : BYTE(7..8);
eParity : E_Parity;
nStopbits : BYTE(1..2);
END_VAR
</pre>
<p>The serial interface is not initialized directly in <font face="Courier New">FB_init()</font>. Therefore, the parameters must be copied into variables located in the function block.</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
FUNCTION_BLOCK PUBLIC FB_SerialCommunication
VAR
nInternalDatabits : BYTE(7..8);
eInternalParity : E_Parity;
nInternalStopbits : BYTE(1..2);
END_VAR
</pre>
<p>During initialization, the values from <font face="Courier New">FB_init()</font> are copied into these three variables.</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
METHOD FB_init : BOOL
VAR_INPUT
bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
bInCopyCode : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
nDatabits : BYTE(7..8);
eParity : E_Parity;
nStopbits : BYTE(1..2);
END_VAR
THIS^.nInternalDatabits := nDatabits;
THIS^.eInternalParity := eParity;
THIS^.nInternalStopbits := nStopbits;
</pre>
<p>If an instance of <font face="Courier New">FB_SerialCommunication</font> is created, these three additional parameters must also be specified. The values are specified directly after the name of the function block in round brackets:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
fbSerialCommunication : FB_SerialCommunication(nDatabits := 8,
eParity := E_Parity.None,
nStopbits := 1);
</pre>
<p>Even before the PLC task starts, the <font face="Courier New">FB_init()</font> method is implicitly called, so that the internal variables of the function block receive the desired values. </p>
<p><a href="https://stefanhenneken.files.wordpress.com/2019/07/pic02.png"><img title="Pic02" style="background-image:none;padding-top:0;padding-left:0;display:inline;padding-right:0;border-width:0;" border="0" alt="Pic02" src="https://stefanhenneken.files.wordpress.com/2019/07/pic02_thumb.png?w=557&#038;h=152" width="557" height="152"></a> </p>
<p>With the start of the PLC task and the call of the instance of <font face="Courier New">FB_SerialCommunication</font>, the serial interface can now be initialized. </p>
<p>It is always necessary to specify all parameters. A declaration without a complete list of the parameters is not allowed and generates an error message when compiling: </p>
<p><a href="https://stefanhenneken.files.wordpress.com/2019/07/pic03.png"><img title="Pic03" style="background-image:none;padding-top:0;padding-left:0;display:inline;padding-right:0;border-width:0;" border="0" alt="Pic03" src="https://stefanhenneken.files.wordpress.com/2019/07/pic03_thumb.png?w=562&#038;h=155" width="562" height="155"></a> </p>
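<p>For illustration, a declaration with an incomplete parameter list like the following would trigger this compiler error (shortened on purpose):</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
// Error: eParity and nStopbits are not specified
fbSerialCommunication : FB_SerialCommunication(nDatabits := 8);
</pre>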
<h1>Arrays</h1>
<p>If <font face="Courier New">FB_init()</font> is used for arrays, the complete parameters must be specified for each element (with square brackets):</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
aSerialCommunication : ARRAY[1..2] OF FB_SerialCommunication[
(nDatabits := 8, eParity := E_Parity.None, nStopbits := 1),
(nDatabits := 7, eParity := E_Parity.Even, nStopbits := 1)];
</pre>
<p>If all elements are to have the same initialization values, it is sufficient to specify the parameters once (without square brackets):</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
aSerialCommunication : ARRAY[1..2] OF FB_SerialCommunication(nDatabits := 8,
eParity := E_Parity.None,
nStopbits := 1);
</pre>
<p>Multidimensional arrays are also possible. All initialization values must also be specified here:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
aSerialCommunication : ARRAY[1..2, 5..6] OF FB_SerialCommunication[
(nDatabits := 8, eParity := E_Parity.None, nStopbits := 1),
(nDatabits := 7, eParity := E_Parity.Even, nStopbits := 1),
(nDatabits := 8, eParity := E_Parity.Odd, nStopbits := 2),
(nDatabits := 7, eParity := E_Parity.Even, nStopbits := 2)];
</pre>
<h1>Inheritance</h1>
<p>If inheritance is used, the method <font face="Courier New">FB_init()</font> is always inherited. <font face="Courier New">FB_SerialCommunicationRS232</font> is used here as an example:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
FUNCTION_BLOCK PUBLIC FB_SerialCommunicationRS232 EXTENDS FB_SerialCommunication
</pre>
<p>If an instance of <font face="Courier New">FB_SerialCommunicationRS232</font> is created, the parameters of <font face="Courier New">FB_init()</font>, which were inherited from <font face="Courier New">FB_SerialCommunication</font>, must also be specified:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
fbSerialCommunicationRS232 : FB_SerialCommunicationRS232(nDatabits := 8,
eParity := E_Parity.Odd,
nStopbits := 1);
</pre>
<p>It is also possible to overwrite <font face="Courier New">FB_init()</font>. In this case, the same input variables must exist in the same order and with the same data types as in the base FB (<font face="Courier New">FB_SerialCommunication</font>). However, further input variables can be added so that the derived function block (<font face="Courier New">FB_SerialCommunicationRS232</font>) receives additional parameters:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
METHOD FB_init : BOOL
VAR_INPUT
bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
bInCopyCode : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
nDatabits : BYTE(7..8);
eParity : E_Parity;
nStopbits : BYTE(1..2);
nBaudrate : UDINT;
END_VAR
THIS^.nInternalBaudrate := nBaudrate;
</pre>
<p>If an instance of <font face="Courier New">FB_SerialCommunicationRS232</font> is created, all parameters, including those of <font face="Courier New">FB_SerialCommunication</font>, must be specified:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
fbSerialCommunicationRS232 : FB_SerialCommunicationRS232(nDatabits := 8,
eParity := E_Parity.Odd,
nStopbits := 1,
nBaudRate := 19200);
</pre>
<p>In the method <font face="Courier New">FB_init()</font> of <font face="Courier New">FB_SerialCommunicationRS232</font>, only the copying of the new parameter (<font face="Courier New">nBaudrate</font>) is necessary. Because <font face="Courier New">FB_SerialCommunicationRS232</font> inherits from <font face="Courier New">FB_SerialCommunication</font>, <font face="Courier New">FB_init()</font> of <font face="Courier New">FB_SerialCommunication</font> is also executed implicitly before the PLC task is started. Both <font face="Courier New">FB_init()</font> methods of <font face="Courier New">FB_SerialCommunication</font> and of <font face="Courier New">FB_SerialCommunicationRS232</font> are always called implicitly. When inherited, <font face="Courier New">FB_init()</font> is always called from &#8216;bottom&#8217; to &#8216;top&#8217;, first from <font face="Courier New">FB_SerialCommunication</font> and then from <font face="Courier New">FB_SerialCommunicationRS232</font>. </p>
<h1>Forward parameters</h1>
<p>The function block (<font face="Courier New">FB_SerialCommunicationCluster</font>) is used as an example, in which several instances of <font face="Courier New">FB_SerialCommunication</font> are declared:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
FUNCTION_BLOCK PUBLIC FB_SerialCommunicationCluster
VAR
fbSerialCommunication01 : FB_SerialCommunication(nDatabits := nInternalDatabits, eParity := eInternalParity, nStopbits := nInternalStopbits);
fbSerialCommunication02 : FB_SerialCommunication(nDatabits := nInternalDatabits, eParity := eInternalParity, nStopbits := nInternalStopbits);
nInternalDatabits : BYTE(7..8);
eInternalParity : E_Parity;
nInternalStopbits : BYTE(1..2);
END_VAR
</pre>
<p><font face="Courier New">FB_SerialCommunicationCluster</font> also receives the method <font face="Courier New">FB_init()</font> with the necessary input variables so that the parameters of the instances can be set externally.</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
METHOD FB_init : BOOL
VAR_INPUT
bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
bInCopyCode : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
nDatabits : BYTE(7..8);
eParity : E_Parity;
nStopbits : BYTE(1..2);
END_VAR
THIS^.nInternalDatabits := nDatabits;
THIS^.eInternalParity := eParity;
THIS^.nInternalStopbits := nStopbits;
</pre>
<p>However, there are some things to be taken into consideration here. The call sequence of <font face="Courier New">FB_init()</font> is not clearly defined in this case. In my test environment the calls are made from &#8216;inside&#8217; to &#8216;outside&#8217;. First <font face="Courier New">fbSerialCommunication01.FB_init()</font> and <font face="Courier New">fbSerialCommunication02.FB_init()</font> are called, then <font face="Courier New">fbSerialCommunicationCluster.FB_init()</font>. It is not possible to pass the parameters from &#8216;outside&#8217; to &#8216;inside&#8217;. The parameters are therefore not available in the two inner instances of <font face="Courier New">FB_SerialCommunication</font>. </p>
<p>The sequence of the calls changes as soon as <font face="Courier New">FB_SerialCommunication</font> and <font face="Courier New">FB_SerialCommunicationRS232</font> are derived from the same basic FB. In this case <font face="Courier New">FB_init()</font> is called from &#8216;outside&#8217; to &#8216;inside&#8217;. This approach cannot always be implemented for two reasons: </p>
<ol>
<li>If <font face="Courier New">FB_SerialCommunication</font> is located in a library, the inheritance cannot be changed just offhand.</li>
<li>The call sequence of <font face="Courier New">FB_init()</font> is not further defined with nesting. So it cannot be excluded that this can change in future versions.</li>
</ol>
<p>One way to solve the problem is to explicitly call <font face="Courier New">FB_SerialCommunication.FB_init()</font> from <font face="Courier New">FB_SerialCommunicationCluster.FB_init()</font>.</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
fbSerialCommunication01.FB_init(bInitRetains := bInitRetains, bInCopyCode := bInCopyCode, nDatabits := 7, eParity := E_Parity.Even, nStopbits := nStopbits);
fbSerialCommunication02.FB_init(bInitRetains := bInitRetains, bInCopyCode := bInCopyCode, nDatabits := 8, eParity := E_Parity.Even, nStopbits := nStopbits);
</pre>
<p>All parameters, including <font face="Courier New">bInitRetains</font> and <font face="Courier New">bInCopyCode</font>, are passed on directly. </p>
<p>Attention: Calling <font face="Courier New">FB_init()</font> always initializes all local variables of the instance. This must be considered as soon as <font face="Courier New">FB_init()</font> is explicitly called from the PLC task instead of implicitly before the PLC task. </p>
<h1>Access via properties</h1>
<p>Because the parameters are passed via <font face="Courier New">FB_init()</font>, they can neither be read from outside nor changed at runtime. The only exception would be an explicit call of <font face="Courier New">FB_init()</font> from the PLC task. However, this should generally be avoided, since all local variables of the instance are reinitialized in this case. </p>
<p>If, however, access should still be possible, appropriate properties can be created for the parameters: </p>
<p><a href="https://stefanhenneken.files.wordpress.com/2019/07/pic04.png"><img title="Pic04" style="background-image:none;padding-top:0;padding-left:0;display:inline;padding-right:0;border-width:0;" border="0" alt="Pic04" src="https://stefanhenneken.files.wordpress.com/2019/07/pic04_thumb.png?w=247&#038;h=253" width="247" height="253"></a> </p>
<p>The setter and getter of the respective properties access the corresponding local variables in the function block (<font face="Courier New">nInternalDatabits</font>, <font face="Courier New">eInternalParity</font> and <font face="Courier New">nInternalStopbits</font>). Thus, the parameters can be specified in the declaration as well as at runtime. </p>
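<p>Such a property could look like the following sketch. In TwinCAT, the getter and setter are separate Get/Set methods of the property; they are shown here one after the other for brevity:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
PROPERTY PUBLIC Databits : BYTE(7..8)

// Get method of the property:
Databits := THIS^.nInternalDatabits;

// Set method of the property:
THIS^.nInternalDatabits := Databits;
</pre>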
<p>By removing the setter, you can prevent the parameters from being changed at runtime. If the setter is available, <font face="Courier New">FB_init()</font> can be omitted. Properties can also be initialized directly when declaring an instance.</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
fbSerialCommunication : FB_SerialCommunication := (Databits := 8,
Parity := E_Parity.Odd,
Stopbits := 1);
</pre>
<p>The parameters of <font face="Courier New">FB_init()</font> and the properties can also be specified simultaneously:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
fbSerialCommunication : FB_SerialCommunication(nDatabits := 8, eParity := E_Parity.Odd, nStopbits := 1) :=
(Databits := 8, Parity := E_Parity.Odd, Stopbits := 1);
</pre>
<p>In this case, the initialization values of the properties have priority. Passing parameters both via properties and via <font face="Courier New">FB_init()</font> has the disadvantage that the declaration of the function block becomes unnecessarily long. Implementing both does not seem necessary to me either: if all parameters can also be written via properties, the initialization via <font face="Courier New">FB_init()</font> can be omitted. Conclusion: if parameters must not be changeable at runtime, the use of <font face="Courier New">FB_init()</font> should be considered. If write access is permissible, properties are another option. </p>
<p><a href="https://github.com/StefanHenneken/Blog-2019-04-IEC61131-FBinit-Sample01" target="_blank">Sample 1 (TwinCAT 3.1.4022) on GitHub</a></p>
Stefan HennekenStefan Hennekenhttps://david-tielke.de/post.aspx?id=d2202a05-6168-4327-8003-217c53a041c0https://david-tielke.de/post/dwx2019-inhalt-meiner-sessionsDavid Tielke#DWX2019 - The content of my sessions<p>That was it again, the Developer Week 2019 in Nuremberg. After three conference days and, of course, the traditional workshop day on Thursday, we all arrived back home exhausted but happy. Besides sessions on CoCo 2.0 and software quality, there were also two evening events by me this year, one of them together with my colleague Christian Giesswein. Now that my employee Sebastian and I have finished the follow-up work, we are making the contents of my sessions and of our joint workshop on Thursday available here.</p><h2>Software quality</h2>
<div style="text-align: center;"><iframe src="//www.slideshare.net/slideshow/embed_code/key/yO6L0NfjlrnJha" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" allowfullscreen="" style="border-width: 1px; border-style: solid; border-color: rgb(204, 204, 204); margin-bottom: 5px; max-width: 100%;"></iframe></div><div style="text-align: center; margin-bottom: 5px;"> <strong> <a href="https://david-tielke.de//www.slideshare.net/DavidTielke/softwarequalitt" title="Softwarequalität" target="_blank">Softwarequalität</a> </strong> from <strong><a href="https://www.slideshare.net/DavidTielke" target="_blank">David Tielke</a></strong> </div><div style="text-align: center; margin-bottom: 5px;"><br></div><h2 style="text-align: left; margin-bottom: 5px;">Composite Components 2.0</h2><div>Since my notebook almost completely refused to work with the stylus during the session, unfortunately the drawings you may know from me are missing this time. Instead, here are the repos with the sample implementations of Composite Components 1.0 &amp; 2.0 on GitHub:</div><div><br></div><div><a href="https://github.com/DavidTielke/CoCo1.0" target="_blank">https://github.com/DavidTielke/CoCo1.0</a><br></div><div><a href="https://github.com/DavidTielke/CoCo2.0" target="_blank">https://github.com/DavidTielke/CoCo2.0</a><br></div><div><br></div><h2>Workshop: Architecture 2.0</h2><div><br></div>
<div style="text-align: center;"><iframe src="//www.slideshare.net/slideshow/embed_code/key/1xad1Q6VvHKwAS" width="668" height="714" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" allowfullscreen="" style="border-width: 1px; border-style: solid; border-color: rgb(204, 204, 204); margin-bottom: 5px; max-width: 100%;"></iframe></div><div style="text-align: center; margin-bottom: 5px;"> <strong> <a href="https://david-tielke.de//www.slideshare.net/DavidTielke/workshop-coco-20-auf-dwx-2019" title="Workshop CoCo 2.0 auf DWX 2019" target="_blank">Workshop CoCo 2.0 auf DWX 2019</a> </strong> from <strong><a href="https://www.slideshare.net/DavidTielke" target="_blank">David Tielke</a></strong> </div><div style="text-align: center; margin-bottom: 5px;"><br></div><div style="margin-bottom:5px">Here are the sample projects developed for both versions of the architecture.</div><div style="margin-bottom:5px"><a href="https://github.com/DavidTielke/CoCo1.0" target="_blank" style="background-color: rgb(255, 255, 255);">https://github.com/DavidTielke/CoCo1.0</a></div><div style="margin-bottom:5px"><div><a href="https://github.com/DavidTielke/CoCo2.0" target="_blank">https://github.com/DavidTielke/CoCo2.0</a></div></div>Fri, 19 Jul 2019 10:49:00 ZDavid TielkeDavid TielkeDavid Tielkehttp://heise.de/-4467673Holger SchwichtenbergThe VSTS CLI is dead, long live the Azure DevOps CLIThe "Azure DevOps CLI", the successor of the "VSTS CLI", has had "General Availability" status since July 8, 2019, but is by no means finished.Thu, 11 Jul 2019 08:04:00 +02002019-07-11T08:04:00+02:00Holger SchwichtenbergHolger SchwichtenbergHolger Schwichtenberghttps://asp.net-hacker.rocks/2019/07/11/five-times-in-a-row.htmlhttp://feedproxy.google.com/~r/jgutsch/~3/Iz_fNtmsUSI/five-times-in-a-row.htmlJürgen GutschMVP for the fifth time in a row<p>Another year later, again it was July 1st and I got the email from the Global MVP Administrator I had been waiting for :-)</p>
<p><img src="../img/MVP_Logo_Horizontal_Preferred_Cyan300_CMYK_72ppi.png" alt="" /></p>
<p>Yes, this is kind of a yearly series of posts. But I'm really excited that I got re-awarded to be an MVP for the fifth year in a row. This is absolutely amazing and makes me really proud.</p>
<p>Even though some folks reduce the MVP award to just a marketing instrument of Microsoft and say MVPs are just selling Microsoft to the rest of the world, it tells me that the work I do in my spare time is important for some people out there. These folks are right anyway. Sure, I'm selling Microsoft to the rest of the world, but this is my hobby. I don't sell it explicitly, I'm just telling other people about stuff I work with, stuff I use to get things done and to earn money in the end. It is about .NET and ASP.NET as well as about software development and the developer community. It is also about stuff I just learned while looking into new technology.</p>
<p>Selling Microsoft is just a side effect with no additional effort and it doesn't feel wrong.</p>
<p>I'm not sure whether I put a lot more effort into my hobby since I became an MVP or not. I think it was a bit more, because being an MVP makes me proud, makes me feel successful and tells me that my work is important for some folks. Who cares :-)</p>
<p>As long as some folks are reading my blog, attending the user group meetings or watching my live streams, I will continue doing that kind of work.</p>
<p>As already written I'm proud of it and proud to get the fifth ring to my MVP award trophy, which will be blue this time.</p>
<p>And I'm feeling lucky that I'm able to attend the Global MVP summit the fifth time next year in March and to see all the MVP friends again. I'm really looking forward to that event and to be in the nice and always sunny Seattle area. (Yes, it is always sunny in Seattle, when I'm there.)</p>
<p>I'm also happy to see that almost all MVP friends got re-awarded.</p>
<p>Congratulations to all awarded and re-awarded MVPs!</p>
<p>Many thanks to the developer community for being a part of it. And many thanks for the amazing feedback I get as a result of my work. It is a lot of fun to help and to contribute to that awesome community :-)</p>Thu, 11 Jul 2019 00:00:00 Z2019-07-11T00:00:00ZJürgen GutschJürgen GutschJürgen Gutschhttps://marcoscheel.de/post/186138885112https://marcoscheel.de/post/186138885112Marco ScheelSetting up app permissions for Microsoft Graph calls automatically<p>For our Glück &amp; Kanja lifecycle tool I mainly rely on Microsoft Graph calls. For a clean setup I now have a script. It uses the <a href="https://docs.microsoft.com/en-us/powershell/azure/install-az-ps?view=azps-2.0.0" target="_blank">PowerShell AZ</a> module and the <a href="https://docs.microsoft.com/en-US/cli/azure/install-azure-cli?view=azure-cli-latest" target="_blank">Azure CLI</a>. Especially when creating an Azure AD app (more precisely, when assigning and granting permissions), the Azure CLI is still a bit better and more comprehensive than the AZ PowerShell.</p><p>The lifecycle app works with AD settings and groups. Extended functions rely on the <a href="https://docs.microsoft.com/en-us/azure/active-directory/governance/access-reviews-overview" target="_blank">Access Reviews</a> feature from the AAD P2 license set.
I set these Graph permissions directly via the CLI script:</p><blockquote><p>az ad app permission add --id $adapp.ApplicationId --api 00000003-0000-0000-c000-000000000000 --api-permissions 19dbc75e-c2e2-444c-a770-ec69d8559fc7=Role #msgraph Directory.ReadWrite.All</p><p>az ad app permission add --id $adapp.ApplicationId --api 00000003-0000-0000-c000-000000000000 --api-permissions 62a82d76-70ea-41e2-9197-370581804d09=Role #msgraph Group.ReadWrite.All</p><p>az ad app permission add --id $adapp.ApplicationId --api 00000003-0000-0000-c000-000000000000 --api-permissions ef5f7d5c-338f-44b0-86c3-351f46c8bb5f=Role #msgraph AccessReview.ReadWrite.All</p><p>az ad app permission add --id $adapp.ApplicationId --api 00000003-0000-0000-c000-000000000000 --api-permissions 60a901ed-09f7-4aa5-a16e-7dd3d6f9de36=Role #msgraph ProgramControl.ReadWrite.All
<br/></p></blockquote><p>The Azure CLI can then also take care of the admin grant right away (as long as you are not running in the Azure Cloud Shell!):</p><blockquote><p>az ad app permission admin-consent --id $adapp.ApplicationId
<br/></p></blockquote><p>Here is an example of what the result looks like in the Azure AD portal:</p><figure class="tmblr-full" data-orig-width="1703" data-orig-height="650"><img alt="image" src="https://66.media.tumblr.com/11bb83d3f42b7361ef4cb71e719c3ec7/tumblr_inline_ptyyrbu8je1qd9b02_540.png" data-orig-width="1703" data-orig-height="650"/></figure><p>If you are looking for the GUID of a permission, you can simply use this command (<a href="https://docs.microsoft.com/en-us/powershell/azure/active-directory/install-adv2?view=azureadps-2.0" target="_blank">Azure Active Directory PowerShell 2.0</a>) to access the constantly growing set of app permissions:</p><blockquote><p>(Get-AzureADServicePrincipal -filter "DisplayName eq 'Microsoft Graph'").AppRoles | Select Id, Value | Sort Value</p><p>Id Value<br/>-- -----<br/>d07a8cc0-3d51-4b77-b3b0-32704d1f69fa AccessReview.Read.All<br/>ef5f7d5c-338f-44b0-86c3-351f46c8bb5f AccessReview.ReadWrite.All<br/>18228521-a591-40f1-b215-5fad4488c117 AccessReview.ReadWrite.Membership<br/>134fd756-38ce-4afd-ba33-e9623dbe66c2 AdministrativeUnit.Read.All<br/>5eb59dd3-1da2-4329-8733-9dabdc435916 AdministrativeUnit.ReadWrite.All<br/>1bfefb4e-e0b5-418b-a88f-73c46d2cc8e9 Application.ReadWrite.All<br/>18a4783c-866b-4cc7-a460-3d5e5662c884 Application.ReadWrite.OwnedBy<br/>b0afded3-3588-46d8-8b3d-9842eff778da AuditLog.Read.All<br/>798ee544-9d2d-430c-a058-570e29e34338 Calendars.Read<br/>ef54d2bf-783f-4e0f-bca1-3210c0444d99 Calendars.ReadWrite<br/>a7a681dc-756e-4909-b988-f160edc6655f Calls.AccessMedia.All<br/>284383ee-7f6e-4e40-a2a8-e85dcb029101 Calls.Initiate.All<br/>4c277553-8a09-487b-8023-29ee378d8324 Calls.InitiateGroupCall.All<br/>f6b49018-60ab-4f81-83bd-22caeabfed2d Calls.JoinGroupCall.All<br/>fd7ccf6b-3d28-418b-9701-cd10f5cd2fd4 Calls.JoinGroupCallAsGuest.All<br/>7b2449af-6ccd-4f4d-9f78-e550c193f0d1 ChannelMessage.Read.All<br/>4d02b0cc-d90b-441f-8d82-4fb55c34d6bb 
ChannelMessage.UpdatePolicyViolation.All<br/>6b7d71aa-70aa-4810-a8d9-5d9fb2830017 Chat.Read.All<br/>294ce7c9-31ba-490a-ad7d-97a7d075e4ed Chat.ReadWrite.All<br/>7e847308-e030-4183-9899-5235d7270f58 Chat.UpdatePolicyViolation.All<br/>089fe4d0-434a-44c5-8827-41ba8a0b17f5 Contacts.Read<br/>6918b873-d17a-4dc1-b314-35f528134491 Contacts.ReadWrite<br/>1138cb37-bd11-4084-a2b7-9f71582aeddb Device.ReadWrite.All<br/>7a6ee1e7-141e-4cec-ae74-d9db155731ff DeviceManagementApps.Read.All<br/>dc377aa6-52d8-4e23-b271-2a7ae04cedf3 DeviceManagementConfiguration.Read.All<br/>2f51be20-0bb4-4fed-bf7b-db946066c75e DeviceManagementManagedDevices.Read.All<br/>58ca0d9a-1575-47e1-a3cb-007ef2e4583b DeviceManagementRBAC.Read.All<br/>06a5fe6d-c49d-46a7-b082-56b1b14103c7 DeviceManagementServiceConfig.Read.All<br/>7ab1d382-f21e-4acd-a863-ba3e13f7da61 Directory.Read.All<br/>19dbc75e-c2e2-444c-a770-ec69d8559fc7 Directory.ReadWrite.All<br/>7e05723c-0bb0-42da-be95-ae9f08a6e53c Domain.ReadWrite.All<br/>7c9db06a-ec2d-4e7b-a592-5a1e30992566 EduAdministration.Read.All<br/>9bc431c3-b8bc-4a8d-a219-40f10f92eff6 EduAdministration.ReadWrite.All<br/>4c37e1b6-35a1-43bf-926a-6f30f2cdf585 EduAssignments.Read.All<br/>6e0a958b-b7fc-4348-b7c4-a6ab9fd3dd0e EduAssignments.ReadBasic.All<br/>0d22204b-6cad-4dd0-8362-3e3f2ae699d9 EduAssignments.ReadWrite.All<br/>f431cc63-a2de-48c4-8054-a34bc093af84 EduAssignments.ReadWriteBasic.All<br/>e0ac9e1b-cb65-4fc5-87c5-1a8bc181f648 EduRoster.Read.All<br/>0d412a8c-a06c-439f-b3ec-8abcf54d2f96 EduRoster.ReadBasic.All<br/>d1808e82-ce13-47af-ae0d-f9b254e6d58a EduRoster.ReadWrite.All<br/>38c3d6ee-69ee-422f-b954-e17819665354 ExternalItem.ReadWrite.All<br/>01d4889c-1287-42c6-ac1f-5d1e02578ef6 Files.Read.All<br/>75359482-378d-4052-8f01-80520e7db3cd Files.ReadWrite.All<br/>5b567255-7703-4780-807c-7be8301ae99b Group.Read.All<br/>62a82d76-70ea-41e2-9197-370581804d09 Group.ReadWrite.All<br/>e321f0bb-e7f7-481e-bb28-e3b0b32d4bd0 IdentityProvider.Read.All<br/>90db2b9a-d928-4d33-a4dd-8442ae3d41e4 
IdentityProvider.ReadWrite.All<br/>6e472fd1-ad78-48da-a0f0-97ab2c6b769e IdentityRiskEvent.Read.All<br/>db06fb33-1953-4b7b-a2ac-f1e2c854f7ae IdentityRiskEvent.ReadWrite.All<br/>dc5007c0-2d7d-4c42-879c-2dab87571379 IdentityRiskyUser.Read.All<br/>656f6061-f9fe-4807-9708-6a2e0934df76 IdentityRiskyUser.ReadWrite.All<br/>19da66cb-0fb0-4390-b071-ebc76a349482 InformationProtectionPolicy.Read.All<br/>810c84a8-4a9e-49e6-bf7d-12d183f40d01 Mail.Read<br/>e2a3a72e-5f79-4c64-b1b1-878b674786c9 Mail.ReadWrite<br/>b633e1c5-b582-4048-a93e-9f11b44c7e96 Mail.Send<br/>40f97065-369a-49f4-947c-6a255697ae91 MailboxSettings.Read<br/>6931bccd-447a-43d1-b442-00a195474933 MailboxSettings.ReadWrite<br/>658aa5d8-239f-45c4-aa12-864f4fc7e490 Member.Read.Hidden<br/>3aeca27b-ee3a-4c2b-8ded-80376e2134a4 Notes.Read.All<br/>0c458cef-11f3-48c2-a568-c66751c238c0 Notes.ReadWrite.All<br/>c1684f21-1984-47fa-9d61-2dc8c296bb70 OnlineMeetings.Read.All<br/>b8bb2037-6e08-44ac-a4ea-4674e010e2a4 OnlineMeetings.ReadWrite.All<br/>0b57845e-aa49-4e6f-8109-ce654fffa618 OnPremisesPublishingProfiles.ReadWrite.All<br/>b528084d-ad10-4598-8b93-929746b4d7d6 People.Read.All<br/>246dd0d5-5bd0-4def-940b-0421030a5b68 Policy.Read.All<br/>79a677f7-b79d-40d0-a36a-3e6f8688dd7a Policy.ReadWrite.TrustFramework<br/>eedb7fdd-7539-4345-a38b-4839e4a84cbd ProgramControl.Read.All<br/>60a901ed-09f7-4aa5-a16e-7dd3d6f9de36 ProgramControl.ReadWrite.All<br/>230c1aed-a721-4c5d-9cb4-a90514e508ef Reports.Read.All<br/>5e0edab9-c148-49d0-b423-ac253e121825 SecurityActions.Read.All<br/>f2bf083f-0179-402a-bedb-b2784de8a49b SecurityActions.ReadWrite.All<br/>bf394140-e372-4bf9-a898-299cfc7564e5 SecurityEvents.Read.All<br/>d903a879-88e0-4c09-b0c9-82f6a1333f84 SecurityEvents.ReadWrite.All<br/>a82116e5-55eb-4c41-a434-62fe8a61c773 Sites.FullControl.All<br/>0c0bf378-bf22-4481-8f81-9e89a9b4960a Sites.Manage.All<br/>332a536c-c7ef-4017-ab91-336970924f0d Sites.Read.All<br/>9492366f-7969-46a4-8d15-ed1a20078fff 
Sites.ReadWrite.All<br/>21792b6c-c986-4ffc-85de-df9da54b52fa ThreatIndicators.ReadWrite.OwnedBy<br/>fff194f1-7dce-4428-8301-1badb5518201 TrustFrameworkKeySet.Read.All<br/>4a771c9a-1cf2-4609-b88e-3d3e02d539cd TrustFrameworkKeySet.ReadWrite.All<br/>405a51b5-8d8d-430b-9842-8be4b0e9f324 User.Export.All<br/>09850681-111b-4a89-9bed-3f2cae46d706 User.Invite.All<br/>df021288-bdef-4463-88db-98f22de89214 User.Read.All<br/>741f803b-c850-494e-b5df-cde7c675a1ca User.ReadWrite.All</p></blockquote><div class="feedflare">
</div>Mon, 08 Jul 2019 13:47:21 +0200Marco ScheelMarco ScheelMarco Scheelhttps://asp.net-hacker.rocks/2019/07/08/book.htmlhttp://feedproxy.google.com/~r/jgutsch/~3/x2D3yNdQrdA/book.htmlJürgen GutschSelf-publishing a book<p>While writing the Customizing ASP.NET Core series, a reader asked me to bundle all the posts into a book. I thought about it for a while, also because I had tried to write a book in the past together with a colleague at the <a href="https://yoo.digital/">YOO</a>. But publishing a book with a publisher behind it turned out to be stressful. Since we both have families with small kids and a job where we work on different projects, the book never had top priority. The publisher didn't accept that fact. Fortunately, the publisher canceled the contract because we weren't able to deliver a chapter per week.</p>
<p>This is the planned cover for the bundled series:</p>
<p><img src="https://asp.net-hacker.rocks/img/book/title.png" alt="" /></p>
<p>(I took that photo at the Tschentenalp above Adelboden in Switzerland. It shows the view of the Lohner mountains.)</p>
<h2>Leanpub</h2>
<p>In the past, I had already looked at different self-publishing platforms like <a href="https://leanpub.com/">Leanpub</a>, which looks pretty easy and modern. But there is a trade-off:</p>
<ul>
<li>Leanpub gives me 80% of the royalties, but I have to do the publishing and the marketing to sell the book myself</li>
<li>A publisher only gives me 20%, but does professional publishing and marketing and will sell a lot more books</li>
</ul>
<p>In the end, you cannot get rich by publishing a book like this, but it is still nice to get some money out of your effort. Amazon also provides a way to publish a book by yourself, which looks nice for self-publishers. I'm going to try this as well.</p>
<p>In the past, Leanpub also provided print on demand, but this seems to have been discontinued; I couldn't find any information about it anymore. Anyway, the platform is good enough to publish in various eBook formats.</p>
<p>So I decided to go with Leanpub to try the self-publishing way.</p>
<h2>Writing</h2>
<p>Even though most of the content was already written for the blog, I decided to go over all the parts and update everything to ASP.NET Core 3.0. I also decided to keep the ASP.NET Core 2.2 information, because it will stay valid for a while. So the chapters will cover both 3.0 and 2.2.</p>
<p>Writing for Leanpub also works with GitHub and Markdown files, which reduces the effort. I'm able to bind a GitHub repository to Leanpub and push Markdown files into it. The different files need to be structured and ordered in a book.txt file; every Markdown file is a chapter in that book.</p>
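<p>Just to illustrate the structure (the chapter file names below are made up for illustration, they are not the actual files of the book): a book.txt for Leanpub simply lists the Markdown files in reading order, one file per line, and each file becomes a chapter:</p>

```text
preface.md
technical-requirements.md
logging.md
dependency-injection.md
middlewares.md
postface.md
about-the-author.md
```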
<p>Currently I have 13 chapters, a preface, an "about me" chapter, a chapter describing the technical requirements for this book, and a small postface. All in all, about 80 pages.</p>
<h2>Rewriting</h2>
<p>Sometimes it was hard to rewrite the demos and contents for ASP.NET Core 3.0. If you are writing about customizations that go deep into the APIs, you will definitely face some significant changes. It wasn't that easy to get a custom DI container running in ASP.NET Core 3.0, for example. Also, adding middlewares using a custom route changed from 2.2 to 3.0 Preview 3, and changed again from Preview 3 to Preview 6. Even though I already had some experience with 3.0, there were some changes between the different previews.</p>
<p>But luckily there are also some chapters without any differences between 2.2 and 3.0.</p>
<h2>Updating the blog posts</h2>
<p>I'm not yet sure whether I need to update the blog posts or not. My current idea is to create new posts and to mention them in the old ones.</p>
<p>There is definitely enough material for a lot of new posts about ASP.NET Core. One example is the new framework reference, which was a pain in the ass during a live stream where I tried to update a Preview 3 solution to Preview 6.</p>
<h2>Publishing</h2>
<p>Currently I'm not sure when I will be able to publish this book. At the moment it is being reviewed by two people doing the non-technical review and one person doing the technical review.</p>
<p>I think I'm going to publish this book during the summer.</p>
<h2>Contributing</h2>
<p>If you want to help make this book better, feel free to go to the repositories, fork them, and create PRs.</p>
<ul>
<li>Demo project: <a href="https://github.com/JuergenGutsch/customizing-aspnetcore">https://github.com/JuergenGutsch/customizing-aspnetcore</a></li>
<li>Book repository: <a href="https://github.com/JuergenGutsch/customizing-aspnetcore-book">https://github.com/JuergenGutsch/customizing-aspnetcore-book</a></li>
</ul>
<p>It would also be helpful if you proposed a price you would pay for such a book. So far I have gotten some proposals, but they suggest a pretty high price from my perspective. It seems some folks are really willing to pay around 25 EUR: <a href="https://leanpub.com/customizing-aspnetcore/">https://leanpub.com/customizing-aspnetcore/</a>. What do you think?</p>Mon, 08 Jul 2019 00:00:00 Z2019-07-08T00:00:00ZJürgen GutschJürgen GutschJürgen Gutschhttps://marcoscheel.de/post/185978935612https://marcoscheel.de/post/185978935612Marco ScheelMicrosoft Graph, Postman, and how do I get an app-only token?<p>Microsoft Graph is the "Swiss Army knife" for everyone in the Microsoft 365 space: one API for "all" services and, even better, always the same authentication model. In episode <a href="https://anchor.fm/hairlessinthecloud/episodes/018---Wir-kennen-die-Passwrter-Eurer-User-und-Microsoft-Graph-e42di4/a-aff2ff" target="_blank">18</a> of the <a href="https://hairlessinthecloud.com" target="_blank">Hairless in the Cloud</a> podcast I already shared my impressions of Microsoft Graph. The Graph Explorer on the website is a good way to get to know the Graph. For my part, however, I mostly work with the Graph without user interaction, so I use application permissions in my applications. Most APIs (see Teams), however, initially ship without app permissions. The disappointment is great when you have done your research in the Graph Explorer and then discover that the calls fail with application permissions.</p><p>A few months ago, Jeremy Thake from the Microsoft Graph team started publishing the samples (and more) from the <a href="https://developer.microsoft.com/en-us/graph/blogs/postman-collections/" target="_blank">Graph Explorer as a collection for Postman</a>.
This collection simplifies testing your own calls and provides inspiration for new scenarios.</p><p>In the past, I "stole" the token from my Azure Function and stored it directly in Postman as a bearer token:</p><figure class="tmblr-full" data-orig-height="304" data-orig-width="1197"><img alt="image" src="https://66.media.tumblr.com/630a46cbbcfa0ae9878109fd704455f0/tumblr_inline_ptyub5DAug1qd9b02_540.png" data-orig-height="304" data-orig-width="1197"/></figure><p>But there is a much more elegant way. The MS Graph Postman collection works with the environment and variables. A method that essentially corresponds to the code in your own app (in my case an Azure Function) is also on board: Postman offers native OAuth integration. You simply select OAuth 2.0 and can then enter the following information from your own app:</p><figure class="tmblr-full" data-orig-height="454" data-orig-width="616"><img alt="image" src="https://66.media.tumblr.com/e5cc2ab721b7dfff0d744089fca4dbe1/tumblr_inline_ptyun0Nx1r1qd9b02_540.png" data-orig-height="454" data-orig-width="616"/></figure><p><br/></p><ul><li>Grant Type: Client Credentials<br/></li><li>Access Token URL: <a href="https://login.microsoftonline.com/malachor.onmicrosoft.com/oauth2/v2.0/token" target="_blank">https://login.microsoftonline.com/malachor.onmicrosoft.com/oauth2/v2.0/token</a><br/></li><li>Client ID: 50641771-73ac-42fa-9b6f-f25e49ec6871 <br/></li><li>Client Secret: dvMR0c_*_RlxvV*JQQZGDICH6N04ZT2/ <br/></li><li>Scope: <a href="https://graph.microsoft.com/.default" target="_blank">https://graph.microsoft.com/.default</a><br/>If you always want all scopes/permissions, this scope is the simplest</li></ul><p><b>Note</b>: I have already deleted my app again.
It is no longer usable, so the secret in the code is no longer a secret either.<br/></p><p>Via "Request Token" I can then fetch a token and use it for all further requests. To inspect the token (did the scope work?), you can simply go to <a href="https://jwt.io" target="_blank">jwt.io</a> or the Microsoft service <a href="https://jwt.ms" target="_blank">jwt.ms</a>.</p><p><b>Note</b>: Such token decoders are a great thing, but please keep in mind that if you do this with production tokens, you have to trust the service, because at that moment it holds your permissions! In my case, the two websites could take the token and use it against my tenant! I'm using my lab tenant here, and I believe I know what I'm doing :) So all good!</p><figure class="tmblr-full" data-orig-height="1318" data-orig-width="1078"><img alt="image" src="https://66.media.tumblr.com/9fff07426248468464c060ef97805a59/tumblr_inline_ptyuxunkD41qd9b02_540.png" data-orig-height="1318" data-orig-width="1078"/></figure><p>With the token you can then, for example in my case, view the Azure AD access reviews.</p><figure class="tmblr-full" data-orig-height="1171" data-orig-width="1197"><img alt="image" src="https://66.media.tumblr.com/26c020f6d74e647fe10b2292c206da17/tumblr_inline_ptyuzrHZw21qd9b02_540.png" data-orig-height="1171" data-orig-width="1197"/></figure><p>My debugging has become much easier, since I can easily test my app permissions this way.</p><div class="feedflare">
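<p>What Postman does behind "Request Token" can also be sketched in a few lines of code. The snippet below is a minimal illustration, not part of the original post; the tenant, client ID, and secret are placeholders. It builds the OAuth2 client credentials request and decodes a JWT payload locally, so you don't have to hand a production token to a third-party decoder site.</p>

```python
# Sketch: app-only (client credentials) token request for Microsoft Graph,
# plus a local JWT payload decoder. All credentials below are placeholders.
import base64
import json
from urllib.parse import urlencode

def build_token_request(tenant, client_id, client_secret):
    """Return (url, form_body) for the OAuth2 client credentials flow."""
    url = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://graph.microsoft.com/.default",
    })
    return url, body

def decode_jwt_payload(token):
    """Decode the payload (middle segment) of a JWT without verifying it."""
    payload = token.split(".")[1]
    padded = payload + "=" * (-len(payload) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(padded))
```

<p>POST the body to the URL with Content-Type application/x-www-form-urlencoded; the JSON response contains the access_token, and its "roles" claim lists the granted app permissions.</p>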
</div>Mon, 01 Jul 2019 17:06:23 +0200Marco ScheelMarco ScheelMarco Scheelhttps://blog.codeinside.eu/2019/06/30/jint-invoke-javascript-from-dotnethttp://feedproxy.google.com/~r/Code-insideBlog/~3/BLnTSCwz1I4/jint-invoke-javascript-from-dotnetCode-Inside BlogJint: Invoke Javascript from .NET<p>If you have ever dreamed of using JavaScript in your .NET application, there is a simple way: use <strong><a href="https://github.com/sebastienros/jint">Jint</a></strong>.</p>
<p>Jint implements the ECMAScript 5.1 spec and can be used from any .NET implementation (Xamarin, .NET Framework, .NET Core). Just use the <a href="https://www.nuget.org/packages/Jint/">NuGet package</a> - it has <strong>no</strong> dependencies on other stuff; it’s a single .dll and you are done!</p>
<h2 id="why-should-integrate-javascript-in-my-application">Why should I integrate JavaScript in my application?</h2>
<p>In our product “OneOffixx” we use Javascript as a scripting language with some “OneOffixx” specific objects.</p>
<p>The pro arguments for Javascript:</p>
<ul>
<li>It’s a well-known language (even with all the brainfuck in it)</li>
<li>You can sandbox it quite simply</li>
<li>With a library like Jint it is super simple to integrate</li>
</ul>
<p>I highly recommend checking out the GitHub page, but here are some simple examples, which should show how to use it:</p>
<h2 id="example-1-simple-start">Example 1: Simple start</h2>
<p>After installing the NuGet package you can use the following code to see one of the most basic implementations:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public static void SimpleStart()
{
    var engine = new Jint.Engine();
    Console.WriteLine(engine.Execute("1 + 2 + 3 + 4").GetCompletionValue());
}
</code></pre></div></div>
<p>We create a new “Engine”, execute some simple JavaScript, and return the completion value - easy as that!</p>
<h2 id="example-2-use-c-function-from-javascript">Example 2: Use C# function from Javascript</h2>
<p>Let’s say we want to provide a scripting environment where the script can access some C#-based functions. This “bridge” is created via the “Engine” object: we create a value that points to our C# implementation.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public static void DefinedDotNetApi()
{
    var engine = new Jint.Engine();
    engine.SetValue("demoJSApi", new DemoJavascriptApi());
    var result = engine.Execute("demoJSApi.helloWorldFromDotNet('TestTest')").GetCompletionValue();
    Console.WriteLine(result);
}

public class DemoJavascriptApi
{
    public string helloWorldFromDotNet(string name)
    {
        return $"Hello {name} - this is executed in {typeof(Program).FullName}";
    }
}
</code></pre></div></div>
<h2 id="example-3-use-javascript-from-c">Example 3: Use Javascript from C#</h2>
<p>Of course, we can also go the other way around:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public static void InvokeFunctionFromDotNet()
{
    var engine = new Engine();
    var fromValue = engine.Execute("function jsAdd(a, b) { return a + b; }").GetValue("jsAdd");
    Console.WriteLine(fromValue.Invoke(5, 5));
    Console.WriteLine(engine.Invoke("jsAdd", 3, 3));
}
</code></pre></div></div>
<h2 id="example-4-use-a-common-javascript-library">Example 4: Use a common Javascript library</h2>
<p>Jint allows you to inject any JavaScript code (be aware: there is no DOM, so only “libraries” can be used).</p>
<p>In this example we use <a href="https://handlebarsjs.com/">handlebars.js</a>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public static void Handlebars()
{
    var engine = new Jint.Engine();
    engine.Execute(File.ReadAllText("handlebars-v4.0.11.js"));
    engine.SetValue("context", new
    {
        cats = new[]
        {
            new {name = "Feivel"},
            new {name = "Lilly"}
        }
    });
    engine.SetValue("source", "{{#each cats}}{{name}} says meow!!!\n{{/each}}");
    engine.Execute("var template = Handlebars.compile(source);");
    var result = engine.Execute("template(context)").GetCompletionValue();
    Console.WriteLine(result);
}
</code></pre></div></div>
<h2 id="example-5-repl">Example 5: REPL</h2>
<p>If you are crazy enough, you can build a simple REPL like this (not sure if this would be a good idea for production, but it works!)</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public static void Repl()
{
    var engine = new Jint.Engine();
    while (true)
    {
        Console.Write("&gt; ");
        var statement = Console.ReadLine();
        var result = engine.Execute(statement).GetCompletionValue();
        Console.WriteLine(result);
    }
}
</code></pre></div></div>
<h2 id="jint-javascript-integration-done-right">Jint: Javascript integration done right!</h2>
<p>As you can see, Jint is quite powerful. If you feel the need to integrate JavaScript in your application, check out Jint!</p>
<p>The sample code can be found <a href="https://github.com/Code-Inside/Samples/tree/master/2018/JintSample/JintPlayground">here </a>.</p>
<p>Hope this helps!</p>Sun, 30 Jun 2019 23:45:00 ZCode-Inside BlogCode-Inside BlogCode-Inside Bloghttps://www.norberteder.com/?p=6499https://www.norberteder.com/scratch-kinder-lernen-programmieren/Norbert EderScratch &#8211; children learn programming<p>Nothing works without computers anymore. That makes it all the more important to understand how both computers and the software running on them work. To foster this important understanding, children should already come into contact with programming. There are many different tools for this. One that I &#8211; from experience &#8211; can highly [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.norberteder.com/scratch-kinder-lernen-programmieren/">Scratch &#8211; children learn programming</a> first appeared on <a rel="nofollow" href="https://www.norberteder.com">Norbert Eder</a>.</p>
Thu, 13 Jun 2019 06:50:39 ZNorbert Eder<p>Nothing works without computers anymore. That makes it all the more important to understand how both computers and the software running on them work. To foster this important understanding, children should come into contact with programming early on.</p>
<p>There are many different tools for this. One that I can &#8211; from experience &#8211; highly recommend is <a href="https://scratch.mit.edu/" title="Scratch">Scratch</a>.</p>
<p>Scratch is a great tool for beginners, but especially for children and teenagers. Programs are built from interactive components that can be assembled and brought to "life". Using different building blocks, the components can be moved; it is possible to react to events, play sounds, and much more.</p>
<p>The <strong>building-block system</strong> prevents syntax errors. Instead of frustration there are quick successes, which encourage further "experiments". Within a very short time, small games can be developed this way.</p>
<p>In this playful way, children get to know some of the basic concepts of programming and can then move on to more complex languages and develop further.</p>
<p>The <strong>requirements for Scratch are low</strong>: a computer and a browser are all that is needed. Development takes place entirely in the browser. Programs can be saved and loaded and are thus immediately available. It is also possible to develop offline; for that, <a href="https://scratch.mit.edu/download" title="Scratch-Desktop">Scratch Desktop</a> is available for Windows 10 and macOS 10.13+.</p>
<div id="attachment_6501" style="width: 660px" class="wp-caption aligncenter"><a href="https://www.norberteder.com/wp-content/uploads/2019/06/scratch-programmieren-fuer-kinder-norbert-eder.png"><img aria-describedby="caption-attachment-6501" src="https://www.norberteder.com/wp-content/uploads/2019/06/scratch-programmieren-fuer-kinder-norbert-eder-650x485.png" alt="Scratch - learning to program" width="650" height="485" class="size-large wp-image-6501" srcset="https://www.norberteder.com/wp-content/uploads/2019/06/scratch-programmieren-fuer-kinder-norbert-eder-650x485.png 650w, https://www.norberteder.com/wp-content/uploads/2019/06/scratch-programmieren-fuer-kinder-norbert-eder-300x224.png 300w, https://www.norberteder.com/wp-content/uploads/2019/06/scratch-programmieren-fuer-kinder-norbert-eder-768x573.png 768w, https://www.norberteder.com/wp-content/uploads/2019/06/scratch-programmieren-fuer-kinder-norbert-eder-810x605.png 810w, https://www.norberteder.com/wp-content/uploads/2019/06/scratch-programmieren-fuer-kinder-norbert-eder-1140x851.png 1140w, https://www.norberteder.com/wp-content/uploads/2019/06/scratch-programmieren-fuer-kinder-norbert-eder.png 1385w" sizes="(max-width: 650px) 100vw, 650px" /></a><p id="caption-attachment-6501" class="wp-caption-text">Scratch &#8211; learning to program</p></div>
<p>So that you don't have to start all alone, there is also a large community and plenty of help for getting started. Maybe there is a CoderDojo near you. Here in Austria there are the <a href="https://coderdojo-linz.github.io/" title="CoderDojo Linz">CoderDojo Linz</a> and the <a href="https://www.coderdojograz.com/" title="CoderDojo Graz">CoderDojo Graz</a>. There you can get support if, as a parent, you are not quite so well versed in these things.</p>
<p>Particularly helpful is the list of <a href="https://coderdojo-linz.github.io/infos/uebungsbeispiele.html" title="Übungsbeispiele CoderDojo Linz">exercises</a> for Scratch and HTML from CoderDojo Linz.</p>
<p>With that in mind, I wish you <strong>happy coding</strong> and interesting, instructive hours with your kids.</p>
<p>The post <a rel="nofollow" href="https://www.norberteder.com/scratch-kinder-lernen-programmieren/">Scratch &#8211; children learn programming</a> first appeared on <a rel="nofollow" href="https://www.norberteder.com">Norbert Eder</a>.</p>
Norbert EderNorbert Ederhttp://heise.de/-4442983Holger SchwichtenbergDetecting .NET Framework 4.8As with its predecessors, an installed .NET Framework 4.8 can be detected via a registry entry.Sun, 09 Jun 2019 09:01:00 +02002019-06-09T09:01:00+02:00Holger SchwichtenbergHolger SchwichtenbergHolger Schwichtenberghttp://stefanhenneken.wordpress.com/?p=1482https://stefanhenneken.wordpress.com/2019/06/07/parameterbergabe-per-fb_init/Stefan HennekenIEC 61131-3: Passing parameters via FB_initDepending on the task, it may be necessary for function blocks to require parameters that are used only once, for initialization. The FB_init() method offers an elegant way of passing them. Before TwinCAT 3, initialization parameters were very often passed via input variables. This had the disadvantage that the function blocks became unnecessarily large in the graphical representations. Also [&#8230;]Fri, 07 Jun 2019 16:47:00 ZStefan Henneken<p>Depending on the task, it may be necessary for function blocks to require parameters that are used only once, for initialization. The <font face="Courier New">FB_init()</font> method offers an elegant way of passing them.</p>
<p><span id="more-1482"></span></p>
<p>Before TwinCAT 3, initialization parameters were very often passed via input variables.</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
(* TwinCAT 2 *)
FUNCTION_BLOCK FB_SerialCommunication
VAR_INPUT
    nDatabits : BYTE(7..8);
    eParity   : E_Parity;
    nStopbits : BYTE(1..2);
END_VAR
</pre>
<p>This had the disadvantage that function blocks became unnecessarily large in the graphical representations. It was also not possible to prevent the parameters from being changed at runtime.</p>
<p>The <font face="Courier New">FB_init()</font> method is very helpful here. It is implicitly executed once before the PLC task starts and can be used to perform initialization tasks.</p>
<p>The dialog for adding methods offers a ready-made template for this.</p>
<p><a href="https://stefanhenneken.files.wordpress.com/2019/06/pic01.png"><img title="Pic01" style="background-image:none;padding-top:0;padding-left:0;display:inline;padding-right:0;border-width:0;" border="0" alt="Pic01" src="https://stefanhenneken.files.wordpress.com/2019/06/pic01_thumb.png?w=477&#038;h=404" width="477" height="404"></a></p>
<p>The method contains two input variables that indicate under which conditions the method is executed. They must be neither deleted nor modified. However, <font face="Courier New">FB_init()</font> can be extended with additional input variables.</p>
<h1>Example</h1>
<p>As an example, we will use a block for communication via a serial port (<font face="Courier New">FB_SerialCommunication</font>). This block is also supposed to initialize the serial port with the necessary parameters. For this reason, three variables are added to <font face="Courier New">FB_init()</font>:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
METHOD FB_init : BOOL
VAR_INPUT
    bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
    bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
    nDatabits    : BYTE(7..8);
    eParity      : E_Parity;
    nStopbits    : BYTE(1..2);
END_VAR
</pre>
<p>The serial port is not initialized directly in <font face="Courier New">FB_init()</font>. The parameters therefore have to be copied into variables that are located in the function block.</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
FUNCTION_BLOCK PUBLIC FB_SerialCommunication
VAR
    nInternalDatabits : BYTE(7..8);
    eInternalParity   : E_Parity;
    nInternalStopbits : BYTE(1..2);
END_VAR
</pre>
<p>The values from <font face="Courier New">FB_init()</font> are copied into these three variables during initialization.</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
METHOD FB_init : BOOL
VAR_INPUT
    bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
    bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
    nDatabits    : BYTE(7..8);
    eParity      : E_Parity;
    nStopbits    : BYTE(1..2);
END_VAR
THIS^.nInternalDatabits := nDatabits;
THIS^.eInternalParity   := eParity;
THIS^.nInternalStopbits := nStopbits;
</pre>
<p>When an instance of <font face="Courier New">FB_SerialCommunication</font> is declared, these three additional parameters must be specified. The values are given in parentheses directly after the name of the function block:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
fbSerialCommunication : FB_SerialCommunication(nDatabits := 8,
                                               eParity := E_Parity.None,
                                               nStopbits := 1);
</pre>
<p>Even before the PLC task starts, the <font face="Courier New">FB_init()</font> method is implicitly called, so that the internal variables of the function block receive the desired values.</p>
<p><a href="https://stefanhenneken.files.wordpress.com/2019/06/pic02.png"><img title="Pic02" style="background-image:none;padding-top:0;padding-left:0;display:inline;padding-right:0;border-width:0;" border="0" alt="Pic02" src="https://stefanhenneken.files.wordpress.com/2019/06/pic02_thumb.png?w=557&#038;h=152" width="557" height="152"></a> </p>
<p>When the PLC task starts and the instance of <font face="Courier New">FB_SerialCommunication</font> is called, the serial port can now be initialized.</p>
<p>It is always necessary to specify all parameters. A declaration without a complete list of the parameters is not allowed and produces an error message when compiling:</p>
<p><a href="https://stefanhenneken.files.wordpress.com/2019/06/pic03.png"><img title="Pic03" style="background-image:none;padding-top:0;padding-left:0;display:inline;padding-right:0;border-width:0;" border="0" alt="Pic03" src="https://stefanhenneken.files.wordpress.com/2019/06/pic03_thumb.png?w=562&#038;h=155" width="562" height="155"></a> </p>
<h3>Arrays</h3>
<p>If <font face="Courier New">FB_init()</font> is used with arrays, the complete parameters must be specified for each element (with square brackets):</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
aSerialCommunication : ARRAY[1..2] OF FB_SerialCommunication[
    (nDatabits := 8, eParity := E_Parity.None, nStopbits := 1),
    (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 1)];
</pre>
<p>If all elements are to receive the same initialization values, it is sufficient to specify the parameters once (without square brackets):</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
aSerialCommunication : ARRAY[1..2] OF FB_SerialCommunication(nDatabits := 8,
                                                             eParity := E_Parity.None,
                                                             nStopbits := 1);
</pre>
<p>Multidimensional arrays are also possible. Here too, all initialization values must be specified:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
aSerialCommunication : ARRAY[1..2, 5..6] OF FB_SerialCommunication[
    (nDatabits := 8, eParity := E_Parity.None, nStopbits := 1),
    (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 1),
    (nDatabits := 8, eParity := E_Parity.Odd, nStopbits := 2),
    (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 2)];
</pre>
<h3>Inheritance</h3>
<p>When inheritance is used, the <font face="Courier New">FB_init()</font> method is always inherited as well. <font face="Courier New">FB_SerialCommunicationRS232</font> serves as an example here:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
FUNCTION_BLOCK PUBLIC FB_SerialCommunicationRS232 EXTENDS FB_SerialCommunication
</pre>
<p>When an instance of <font face="Courier New">FB_SerialCommunicationRS232</font> is declared, the parameters of <font face="Courier New">FB_init()</font> that were inherited from <font face="Courier New">FB_SerialCommunication</font> must also be specified:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
fbSerialCommunicationRS232 : FB_SerialCommunicationRS232(nDatabits := 8,
                                                         eParity := E_Parity.Odd,
                                                         nStopbits := 1);
</pre>
<p>It is also possible to override <font face="Courier New">FB_init()</font>. In this case, the same input variables must be present, in the same order and of the same data types, as in the base FB (<font face="Courier New">FB_SerialCommunication</font>). However, additional input variables can be added, so that the derived function block (<font face="Courier New">FB_SerialCommunicationRS232</font>) receives additional parameters:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
METHOD FB_init : BOOL
VAR_INPUT
    bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
    bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
    nDatabits    : BYTE(7..8);
    eParity      : E_Parity;
    nStopbits    : BYTE(1..2);
    nBaudrate    : UDINT;
END_VAR
THIS^.nInternalBaudrate := nBaudrate;
</pre>
<p>When an instance of <font face="Courier New">FB_SerialCommunicationRS232</font> is declared, all parameters, including those of <font face="Courier New">FB_SerialCommunication</font>, must be specified:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
fbSerialCommunicationRS232 : FB_SerialCommunicationRS232(nDatabits := 8,
                                                         eParity := E_Parity.Odd,
                                                         nStopbits := 1,
                                                         nBaudRate := 19200);
</pre>
<p>In the <font face="Courier New">FB_init()</font> method of <font face="Courier New">FB_SerialCommunicationRS232</font>, only the new parameter (<font face="Courier New">nBaudrate</font>) needs to be copied. Because <font face="Courier New">FB_SerialCommunicationRS232</font> inherits from <font face="Courier New">FB_SerialCommunication</font>, <font face="Courier New">FB_init()</font> of <font face="Courier New">FB_SerialCommunication</font> is also implicitly executed before the PLC task starts. Both <font face="Courier New">FB_init()</font> methods are always called implicitly, the one of <font face="Courier New">FB_SerialCommunication</font> as well as the one of <font face="Courier New">FB_SerialCommunicationRS232</font>. With inheritance, <font face="Courier New">FB_init()</font> is always called from 'bottom' to 'top': first that of <font face="Courier New">FB_SerialCommunication</font>, then that of <font face="Courier New">FB_SerialCommunicationRS232</font>.</p>
<h3>Forwarding parameters</h3>
<p>As an example, consider the function block <font face="Courier New">FB_SerialCommunicationCluster</font>, in which several instances of <font face="Courier New">FB_SerialCommunication</font> are declared:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
FUNCTION_BLOCK PUBLIC FB_SerialCommunicationCluster
VAR
    fbSerialCommunication01 : FB_SerialCommunication(nDatabits := nInternalDatabits,
                                                     eParity := eInternalParity,
                                                     nStopbits := nInternalStopbits);
    fbSerialCommunication02 : FB_SerialCommunication(nDatabits := nInternalDatabits,
                                                     eParity := eInternalParity,
                                                     nStopbits := nInternalStopbits);
    nInternalDatabits : BYTE(7..8);
    eInternalParity : E_Parity;
    nInternalStopbits : BYTE(1..2);
END_VAR
</pre>
<p>So that the parameters of these instances can be set from the outside, <font face="Courier New">FB_SerialCommunicationCluster</font> also receives an <font face="Courier New">FB_init()</font> method with the necessary input variables.</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
METHOD FB_init : BOOL
VAR_INPUT
    bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
    bInCopyCode : BOOL;  // if TRUE, the instance afterwards gets moved into the copy code (online change)
    nDatabits : BYTE(7..8);
    eParity : E_Parity;
    nStopbits : BYTE(1..2);
END_VAR
THIS^.nInternalDatabits := nDatabits;
THIS^.eInternalParity := eParity;
THIS^.nInternalStopbits := nStopbits;
</pre>
<p>There are a few things to watch out for here, though. In this case, the call order of <font face="Courier New">FB_init()</font> is not clearly defined. In my test environment, the calls are made from the 'inside' to the 'outside': first <font face="Courier New">fbSerialCommunication01.FB_init()</font> and <font face="Courier New">fbSerialCommunication02.FB_init()</font> are called, and only then <font face="Courier New">fbSerialCommunicationCluster.FB_init()</font>. It is not possible to pass the parameters through from the 'outside' to the 'inside', so the parameters are not available in the two inner instances of <font face="Courier New">FB_SerialCommunication</font>.</p>
<p>The call order changes as soon as <font face="Courier New">FB_SerialCommunication</font> and <font face="Courier New">FB_SerialCommunicationCluster</font> are derived from the same base FB. In that case, <font face="Courier New">FB_init()</font> is called from the 'outside' to the 'inside'. This approach cannot always be used, for two reasons:</p>
<ol>
<li>If <font face="Courier New">FB_SerialCommunication</font> resides in a library, its inheritance cannot easily be changed.</li>
<li>The call order of <font face="Courier New">FB_init()</font> for nested function blocks is not specified any further, so it cannot be ruled out that it will change in future versions.</li>
</ol>
<p>One way to solve the problem is to call <font face="Courier New">FB_SerialCommunication.FB_init()</font> explicitly from <font face="Courier New">FB_SerialCommunicationCluster.FB_init()</font>:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
fbSerialCommunication01.FB_init(bInitRetains := bInitRetains, bInCopyCode := bInCopyCode,
                                nDatabits := 7, eParity := E_Parity.Even, nStopbits := nStopbits);
fbSerialCommunication02.FB_init(bInitRetains := bInitRetains, bInCopyCode := bInCopyCode,
                                nDatabits := 8, eParity := E_Parity.Even, nStopbits := nStopbits);
</pre>
<p>All parameters, including <font face="Courier New">bInitRetains</font> and <font face="Courier New">bInCopyCode</font>, are passed on directly.</p>
<p>Caution: calling <font face="Courier New">FB_init()</font> always causes all local variables of the instance to be initialized again. This must be kept in mind whenever <font face="Courier New">FB_init()</font> is called explicitly from the PLC task instead of implicitly before it starts.</p>
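<p>For comparison, in most class-based languages this outside-in forwarding is the natural order: the enclosing object's constructor runs and hands its parameters to the member objects it creates, which is what the explicit <font face="Courier New">FB_init()</font> calls emulate. A minimal Java sketch with hypothetical names, offered only as an analogy:</p>

```java
class SerialChannel {
    final int databits;

    SerialChannel(int databits) {
        this.databits = databits; // the inner object receives its parameter from its creator
    }
}

class SerialCluster {
    final SerialChannel channel01;
    final SerialChannel channel02;

    // the enclosing constructor runs first and forwards its parameter
    // to the inner instances it creates
    SerialCluster(int databits) {
        channel01 = new SerialChannel(databits);
        channel02 = new SerialChannel(databits);
    }
}
```

<p>Here the parameter reliably reaches both inner objects, while in the IEC 61131-3 case the implicit inner-first call order prevents exactly this, hence the explicit forwarding calls above.</p>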
<h3>Access via properties</h3>
<p>Because the parameters are passed via <font face="Courier New">FB_init()</font>, they can be neither read nor changed from the outside at runtime. The only exception would be an explicit call of <font face="Courier New">FB_init()</font> from the PLC task, but this should generally be avoided, because it re-initializes all local variables of the instance.</p>
<p>If such access is nevertheless required, corresponding properties can be created for the parameters:</p>
<p><a href="https://stefanhenneken.files.wordpress.com/2019/06/pic04.png"><img title="Pic04" style="background-image:none;padding-top:0;padding-left:0;display:inline;padding-right:0;border-width:0;" border="0" alt="Pic04" src="https://stefanhenneken.files.wordpress.com/2019/06/pic04_thumb.png?w=247&#038;h=253" width="247" height="253"></a> </p>
<p>The setters and getters of the respective properties access the corresponding local variables in the function block (<font face="Courier New">nInternalDatabits</font>, <font face="Courier New">eInternalParity</font> and <font face="Courier New">nInternalStopbits</font>). The parameters can thus be set both at declaration and at runtime.</p>
<p>Removing the setters prevents the parameters from being changed at runtime. If the setters are present, however, <font face="Courier New">FB_init()</font> can be dispensed with entirely, because properties can also be initialized directly in the declaration of an instance:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
fbSerialCommunication : FB_SerialCommunication := (Databits := 8,
Parity := E_Parity.Odd,
Stopbits := 1);
</pre>
<p>The parameters of <font face="Courier New">FB_init()</font> and the properties can also be specified at the same time:</p>
<pre class="brush: plain; title: ; wrap-lines: false; notranslate">
fbSerialCommunication : FB_SerialCommunication(nDatabits := 8, eParity := E_Parity.Odd, nStopbits := 1) :=
(Databits := 8, Parity := E_Parity.Odd, Stopbits := 1);
</pre>
<p>In this case, the initialization values of the properties take precedence. Passing values via both properties and <font face="Courier New">FB_init()</font> has the disadvantage of making the declaration of the function block unnecessarily long, and implementing both does not seem necessary to me. If all parameters are also writable via properties, the initialization via <font face="Courier New">FB_init()</font> can be omitted. In short: if parameters must not be changeable at runtime, consider using <font face="Courier New">FB_init()</font>; if write access is required, properties are the better choice.</p>
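<p>This trade-off (FB_init() for parameters that must stay fixed, properties for parameters that may change at runtime) maps directly onto constructor parameters versus getter/setter pairs in class-based languages. A small Java illustration with hypothetical names:</p>

```java
class SerialSettings {
    private final int baudrate; // FB_init-style: fixed after construction, no setter
    private int databits;       // property-style: readable and writable at runtime

    SerialSettings(int baudrate, int databits) {
        this.baudrate = baudrate;
        this.databits = databits;
    }

    int getBaudrate() { return baudrate; }

    int getDatabits() { return databits; }

    void setDatabits(int databits) { this.databits = databits; }
}
```

<p>The <font face="Courier New">final</font> field can only be set once, at construction time, while the field with a setter remains adjustable, mirroring the two IEC 61131-3 options discussed above.</p>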
<p><a href="https://github.com/StefanHenneken/Blog-2019-04-IEC61131-FBinit-Sample01" target="_blank">Sample 1 (TwinCAT 3.1.4022) on GitHub</a></p>
Stefan Hennekenhttps://blog.codeinside.eu/2019/05/31/build-windows-2016-docker-images-under-windows-2019http://feedproxy.google.com/~r/Code-insideBlog/~3/JugBb7ic6sg/build-windows-2016-docker-images-under-windows-2019Code-Inside BlogBuild Windows Server 2016 Docker Images under Windows Server 2019<p>Since the rise of Docker on Windows, we have also invested some time into it and packaged our OneOffixx server-side stack in a Docker image.</p>
<p><strong>Windows Server 2016 situation:</strong></p>
<p>We rely on Windows Docker images because we still have some “legacy” parts that require the full .NET Framework, which is why we are using this base image:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>FROM microsoft/aspnet:4.7.2-windowsservercore-ltsc2016
</code></pre></div></div>
<p>As you can already guess, this is based on Windows Server 2016. Besides the “legacy” parts of our application, we need to support Windows Server 2016, because Windows Server 2019 is currently not available on our customers’ systems.</p>
<p>In our build pipeline we could easily invoke Docker and build our images based on the LTSC 2016 base image and everything was “fine”.</p>
<p><strong>Problem: Move to Windows Server 2019</strong></p>
<p>Some weeks ago my colleague updated our Azure DevOps build servers from Windows Server 2016 to Windows Server 2019, and our builds began to fail.</p>
<p><strong>Solution: Hyper-V isolation!</strong></p>
<p>After some internet research this site popped up: <a href="https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/version-compatibility">Windows container version compatibility
</a></p>
<p>Microsoft made some great enhancements to Windows containers in Windows Server 2019, but if you need to “support” older base images, you have to take care of it yourself, which means:</p>
<p>If you have a Windows Server 2019, but want to use Windows Server 2016 base images, you need to activate <a href="https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container">Hyper-V isolation</a>.</p>
<p>Example from our own cake build script:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>var exitCode = StartProcess("Docker", new ProcessSettings { Arguments = "build -t " + dockerImageName + " . --isolation=hyperv", WorkingDirectory = packageDir});
</code></pre></div></div>
<p>Hope this helps!</p>Fri, 31 May 2019 23:45:00 ZCode-Inside Bloghttp://heise.de/-4411902Holger SchwichtenbergMany Breaking Changes in Entity Framework Core 3.0There is now a fourth preview version of Entity Framework Core 3.0, but it does not yet contain any of the new features mentioned below. Instead, Microsoft has built in a considerable number of breaking changes. The question is: why?Mon, 06 May 2019 09:30:00 +0200Holger Schwichtenberghttp://graberj.wordpress.com/?p=3565https://graberj.wordpress.com/2019/05/03/buch-rezension-zu-java-by-comparison/Johnny GraberBook Review of “Java by Comparison”“Java by Comparison” by Simon Harrer, Jörg Lenhard and Linus Dietz was published in 2018 by The Pragmatic Programmers. The book takes on a big challenge: how can expert knowledge acquired over many years be made accessible to programming beginners in a simple form? To do so, the authors use 70 examples in which a working first attempt is contrasted with a maintainable and well-thought-out … <a href="https://graberj.wordpress.com/2019/05/03/buch-rezension-zu-java-by-comparison/" class="more-link"><span class="screen-reader-text">Book Review of “Java by Comparison”</span> read more</a>Fri, 03 May 2019 06:00:38 ZJohnny Graberhttp://heise.de/-4411824Holger SchwichtenbergHow to Make Entity Framework Core Use the Class Names Instead of the DbSet Names as Table NamesMicrosoft's object-relational mapper Entity Framework Core has an inconvenient default setting: the database tables are not named after the entity classes, but after the property names used in the context class when declaring the DbSet<T>.
Thu, 02 May 2019 14:29:00 +0200Holger Schwichtenberghttps://blog.codeinside.eu/2019/04/30/update-onprem-tfs-2018-to-azuredevops-server-2019http://feedproxy.google.com/~r/Code-insideBlog/~3/U1lx5YFbb08/update-onprem-tfs-2018-to-azuredevops-server-2019Code-Inside BlogUpdate OnPrem TFS 2018 to AzureDevOps Server 2019<p>We recently updated our OnPrem TFS 2018 installation to the newest release: <strong><a href="https://azure.microsoft.com/en-us/services/devops/server/">Azure DevOps Server</a></strong>.</p>
<p>The product has the same core features as TFS 2018, but with a new UI and other improvements. For a full list you should read the <a href="https://docs.microsoft.com/en-us/azure/devops/server/release-notes/azuredevops2019?view=azure-devops">Release Notes</a>.</p>
<p><em>Be aware:</em> This is the <strong>OnPrem</strong> solution, despite the slightly misleading name “Azure DevOps Server”. If you are looking for the <strong>Cloud</strong> solution, you should read the <a href="https://azure.microsoft.com/en-us/services/devops/migrate/">Migration Guide</a>.</p>
<h1 id="updating-a-tfs-2018-installation">“Updating” a TFS 2018 installation</h1>
<p>Our setup is quite simple: One server for the “Application Tier” and another SQL database server for the “Data Tier”.
The “Data Tier” was already running with SQL Server 2016 (or above), so we only needed to touch the “Application Tier”.</p>
<h1 id="application-tier-update">Application Tier Update</h1>
<p>In our TFS 2018 world the “Application Tier” was running on Windows Server 2016, but we decided to create a new (clean) server with Windows Server 2019 and to do a “clean” Azure DevOps Server install, pointing to the existing “Data Tier”.</p>
<p>In theory it is quite possible to update the actual TFS 2018 installation, but because “new is always better”, we also switched the underlying OS.</p>
<h1 id="update-process">Update process</h1>
<p>The actual update was really easy. We did a “test run” with a copy of the database and everything worked as expected, so we reinstalled the Azure DevOps Server and ran the update on the production data.</p>
<h2 id="steps">Steps:</h2>
<p><img src="https://blog.codeinside.eu/assets/md-images/2019-04-30/0.png" alt="x" title="Start" /></p>
<p><img src="https://blog.codeinside.eu/assets/md-images/2019-04-30/1.png" alt="x" title="Wizard" /></p>
<p><img src="https://blog.codeinside.eu/assets/md-images/2019-04-30/2.png" alt="x" title="Existing or new" /></p>
<p><img src="https://blog.codeinside.eu/assets/md-images/2019-04-30/3.png" alt="x" title="SQL instance" /></p>
<p><img src="https://blog.codeinside.eu/assets/md-images/2019-04-30/4.png" alt="x" title="Production update" /></p>
<p><img src="https://blog.codeinside.eu/assets/md-images/2019-04-30/5.png" alt="x" title="Service Account" /></p>
<p><img src="https://blog.codeinside.eu/assets/md-images/2019-04-30/6.png" alt="x" title="Settings" /></p>
<p><img src="https://blog.codeinside.eu/assets/md-images/2019-04-30/7.png" alt="x" title="Search Service" /></p>
<p><img src="https://blog.codeinside.eu/assets/md-images/2019-04-30/8.png" alt="x" title="Reporting" /></p>
<p><img src="https://blog.codeinside.eu/assets/md-images/2019-04-30/9.png" alt="x" title="Confirm" /></p>
<p><img src="https://blog.codeinside.eu/assets/md-images/2019-04-30/10.png" alt="x" title="Check" /></p>
<p><img src="https://blog.codeinside.eu/assets/md-images/2019-04-30/11.png" alt="x" title="Configuration" /></p>
<p><img src="https://blog.codeinside.eu/assets/md-images/2019-04-30/12.png" alt="x" title="Configuration done" /></p>
<p><img src="https://blog.codeinside.eu/assets/md-images/2019-04-30/13.png" alt="x" title="Success" /></p>
<h2 id="summary">Summary</h2>
<p>If you are running a TFS installation, don’t be afraid to do an update. The update itself was done in 10-15 minutes on our 30GB-ish database.</p>
<p>Just download the setup from the <a href="https://azure.microsoft.com/en-us/services/devops/server/">Azure DevOps Server</a> site (“Free trial…”) and you should be ready to go!</p>
<p>Hope this helps!</p>Tue, 30 Apr 2019 23:45:00 ZCode-Inside Bloghttps://asp.net-hacker.rocks/2019/04/29/customizing-aspnetcore-12-hosting.htmlhttp://feedproxy.google.com/~r/jgutsch/~3/hleOO8KAf8I/customizing-aspnetcore-12-hosting.htmlJürgen GutschCustomizing ASP.NET Core Part 12: Hosting <p>In this 12th part of this series, I'm going to write about how to customize hosting in ASP.NET Core. We will look into the hosting options, the different kinds of hosting, and take a quick look at hosting on IIS. And while writing it, this post again seems to be getting long.</p>
<blockquote>
<p>This will change in ASP.NET Core 3.0. I decided to do this post about ASP.NET Core 2.2 anyway, because it will still take some time until ASP.NET Core 3.0 is released.</p>
</blockquote>
<p>This post is just an overview of the different kinds of application hosting. It is certainly possible to go into a lot more detail on each topic, but that would make this post a lot longer, and I need some more topics for future blog posts ;-)</p>
<h2>This series topics</h2>
<ul>
<li><a href="/2018/09/20/customizing-aspnetcore-01-logging.html">Customizing ASP.NET Core Part 01: Logging</a></li>
<li><a href="/2018/09/24/customizing-aspnetcore-02-configuration.html">Customizing ASP.NET Core Part 02: Configuration</a></li>
<li><a href="/2018/09/27/customizing-aspnetcore-03-dependency-injection.html">Customizing ASP.NET Core Part 03: Dependency Injection</a></li>
<li><a href="/2018/10/01/customizing-aspnetcore-04-https.html">Customizing ASP.NET Core Part 04: HTTPS</a></li>
<li><a href="/2018/10/04/customizing-aspnetcore-05-hostedservices.html">Customizing ASP.NET Core Part 05: HostedServices</a></li>
<li><a href="/2018/10/08/customizing-aspnetcore-06-middlewares.html">Customizing ASP.NET Core Part 06: Middlewares</a></li>
<li><a href="/2018/10/11/customizing-aspnetcore-07-outputformatter.html">Customizing ASP.NET Core Part 07: OutputFormatter</a></li>
<li><a href="/2018/10/17/customizing-aspnetcore-08-modelbinders.html">Customizing ASP.NET Core Part 08: ModelBinders</a></li>
<li><a href="/2018/10/29/customizing-aspnetcore-09-actionfilters.html">Customizing ASP.NET Core Part 09: ActionFilter</a></li>
<li><a href="/2018/11/13/customizing-aspnetcore-10-taghelpers.html">Customizing ASP.NET Core Part 10: TagHelpers</a></li>
<li><a href="/2019/01/30/customizing-aspnetcore-11-webhostbuilder.html">Customizing ASP.NET Core Part 11: WebHostBuilder</a></li>
<li><strong>Customizing ASP.NET Core Part 12: Hosting - This article</strong></li>
</ul>
<h2>Quick setup</h2>
<p>For this series we just need to set up a small empty web application.</p>
<pre><code class="language-shell">dotnet new web -n ExploreHosting -o ExploreHosting
</code></pre>
<p>That's it. Open it with Visual Studio Code:</p>
<pre><code class="language-shell">cd ExploreHosting
code .
</code></pre>
<p>And voilà, we get a simple project opened in VS Code:</p>
<p><img src="https://asp.net-hacker.rocks/img/customize-aspnetcore/simpleproject.PNG" alt="" /></p>
<h2>WebHostBuilder</h2>
<p>Like in the last post, we will focus on the <code>Program.cs</code>. The <code>WebHostBuilder</code> is our friend: this is where we configure and create the web host. The next snippet is the default configuration of every new ASP.NET Core web application we create using <code>File =&gt; New =&gt; Project</code> in Visual Studio or <code>dotnet new</code> with the .NET CLI:</p>
<pre><code class="language-csharp">public class Program
{
public static void Main(string[] args)
{
CreateWebHostBuilder(args).Build().Run();
}
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =&gt;
WebHost.CreateDefaultBuilder(args)
.UseStartup&lt;Startup&gt;();
}
</code></pre>
<p>As we already know from the previous posts, the default builder has all the needed stuff pre-configured. Everything you need to run an application successfully on Azure or on an on-premises IIS is configured for you.</p>
<p>But you are able to override almost all of these default configurations, including the hosting configuration.</p>
<h3>Kestrel</h3>
<p>After the <code>WebHostBuilder</code> is created we can use various functions to configure the builder. Here we already see one of them, which specifies the <code>Startup</code> class that should be used. In the last post we saw the <code>UseKestrel</code> method to configure the Kestrel options:</p>
<pre><code class="language-csharp">.UseKestrel((host, options) =&gt;
{
// ...
})
</code></pre>
<blockquote>
<p>Reminder: Kestrel is one possibility to host your application. Kestrel is a web server built in .NET and based on .NET socket implementations. Previously it was built on top of libuv, the same I/O library that is used by Node.js. Microsoft removed the dependency on libuv and created its own web server implementation based on .NET sockets.</p>
<p>Docs: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel</p>
</blockquote>
<p>The first argument is a <code>WebHostBuilderContext</code> that gives access to the already configured hosting settings or the configuration itself. The second argument is an object to configure Kestrel. This snippet shows what we did in the last post to configure the endpoints the host needs to listen on:</p>
<pre><code class="language-csharp">.UseKestrel((host, options) =&gt;
{
var filename = host.Configuration.GetValue(&quot;AppSettings:certfilename&quot;, &quot;&quot;);
var password = host.Configuration.GetValue(&quot;AppSettings:certpassword&quot;, &quot;&quot;);
options.Listen(IPAddress.Loopback, 5000);
options.Listen(IPAddress.Loopback, 5001, listenOptions =&gt;
{
listenOptions.UseHttps(filename, password);
});
})
</code></pre>
<p>This will override the default configuration, where you are able to pass in URLs, e.g. using the <code>applicationUrl</code> property of the <code>launchSettings.json</code> or an environment variable.</p>
<h3>HTTP.sys</h3>
<p>Did you know that there is another hosting option, a different web server implementation? It is <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/httpsys">HTTP.sys</a>, a pretty mature web server technology deep within Windows that can be used to host your ASP.NET Core application.</p>
<pre><code class="language-csharp">.UseHttpSys(options =&gt;
{
// ...
})
</code></pre>
<p>HTTP.sys is different from Kestrel. It cannot be used in IIS because it is not compatible with the <a href="https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/aspnet-core-module?view=aspnetcore-2.2">ASP.NET Core Module</a> for IIS.</p>
<p>The main reason to use HTTP.sys instead of Kestrel is Windows Authentication, which Kestrel on its own doesn't support. Another reason is if you need to expose the application to the internet without IIS.</p>
<p>IIS has also been running on top of HTTP.sys for years, which means <code>UseHttpSys()</code> and IIS use the same web server implementation. To learn more about HTTP.sys, please read the <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/httpsys">docs</a>.</p>
<h3>Hosting on IIS</h3>
<p>An ASP.NET Core application shouldn't be directly exposed to the internet, even though both Kestrel and HTTP.sys support it. It is best to have something like a reverse proxy in between, or at least a service that watches the hosting process. For ASP.NET Core, IIS isn't only a reverse proxy: it also takes care of the hosting process in case it breaks because of an error, and restarts it in that case. On Linux, Nginx may be used as a reverse proxy that also takes care of the hosting process.</p>
<p>To host an ASP.NET Core web application on IIS or on Azure, you need to publish it first. Publishing doesn't only compile the project; it also prepares it to be hosted on IIS, on Azure, or behind a web server on Linux like Nginx.</p>
<blockquote>
<p>dotnet publish -o ..\published -r win-x64</p>
</blockquote>
<p><img src="https://asp.net-hacker.rocks/img/customize-aspnetcore/dotnet-publish.png" alt="" /></p>
<p>This produces an output that can be mapped in the IIS. It also creates a web.config to add settings for the IIS or Azure. It contains the compiled web application as a DLL.</p>
<p>If you publish a self-contained application, the output also contains the runtime itself. A self-contained application brings its own .NET Core runtime, but the size of the delivery increases a lot.</p>
<p>And on the IIS? Just create a new web and map it to the folder where you placed the published output:</p>
<p><img src="https://asp.net-hacker.rocks/img/customize-aspnetcore/iis-hosting.png" alt="" /></p>
<p>It gets a little more complicated if you need to change the security settings, if you have some database connections, and so on. That would be a topic for a separate blog post. But this small sample simply works:</p>
<p><img src="https://asp.net-hacker.rocks/img/customize-aspnetcore/iis-hosted.PNG" alt="" /></p>
<p>This is the output of the small Middleware in the <code>startup.cs</code> of the demo project:</p>
<pre><code class="language-csharp">app.Run(async (context) =&gt;
{
await context.Response.WriteAsync(&quot;Hello World!&quot;);
});
</code></pre>
<h2>Nginx</h2>
<p>Unfortunately I cannot write about Nginx, because I currently don't have a Linux machine running to play around with. This is one of the many future projects I have. So far I have only got ASP.NET Core running on Linux using the Kestrel web server.</p>
<h2>Conclusion</h2>
<p>ASP.NET Core and the .NET CLI already contain all the tools to get it running on various platforms and to set it up to get it ready for Azure and the IIS, as well as Nginx. This is super easy and well described in the docs.</p>
<blockquote>
<p>BTW: What do you think about the new docs experience compared to the old MSDN documentation?</p>
</blockquote>
<p>I'll definitely go deeper into some of these topics, and ASP.NET Core has some pretty cool hosting features on the way that make hosting your application a lot more flexible:</p>
<p>Currently we have the <code>WebHostBuilder</code> that creates the hosting environment of the application. In 3.0 we get the <code>HostBuilder</code>, which is able to create a hosting environment that is completely independent of any web context. I'm going to write about the <code>HostBuilder</code> in one of the next blog posts.</p>Mon, 29 Apr 2019 00:00:00 ZJürgen Gutschhttp://heise.de/-4399253Holger SchwichtenbergMagdeburg Developer Days from 20 to 22 May 2019The developer community conference "Magdeburger Developer Days" enters its fourth round.Thu, 18 Apr 2019 07:50:00 +0200Holger Schwichtenberghttps://asp.net-hacker.rocks/2019/04/16/sharpcms.htmlhttp://feedproxy.google.com/~r/jgutsch/~3/kRKuQ0OA8W4/sharpcms.htmlJürgen GutschSharpcms.Core - Migrating an old ASP.NET CMS to ASP.NET Core on Twitch<p>On <a href="https://www.twitch.tv/juergengutsch">my Twitch stream</a> I planned to show how to migrate a legacy ASP.NET application to ASP.NET Core, to start a completely new ASP.NET Core project, and to share some news about the .NET developer community. When I did the first stream and introduced these plans to the audience, it somehow turned into migrating the legacy application. So I chose the old <a href="https://github.com/JuergenGutschOnTwitch/Sharpcms.Core">Sharpcms</a> project to demonstrate the migration, which is maybe not the best choice because this CMS doesn't use the common ASP.NET patterns.</p>
<h2>About the sharpcms</h2>
<p>Initially the <a href="https://github.com/JuergenGutschOnTwitch/Sharpcms.Core">Sharpcms</a> was built by a Danish developer. Back when he stopped maintaining it, my friend Thomas Huber and I asked him whether we could take over the project and continue maintaining it. He said yes, and since then we have been the main contributors and coordinators of this project.</p>
<blockquote>
<p>This is where my Twitter handle was born. Initially I planned to use this Twitter account to promote the sharpcms, but I used it off-topic: I promoted blog posts and community events with this account, and had some interesting discussions on Twitter. I used it so much, and it got linked everywhere, that it didn't make sense to change it anymore.
Anyway, the priorities changed. The sharpcms wasn't my main hobby project anymore, but I still used this Twitter handle. It still kind of makes sense to me, because I work with C# and I'm kind of a CMS expert. (I developed on two different ones for years and used a lot more.)</p>
</blockquote>
<p>We had huge plans for this project, but as always, plans and priorities changed with new family members and new jobs. We haven't done anything on the CMS for years. Actually, I'm not sure whether it is still used or not.</p>
<p>Anyway, this is one of the best CMS systems from my perspective: easy to set up, lightweight, fast to run, and easy to use for users without a technical background. Creating templates for this CMS needs a good knowledge of XML and XSLT, because XML is the base of this CMS and XSLT is used for the templates. It was super fast on the .NET Framework; caching wasn't really needed for the sharpcms.</p>
<h2>Juergen.IO.Stream</h2>
<p>In the first show on Twitch I introduced the two plans: migrating the sharpcms, and starting a plain new ASP.NET Core project. It turned out that the audience wanted to see the migration project. I introduced the sharpcms, showed the original sources, and started to create .NET Standard libraries to show the difficulties.</p>
<p>I wasn't as pessimistic as the audience, because I still knew the CMS. There weren't too many dependencies on the classic ASP.NET and System.Web stuff, and as expected it wasn't that hard.</p>
<p>The rendering of the output in the sharpcms is completely based on XML and XSLT. The sharpcms creates an XML structure that gets interpreted and rendered using XSLT templates.</p>
<blockquote>
<p>XSLT is an XML-based programming language that navigates through XML data and creates any kind of output. It really is a programming language: you can write decision statements, loops, functions, and variables. It is limited, but just like Razor, ASP, or PHP, you mix the language with the output you want to create, which makes it easy and intuitive to use.</p>
</blockquote>
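<p>To make the XML-plus-XSLT idea concrete, here is a tiny, self-contained transformation sketched in Java via <code>javax.xml.transform</code>. The Sharpcms does the equivalent with .NET's XSLT classes; the document and stylesheet below are made up purely for the example:</p>

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

class XsltDemo {
    // the CMS builds an XML document like this from the request...
    static final String XML = "<page><title>Hello Sharpcms</title></page>";

    // ...and an XSLT template turns it into the output to render
    static final String XSLT =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "  <xsl:output method='text'/>"
      + "  <xsl:template match='/page'>"
      + "    <xsl:value-of select='title'/>"
      + "  </xsl:template>"
      + "</xsl:stylesheet>";

    static String transform() {
        try {
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(XSLT)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(XML)), new StreamResult(out));
            return out.toString().trim();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(transform());
    }
}
```

<p>The stylesheet matches the root <code>page</code> element and emits the text of its <code>title</code> child, illustrating how the view layer stays entirely in the templates.</p>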
<p>This means there is no rendering logic in the C# code. All the C# code does is process the request and create the XML data containing the data to show. At the end it transforms the XML using the XSLT templates.</p>
<p>The main work I needed to do to get the Sharpcms running was to wrap the ASP.NET Core request context into a request context that looks similar to the System.Web version used inside the Sharpcms, because it heavily uses the ASP.NET WebForms page object and its properties.</p>
<p>The migration strategy was to get it running, even if it is kinda hacky, and to clean it up later on. Now we are in this state: the old Sharpcms sources are working on ASP.NET Core using .NET Standard libraries.</p>
<p>The Sharpcms.Core running on Azure: <a href="https://sharpcms.azurewebsites.net/">https://sharpcms.azurewebsites.net</a></p>
<h2>Performance</h2>
<p><a href="https://blog.der-albert.com">Albert Weinert</a> (a community guy, former MVP and a <a href="https://www.twitch.tv/deralbertlive">Twitch streamer</a> as well) told me during the first stream that XSLT isn't that fast in .NET Core. Unfortunately he was right: neither the XSL transformation nor reading the XML data is particularly fast.</p>
<p>We'll need to have a look into the performance and find a way to speed it up. Maybe we'll create an alternative view engine to replace the XML/XSLT-based one at some point. It would also be possible to have multiple view engines: Razor, Handlebars, or Liquid would be options. All of these already have .NET implementations that could be used here.</p>
<h2>Next steps</h2>
<p>Even though the CMS is now running on ASP.NET Core, there's still a lot to do. Here are the next issues I need to work on:</p>
<ul>
<li>
<p>Build on Azure DevOps <a href="https://github.com/JuergenGutschOnTwitch/Sharpcms.Core/issues/8">#8</a></p>
</li>
<li>
<p>Performance:</p>
<ul>
<li>Get rid of the physical XML data and move the data to a database <a href="https://github.com/JuergenGutschOnTwitch/Sharpcms.Core/issues/4">#4</a></li>
<li>Speed up the XSL transformation <a href="https://github.com/JuergenGutschOnTwitch/Sharpcms.Core/issues/3">#3</a></li>
<li>Find another way to render the UI, maybe using Razor, Handlebars or Liquid <a href="https://github.com/JuergenGutschOnTwitch/Sharpcms.Core/issues/2">#2</a></li>
<li>Add caching <a href="https://github.com/JuergenGutschOnTwitch/Sharpcms.Core/issues/1">#1</a></li>
</ul>
</li>
<li>
<p>Clean up the code <a href="https://github.com/JuergenGutschOnTwitch/Sharpcms.Core/issues/9">#9</a></p>
</li>
<li>
<p>User password encryption <a href="https://github.com/JuergenGutschOnTwitch/Sharpcms.Core/issues/5">#5</a></p>
</li>
<li>
<p>Provide NuGet packages to easily use the Sharpcms <a href="https://github.com/JuergenGutschOnTwitch/Sharpcms.Core/issues/6">#6</a></p>
<ul>
<li>Provide a package for the frontend as well <a href="https://github.com/JuergenGutschOnTwitch/Sharpcms.Core/issues/7">#7</a></li>
</ul>
</li>
<li>
<p>Map the Middleware as a routed one, like it should work in ASP.NET Core 3.0</p>
</li>
</ul>
<h2>Join me</h2>
<p>If you'd like to join me in the stream to work together on the Sharpcms.Core, feel free to tell me. I would be super happy to do a pair programming session to work on a specific problem. It would be great to have experts on these topics in the stream:</p>
<ul>
<li>Razor or Handlebars to create an alternative view engine</li>
<li>Security and Encryption to make this CMS more secure</li>
<li>DevOps to create a build and release pipeline</li>
</ul>
<h2>Summary</h2>
<p>Migrating the old Sharpcms to ASP.NET Core was fun, but it's not done yet. There is a lot more to do. I'll continue working on it on my stream, but will also do some other stuff in the streams.</p>
<p>If you'd like to work on the Sharpcms to help me solve some issues, or to start creating a modern documentation, feel free. This would help a lot.</p>
<ul>
<li>Create an Issue and I'll discuss it <a href="https://www.twitch.tv/juergengutsch">in the stream</a>.</li>
<li>Create a PR and I'll discuss and merge it during <a href="https://www.twitch.tv/juergengutsch">the stream</a>.</li>
</ul><img src="http://feeds.feedburner.com/~r/jgutsch/~4/kRKuQ0OA8W4" height="1" width="1" alt=""/>Tue, 16 Apr 2019 00:00:00 Z2019-04-16T00:00:00ZJürgen GutschJürgen GutschJürgen Gutschhttps://david-tielke.de/post.aspx?id=e4634993-8524-42ae-a140-351afda9de40https://david-tielke.de/post/webcast-softwarequalitat-teil-2-prozessqualitatDavid Tielke[Webcast] Softwarequalität Teil 2 - Prozessqualität<p style="text-align: center; "><iframe width="560" height="315" src="https://www.youtube.com/embed/2eGG4BDOGg4" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe></p><p style="text-align: justify;">Es ist mal wieder Webcast-Zeit. Nachdem wir uns im ersten Teil die Grundlagen zum Thema Softwarequalität angeschaut haben, widmen wir uns im zweiten Teil der Prozessqualität. Wie sollte also ein guter Softwareentwicklungsprozess aussehen und wie sollte er nicht aussehen? Worauf muss geachtet werden und was solltet Ihr machen oder auch besser die Finger davon lassen? All diese Fragen beschäftigen uns im zweiten Teil zum Thema Softwarequalität. Viel Spaß damit!</p>
Mon, 15 Apr 2019 12:30:00 ZDavid TielkeDavid TielkeDavid Tielkehttp://www.yellow-brick-code.org/?p=1209https://yellow-brick-code.org/?p=1209Christina Hirth Continuous Delivery Is a Journey – Part 3In the first part I described why I think that continuous delivery is important for an adequate developer experience, and in the second part I drew a rough picture of how we implemented it in a product developed by five teams. Now it is time to discuss the big impact &#8211; and the biggest benefits &#8230; <a href="https://yellow-brick-code.org/?p=1209" class="more-link">Continue reading <span class="screen-reader-text">Continuous Delivery Is a Journey – Part 3</span></a>Sun, 14 Apr 2019 09:36:35 ZChristina Hirth
<p>In the <a href="https://yellow-brick-code.org/?p=1168">first part</a> I described why I think that continuous delivery is important for an adequate developer experience, and in the <a href="https://yellow-brick-code.org/?p=1196">second part</a> I drew a rough picture of how we implemented it in a product developed by five teams. Now it is time to discuss the big impact &#8211; and the biggest benefits &#8211; regarding the development of the product itself.</p>
<p>Why do more and more companies, technical and non-technical people, want to change towards an agile organisation? Maybe because the decision makers have understood that waterfall is rarely purposeful? There are a lot of motives &#8211; beside the rather wrong <del>dumb</del> one &#8220;because everybody else does this&#8221; &#8211; and I think there are two intertwined reasons for this: the speed at which the digital world changes and the ever increasing complexity of the businesses we try to automate.</p>
<p>Companies/people have finally started to accept that they <strong>don&#8217;t know</strong> what their customers need. They have started to feel that the customer &#8211; and also the market &#8211; has become more and more demanding regarding the quality of the solutions they get. This means that until Skynet is born (sorry, I couldn&#8217;t resist <img src="https://s.w.org/images/core/emoji/11.2.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" />) we software developers, product owners, UX designers, etc. have to decide which solution would be the best to solve the problems in that specific business, and we have to decide fast. </p>
<p style="text-align:center" class="has-background has-yellow-background-color"><strong>We have to deliver fast, get feedback fast, learn and adapt the consequences even faster. We have to do all this without down times, without breaking the existing features and &#8211; for most of us very important: without getting a heart attack every time we deploy to production.</strong></p>
<p>IMHO these are the most important reasons why every product development team should invest in CI/CD.</p>
<p>The last missing piece of the jigsaw which allows us to deliver the features fast (respectively continuously) without disturbing anybody and without losing the control how and when features are released is called <a rel="noreferrer noopener" aria-label="feature toggle (opens in a new tab)" href="https://en.wikipedia.org/wiki/Feature_toggle" target="_blank">feature toggle</a>.</p>
<blockquote class="wp-block-quote"><p> A&nbsp;<strong>feature toggle</strong><sup><a href="https://en.wikipedia.org/wiki/Feature_toggle#cite_note-:1-1">[1]</a></sup>&nbsp;(also&nbsp;<strong>feature switch</strong>,&nbsp;<strong>feature flag</strong>,&nbsp;<strong>feature flipper</strong>,&nbsp;<strong>conditional feature</strong>, etc.) is a technique in&nbsp;<a href="https://en.wikipedia.org/wiki/Software_development">software development</a>&nbsp;that attempts to provide an alternative to maintaining multiple&nbsp;<a href="https://en.wikipedia.org/wiki/Source_code">source-code</a>&nbsp;branches (known as feature branches), such that a feature can be tested even before it is completed and ready for release. Feature toggle is used to hide, enable or disable the feature during run time. For example, during the development process, a developer can enable the feature for testing and disable it for other users.<sup><a href="https://en.wikipedia.org/wiki/Feature_toggle#cite_note-:0-2">[2]</a></sup> </p><cite>Wikipedia<br></cite></blockquote>
<p>The concept is really simple: one feature should be hidden until somebody, something decides that it is allowed to be used.</p>
<pre class="brush: jscript; title: ; notranslate">
function useNewFeature(featureId) {
const e = document.getElementById(featureId);
const feat = config.getFeature(featureId);
if(!feat.isEnabled)
e.style.display = 'none';
else
e.style.display = 'block';
}
</pre>
<p>As you see, implementing feature toggles is really that simple. Adopting the concept will need some effort though:</p>
<ul><li>Strive for only one toggle (one <em>if</em>) per feature. At the beginning it will be hard or even impossible to achieve this, but it is very important to define this as a middle-term goal. Having only one toggle per feature means the code is highly decoupled and very well structured. </li><li>Place this (main) toggle at the entry point (a button, a new form, a new API endpoint), the first interaction point with the user (person or machine); in the disabled state it should hide this entry point.</li><li>The enabled state of the toggle should lead to new services (in a micro service world), new arguments or new functions, all of them implementing the behavior for <em>feature.enabled == true</em>. This will lead to code duplication: yes, this is totally ok. I look at it as a very careful refactoring without changing the initial code. Implementing a new feature should not break or eliminate existing features. The tests too (all kinds of them) should be organized similarly: in different files, duplicated versions, implemented for each state.</li></ul>
<div class="wp-block-image"><figure class="aligncenter"><img src="https://yellow-brick-code.org/wp-content/uploads/feature_toggle-300x258.png" alt="" class="wp-image-1247" srcset="https://yellow-brick-code.org/wp-content/uploads/feature_toggle-300x258.png 300w, https://yellow-brick-code.org/wp-content/uploads/feature_toggle.png 541w" sizes="(max-width: 300px) 100vw, 300px" /><figcaption>the different states of the toggle lead to clearly separated paths </figcaption></figure></div>
<ul><li>Through the toggle you gain real freedom to make mistakes or to build just the wrong feature. At the same time you can always enable the feature and show it to the product owner or the stakeholders. This means the feedback loop is reduced to a minimum. </li><li>This freedom has a price of course: after the feature is implemented, the feedback is collected and the decision for enabling the feature is made, the source code must be cleaned up: all code for <em>feature.enabled == false</em> must be removed. This is why it is so important to create the different paths so that the risk of introducing a bug is virtually zero. We want to reduce workload, not increase it.</li><li>Toggles don&#8217;t have to be temporary; business toggles (i.e. some premium features or &#8220;maintenance mode&#8221;) can stay forever. It is important to define beforehand what kind of toggle will be needed, because the business toggles will always be part of your source code. The default value for this kind of toggle should be <em>false</em>.</li><li>The default value for the temporary toggles should be <em>true</em>; they should be deactivated on production and activated during development.</li></ul>
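<p>The bullet points above can be sketched in a few lines (hypothetical names, not from a specific library): the toggle guards the single entry point, and each state leads to its own, deliberately duplicated function:</p>

```javascript
// Hypothetical sketch: one 'if' per feature at the entry point;
// enabled and disabled states lead to clearly separated paths.
function handleCheckout(features, cart) {
  if (features.isEnabled('new-checkout')) {
    return newCheckout(cart);    // new behavior behind the toggle
  }
  return legacyCheckout(cart);   // existing feature stays untouched
}

function legacyCheckout(cart) {
  return { total: cart.items.reduce((sum, i) => sum + i.price, 0) };
}

function newCheckout(cart) {
  // a duplicated and refactored copy; either path is easy to delete
  // once the toggle decision is final
  const total = cart.items.reduce((sum, i) => sum + i.price * i.qty, 0);
  return { total, currency: 'EUR' };
}
```

<p>Because the two paths never share code, removing the losing path later is a pure deletion, which keeps the cleanup risk close to zero.</p>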
<p>One piece of advice regarding the tooling: start small; a config map in Kubernetes, a database table or a JSON file somewhere will suffice. Later on new requirements will appear, like notifying the client UI when a toggle changes or allowing the product owner to decide when a feature will be enabled. That will be the moment to think about next steps, but for the moment it is more important to adopt this workflow, adopt this mindset of discipline to keep the source code clean, learn the techniques how to organize the code base and ENJOY HAVING THE CONTROL over the impact of deployments, feature decisions, stress!</p>
<p>That&#8217;s it, I have shared all of my thoughts regarding this subject: your journey of delivering continuously can start or continue <img src="https://s.w.org/images/core/emoji/11.2.0/72x72/1f609.png" alt="😉" class="wp-smiley" style="height: 1em; max-height: 1em;" /> now.</p>
<p><em>p.s.</em> It is time for the one sentence about feature branches: <br><strong>Feature toggles will never work with feature branches.</strong> Period. This means you have to decide: move to trunk based development or forget continuous delivery.</p>
<p><em>p.p.s.</em> Feature toggle libraries, frameworks, even platforms exist for most languages; it is not necessary to write a new one. There are libraries for different complexities of how the state can be calculated (like account state, persons, roles, time settings), just pick one.</p>
<p>Update:</p>
<p>As pointed out by <a rel="noreferrer noopener" aria-label="Gergely (opens in a new tab)" href="https://twitter.com/sigeWuzHere" target="_blank">Gergely</a> on Twitter, on Martin Fowlers blog is a very good article describing extensively the different feature toggles and the power of this technique: <a rel="noreferrer noopener" aria-label="Feature Toggles (aka Feature Flags) (opens in a new tab)" href="https://martinfowler.com/articles/feature-toggles.html" target="_blank">Feature Toggles (aka Feature Flags)</a></p>
Christina Hirth Christina Hirth https://david-tielke.de/post.aspx?id=1e5404a3-b1da-4f94-b790-4c63b92f5bf8https://david-tielke.de/post/webcast-softwarequalitat-teil-1-einfuhrungDavid Tielke[Webcast] Softwarequalität Teil 1 - Einführung<div style="text-align: center;"><iframe width="560" height="315" src="https://www.youtube.com/embed/0YYHahIz8po" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe></div><p>It's webcast time again - after numerous requests over the last few days, I set up my studio equipment again today and recorded my .NET Day Franken talk as a webcast. Since the allotted 70 minutes were already very tight at the conference, I split the whole thing into several episodes, which will appear over the coming days and weeks. Have fun with it!</p>Fri, 12 Apr 2019 09:51:00 ZDavid TielkeDavid TielkeDavid Tielkehttps://david-tielke.de/post.aspx?id=d2f9f9e5-b80c-408f-8034-93f582fbc214https://david-tielke.de/post/net-day-franken-2019-inhalte-meiner-session-softwarequaltat1David Tielke.NET Day Franken 2019 - Inhalte meiner Session "Softwarequaltät"<p><img src="https://www.xing.com/img/custom/events/events_files/7/7/e/1451902/full/DDayF_Logo_Small.jpg?1518722878" style="text-align: center; width: 321.221px; height: 108px; float: left; margin-right: 15px;">&nbsp;Earlier than in previous years, this year's conference season started for me in April, and right away with a new conference: the .NET Day Franken 2019. The community conference with almost 200 attendees was organized by the community in Nuremberg for the tenth time and offered, besides a great program and a super organization, above all a sensational location. I was able to contribute a 70-minute talk on "Softwarequalität" (software quality), which focused not only on the basics but above all on the various problems and their supposed solutions.
At this point I would like to thank all attendees and of course the organizers once again for a first-class event. It was a lot of fun; I hope we will see each other again next year. Here are the slides of my talk.</p>
<div style="text-align: center;"><iframe src="//www.slideshare.net/slideshow/embed_code/key/yO6L0NfjlrnJha" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" allowfullscreen="" style="border-width: 1px; border-style: solid; border-color: rgb(204, 204, 204); margin-bottom: 5px; max-width: 100%;"></iframe></div><div style="margin-bottom:5px"> <strong> <a href="https://david-tielke.de//www.slideshare.net/DavidTielke/softwarequalitt" title="Softwarequalität" target="_blank">Softwarequalität</a> </strong> by <strong><a href="https://www.slideshare.net/DavidTielke" target="_blank">David Tielke</a></strong> </div>Wed, 10 Apr 2019 11:16:00 ZDavid TielkeDavid TielkeDavid Tielkehttps://asp.net-hacker.rocks/2019/04/10/routed-middlewares.htmlhttp://feedproxy.google.com/~r/jgutsch/~3/UpV2TudKLk0/routed-middlewares.htmlJürgen GutschImplement Middlewares using Endpoint Routing in ASP.​NET Core 3.0<p>If you have a Middleware that needs to work on a specific path, you should implement it by mapping it to a route in ASP.NET Core 3.0, instead of just checking the path names. This post doesn't handle regular Middlewares, which need to work on all requests, or on all requests inside a <code>Map</code> or <code>MapWhen</code> branch.</p>
<p>At the Global MVP Summit 2019 in Redmond I attended the hackathon where I worked on <a href="https://github.com/JuergenGutsch/graphql-aspnetcore/">my GraphQL Middlewares for ASP.NET Core</a>. I asked <a href="https://twitter.com/condrong">Glen Condron</a> for a review of the API and the way the Middleware gets configured. He told me that we did it all right. We followed the proposed way to provide and configure an ASP.NET Core Middleware. But he also told me that there is a new way in ASP.NET Core 3.0 to use this kind of Middlewares.</p>
<p>Glen asked <a href="https://twitter.com/JamesNK">James Newton King</a>, who works on the new Endpoint Routing, to show me how this needs to be done in ASP.NET Core 3.0. James pointed me to the ASP.NET Core Health Checks and explained the new way to go.</p>
<blockquote>
<p>BTW: That's kinda closing the loop: four summits ago <a href="http://twitter.com/damienbod">Damien Bowden</a> and I were working on the initial drafts of the ASP.NET Core Health Checks together with Glen Condron. Awesome that this is now in production ;-)</p>
</blockquote>
<p>The new ASP.NET Core 3.0 implementation of the GraphQL Middlewares is in the <em>aspnetcore30</em> branch of the repository: <a href="https://github.com/JuergenGutsch/graphql-aspnetcore">https://github.com/JuergenGutsch/graphql-aspnetcore</a></p>
<h2>About Endpoint Routing</h2>
<p>Fellow MVP <a href="https://twitter.com/stevejgordon">Steve Gordon</a> had an <a href="https://www.stevejgordon.co.uk/asp-net-core-first-look-at-global-routing-dispatcher">early look at Endpoint Routing</a>. His great post may help you understand Endpoint Routing.</p>
<h2>How it worked before</h2>
<p>Until now you used <code>MapWhen()</code> to map the Middleware to a specific condition defined in a predicate:</p>
<pre><code class="language-csharp">Func&lt;HttpContext, bool&gt; predicate = context =&gt;
{
return context.Request.Path.StartsWithSegments(path, out var remaining) &amp;&amp;
string.IsNullOrEmpty(remaining);
};
return builder.MapWhen(predicate, b =&gt; b.UseMiddleware&lt;GraphQlMiddleware&gt;(schemaProvider, options));
</code></pre>
<p>(<a href="https://github.com/JuergenGutsch/graphql-aspnetcore/blob/feature/aspnetcore30/GraphQl.AspNetCore/ApplicationBuilderExtensions.cs">ApplicationBuilderExtensions.cs</a>)</p>
<p>In this case the path is checked, but it is pretty common to not only map based on paths. This approach allows you to map on any other kind of criteria based on the <code>HttpContext</code>.</p>
<p>Also the much simpler <code>Map()</code> was a way to go:</p>
<pre><code class="language-csharp">builder.Map(path, branch =&gt; branch.UseMiddleware&lt;GraphQlMiddleware&gt;(schemaProvider, options));
</code></pre>
<h2>How this should be done now</h2>
<p>In ASP.NET Core 3.0 this kind of mapping, where you may listen on a specific endpoint, should be done using the <code>EndpointRouteBuilder</code>. If you create a new ASP.NET Core 3.0 web application, MVC is now added in the <code>Startup.cs</code> a little differently than before:</p>
<pre><code class="language-csharp">app.UseRouting(routes =&gt;
{
routes.MapControllerRoute(
name: &quot;default&quot;,
template: &quot;{controller=Home}/{action=Index}/{id?}&quot;);
routes.MapRazorPages();
});
</code></pre>
<p>The method <code>MapControllerRoute()</code> adds the controller based MVC and Web API. The new ASP.NET Core Health Checks, which also provide their own endpoint, are also added like this. This means we now have <code>Map()</code> methods as extension methods on the <code>IEndpointRouteBuilder</code> instead of <code>Use()</code> methods on the <code>IApplicationBuilder</code>. It is still possible to use the <code>Use</code> methods.</p>
<p>In case of the GraphQL Middleware it looks like this:</p>
<pre><code class="language-csharp">var pipeline = routes.CreateApplicationBuilder()
.UseMiddleware&lt;GraphQlMiddleware&gt;(schemaProvider, options)
.Build();
return routes.Map(pattern, pipeline)
.WithDisplayName(_defaultDisplayName);
</code></pre>
<p>(<a href="https://github.com/JuergenGutsch/graphql-aspnetcore/blob/feature/aspnetcore30/GraphQl.AspNetCore/EndpointRouteBuilderExtensions.cs">EndpointRouteBuilderExtensions.cs</a>)</p>
<p>Based on the current <code>IEndpointRouteBuilder</code> a new <code>IApplicationBuilder</code> is created, where we <code>Use</code> the GraphQL Middleware as before. We pass the <code>ISchemaProvider</code> and the <code>GraphQlMiddlewareOptions</code> as arguments to the Middleware. The result is a <code>RequestDelegate</code> in the <code>pipeline</code> variable.</p>
<p>The configured endpoint <code>pattern</code> and the <code>pipeline</code> then get mapped to the <code>IEndpointRouteBuilder</code>. The small extension method <code>WithDisplayName()</code> sets the configured display name on the endpoint.</p>
<blockquote>
<p>I needed to copy this extension method from the ASP.NET Core repository to my code base, because the current development build of ASP.NET Core didn't contain this method two weeks ago. I need to check the latest version ASAP.</p>
</blockquote>
<p>In ASP.NET Core 3.0 the GraphQl and the GraphiQl Middlewares can now be added like this:</p>
<pre><code class="language-csharp">app.UseRouting(routes =&gt;
{
if (env.IsDevelopment())
{
routes.MapGraphiQl(&quot;/graphiql&quot;);
}
routes.MapGraphQl(&quot;/graphql&quot;);
routes.MapControllerRoute(
name: &quot;default&quot;,
template: &quot;{controller=Home}/{action=Index}/{id?}&quot;);
routes.MapRazorPages();
});
</code></pre>
<h2>Conclusion</h2>
<p>The new ASP.NET Core 3.0 implementation of the GraphQL Middlewares is on the aspnetcore30 branch of the repository: <a href="https://github.com/JuergenGutsch/graphql-aspnetcore">https://github.com/JuergenGutsch/graphql-aspnetcore</a></p>
<p>This approach feels a bit different. In my opinion it messes up the <code>Startup.cs</code> a little bit. Previously we added one Middleware after another... line by line to the <code>IApplicationBuilder</code>. With this approach we have some Middlewares still registered on the <code>IApplicationBuilder</code> and some others on the <code>IEndpointRouteBuilder</code> inside a lambda expression on a new <code>IApplicationBuilder</code>.</p>
<p>The other thing is that the order isn't really clear anymore. When will the Middlewares inside the <code>UseRouting()</code> be executed, and in which direction? I will dig deeper into this over the next months.</p>Wed, 10 Apr 2019 00:00:00 Z2019-04-10T00:00:00ZJürgen GutschJürgen GutschJürgen Gutschhttps://asp.net-hacker.rocks/2019/04/05/mvpsummit2019.htmlhttp://feedproxy.google.com/~r/jgutsch/~3/B5r9F6WTU-U/mvpsummit2019.htmlJürgen Gutsch#MVPSummit2019 - Impressions...<p>Also this year I was invited to attend the yearly Global MVP Summit in Redmond and Bellevue. It ran from Sunday until Thursday last week. As last year, I added two days before and after the summit to get some time to explore Seattle. This is a small summary of the eight days in the Seattle area.</p>
<p>Just two weeks before the summit started there was the so called <a href="https://twitter.com/search?q=%23snowmageddon2019">#snowmageddon2019</a> in the north west of the US: cold and a lot of snow, from the US perspective. But I was sure that when I arrived in Seattle it would be sunny and warm. And it was. I never had a rainy day in Seattle. In Bellevue and Redmond I had some, but never in Seattle. Also last year I stayed two nights before and two nights after the summit in downtown Seattle and it was sunny then, but rainy while staying in Bellevue. Anyway, Seattle is always sunny, and people are happy and friendly because of that.</p>
<p><img src="https://asp.net-hacker.rocks/img/mvpsummit2019/sunnymarketplace.jpg" alt="" /></p>
<h2>Pre-Summit days in Seattle</h2>
<p>As well as last year, I stayed the first two nights in the Green Tortoise Hostel in downtown Seattle near Pike Place. This is a cheap hostel; you need to share the room with six to eight other people. But it is impressive anyway. The weekend when I arrived it was ComiCon in Seattle and Saint Patrick's Day. So the hostel was full of ComiCon attendees, people wearing green things, backpackers, and some MVPs.</p>
<blockquote>
<p>I again met the South Korean Azure MVP in this hostel, as last year, who gave me the sticker of his Korean Azure user group. I also met him the two nights after in the same hostel, as well as during the summit.</p>
</blockquote>
<p>Even if the hostel is cheap compared with the hotels in Seattle, the location is absolutely awesome. If you leave the hostel, you will stumble into the only Starbucks that serves the Pike Place Special Reserve outside the Pike Place. Leaving the restaurant, you will stumble into the public market of the Pike Place, where you can grab some pastries for breakfast, then leave the Pike Place to have breakfast in the sun in Victor Steinbrueck Park.</p>
<p>I arrived on Friday and took the Light Rail to downtown Seattle, checked in to the Green Tortoise, went for a walk through the Pike Place and had the first awesome burger at Lowell's Restaurant while enjoying the nice view of the Puget Sound. Saturday started slowly with the breakfast described in the last paragraph. Later on I joined some MVPs for the guided Market Experience tour, where I learned a lot about the market.</p>
<blockquote>
<p>Did you know that the first Starbucks isn't really the first one, but the oldest one? Did you know that you need to found your business on the Pike Place to get a spot to sell your stuff? Everything you want to sell on the market needs to be produced by yourself (except meat, sausage and fish, I think).</p>
</blockquote>
<p>Later I joined some MVP friends for lunch and for a walk to the Space Needle. Before that we had lunch at the Pike Place Brewery, where I found sausages, sauerkraut and mashed potatoes on the menu: beer-braised sausages with fine apple sauerkraut. Seattle meets Bavaria. I needed to try it, and it was really yummy.</p>
<p><img src="https://asp.net-hacker.rocks/img/mvpsummit2019/bratwurst.jpg" alt="" /></p>
<p>In the evening we had free beer at the hostel. With free beer and my laptop I started to merge almost all of the pull requests to the <a href="https://github.com/JuergenGutsch/graphql-aspnetcore/">ASP.NET Core GraphQL Middlewares</a>, answered almost all open issues and updated the dependencies of the project.</p>
<h2>The Summit days in Bellevue and Redmond</h2>
<p>The Sunday also started slowly, before I took the express bus to Redmond where the summit hotels are located. I checked in to the Marriott Bellevue, where I shared the room with the famous Alex Witkowski. This room was awesome, with a great view of the Space Needle and a super modern, stylish sliding door to the bathroom that cannot be locked and never really closed. It felt strange while sitting on the toilet, but that must be super modern for a $599 room ;-)</p>
<p>Sunday is the day when most of the MVPs register for the summit at the biggest summit hotel. Some soft skill talks were held there too. The first parties organized by MVPs or tool vendors were on Saturday, so we joined them and met the first Microsofties and other famous MVPs. It got late, and the Monday got hard. Anyway, the actual summit starts on Monday with a lot of technical sessions.</p>
<p>From Monday to Wednesday there were a lot of interesting technical sessions. Many of them really had a lot of value. Some others didn't contain new information for me, because most stuff in my area is openly discussed on GitHub, but they clarified some rumors anyway.</p>
<blockquote>
<p>I really got into Razor Components, which is not about Blazor as I initially thought. Scott Hanselman also did a clarification post about it. [link] Razor Components is component-based development using Razor. It looks similar to React, even if it may be rendered on the server side as well as on the client side using Blazor. Awesome stuff.</p>
</blockquote>
<p>Thursday was also a highlight for me. Thursday is hackathon day. I joined <a href="https://twitter.com/csharpfritz">Jeff Fritz</a>, who showed us his mobile streaming setup. I got a chance to talk to Jeff and to other Twitch streamers, like <a href="https://twitter.com/kasuken">Emanuele Bartolesi</a>. Besides that, I worked on the <a href="https://github.com/JuergenGutsch/graphql-aspnetcore/">ASP.NET Core GraphQL Middlewares</a> and had a chance to get a review by <a href="https://twitter.com/condrong">Glen Condron</a>. He also told me that the way a Middleware is created changed in 3.0 for Middlewares that handle a specific path. I'll write about it in one of the next posts. Glen and <a href="https://twitter.com/JamesNK">James Newton King</a>, who works on the new ASP.NET Core routing, supported me in getting it running for ASP.NET Core 3.0.</p>
<h2>Post-Summit days in Seattle</h2>
<p>On Thursday after the hackathon I moved back to Seattle into the Green Tortoise and again met the South Korean Azure MVP at check-in. I used the night to work on the ASP.NET Core GraphQL Middleware to finish the GraphQL Middleware registration using the route mapping.</p>
<p>Friday was shopping day. My wife always needs some pants from her favorite store in Seattle, and I needed to buy some souvenirs for the kids (usually some t-shirts). After this was done I decided to explore the International District and Chinatown, where I also had a quick lunch in one of the Asian restaurants. Chinatown was less colorful than expected, but nice anyway. An awesome detail: you know you are in Chinatown when the street names are printed in two languages.</p>
<p><img src="https://asp.net-hacker.rocks/img/mvpsummit2019/chinatown.jpg" alt="" /></p>
<p>I left Chinatown and unexpectedly stumbled into the old part of Seattle. Pioneer Square was surprisingly nice: old houses, small shops and pubs. One of the pubs serves a German stout beer, &quot;Köstritzer&quot;, as well as &quot;Biers&quot; and &quot;Brats&quot;.</p>
<p><img src="https://asp.net-hacker.rocks/img/mvpsummit2019/koestritzer.jpg" alt="" /></p>
<p>I also found the &quot;Berliner&quot; döner and kebab restaurant, which is (as far as I know) the very first and only real döner restaurant in the US:</p>
<p><img src="https://asp.net-hacker.rocks/img/mvpsummit2019/doener.jpg" alt="" /></p>
<p>In the evening I decided to go to the Hard Rock Cafe across the street for dinner. I was there for the first time, and I don't get why it is such a popular place: pretty loud, uncomfortable, and the food is good but not really special. Anyway, I continued getting the GraphiQL middleware (the GraphQL UI) running using the new route mapping and cleaned up all the changes. Free beer at the Green Tortoise and coding match pretty well.</p>
<p>Saturday was the day to fly back home. The morning started with the annual <strong>JustCommunity Summit</strong> at Lowell's Restaurant in the Public Market area of Pike Place. <a href="https://twitter.com/KostjaKlein">Kostia</a> and I had breakfast and talked about the plans of <a href="http://ineta-germany.de/">INETA Germany</a> and <a href="http://justcommunity.de/">JustCommunity</a>. Our goal: to have a strategy for JustCommunity by the end of the year. We also need to align the INETA tasks with the community support of the <a href="https://dotnetfoundation.org">.NET Foundation</a>.</p>
<h2>Leaving Seattle</h2>
<p>This was my fifth time in Seattle, which is one of the most impressive cities: pretty diverse, fascinating, and quite different from any other city in the US I've been to (not that many, unfortunately).</p>
<p>Leaving Seattle is a little bit like leaving home. In the last years I didn't know why. Now I'm pretty sure it is because I always meet friends, community members and many other nice people at the summit. The Summit is a little bit like an annual family meetup.</p>
<p>But one week without the family is hard as well, and it is time to go home to my lovely wife and the three boys :-)</p>Fri, 05 Apr 2019 00:00:00 Z2019-04-05T00:00:00ZJürgen Gutschhttp://heise.de/-4358275Holger SchwichtenbergVisual Studio 2019 is released todayMicrosoft will release version 2019 of its IDE this evening at 6 p.m.Tue, 02 Apr 2019 17:05:00 +02002019-04-02T17:05:00+02:00Holger Schwichtenberghttps://asp.net-hacker.rocks/2019/04/01/git-flow.htmlhttp://feedproxy.google.com/~r/jgutsch/~3/o6SDzJzFG4k/git-flow.htmlJürgen GutschGit Flow - About, installing and using<p>People who know me also know that I'm a huge fan of consoles and CLIs. I use the dotnet CLI as well as the Angular CLI and the create-react CLI. Yeoman is also a tool I like. I own a Mac, but I cannot really work with the Mac UI; I much prefer the terminal on the Mac. I also use Git in the console most of the time. The only situation where I don't use Git in the console is while resolving merge conflicts; I have configured KDiff3 as the merge tool. For all other Git tasks I don't really need a graphical user interface.</p>
<p>So I also follow the <strong>Git Flow</strong> process using the console.</p>
<h2>About Git Flow</h2>
<p>In general, <strong>Git Flow</strong> is a branching model on top of Git. It is pretty clear and intuitive, but following this model manually in Git is a bit hard and takes some time. <strong>Git Flow</strong> is now implemented in many graphical user interfaces like <strong>SourceTree</strong>, which reduces the overhead.</p>
<p><strong>Git Flow</strong> is mainly about merging and branching. It defines two main branches, which are &quot;master&quot; as the production/release branch and &quot;develop&quot; as the working branch. The actual work is done in different types of feature branches:</p>
<ul>
<li>&quot;feature&quot;: a branch created based on &quot;develop&quot; to implement new features
<ul>
<li>will be merged back to &quot;develop&quot;</li>
<li>branch name pattern: feature/&lt;name|ticket|#123-my-feature&gt;</li>
</ul>
</li>
<li>&quot;release&quot; a branch created based on &quot;develop&quot; to create a new release
<ul>
<li>the branch name gets the tag name</li>
<li>will create a tag</li>
<li>will be merged to &quot;master&quot; and &quot;develop&quot;</li>
<li>branch name pattern: release/&lt;tag|version|1.2.0&gt;</li>
</ul>
</li>
<li>&quot;hotfix&quot; a branch created based on &quot;master&quot;
<ul>
<li>the branch name gets the tag name</li>
<li>will create a tag</li>
<li>will be merged to &quot;master&quot; and &quot;develop&quot;</li>
<li>branch name pattern: hotfix/&lt;tag|version|1.2.3&gt;</li>
</ul>
</li>
<li>&quot;bugfix&quot; less popular. We use &quot;feature&quot; to create bug fixes
<ul>
<li>not available in all tools</li>
<li>behaves like &quot;feature&quot;</li>
</ul>
</li>
<li>&quot;support&quot; much less popular. We don't use it
<ul>
<li>not available in all tools</li>
<li>almost behaves like hotfixes</li>
</ul>
</li>
</ul>
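<p>To make the branching model concrete: under the hood, one full &quot;feature&quot; cycle corresponds to a handful of plain Git commands. The following is a minimal sketch of the manual equivalent; the branch name <code>feature/PROJ-101</code> and the scratch repository are my own examples, not part of Git Flow itself:</p>

```shell
# Manual equivalent of one Git Flow feature cycle.
# A sketch only; run it in a scratch repository. Branch/ticket names are examples.
cd "$(mktemp -d)" && git init -q .
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial commit"
git checkout -qb develop

# git flow feature start PROJ-101
git checkout -qb feature/PROJ-101 develop
git commit -q --allow-empty -m "PROJ-101: adds validation to the form"

# git flow feature finish PROJ-101
git checkout -q develop
git merge -q --no-ff feature/PROJ-101 -m "Merge feature/PROJ-101"
git branch -d feature/PROJ-101
```

<p>The Git Flow extension wraps exactly this kind of sequence into single commands.</p>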
<p>I recommend having a look at the Git Flow cheat sheet to see how the branching model works: <a href="http://danielkummer.github.io/git-flow-cheatsheet/">http://danielkummer.github.io/git-flow-cheatsheet/</a></p>
<p>Git Flow is also available as a Git extension. It reduces branching, merging, releasing, and tagging to a single command each and does all the needed tasks in the background for you. This CLI makes it super easy to follow Git Flow.</p>
<h2>Install Git Flow as Git Extension</h2>
<p>The installation is a bit annoying, because it needs some additional tools and some extra steps for just a small Git extension.</p>
<p>To install it you need <strong>cygwin</strong>, a console that gives you Linux-like tools on Windows. The easiest way to install <strong>cygwin</strong> is to use <strong>Chocolatey</strong>, a package manager for Windows (like apt-get, but for Windows). You can also install it manually by running the installer, but then you need to ensure that <code>cyg-get</code>, <code>wget</code> and <code>util-linux</code> are installed as well, which is much easier using <strong>Chocolatey</strong>.</p>
<p>To install <strong>Chocolatey</strong> follow the instructions on <a href="https://chocolatey.org">https://chocolatey.org</a>.</p>
<p>Open a console and type the following commands</p>
<pre><code class="language-shell">choco install cygwin
choco install cyg-get
</code></pre>
<p>Once this is done, you can use <code>cyg-get</code> to install the needed extensions for the cygwin console.</p>
<p>Open the console and type the following commands:</p>
<pre><code class="language-shell">cyg-get install wget
cyg-get install util-linux
</code></pre>
<p>Now cygwin is ready to be used to install Git Flow. Type:</p>
<pre><code class="language-shell">cygwin
</code></pre>
<p>This will open the cygwin bash inside the current console.</p>
<p>Now you are able to run the installation of Git Flow. Copy the following command to the cygwin bash and press enter:</p>
<pre><code class="language-bash">wget -q -O - --no-check-certificate https://raw.github.com/petervanderdoes/gitflow-avh/develop/contrib/gitflow-installer.sh install stable | bash
</code></pre>
<p>Once this is done, exit the bash by typing <code>exit</code> and close the console the same way. Closing the console and opening it again ensures that all the needed environment variables are available.</p>
<p>Open a new console and type <code>git flow</code>. You should now see the Git Flow CLI help like this:</p>
<p><img src="https://asp.net-hacker.rocks/img/git-flow/git-flow.png" alt="" /></p>
<p>Every time you clone or create a new repository you need to run <code>git flow init</code> to enable Git Flow.</p>
<p>Using this command you will set up Git Flow on an existing repository by configuring the different branch prefixes and specifying the two main branches. I would propose choosing the default prefixes and names:</p>
<p><img src="https://asp.net-hacker.rocks/img/git-flow/git-flow-init.png" alt="" /></p>
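<p>Behind the scenes, <code>git flow init</code> simply stores this setup as plain Git config keys in the repository. A small sketch of what it records, assuming the default names (the scratch repository is just for illustration):</p>

```shell
# What "git flow init" with default answers roughly records: plain git config
# keys under the "gitflow." section (sketch, run in a scratch repository)
cd "$(mktemp -d)" && git init -q .
git config gitflow.branch.master master
git config gitflow.branch.develop develop
git config gitflow.prefix.feature feature/
git config gitflow.prefix.release release/
git config gitflow.prefix.hotfix hotfix/

# Inspect what Git Flow would read back
git config --get-regexp '^gitflow\.'
```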
<h2>Working with Git Flow</h2>
<p>Using Git Flow is pretty easy with this CLI. Let's assume we need to start working on a feature called &quot;Implement validation&quot;. We could now write a command like this:</p>
<pre><code class="language-shell">git flow feature start implement-validation
</code></pre>
<p>This will work as expected:</p>
<p><img src="https://asp.net-hacker.rocks/img/git-flow/git-flow-feature-start-1.PNG" alt="" /></p>
<p>Since most of us use a planning tool like Jira or TFS, it makes more sense to use the ticket number as the feature name here. If you use TFS, I would propose adding the work item type to the number:</p>
<ul>
<li>Jira: PROJ-101</li>
<li>TFS: Task-34212</li>
</ul>
<p>This helps to keep the branch names clean, and you don't start messing around with long or wrong branch names. Git Flow usually deletes the feature branch after merging it back, so the list of branches will never get too long. Anyway, I learned in the past few years that it is much easier to follow ticket numbers than weirdly named branches, because we talk about the current tickets every day in the daily scrum meeting.</p>
<p><img src="https://asp.net-hacker.rocks/img/git-flow/git-flow-feature-start-2.PNG" alt="" /></p>
<p>All the commands that are not related to branches can be done using the regular Git CLI. That means commands to commit, to push and so on.</p>
<blockquote>
<p>Git Flow will merge the branches, if you finish them. It doesn't work with rebase or other approaches. This means it'll take over the entire history of the feature branch. Because of this I would also propose to add the ticket number to the commit messages like this: &quot;PROJ-101: adds validation to the form&quot;. This makes it easy to follow the history in case it is needed.</p>
</blockquote>
<p>To finish a feature you should first merge the latest changes of the development branch in:</p>
<pre><code class="language-shell">git fetch --all
git merge origin/develop
git flow feature finish
</code></pre>
<p>If you don't add the feature name to the git flow feature finish command, Git Flow will try to close the current feature branch and will write out a message in case the current branch is not a feature branch.</p>
<blockquote>
<p>I would propose to always merge the latest changes of develop into the current feature branch, to solve possible conflicts within the feature branch instead of in the develop branch. This way the merge to develop will almost never have a conflict.</p>
</blockquote>
<p>I showed how to work with Git Flow using a feature branch, but it works the same way with the other branch types, except for the release and hotfix branches, where you need to pass the tag name instead of a feature name. This should be the version number of the release or of the hotfix.</p>
<p>While finishing these two branch types, Git Flow will ask you for a tag message. After finishing, you need to push both the master and the develop branch, as well as the tags:</p>
<pre><code class="language-shell">git push --all
git push --tags
</code></pre>
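<p>For illustration, the whole release cycle, including what the finish step does behind the scenes, can be sketched with plain Git like this. The version number 1.2.0 and the scratch repository are examples of mine; with the extension installed, <code>git flow release start 1.2.0</code> and <code>git flow release finish 1.2.0</code> do the branch work for you:</p>

```shell
# Manual equivalent of a Git Flow release cycle.
# A sketch only; run it in a scratch repository, version 1.2.0 is an example.
cd "$(mktemp -d)" && git init -q .
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial commit"
git checkout -qB master
git checkout -qb develop

# git flow release start 1.2.0
git checkout -qb release/1.2.0 develop
git commit -q --allow-empty -m "bump version to 1.2.0"

# git flow release finish 1.2.0: merge to master AND develop, create the tag
git checkout -q master
git merge -q --no-ff release/1.2.0 -m "Merge release/1.2.0"
git tag -a 1.2.0 -m "Release 1.2.0"
git checkout -q develop
git merge -q --no-ff release/1.2.0 -m "Merge release/1.2.0"
git branch -d release/1.2.0

# afterwards, as above: git push --all && git push --tags
```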
<p>For more information about the Git Flow commands please follow the documentation on Daniel Kummer's Git Flow cheat sheet: <a href="http://danielkummer.github.io/git-flow-cheatsheet/">http://danielkummer.github.io/git-flow-cheatsheet/</a> (which is, BTW, the best Git Flow documentation ever).</p>
<h2>Conclusion</h2>
<p>I really love the CLI help of this tool. It is not only descriptive but also explanatory, the same way the Git CLI explains things. It also provides suggestions in case a command is misspelled.</p>
<p>Git Flow helps me to speed up the branching and merging flows and to follow the <strong>Git Flow</strong> process. I proposed using <strong>Git Flow</strong> in the company, and it works pretty well there. And I learned a lot about how this process works in production.</p>
<p>As written at some point in the past, it also helps me to write my blog. I really use <strong>Git Flow</strong> to organize the posts I'm working on. I create a feature per post and a hotfix in case I need to fix a post or something else on the blog. I use <strong>SemVer</strong> to version my releases and hotfixes: every post increases the feature (minor) number and a hotfix increases the patch number. The feature number also is the number of posts in my blog. The number of open features in my blog is the number of posts I'm working on. This way I can work on many posts separately and release them separately.</p>Mon, 01 Apr 2019 00:00:00 Z2019-04-01T00:00:00ZJürgen Gutschhttps://blog.codeinside.eu/2019/03/31/load-hierarchical-data-from-mssql-with-recursive-common-table-expressionshttp://feedproxy.google.com/~r/Code-insideBlog/~3/EBJJeJ9wtBo/load-hierarchical-data-from-mssql-with-recursive-common-table-expressionsCode-Inside BlogLoad hierarchical data from MSSQL with recursive common table expressions<h1 id="scenario">Scenario</h1>
<p>We have a pretty simple scenario: We have a table with a simple Id + ParentId schema and some demo data in it. I have seen this design quite a lot in the past and in the relational database world this is the obvious choice.</p>
<p><img src="https://blog.codeinside.eu/assets/md-images/2019-03-31/demo.png" alt="x" title="Demo table" /></p>
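<p>The table is only shown as a screenshot in the post; a minimal schema matching the Id + ParentId design could look like the following. The column names are taken from the queries below; the demo rows (including a second root) are my own invention for illustration:</p>

```sql
-- Minimal sketch of the demo table; the rows are invented demo data
CREATE TABLE Demo (
    Id       INT          NOT NULL PRIMARY KEY,
    ParentId INT          NULL REFERENCES Demo(Id),
    [Name]   NVARCHAR(50) NOT NULL
);

INSERT INTO Demo (Id, ParentId, [Name]) VALUES
    (1, NULL, N'Root'),
    (2, 1,    N'Child A'),
    (3, 2,    N'Grandchild A1'),
    (7, 3,    N'Leaf'),
    (8, NULL, N'Alternative Root');
```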
<h1 id="problem">Problem</h1>
<p>Each data entry is really simple to load or manipulate: just load the target element and change the ParentId for a move action, etc. A more complex problem is how to load a whole “data tree”.
Let’s say I want to load all children or parents of a given Id. You could load everything, but if your dataset is large enough, this operation will perform poorly and might kill your database.</p>
<p>Another naive way would be to query this with code from a client application, but if your “tree” is big enough, it will consume lots of resources, because each “level” needs another round trip to the database, etc.</p>
<h1 id="recursive-common-table-expressions">Recursive Common Table Expressions!</h1>
<p>Our goal is to load the data in one go, as efficiently as possible - <strong>without using stored procedures(!)</strong>. In the Microsoft SQL Server world we have this handy feature called “<a href="https://docs.microsoft.com/en-us/sql/t-sql/queries/with-common-table-expression-transact-sql"><strong>common table expressions (CTE)</strong></a>”.
A common table expression can be seen as a function inside a SQL statement. This function can invoke itself, and then we call it a “recursive common table expression”.</p>
<p>The syntax itself is a bit odd, but works well and you can enhance it with JOINs from other tables.</p>
<h2 id="scenario-a-from-child-to-parent">Scenario A: From child to parent</h2>
<p>Let’s say you want to go the tree upwards from a given Id:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>WITH RCTE AS
(
SELECT anchor.Id as ItemId, anchor.ParentId as ItemParentId, 1 AS Lvl, anchor.[Name]
FROM Demo anchor WHERE anchor.[Id] = 7
UNION ALL
SELECT nextDepth.Id as ItemId, nextDepth.ParentId as ItemParentId, Lvl+1 AS Lvl, nextDepth.[Name]
FROM Demo nextDepth
INNER JOIN RCTE recursive ON nextDepth.Id = recursive.ItemParentId
)
SELECT ItemId, ItemParentId, Lvl, [Name]
FROM RCTE as hierarchie
</code></pre></div></div>
<p>The <em>anchor.[Id] = 7</em> is our starting point and should be given as a SQL parameter. The <em>with</em> statement starts our function description, which we called “RCTE”.
In the first select we just load everything from the target element.
Note that we add a “Lvl” property, which starts at 1.
The <em>UNION ALL</em> is needed (at least we were not 100% sure whether there are other options).
In the next line we are doing a join based on the <em>Id = ParentId</em> schema and we increase the “Lvl” property for each level.
The last line inside the common table expression uses the “recursive” feature.</p>
<p>Now we are done and can use the CTE like a normal table in our final statement.</p>
<p>Result:</p>
<p><img src="https://blog.codeinside.eu/assets/md-images/2019-03-31/up.png" alt="x" title="Child to Parent" /></p>
<p>We now only load the “path” from the child entry up to the root entry.</p>
<p>If you ask why we introduced the “Lvl” column:
with this column it is really easy to see each “step”, and it might come in handy in your client application.</p>
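<p>One caveat worth adding here (my addition, not from the original post): SQL Server aborts a recursive CTE after 100 recursion levels by default. For deeper trees you can raise the limit per query with a hint (0 removes the limit entirely):</p>

```sql
-- Same recursive CTE as in scenario A, with an explicit recursion limit
-- (SQL Server defaults to 100 levels; OPTION (MAXRECURSION 0) = unlimited)
WITH RCTE AS
(
    SELECT anchor.Id as ItemId, anchor.ParentId as ItemParentId, 1 AS Lvl, anchor.[Name]
    FROM Demo anchor WHERE anchor.[Id] = 7
    UNION ALL
    SELECT nextDepth.Id, nextDepth.ParentId, Lvl + 1, nextDepth.[Name]
    FROM Demo nextDepth
    INNER JOIN RCTE recursive ON nextDepth.Id = recursive.ItemParentId
)
SELECT ItemId, ItemParentId, Lvl, [Name]
FROM RCTE
OPTION (MAXRECURSION 1000);
```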
<h2 id="scenario-b-from-parent-to-all-descendants">Scenario B: From parent to all descendants</h2>
<p>With a small change we can do the other way around. Loading all descendants from a given id.</p>
<p>The logic itself is more or less identical, we changed only the <em>INNER JOIN RCTE ON …</em></p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>WITH RCTE AS
(
SELECT anchor.Id as ItemId, anchor.ParentId as ItemParentId, 1 AS Lvl, anchor.[Name]
FROM Demo anchor WHERE anchor.[Id] = 2
UNION ALL
SELECT nextDepth.Id as ItemId, nextDepth.ParentId as ItemParentId, Lvl+1 AS Lvl, nextDepth.[Name]
FROM Demo nextDepth
INNER JOIN RCTE recursive ON nextDepth.ParentId = recursive.ItemId
)
SELECT ItemId, ItemParentId, Lvl, [Name]
FROM RCTE as hierarchie
</code></pre></div></div>
<p>Result:</p>
<p><img src="https://blog.codeinside.eu/assets/md-images/2019-03-31/down.png" alt="x" title="Parent to child" /></p>
<p>In this example we only load all children from a given id. If you point this to the “root”, you will get everything except the “alternative root” entry.</p>
<h1 id="conclusion">Conclusion</h1>
<p>Working with trees in a relational database might not “feel” as good as in a document database, but that doesn’t mean that such scenarios need to perform badly. We use this code at work for some bigger datasets and it works really well for us.</p>
<p><em>Thanks to my colleague Alex - he discovered this wild T-SQL magic.</em></p>
<p>Hope this helps!</p>Sun, 31 Mar 2019 23:45:00 ZCode-Inside Bloghttps://blog.der-albert.com/2019/03/25/kostenloser-live-asp-net-authentication-und-authorization-deep-dive-am-31-03-2019/Albert WeinertFree live ASP.NET Core authentication and authorization deep dive on 31.03.2019<h2 id="kostenlosabernichtumsonst">Free, but not for nothing!</h2>
<p>On my <a href="https://www.twitch.tv/DerAlbertLive">Twitch live coding channel</a> I announced a <strong>follower goal</strong>: at one hundred followers I would do a live ASP.NET Core authentication and authorization deep dive. This goal has now been reached, and now I have to follow up with action.</p>
<p>The action starts on Sunday, <strong>31 March 2019</strong>.</p>Mon, 25 Mar 2019 16:47:05 ZAlbert Weinert<h2 id="kostenlosabernichtumsonst">Free, but not for nothing!</h2>
<p>On my <a href="https://www.twitch.tv/DerAlbertLive">Twitch live coding channel</a> I announced a <strong>follower goal</strong>: at one hundred followers I would do a live ASP.NET Core authentication and authorization deep dive. This goal has now been reached, and now I have to follow up with action.</p>
<p>The action starts on Sunday, <strong>31 March 2019 at 11 a.m.</strong>, when the deep dive goes live on air together with <a href="https://www.twitch.tv/juergengutsch">Jürgen Gutsch</a>, who has kindly made himself available as moderator, question asker, and link to the chat.</p>
<h2 id="waserwarteteuch">What can you expect?</h2>
<p>2-3 hours of options and dos and don'ts around the topic; you will also be able to raise questions, problems, and wishes - in advance or live in the chat. From cookie authentication to OpenID Connect, from protection against the usual attacks from the net to which building blocks there are. Lots of hints about what you can do wrong, why, and how to do it right instead.</p>
<p>It will not be a pure lecture, but a relaxed dialogue between Jürgen, the chat and me, and of course I will also show and write a lot of code.</p>
<h2 id="duhastfragenzumthema">Do you have questions about the topic?</h2>
<p>Then it's best to leave them at the matching <a href="https://github.com/DerAlbertLive/Home/issues/1">GitHub issue</a>, or on Twitter with the hashtag <a href="https://twitter.com/hashtag/deepdivealbert?lang=de">#deepdivealbert</a>. Alternatively, post them here as a comment. Of course you can also join in during the stream; for that you need a <a href="https://www.twitch.tv">Twitch</a> account and must be signed in.</p>
<h2 id="beitwitchanmelden">Sign up at Twitch?</h2>
<p>No, you can also watch the stream without signing up, but then you cannot take part in the chat.</p>
<h3 id="dieaufzeichnungistnunonline">The recording is now online</h3>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/3bExQpA_eHo" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
Albert Weinerthttp://uliarmbruster.wordpress.com/?p=3508https://uliarmbruster.wordpress.com/2019/03/23/freigrenze-vs-freibetrag/Uli ArmbrusterFreigrenze vs. FreibetragToday my tax advisor made me aware of the important distinction between Freigrenze and Freibetrag, two terms I had used synonymously until now. An example: 40€ is the respective limit; the purchase exceeds the limit by 1€, i.e. the total cost amounts to 41€. Freigrenze: in the case of the Freigrenze, once the limit is exceeded the &#8230;Sat, 23 Mar 2019 08:02:47 ZUli Armbruster<p>Today my tax advisor made me aware of the important distinction between Freigrenze (exemption threshold) and Freibetrag (tax-free allowance), two terms I had used synonymously until now.</p>
<p>An <strong>example</strong>:</p>
<ul>
<li>40€ is the respective limit</li>
<li>The purchase exceeds the limit by 1€, i.e. the total cost amounts to 41€</li>
</ul>
<p><strong>Freigrenze</strong></p>
<p style="padding-left:30px;">In the case of a Freigrenze, the full sum becomes taxable once the limit is exceeded, i.e. the full 41€ is subject to tax.</p>
<p><strong>Freibetrag</strong></p>
<p style="padding-left:30px;">In the case of a Freibetrag, only the amount exceeding the limit is taxed, i.e. 1€.</p>
<p>Giveaway items (up to 10€ net), gifts in kind to business partners (35€ per year and person), and gifts in kind to employees (44€ per employee and month, not transferable to the following month) are all Freigrenzen. Unfortunately, most limits in the business context are Freigrenzen.</p>
<p>A Freibetrag would be, for example, the so-called Rabattfreibetrag (discount allowance), under which an employer grants its employees discounts on the company's own goods or services.</p>
Uli Armbrustertag:blogger.com,1999:blog-6162973429190225608.post-4910802701286959744http://feedproxy.google.com/~r/michaelsgermanblog/~3/2730-bxGvQ8/nach-langerer-pause-jetzt-zu-apple.htmlMichael SchwarzAfter a longer break - now on to Apple topics on TwitterAfter a longer break I have now switched to Apple topics on Twitter. You can follow me at <a href="https://twitter.com/DieApfelFamilie">https://twitter.com/DieApfelFamilie</a>.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://twitter.com/DieApfelFamilie" target="_blank"><img border="0" data-original-height="300" data-original-width="300" height="200" src="https://2.bp.blogspot.com/-obXlsZAKUc4/XI6VET8R-lI/AAAAAAAADVk/5cTi_orGJpUHQjG6gyGBX9Fn6_C0SzEBgCLcBGAs/s200/DieAppleFamilie.png" width="200" /></a></div>
Sun, 17 Mar 2019 18:43:00 Z2019-03-17T19:43:50+01:00Michael Schwarzhttp://www.yellow-brick-code.org/?p=1196https://yellow-brick-code.org/?p=1196Christina Hirth Continuous Delivery Is a Journey – Part 2After describing the context a little bit in part one it is time to look at the single steps the source code must pass in order to be delivered to the customers. (I&#8217;m sorry, but it is a quite long part &#x1f644;) The very first step starts with pushing all the current commits to master &#8230; <a href="https://yellow-brick-code.org/?p=1196" class="more-link">Continue reading <span class="screen-reader-text">Continuous Delivery Is a Journey – Part 2</span></a>Sun, 17 Mar 2019 15:56:22 ZChristina Hirth
<p>After describing the context a little bit in <a href="https://www.yellow-brick-code.org/?p=1168">part one</a> it is time to look at the single steps the source code must pass in order to be delivered to the customers. (I&#8217;m sorry, but it is a quite long part <img src="https://s.w.org/images/core/emoji/11.2.0/72x72/1f644.png" alt="🙄" class="wp-smiley" style="height: 1em; max-height: 1em;" />)</p>
<p>The very first step starts with pushing all the current commits to master (if you work with feature branches you will probably encounter a new level of self-made complexity, which I don&#8217;t intend to discuss here). </p>
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">I think, if you agree having CD this way (commit -&gt;&#8230;-&gt;production) than you have implicitly enforced trunk-based development.<br><br>This scenario triggered a totally new view on what we could achieve &#8211; good and bad <img src="https://s.w.org/images/core/emoji/11.2.0/72x72/1f609.png" alt="😉" class="wp-smiley" style="height: 1em; max-height: 1em;" /> &#8211; and made the responsibility on our shoulders palpable.— Krisztina Hirth (@YellowBrickC) <a href="https://twitter.com/YellowBrickC/status/1105087992245432325?ref_src=twsrc%5Etfw">March 11, 2019</a></p></blockquote> <script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<p>This action triggers the first checks and quality gates like licence validation and unit tests. If all checks are &#8220;green&#8221; the new version of the software will be saved to the repository manager and will be tagged as &#8220;latest&#8221;.</p>
<figure class="wp-block-image"><img src="https://www.yellow-brick-code.org/wp-content/uploads/ci.png" alt="" class="wp-image-1200" srcset="https://yellow-brick-code.org/wp-content/uploads/ci.png 452w, https://yellow-brick-code.org/wp-content/uploads/ci-300x227.png 300w" sizes="(max-width: 452px) 100vw, 452px" /><figcaption>Successful push leads to a new version of my service/pkg/docker image</figcaption></figure>
<p>At this moment the continuous integration is done, but the features are far from being used by any customer. I have first feedback that I didn&#8217;t break any tests or other basic constraints, but that&#8217;s all, because nobody can use the features; they are not deployed anywhere yet. </p>
<p>Well, let Jenkins execute the next step: deployment to the Kubernetes environment called integration (a.k.a. development).</p>
<figure class="wp-block-image"><img src="https://www.yellow-brick-code.org/wp-content/uploads/cd_int.png" alt="" class="wp-image-1201" srcset="https://yellow-brick-code.org/wp-content/uploads/cd_int.png 887w, https://yellow-brick-code.org/wp-content/uploads/cd_int-300x163.png 300w, https://yellow-brick-code.org/wp-content/uploads/cd_int-768x416.png 768w" sizes="(max-width: 887px) 100vw, 887px" /><figcaption>Continuous delivery to the first environment including the execution of first acceptance tests</figcaption></figure>
<p>At this moment all my changes are tested if they can work together with the currently integrated features developed by my colleagues and if the new features are evolving in the right direction (or are done and ready for acceptance). </p>
<p>This is not bad, but what if I want to be sure that I didn&#8217;t break the &#8220;platform&#8221;, what if I don&#8217;t want to disturb everybody else working on the same product because I made some mistakes &#8211; but I still want to be a human ergo be able to make mistakes <img src="https://s.w.org/images/core/emoji/11.2.0/72x72/1f609.png" alt="😉" class="wp-smiley" style="height: 1em; max-height: 1em;" />? This means that my behavioral and structure changes introduced by my commits should be tested before they land on integration. </p>
<p>These must obviously be a different set of tests. They should test whether the whole system (composed of a few microservices, each having its own data persistence, and one or more UI apps) is working as expected, is resilient, is secure, etc.</p>
<p>At this point the power of Kubernetes (k8s) and <a rel="noreferrer noopener" aria-label="ksonnet (opens in a new tab)" href="https://ksonnet.io/" target="_blank">ksonnet</a> came as a huge help. Having k8s in place (and having the infrastructure as code), it is almost a no-brainer to set up a new environment to wire up the single systems in isolation and execute the system tests against it. <strong>This needs not only the k8s part as code but also the resources deployed and running on it</strong>. With ksonnet, every service, deployment, <a rel="noreferrer noopener" aria-label="ingress configuration (opens in a new tab)" href="https://kubernetes.io/docs/concepts/services-networking/ingress/" target="_blank">ingress configuration</a> (which manages external access to the services in a cluster), or config map can be defined and configured <a rel="noreferrer noopener" aria-label=" (opens in a new tab)" href="https://ksonnet.io/docs/concepts/#application" target="_blank">as code</a>. ksonnet not only supports deploying to different <a rel="noreferrer noopener" aria-label="environments (opens in a new tab)" href="https://ksonnet.io/docs/concepts/#environment" target="_blank">environments</a> but also offers the possibility to compare them. There are a lot of tools offering these capabilities, not only ksonnet. It is important to choose the fitting tool, and it is even more important to invest the time and effort to configure everything as code. This is a must-have in order to achieve real automation and continuous deployment!</p>
<figure class="wp-block-embed-twitter wp-block-embed is-type-rich is-provider-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-width="550" data-dnt="true"><p lang="en" dir="ltr">How Airbnb Simplified the Kubernetes Workflow for 1000+ Engineers <a href="https://t.co/ntdYKjFIQw">https://t.co/ntdYKjFIQw</a> via <a href="https://twitter.com/InfoQ?ref_src=twsrc%5Etfw">@InfoQ</a> <a href="https://twitter.com/hashtag/ContinuousDeployment?src=hash&amp;ref_src=twsrc%5Etfw">#ContinuousDeployment</a> <a href="https://twitter.com/hashtag/k8s?src=hash&amp;ref_src=twsrc%5Etfw">#k8s</a></p>&mdash; Krisztina Hirth (@YellowBrickC) <a href="https://twitter.com/YellowBrickC/status/1105081259934605312?ref_src=twsrc%5Etfw">March 11, 2019</a></blockquote><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</div><figcaption>Good developer experience also means simplified continuous deployment</figcaption></figure>
<p>I will not include any ksonnet examples here; the project has great documentation. What is important to realize is the opportunity offered by such an approach: if everything is code, then <strong>every</strong> change can be checked in. Everything checked in can be observed/monitored, can trigger pipelines and/or events, can be reverted, can be commented on &#8211; and, the feature that helped us in our solution, can be tagged.</p>
<p>What happens in continuous delivery? A change in the VCS triggers a pipeline, the fitting version of the source code is loaded (either as source code like ksonnet files, or as a package or Docker image), the configured quality-gate checks are verified (the runtime environment is wired up, the specs with the referenced version are executed), and in case of success the artifact is tagged as &#8220;thumbs up&#8221; and promoted to the next environment. We started doing this manually to gather enough experience to automate the process.</p>
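<p>The step described above can be sketched in a few lines. This is only an illustration of the control flow (deploy, verify, tag, promote); the function name and the injected callables are hypothetical, standing in for the real tooling (ks/kubectl, the system-test runner, git tagging).</p>
<pre><code class="language-python">def promote(version, deploy, run_tests, tag):
    """Deploy `version` to an isolated environment, verify it, tag it.

    `deploy`, `run_tests` and `tag` are callables wrapping the real
    tooling, injected so the sketch stays independent of any tool.
    """
    deploy(version)                   # wire up the runtime environment
    if not run_tests(version):        # quality gate: specs against it
        return None                   # no tag, no promotion
    tag_name = f"verified-{version}"  # the "thumbs up" marker
    tag(tag_name)                     # tagged artifacts may be promoted
    return tag_name
</code></pre>
<p>In the manual phase we executed exactly this sequence by hand; automating it later was mostly a matter of wiring the same steps into the build server.</p>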
<figure class="wp-block-image"><img src="https://www.yellow-brick-code.org/wp-content/uploads/cd_manual_review.png" alt="" class="wp-image-1203" srcset="https://yellow-brick-code.org/wp-content/uploads/cd_manual_review.png 642w, https://yellow-brick-code.org/wp-content/uploads/cd_manual_review-300x264.png 300w" sizes="(max-width: 642px) 100vw, 642px" /><figcaption>Deploy manually the latest resources from integration to the review stage</figcaption></figure>
<p>If you have all this working, you have finished the part with the biggest effort. Now it is time to automate and generalize the single steps. After continuous integration the only changes occur in the ksonnet repo (all other source code changes happen before), which is called the <em>deployment</em> repo here. </p>
<figure class="wp-block-image"><img src="https://www.yellow-brick-code.org/wp-content/uploads/cd_overview-1-349x1024.png" alt="" class="wp-image-1206" srcset="https://yellow-brick-code.org/wp-content/uploads/cd_overview-1-349x1024.png 349w, https://yellow-brick-code.org/wp-content/uploads/cd_overview-1-102x300.png 102w, https://yellow-brick-code.org/wp-content/uploads/cd_overview-1.png 365w" sizes="(max-width: 349px) 100vw, 349px" /><figcaption>Roll out, test and eventually roll back the system <strong>ready for review</strong></figcaption></figure>
<p>I think this post is already too long. In the next part (I think it will be the last one) I would like to write about the last essential method: how to deploy to production without annoying anybody (no secret here, this is what feature toggles were invented for <img src="https://s.w.org/images/core/emoji/11.2.0/72x72/1f609.png" alt="😉" class="wp-smiley" style="height: 1em; max-height: 1em;" />), and about some open questions and decisions we encountered on our journey.</p>
<p>Every graphic was created with <a rel="noreferrer noopener" aria-label="plantuml (opens in a new tab)" href="http://plantuml.com" target="_blank">plantuml</a>; thank you very much!</p>
<p><em>to be continued &#8230;</em></p>
Christina Hirth

http://heise.de/-4330258
Golo Roden: Einführung in Node.js, Folge 26: Let's code (comparejs)
JavaScript, like other programming languages, has operators for comparing values. Unfortunately, the way they work often runs counter to intuition. So why not rewrite the comparison operators as a module, paying attention to predictable behavior in the process?
Mon, 11 Mar 2019 11:36:00 +0100

https://yellow-brick-code.org/?p=1168
Christina Hirth: Continuous Delivery Is a Journey &#8211; Part 1
Sun, 10 Mar 2019 17:27:54 Z
<p>Last year my colleagues and I had the pleasure to spend 2 days with <a href="https://twitter.com/hamvocke">@hamvocke</a> and <a rel="noreferrer noopener" aria-label="@diegopeleteiro (opens in a new tab)" href="https://twitter.com/diegopeleteiro" target="_blank">@diegopeleteiro</a> from <a rel="noreferrer noopener" aria-label="@thoughtworks (opens in a new tab)" href="https://twitter.com/thoughtworks" target="_blank">@thoughtworks</a> reviewing the platform we created. One essential part of our discussions was about CI/CD described like this: <em>&#8220;think about continuous delivery as a journey. Imagine every git push lands on production. This is your target, this is what your CD should enable.&#8221;</em></p>
<p>Even if (or maybe because) this thought scared the hell out of us, it became our vision for the next few months, because we saw the great opportunities we would gain if we were able to work this way.</p>
<p>Let me describe the context we were working in:</p>
<ul><li>Four business teams, 100% self-organized, each owning 1&#8230;n <a rel="noreferrer noopener" aria-label="Self-contained Systems (opens in a new tab)" href="https://en.wikipedia.org/wiki/Self-contained_system_(software)" target="_blank">Self-contained Systems</a> (SCS), creating microservices running as Docker containers orchestrated with Kubernetes, hosted on AWS.</li><li>Boundaries (as in Domain-Driven Design) defined based on the business we were in.</li><li>Each team having full ownership and full accountability for its part of the business (represented by the SCS).</li><li>Basic heuristics regarding source code organisation: &#8220;share nothing&#8221; about business logic; &#8220;share everything&#8221; (in OSS manner) about utility functions, about the experiences you have had, the lessons you have learned, the mistakes you have made.</li><li>Ensuring code quality and software quality is 100% the team&#8217;s responsibility.</li><li>You build it, you run it.</li><li>One Platform-as-a-Service team to enable these business teams to deliver features fast.</li><li>GitLab as VCS, Jenkins as build server, Nexus as package repository.</li><li>Trunk-based development, no cherry-picking, &#8220;roll fast forward&#8221; over rollback.</li></ul>
<div class="wp-block-image"><figure class="aligncenter"><img src="https://www.yellow-brick-code.org/wp-content/uploads/Teams.png" alt="Teams" class="wp-image-1176" srcset="https://yellow-brick-code.org/wp-content/uploads/Teams.png 464w, https://yellow-brick-code.org/wp-content/uploads/Teams-300x231.png 300w" sizes="(max-width: 464px) 100vw, 464px" /><figcaption>4 Business Teams + 1 Platform-as-a-Service Team = One Product</figcaption></figure></div>
<p>The architecture we have chosen was meant to support our organisation: independent teams able to work and deliver features fast and independently. They should decide themselves when and what they deploy. In order to achieve this we defined a few rules regarding inter-system communication. The most important ones are:</p>
<ul><li>Event-driven architecture: no synchronous communication, only asynchronous communication via the domain event bus</li><li>Non-blocking systems: every SCS must remain functional (possibly in a reduced form) even if all the other systems are down</li></ul>
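<p>The first rule can be illustrated with a minimal in-memory sketch. A real setup would of course use a broker (Kafka, SNS/SQS, RabbitMQ or similar); the class name and event names here are purely illustrative.</p>
<pre><code class="language-python">from collections import defaultdict

class DomainEventBus:
    """Toy domain event bus: publishers never wait on consumers."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Fire-and-forget: a failing or absent consumer cannot block
        # the publishing system -- it only misses the event itself.
        delivered = 0
        for handler in self._handlers[event_type]:
            try:
                handler(payload)
                delivered += 1
            except Exception:
                pass  # one broken consumer must not affect the others
        return delivered
</code></pre>
<p>The non-blocking property follows the same idea: one common way to achieve it is for each SCS to keep its own copy of the data it needs, fed by such events, so it stays functional when its neighbours are down.</p>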
<p>We had only a couple of exceptions to these rules. As an example: authentication doesn&#8217;t really make sense in an asynchronous manner.</p>
<p>Working in self-organized, independent teams is a really cool thing. But </p>
<blockquote class="wp-block-quote"><p> with great power there must also come great responsibility </p><cite><a href="https://en.wikipedia.org/wiki/Spider-Man">Uncle Ben to his nephew</a></cite></blockquote>
<p>Even though we set some guard rails regarding the overall architecture, the teams still had ownership of the internal architecture decisions. As we didn&#8217;t have continuous delivery in place at the beginning, every team alone was responsible for deploying its systems. Due to the missing automation we were not only predestined to make human errors but also blind to the couplings between our services. (And of course we spent a lot of time doing stuff manually instead of letting Jenkins or GitLab or some other tool do this stuff for us <img src="https://s.w.org/images/core/emoji/11.2.0/72x72/1f914.png" alt="🤔" class="wp-smiley" style="height: 1em; max-height: 1em;" /> )</p>
<p>One example: every one of our systems had at least one React app and a GraphQL API as the main communication (read/write/subscribe) channel. One of the best things about GraphQL is the possibility to include the GraphQL schema in the React app, this way shipping the API interface definition inside the client application. </p>
<p>Isn&#8217;t this cool? It can be. Or it can lead to some very smelly behavior, to really tight coupling, and to the inability to deploy the app and the API independently. And just like my friend <a rel="noreferrer noopener" aria-label="@etiennedi (opens in a new tab)" href="https://twitter.com/etiennedi" target="_blank">@etiennedi</a> says: <em>&#8220;If two services cannot be deployed independently, they aren&#8217;t two services!&#8221;</em> </p>
<p>This was the first lesson we have learned on this journey: <strong>If you don&#8217;t have a CD pipeline you will most probably hide the flaws of your design.</strong></p>
<p>One can surely ask: &#8220;what is the problem with manual deployment?&#8221; &#8211; nothing, if you have only a few services to handle, if everyone in your team knows about these couplings and dependencies and is able to execute the very precise deployment steps to minimize the downtime. But otherwise? This method doesn&#8217;t scale, it is not very professional, and, the biggest problem, it ignores the possibilities offered by Kubernetes to safely roll out, take down, or scale everything you have built.</p>
<p style="text-align:center" class="has-background has-yellow-background-color"><strong>Having an automated, standardized CD pipeline as described at the beginning &#8211; with the goal that every commit lands on production within a few seconds &#8211; forces everyone to think about the consequences of his or her commit, to write backwards-compatible code, and to become a more considerate developer.</strong></p>
<p><em>to be continued &#8230;</em></p>
Christina Hirth