Piotr Gankiewicz – https://piotrgankiewicz.com
My personal page and blog about software development.

Distributed .NET Core – Episode 2
https://piotrgankiewicz.com/2018/12/18/distributed-net-core-episode-2/
Tue, 18 Dec 2018

The second episode is here. In this video, we go through CQRS and implement a basic HTTP API that is able to receive and handle a command. In the next episodes, we’ll be extending the newly created Discounts Service in order to make it work with the overall solution (the remaining microservices).
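
To give you a rough idea of the flow, here is a minimal sketch of a command and its handler – the Discounts-related names and the ICommandHandler interface below are illustrative assumptions, not the exact code from the repository:

```csharp
using System;
using System.Threading.Tasks;

// A command is a simple, immutable message describing the user's intent.
public class CreateDiscount
{
    public Guid Id { get; set; }
    public string Code { get; set; }
    public double Percentage { get; set; }
}

// Each command type gets a dedicated handler containing the business logic.
public interface ICommandHandler<in TCommand>
{
    Task HandleAsync(TCommand command);
}

public class CreateDiscountHandler : ICommandHandler<CreateDiscount>
{
    public Task HandleAsync(CreateDiscount command)
    {
        // Validate the command, create the discount, persist it, publish an event...
        return Task.CompletedTask;
    }
}
```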

https://www.youtube.com/watch?v=yqh0dN4oDTs

For more information, please navigate to the DevMentors.io website, where you can find additional links to forums, social media, repositories and so on.
Stay tuned for the next episodes!

Distributed .NET Core – Episode 1
https://piotrgankiewicz.com/2018/12/06/distributed-net-core-episode-1/
Thu, 06 Dec 2018

After almost 3 months since we released the Distributed .NET Core Teaser, and after a lot of refactoring, code improvements and custom libraries being implemented, we’d like to present the first episode of our online course, in which we talk about setting up your environment, starting the services and validating HTTP requests.

https://www.youtube.com/watch?v=s4fd3PRlOcw

For more information, please navigate to the DevMentors.io website, where you can find additional links to forums, social media, repositories and so on.
Stay tuned for the next episodes!

Distributed .NET Core – Teaser
https://piotrgankiewicz.com/2018/09/19/distributed-net-core-teaser/
Wed, 19 Sep 2018

Hi there, folks. It’s been a while (a quarter, to be exact) since I first announced the Microservices in .NET Core with DShop series as a part of the DevMentors idea. I do apologize for not being consistent back then; however, there was a single reason for it – together with Darek, we rewrote lots of code after gathering a lot of great feedback during our lectures.
May I present to you the very first teaser of our upcoming video series.

As you can see, there have been some quite significant changes (visible simply by looking at the repositories). All of the microservices are truly separate from each other (the only shared package is named DShop.Common – it’s a standard library for infrastructural or cross-cutting concerns). Let me quickly rephrase what we are going to focus on in the near future:

Orchestrating services on your VM or in the Cloud using Portainer or Rancher (built on top of Kubernetes)

You can find all of the information (and some helpful scripts) in the main DShop repository. Moreover, the HTTP API requests can be found here, assuming that you’d like to play with the solution on your own for the time being.

Last but not least, feel free to ask about anything related to this topic and if possible, please post your questions or thoughts on our forums.

.NET Core Microservices – theory, DShop solution structure
https://piotrgankiewicz.com/2018/07/09/net-core-microservices-theory-dshop-solution-structure/
Mon, 09 Jul 2018

In the previous post, being sort of a teaser, I made a brief introduction to the DShop project, as well as the idea behind the overall course. From now on, we’ll focus on the fundamental parts of DShop, including the theory behind a particular concept, its possible solutions, and eventually an implementation.

Foreword

At the time of writing this article, we’re still working on the project, and some of the concepts may change – we have received a lot of great feedback so far (during conferences, events and as comments), and there are always pros & cons to the chosen solution when it comes to distributed systems. We are going to share our own approach with you, yet it may evolve (hopefully for the better) over time, so just keep that friendly warning in mind :).

Speaking of theoretical concepts, when it comes to microservices there’s a lot of ground to cover (I strongly recommend reading this book); however, we’ll focus on the core comparison between the monolithic and the distributed world.

Single vs Multiple solution(s)

A monolith means a single solution (although it may contain a lot of projects referencing each other), which also means a single repository. On the other hand, in the microservices world, ideally you want to have a single repository per service. And why is that? Mostly because services should be separated from each other, and a single team should be responsible for a service’s implementation, deployment, maintenance and so on. You could keep everything in a single repository, or even use Git submodules, but in the real world you want your projects to be flexible and decoupled, so start by separating them into distinct repositories.

Domain vs Bounded Context knowledge

Given that you have heard a little about DDD, microservices turn out to be a good example of so-called bounded contexts. Just take a look at the DShop solution – there’s a service dedicated to products, orders, users and so on. Although they’re most likely to somehow work and communicate with each other, they represent different use cases and boundaries of your business. Simply put, if you think about separating services, you should think about bounded contexts as well. Start with a monolithic approach by putting them all into e.g. different folders, and once you figure out how to split them, these might be good candidates for unique services. Just beware that it’s usually a difficult task.

Immediate vs Eventual consistency

Sending a request to an HTTP API means receiving a response indicating either success or failure, correct? Well, yes – and if you get a 20X status code, it means that the data was somehow modified (as long as it’s not a GET request) at this particular moment. Again, that’s totally true as long as your API handles the request on its own. Otherwise, the request gets pushed further to the message bus and has to be consumed and handled by a specialized microservice. Which takes time. How much time? It could be milliseconds, or it could be hours, depending on the scenario. Whenever you work with a distributed system, there’s always this “delta of t(s)” during which the data is not consistent, as it has to be stored asynchronously by a different service.

Internal vs External communication

Handling a command? Getting results from a query? In a monolithic application, all of your components work with each other in the same process – your domain objects, application services or controllers – thus they communicate “internally”. In the distributed world, when service A has to talk to service B, it means sending an HTTP request to service B, which may be hosted on the other side of the world – this may take time and is vulnerable to network partitioning.

Single vs Multiple technology/ies

One of the beauties of microservices is being able to use the best technology available (language, framework, library etc.) in order to solve a particular use case. You can have 10 microservices written in 10 different programming languages and still create a top-notch system. It’s also one of the reasons why you should strive for separate repositories for your projects.

Vertical vs Horizontal scalability

Let’s say you have a monolithic application and it handles some requests that are either CPU intensive or just take a significant amount of time or resources to complete, whatever the reason is. You can add more cores or RAM (vertical scaling), but it might cost a lot, and there’s a limit to such upgrades. You could also scale your application horizontally by adding more instances and spreading the workload across different servers (putting a load balancer on top of it), yet again – some requests may be literally “draining” your application until it becomes unresponsive. If you work with microservices, you can easily split out e.g. these demanding parts – why not have a single instance of the HTTP API and 5 instances of the Worker Service, and just ask the workers to process the heavy requests? Your API shall remain responsive all the time, and you can easily scale out (horizontally) the microservices that have to do a lot of work.

Single vs Multiple unit(s) of deployment

Deploying a monolithic application is quite easy; however, deploying N services? Well, that’s not an easy task. Although you can use a variety of tools and have fully automated CI & CD, there are still a lot of things that might go wrong.

Single vs Multiple points of failure(s)

This one is a sort of continuation of the previous point. If your deployment fails or your monolithic application crashes, everything is down and nothing works. On the other hand, if one service goes down in a distributed scenario, the rest of the system might (and probably should) still work. Well, maybe you won’t be able to add a comment in your blogging application, but you will still be able to read an article. However, it’s not an easy task to make your services resilient and behave properly when one of their dependencies (other services) goes offline.

Easily vs Hardly maintainable

Maintaining an application is rarely an easy task, as it all depends on the size of a project, its architecture, the patterns being used and the code quality. However, it tends to be much easier to work with a single solution that knows about everything and handles all of the requests than with a solution that has to rely on other applications deployed somewhere far, far away. Whether it’s deployment, external communication or not breaking the data contracts – these are just a few of the things that usually make it more difficult to build a distributed application.

Synchronous vs Asynchronous

I’m not telling you to stop using the Task, async and await features. In this context, asynchronous means that handling an HTTP request sent to the API will not yield an immediate result. It will be sent to the message bus and then processed by a specific service. The caller of the API might get a 202 (Accepted) status code, which means that the request is being processed, but it might take some time (the already mentioned “delta of t(s)”) before it’s completed.
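
As a rough illustration of this pattern (the IBusPublisher abstraction and the CreateOrder command below are assumptions made for the sake of the example, not an exact API), such an endpoint could look like this:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class CreateOrder
{
    public Guid Id { get; set; }
}

// A hypothetical message bus abstraction.
public interface IBusPublisher
{
    Task PublishAsync<TMessage>(TMessage message);
}

[Route("orders")]
public class OrdersController : Controller
{
    private readonly IBusPublisher _busPublisher;

    public OrdersController(IBusPublisher busPublisher)
    {
        _busPublisher = busPublisher;
    }

    [HttpPost]
    public async Task<IActionResult> Post([FromBody] CreateOrder command)
    {
        // The API does not process the command itself - it only publishes it
        // to the message bus and acknowledges that the request was accepted.
        await _busPublisher.PublishAsync(command);

        // 202 Accepted: the request will eventually be handled by another service.
        return Accepted($"orders/{command.Id}");
    }
}
```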

Tight vs Loose coupling

Loose coupling in this case means that, since microservices are treated as separate applications (which can be hosted anywhere), we might think of them as components that talk to each other but are totally separate and concentrate on their own part of the overall domain. We could even state that a well-designed microservice should imply high cohesion.

Generic vs Specific usage

Although there’s been a lot of hype over the last years, and almost everyone desires to implement microservices without giving it a second thought, let’s face a rather brutal truth: most of the time it makes no sense. Most of the time, a monolithic application will be the best choice. It will save you a lot of time otherwise spent resolving the things that happen only in a distributed world. Honestly, I started playing with this architecture almost 2 years ago, and during that period, quite a few times I had to figure out some really weird things on my own. As you will notice during our journey, even deploying a project having a different set of packages is not a trivial task, not to mention other, more complex scenarios. However, isn’t our life also all about learning? Getting to know different programming paradigms or patterns is the way to go (at least for myself). And if implemented correctly, microservices can give your application quite an advantage: no more bottlenecks, distributed workload, full asynchronicity, and finally, adding “microservices knowledge” to your resume sounds cool.

DShop solution structure

Once again, below is the current solution structure.

[Image: DShop services – solution structure]

Let’s briefly discuss, one by one, what each repository is about:

DShop – nothing special about this one, just a set of common scripts (e.g. Docker ones).

Api – the so-called API gateway, an entry point to the whole system – the end user communicates mostly with this one, except for the Identity Service, which handles the registration and authentication process.

Common – helper methods and classes, a sort of infrastructural project (e.g. authentication, database connectors and so on), used as a NuGet package.

Some of these projects may be updated – for example, when you consider a common messages package vs. a messages package per service vs. no packages at all; each of these approaches has its pros & cons. Going further, you could keep generic utilities together (as we do in the Common package), split them into smaller packages, or have no packages at all and simply require each service to implement e.g. the same code to handle the MongoDB connection. We’ll discuss these approaches in the upcoming posts, so stay tuned.

.NET Core Microservices – DShop
https://piotrgankiewicz.com/2018/07/05/net-core-microservices-dshop/
Thu, 05 Jul 2018

It’s been a while since I published the latest article, but it’s high time to finally get into the topic of microservices for real. Do open source, .NET Core, distributed systems, Docker and other cool words sound good to you? If that’s the case, stick with me and let me guide you through the world (or at least a part of it) of microservices. This is going to be the very first article (an introduction) of the upcoming series.

Foreword

A few months ago, I had an idea to publish a detailed course about implementing microservices in .NET Core. It turned out that a friend of mine, Darek, had thought of a similar concept – so we teamed up, created the distributed application (available on GitHub) and gave a few lectures (close to 10) during IT events and conferences here in Poland. The idea of recording a video course is still there (and sooner or later it will be published), yet for now we need to polish some remaining bits of the application.

Nevertheless, whether you are a microservices expert or a beginner, whether you have read the book published by Microsoft and studied the eShopOnContainers repository (or not), let me introduce you to DShop (Distributed Shop) – a brand new solution containing over 15 repositories, including an API Gateway + 8 microservices written totally from scratch using the latest version of ASP.NET Core (2.1.1). Hopefully, it will be a starting point for some of you who wanted to get into the world of the microservices hype but had no idea where to start, or got stuck somewhere during the journey.

DShop

As mentioned before, DShop is simply an acronym for Distributed Shop.
Why another online shop? For a single reason – this domain is usually understood by everyone (including developers), at least at its very basic level: products, shopping carts, orders etc. If you’re into the world of DDD, these are also pretty good bounded contexts that can be treated as separate microservices. And trust me – we (Piotr and Darek, the core developers behind the project) made everything as simple as possible (besides some generic reflection magic and other quirks), so you should be able to understand quite quickly what’s going on, simply by looking at the domain models or application services (handlers).

Source

The whole idea behind DShop was to make it for you – the programmers – so that you can take a look at the code, play with it, validate your own ideas, copy our code and use it in your own projects, or point out our mistakes – whatever makes you a better software developer and provides valuable content. Thus, you can download the whole source code from GitHub; just keep in mind that we update the repositories from time to time, fix bugs, refactor the code and extend some features – it’s an ongoing project.

Tech

One of our goals was to make the solution agnostic of cloud providers that offer special services (e.g. Azure Service Bus or AWS Lambda). Basically, you can run DShop anywhere – on a local machine, a private server or in any cloud. Let’s take a glimpse at what technologies and tools are being used to make it work:

.NET Core – API Gateway and all microservices are written in C# and ASP.NET Core (2.1.1)

RabbitMQ – one of the most popular message buses out there, with RawRabbit as the client library

Docker – containers are everywhere, so there’s a Dockerfile for each service and Docker Compose on top of it

Travis CI – build service, free to use for open source projects hosted on GitHub

Docker Hub – Docker images repository, where DShop images are being published

Rancher – enterprise management for Kubernetes, open source and easy to use

There are of course some other tools and libraries being used, e.g. Angular 6 for the web application (not finished yet), but let’s leave it for now – we’ll talk about them in the next posts.

Start

Each project has its own repository. We’ll discuss the solution structure in the upcoming articles, but these are the most important projects (besides the bash scripts that were simply copied from the base DNC-DShop repository).

[Image: DShop projects]

In order to start DShop, you need to have RabbitMQ, MongoDB and Redis up and running (not to mention the latest version of the .NET Core SDK). You can also easily start these through Docker – just take a look at the following script.
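
A minimal version of such a script, assuming default ports and no persistent volumes (so treat it only as a local development convenience), might look like this:

```bash
#!/bin/bash
# Start the infrastructure required by DShop as Docker containers.
docker run -d --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management
docker run -d --name mongo -p 27017:27017 mongo
docker run -d --name redis -p 6379:6379 redis
```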

Once the required services are available, you can start all of the microservices (projects named DShop.Services.Xyz, 8 in total) and the API Gateway (DShop.Api), either one by one or via Docker. You can also make use of this script, which loops through each repository and starts the project – just keep in mind to put it into the root directory where the remaining projects are (as shown in the screen above).

Assuming that everything is up and running, you will find the DShop.rest file, which uses the REST Client extension for VS Code – give it a try and send a few HTTP requests to the API.

That being said, I encourage you to explore the source code and play with it. Stay tuned, as in the next posts we’ll go through the particular microservices’ implementation, talk about the pros & cons of distributed systems and many, many other concepts that sometimes are not clearly visible at first glance.

Oh, and finally – check out DevMentors.io and subscribe to our social media channels if you want to know when the video course is completed.

P.S.

If you know Polish, take a look at the following video that was recorded quite recently – here, we talk about DShop and the core aspects of microservices.

]]>https://piotrgankiewicz.com/2018/07/05/net-core-microservices-dshop/feed/88Canceling JWT tokens in .NET Corehttps://piotrgankiewicz.com/2018/04/25/canceling-jwt-tokens-in-net-core/
https://piotrgankiewicz.com/2018/04/25/canceling-jwt-tokens-in-net-core/#commentsWed, 25 Apr 2018 04:48:32 +0000http://piotrgankiewicz.com/?p=4136Continue reading →]]>Quite some time ago I published an article (along with the source code) about refreshing the JWTtokens. In the following post, I’m going to focus on canceling the token, thus it can’t be used by anyone else. This tutorial includes the video, so it might be easier to understand the implementation flow.

Given that we do not make use of OAuth (IdentityServer etc.), what can we do in terms of canceling the active tokens? We have a few options:

Remove the token on the client side (e.g. from local storage) – this will do the trick, but doesn’t really cancel the token.

Keep the token lifetime relatively short (5 minutes or so) – most likely we should do it anyway.

Create a blacklist of tokens that were deactivated – this is what we are going to focus on.

The important note is that in order to make it reliable, we will use Redis to store the deactivated tokens on an extremely fast caching server. Whether you host just a single instance of your application or multiple ones, it’s best to use Redis – otherwise, when the server goes down, you will lose the whole blacklist of deactivated tokens kept in the default in-memory cache (not to mention that each server would keep its own, different cache).

Alright, no more theory – let’s proceed with the coding, where we will start with the interface.
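
A minimal sketch of such an interface (the member names here are just an example, not necessarily the original ones) could be:

```csharp
using System.Threading.Tasks;

// Manages the blacklist of deactivated (canceled) JWT tokens.
public interface ITokenManager
{
    Task<bool> IsCurrentActiveTokenAsync(); // token from the current request
    Task DeactivateCurrentAsync();          // blacklist the current request's token
    Task<bool> IsActiveAsync(string token);
    Task DeactivateAsync(string token);
}
```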

And proceed with its implementation, where the basic idea is to keep track of the deactivated tokens only and remove them from the cache when they are not needed anymore (meaning when the expiry time has passed) – they will no longer be valid anyway.
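
A sketch of such an implementation, backed by IDistributedCache (Redis) and reading the current token from the Authorization header, might look as follows – the hardcoded 5-minute expiry is a placeholder for the configured JWT lifetime:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Primitives;

public class TokenManager : ITokenManager
{
    private readonly IDistributedCache _cache;
    private readonly IHttpContextAccessor _httpContextAccessor;

    public TokenManager(IDistributedCache cache, IHttpContextAccessor httpContextAccessor)
    {
        _cache = cache;
        _httpContextAccessor = httpContextAccessor;
    }

    // A token is active as long as it has no entry in the blacklist.
    public async Task<bool> IsActiveAsync(string token)
        => await _cache.GetStringAsync(GetKey(token)) == null;

    public Task DeactivateAsync(string token)
        => _cache.SetStringAsync(GetKey(token), "deactivated",
            new DistributedCacheEntryOptions
            {
                // Keep the entry only as long as the token itself could still be valid.
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
            });

    // Helper methods that use the current HttpContext.
    public Task<bool> IsCurrentActiveTokenAsync() => IsActiveAsync(GetCurrentToken());

    public Task DeactivateCurrentAsync() => DeactivateAsync(GetCurrentToken());

    private string GetCurrentToken()
    {
        var authorizationHeader = _httpContextAccessor
            .HttpContext.Request.Headers["Authorization"];

        return authorizationHeader == StringValues.Empty
            ? string.Empty
            : authorizationHeader.Single().Split(' ').Last();
    }

    private static string GetKey(string token) => $"tokens:{token}:deactivated";
}
```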

As you can see, there are 2 helper methods that use the current HttpContext in order to make things even easier.
Next, let’s create a middleware that will check whether the token has been deactivated. That’s the reason why we should keep the tokens in a cache – hitting the database with every request instead would probably kill your app sooner or later (or at least make it really, really slow).
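
A minimal version of such a middleware (assuming the ITokenManager sketched above) could be:

```csharp
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Short-circuits any request that carries a blacklisted (deactivated) token.
public class TokenManagerMiddleware : IMiddleware
{
    private readonly ITokenManager _tokenManager;

    public TokenManagerMiddleware(ITokenManager tokenManager)
    {
        _tokenManager = tokenManager;
    }

    public async Task InvokeAsync(HttpContext context, RequestDelegate next)
    {
        if (await _tokenManager.IsCurrentActiveTokenAsync())
        {
            await next(context);
            return;
        }
        context.Response.StatusCode = (int)HttpStatusCode.Unauthorized;
    }
}
```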

For sure, we could make it more sophisticated, e.g. by passing the token via the URL, or by canceling all of the existing user tokens at once (which would require additional implementation to keep track of them), yet this is a basic sample that just works.

Make sure that you register the required dependencies in your container and configure the middleware.
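
The relevant Startup fragment might look roughly like this (the Redis connection string and the instance name are assumptions):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        // Redis-backed distributed cache used for the tokens blacklist.
        services.AddDistributedRedisCache(options =>
        {
            options.Configuration = "localhost"; // assumed Redis connection string
            options.InstanceName = "jwt:";
        });
        services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
        services.AddTransient<ITokenManager, TokenManager>();
        services.AddTransient<TokenManagerMiddleware>();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseAuthentication();
        app.UseMiddleware<TokenManagerMiddleware>();
        app.UseMvc();
    }
}
```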

Try to run the application now and invoke the token cancelation endpoint – that’s it.
Source code is available here.

Warden 2.0
https://piotrgankiewicz.com/2018/02/19/warden-2-0/
Mon, 19 Feb 2018

It’s been a while since I last published a post. There are some projects, courses and events going on, thus I didn’t want to write just about anything. Nevertheless, I decided to get back to some of my core open source projects, as a few of them hadn’t received any update for way too long. And here it is – the Warden project is back.

Warden is my first open source project ever that actually gained some traction (over 500 stars so far). It was also one of the reasons I started this blog 2 years ago. The idea behind it is quite simple – back then, I was looking for a tool (to be specific, a framework or library) that would help me write my own monitoring application. And it wasn’t just about monitoring – I also needed to know when some specific error happens (a callback), so that I could write code and react to it, e.g. restart the VM if the API is not responding anymore, or run a query on a database. I couldn’t find anything like that written in C#. Therefore, I thought it might be a good idea to create it on my own. During those times, the .NET Core framework was also going through some heavy development (do you remember the early alpha, beta, preview, whatever releases?); however, despite all of these issues, it seemed like a good fit for creating a truly cross-platform solution.

You may find pretty much everything (code, docs, samples etc.) about Warden on its landing page, in the main repository and in the GitHub organization. What I wanted to mention here is that after a long break (over half a year), I finally decided that it needed an update. What changed? I included full compatibility with .NET Standard 2.0, moved all of the extensions (over 15 projects) into their own repositories, updated the documentation and so on. These are not major changes; however, given that everything is now up to date with the latest version of .NET Core and resides in its own repositories, it makes it much easier to organize the work and further development.

Finally, we also had some plans to create a brand new web application along with microservices on the backend side, in order to provide a real-time monitoring system that you could use on your own or as a SaaS model. There are quite a lot of additional repositories (Warden.Services.Xxx), and I also wrote about this idea here, but we stopped the development due to some other projects and activities that consumed our time back then. However, I do hope and believe that one day we’ll get back to it – and if you’re interested in the open source contribution, just leave a comment or send me a message.

2017 summary
https://piotrgankiewicz.com/2017/12/30/2017-summary/
Sat, 30 Dec 2017

This year was simply phenomenal – so many things happened (some of them totally out of the blue), and I never thought it was even possible to achieve so much during a single year. If I were to choose the most important thing that I learned (besides the technology topics and related activities), it would be investing your time in sharing your knowledge for free with others via local meetups, conferences, groups, workshops or course recordings.
And by others I mean both already experienced programmers and the regular folks who are considering whether programming is the right choice for them, or who have just started their IT career. Anyway, let me point out the best things and events that I was part of this year.

Work

I didn’t spend too much time working this year. Don’t get me wrong – I worked a lot, but speaking of the typical freelance/commercial stuff, maybe for a quarter in total? For the rest of the time, I was spending the money that I had saved up in the past years and dedicated this time purely to speaking at meetups and workshops throughout all of Poland. Even more importantly, I chose to develop our own “after hours” open source projects, as I believed it would be worth it. However, speaking of regular work, I’m happy to say that at first I worked with Nexta (they found me thanks to this article). After a month or so, my colleague Grzegorz joined, and we worked together for them – later on, I decided to drop the job and focused on the .NET Core Tour as well as developing Collectively, while Grzegorz teamed up with his friend Kamil, and now they continue the development – well done, guys. Starting in October, I joined the Verve Industrial team – again, they found me thanks to my GitHub projects, and so far I’m delighted with the collaboration.

.NET Core Tour

I remember it as if it were today – it was March, some Sunday, and I was supposed to give the final answer by Monday on whether I wanted to join one of the best programming schools here in Poland to prepare a .NET course from scratch. That was indeed a difficult decision, as I fancied teaching people programming in general, yet on the other hand, I didn’t want to make it my full-time job. And exactly then, a friend of mine, Łukasz, asked me if I wanted to join him and travel across all of Poland speaking about .NET Core. Without giving it a second thought, I decided to drop the job at the programming school and join Łukasz. That was an exceptional experience – I encourage you to read about it here.

Events

Besides the .NET Core Tour, I was also a speaker at some well-recognized conferences, such as 4Developers (both in Warsaw and Gdańsk) and a few others. Again, such a neat experience that gives you even more confidence. Speaking from personal experience: start small, with local groups or meetups, and then go to the big conferences like these.

Mentoring

Starting early this year, by giving my first workshops ever for the WiT (Women in Technology) group, I discovered that it’s something that makes me happy. I never thought that teaching others, whether about programming basics or more advanced topics, might be such a remarkable experience. I ran whole-day seminars, both solo and with Łukasz during the .NET Core Tour. I also had a chance to do some paid workshops for companies such as Microsoft. Eventually, with a friend of mine, Paweł, we were given an opportunity to run postgraduate studies, strictly about .NET Core and programming basics. Moreover, I also became a mentor in the 3rd edition of TechLeaders.

Video courses

Before I started doing regular workshops, I created the “Becoming a Software Developer” course, which received a lot of positive feedback. Later on, I was asked by different companies to record some paid courses, and eventually I created 3 of them – one in Polish and the remaining 2 in English; you can find the links to all of the sessions here. I also started publishing more video content to my YouTube channel and realized that Snapchat is also a pretty handy tool. Feel free to follow me (spetzu) and the rest of the developers.

Collectively

I am proud to say that after spending a year (me and a few friends of mine) on a rather typical open source, after-hours project named Collectively, we managed to get it up and running here in Kraków. On top of that, we were chosen as 1 of 3 projects that will receive funding to develop the Blockchain technology underpinning our application. You can find more about Collectively on its landing page and in the social media (yes, there was even a press conference).

Open Source

Pretty much all of the code I write nowadays is fully open source and can be found on my GitHub profile (or the Noordwind one, or any other organization that I’m part of). I will continue doing so, especially now, given that we’re going to develop Collectively for real, in a group of 8-10 people. Almost 2000 contributions this year, and I ain’t gonna stop that flow.

Noordwind

Since I’m one of the 4 co-founders of Noordwind, I’m very happy to announce that we have started to grow as a teal organization. A few more members joined us (friends and friends of our friends), and even though we’re usually working with different customers and on different projects, we still manage to find time to talk regularly about organizational stuff, think about next steps and so on. We’re more like a network of developers and other skilled members, able to deliver software services (but not only these). I wish that we continue to grow exactly this way.

Microsoft MVP

At the beginning of December, I was awarded the Microsoft MVP title (not the LeBron James kind) as one of the 3 awardees from this part of Europe. Thank you very much, Microsoft, for such a reward!

Crypto@Cracow

Although I’m not a part of the C@C team, I must admit that I’m very glad to have been a sort of catalyst that brought together its 2 core members (Tomasz & Tomasz). It’s astonishing that, starting from scratch, a group of 5 people managed to build one of the most popular Meetup groups here in Poland, with on average 150 attendees per event just in Kraków. Good luck in the future, guys, with delivering great content about Blockchain and cryptocurrencies.

2017 summary

Most likely I forgot about something, but that doesn’t really matter. As you can see, I do a lot of different things, as it keeps me motivated and happy. I always wanted to be fully independent at some point, which basically means having my own projects that I can develop and benefit from. I do believe that 2018 is going to be the year dedicated to such activities. I do not want to make any commitments here, as you never know whether your idea will work out. Nevertheless, if you never try, you will never find out – so just wish me good luck, and see you in 2018!

JWT refresh tokens and .NET Core
https://piotrgankiewicz.com/2017/12/07/jwt-refresh-tokens-and-net-core/
Thu, 07 Dec 2017

In this article, I will present a basic implementation of the refresh token mechanism that you can extend to your own needs.

Let’s start with the need for using refresh tokens. When you make use of token authentication (e.g. OAuth) and pass the tokens via the Authorization HTTP header, these tokens usually have a specific expiration time. Whether it’s a minute, 10 minutes, an hour or a week makes no big difference, as long as you can provide a way to generate a new token.

Most likely, you don’t want the user to log in every time the token expires. On the other hand, you don’t want to store the user credentials (email, login, password etc.) somewhere in memory (whether it’s a device, a cookie or local storage). What can you do then? Store the so-called refresh tokens instead, which can be used to recreate the access tokens.

You can download the whole sample by cloning the repository; the HTTP requests are available as a Postman collection. Now, let’s start with the implementation – just beware that I’m not following any specialized patterns, rich domain models and so on here. It’s just a sample that works, not a sophisticated solution.

The logic is very simple here – just ensure that the refresh token exists and that it has not already been revoked, so it can be used again and again. And when should a new refresh token be created? For example, when the user authenticates.
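
To make the flow concrete, here is a minimal, illustrative sketch – the in-memory list and the placeholder JWT factory stand in for a real database and token generation:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class RefreshToken
{
    public string Username { get; set; }
    public string Token { get; set; }
    public bool Revoked { get; set; }
}

public class RefreshTokenService
{
    private readonly List<RefreshToken> _tokens = new List<RefreshToken>();

    // Called when the user authenticates - create and store a new refresh token.
    public RefreshToken Create(string username)
    {
        var refreshToken = new RefreshToken
        {
            Username = username,
            Token = Guid.NewGuid().ToString("N"),
            Revoked = false
        };
        _tokens.Add(refreshToken);
        return refreshToken;
    }

    // Exchange a valid, non-revoked refresh token for a new access token (JWT).
    public string RefreshAccessToken(string token)
    {
        var refreshToken = _tokens.SingleOrDefault(t => t.Token == token);
        if (refreshToken == null)
            throw new Exception("Refresh token was not found.");
        if (refreshToken.Revoked)
            throw new Exception("Refresh token was revoked.");

        return CreateAccessToken(refreshToken.Username);
    }

    public void Revoke(string token)
    {
        var refreshToken = _tokens.SingleOrDefault(t => t.Token == token);
        if (refreshToken != null)
            refreshToken.Revoked = true;
    }

    // Placeholder - a real implementation would issue a signed JWT here.
    private string CreateAccessToken(string username) => $"jwt-for:{username}";
}
```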

.NET Core DevOps with Docker, Travis CI and Rancher – part 2
https://piotrgankiewicz.com/2017/11/03/net-core-devops-with-docker-travis-ci-and-rancher-part-2/
Fri, 03 Nov 2017

Welcome to the second part about DevOps (here is the first one) and automating the deployment of .NET Core apps with the usage of Docker, Travis CI and Rancher. The purpose of this tutorial is to show you that setting up CI & CD for the projects that you’re working on is not as complex as it may seem at first glance. The slides for the presentation can be found here.

In this episode, we will focus purely on Rancher, which is a great tool for managing containers and the overall infrastructure. Let me point out some of its most important features.

The need

You might be wondering why such an orchestration tool is even needed in the first place. Just think about the following tasks: managing the hosts (virtual machines), scaling, distributing containers across different hosts, upgrading, deploying new versions with the ability to roll back, load balancing, setting up certificates and many other things.
These are just a few important reasons why tools such as Rancher are really useful.

Cluster, environments and hosts

We can have distinct clusters where different environments run. A particular environment, e.g. production, can be assigned only to a single cluster, but on the other hand, a cluster can have multiple environments assigned, e.g. a testing cluster that includes specific testing environments. To each cluster, we can add one or more hosts (virtual machines) to which all of our services will be deployed using Docker containers. We can also label our hosts (as well as services) in order to control where we would like to deploy the containers (for example, only onto specific virtual machines).

Stack

In order to run our containers, we can either add a container available in Docker Hub (or any other registry that we add) or provide a stack file that is compatible with e.g. the docker-compose file definition. And that’s all it takes to deploy the service(s). Once deployed, we can easily upgrade them, edit their settings, add more instances that will be spread across the available hosts, monitor resource usage, browse logs and even execute a shell.
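
For illustration, a minimal stack file in the docker-compose format could look like this (the image and service names are made up for the example):

```yaml
version: '2'
services:
  api:
    image: myaccount/my-api:latest   # image published to Docker Hub
    ports:
      - "5000:5000"
    environment:
      - ASPNETCORE_ENVIRONMENT=Production
  worker:
    image: myaccount/my-worker:latest
    links:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3-management
```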

Load balancer

It is a good approach to keep most of the containers running in a private network without exposing their ports, and only make them available via the load balancer (HAProxy) that takes care of redirecting the incoming requests, as well as setting up certificates, proper subdomains and so on.

.NET Core DevOps with Docker, Travis CI and Rancher – part 1
https://piotrgankiewicz.com/2017/10/23/net-core-devops-with-docker-travis-ci-and-rancher-part-1/
Mon, 23 Oct 2017

Welcome to the first part about DevOps and automating the deployment of .NET Core apps with the usage of Docker, Travis CI (I’ll also mention how to use BitBucket Pipelines) and Rancher. The purpose of this tutorial is to show you that setting up CI & CD for the projects that you’re working on is not as complex as it may seem at first glance. The slides for the presentation can be found here.

At first, let’s discuss what the aim is and what technologies we are going to use. In the first part, we will not make use of Rancher – that topic will be covered in the second episode.

.NET Core

The cross-platform and open source framework for building applications using the C# language. In this tutorial, we will make use of the Fibon project, which is a simple, distributed application that we built during the .NET Core Tour in order to show what .NET Core is capable of and how easily such an application can later be deployed to the cloud.

Docker

An open platform to build, deploy and run your applications in a distributed way by using the concept of containers. Thanks to this technology, we are able to create a so-called image of our application and then run it inside a container, which will make use of the underlying operating system and provide a separate environment for the application that we want to start. It is a much faster (and different) approach than using typical virtual machines.

Travis CI

One of the available build servers (amongst others such as TeamCity, Jenkins and BitBucket Pipelines – the last one is also mentioned in the video) that is capable of building our project whenever there’s a new push to the Git repository. It is easy to start with, includes powerful extensions and is free for open source projects.

Goals

During the first part, we want to achieve the following goals:

Travis CI builds the project once the code is pushed to the Git repository.

Docker image will be pushed to the Container Registry.

The Docker image will be ready to be deployed on the server.

The application will run on a Linux VM (Ubuntu Server) via Docker Compose.

As you can see, although there are quite a few things that are fully automated, we are still missing the parts related to the automated deployment of new Docker images, orchestration and services management. This is something that we will cover in the next part with the help of Rancher. For now, follow the screencast available above (you can also download it here) and take a look at the slides.

Partial update your .NET Core HTTP API resources
https://piotrgankiewicz.com/2017/10/05/partial-update-your-net-core-http-api-resources/
Thu, 05 Oct 2017

Today, I was struggling with the idea of so-called partial updates. Imagine the following scenario, which is actually quite a common one: you’d like to update some resource in your HTTP API, for example a product object. However, such an entity may contain a lot of properties, tens or even hundreds, and you want to change only its name, or maybe a few more things as well (it doesn’t really matter). And that’s where JSON Patch comes in really handy.

If you were to use a typical PUT request, you’d probably have to send the whole object including all of its properties, which might result in unnecessary bandwidth usage, amongst other things such as possible mistakes (e.g. missing fields). You could also expose POST endpoints for atomic operations, such as /products/1/name, that would be responsible for updating a single property – yet again: multiple updates, multiple calls, not to mention the backend mess caused by implementing so many different operations.

And the solution for all of this is JSON Patch – similarly to GraphQL, you are free to include only the properties that you want to update. For our product entity, such an HTTP PATCH request could look like the example below.
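
A sample request, with illustrative property names:

```http
PATCH /products/1 HTTP/1.1
Content-Type: application/json-patch+json

[
    { "op": "replace", "path": "/name", "value": "Super product" },
    { "op": "replace", "path": "/price", "value": 99.99 }
]
```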

As you can see, by following the RFC 6902 standard, you can easily define what should be done with your resource, and you have multiple operations to choose from, e.g. “add” or “replace” – you can find the whole description along with examples in one of the links above.

Finally, there are already working implementations of JSON Patch for ASP.NET Core, and there’s also another one that supports Nancy (I’ve just started using it and can verify that it works as expected). Since I really like the command & event handlers pattern and have been working with distributed systems recently, e.g. in our Collectively platform, I found it really helpful, as I can almost directly map the incoming request to a particular update command that gets pushed to the service bus automatically, and I don’t need to do any additional property checking or assigning at all.

Becoming a software developer – episode XXIV
https://piotrgankiewicz.com/2017/09/06/becoming-a-software-developer-episode-xxiv/
Wed, 06 Sep 2017

The idea behind distributed services is quite old, and the general concept is the following – instead of building a typical monolithic application (e.g. a single HTTP API that contains all of the business logic and processes the actual requests in a synchronous way), let’s pass the request further to some specialized service (please note that there can be many different services responsible for different tasks) and let such a service do its job.

Sounds smart? Sure. Simple? Not really. Whenever you choose to go with a distributed architecture, whether it’s going to be SOA (Service Oriented Architecture) or microservices, you have to keep in mind that the complexity of building such a system tends to be much bigger than in a typical monolithic application.

However, what is one of the main benefits? You can scale your application pretty much infinitely. Thanks to so-called horizontal scaling, you can add more resources (e.g. virtual machines or actual servers) that will contain more and more instances of services, and you can easily distribute the overall load – as opposed to vertical scaling (adding more CPU power, RAM etc.), which has its limits.

In this video, I’ll guide you through the basics of building distributed applications – I started studying this topic over a year ago, and although I gained some experience and knowledge which helped me to build e.g. Collectively, there are still many more topics that I haven’t discovered yet, due to the rather complex nature of such an architecture. Yet, don’t be afraid – you always need to start somewhere in order to get better, thus let me give you a quick introduction to the world of distributed systems.

The end

Thank you all very much for spending this time with me on that short journey. I hope that the whole series turned out to be helpful for at least some of you – and well, see you in the near future with more content!

Collectively – an open source platform for the citizens
https://piotrgankiewicz.com/2017/08/26/collectively-an-open-source-platform-for-the-citizens/
Sat, 26 Aug 2017

It’s been almost a year since we – the members of the Noordwind teal organization – started working on our own, fully open sourced project named Collectively: a platform for citizens that would help them report and discuss things that are important for their community and environment. On the 15th of September, there will also be a special event (including a press conference) held in Kraków related to our platform, so please consider yourself invited. And now, let me introduce what Collectively is all about.

And I just want to state that we will greatly appreciate any feedback, whether it’s your opinion about the idea, application, code, design or bugs, as we’re working on the solution on a daily basis and haven’t had a chance to test it with a broader audience yet.

The idea

It all began in September 2016, when we decided that it’d be nice if we could somehow report e.g. litter left in the woods using an application designed for this purpose. Thus, we started working on our own solution, and since we’re strong believers in open source software, which provides the transparency required to gain the real trust of the users, we also decided that everything would be created this way.

Everything we did was done using our own funds and free time, as we did not look for investors like a typical startup business. Since local communities and citizens lacked such a tool, we thought that someone should change that and deliver a platform that would help us improve the environment. We believed, and still do believe, that it’s high time that citizens – including me, you or your neighbors – take care of their city as well, instead of just relying on some public services or companies. You can think of it as one step towards the so-called Society 2.0.

The evolution

After some time, we realized that our solution doesn’t have to be tied only to negative things such as litter – it can be a much more open and universal tool that allows sending remarks (as we named them) about positive things as well. Eventually, we chose 4 categories: defects, issues, suggestions and praises – which means that you can report something that you don’t like, as well as something that you find nice and enjoyable.

The current

After a few months of work, we started talking to some municipalities in Kraków (which is where the idea was born) and finally gained some attention. We’re very happy to state that our first official partner is ZZM (Zarząd Zieleni Miejskiej) – the unit responsible for taking care of green areas, trees, parks etc. With their support and belief in the Collectively platform, we strongly believe that this will be the first, very important step towards changing the mentality of the citizens, by giving them a tool that is easy to use, fully open and transparent, and that provides a link between public services and citizens, so that each one of us can take care of the environment that surrounds us and has a direct impact on our lives.

The future

There are a lot of different scenarios that would further enhance the usage of the platform, just to name a few:

Partnership with more public services or companies in order to work with them together on relevant tasks.

Introduce gamification features to get the people more involved in resolving the sent remarks.

Make use of AI, such as image recognition services, in order to categorize the remarks more easily and distribute them to the relevant groups.

Add a new category for the micro jobs, so that people could ask for help and get paid for the resolved tasks.

Implement the Blockchain technology for the inner payments between the users (related to the micro jobs).

Allow the users to organize themselves and perform some actions together, or raise funds for particular goals.

Provide the identity (single account) that connects the citizens with the different public units.

The possibilities

The Collectively platform can be used for a variety of purposes, as it’s a pretty universal solution designed for reporting things (by sending remarks) that can be of any nature, which means that you can build your own tool on top of Collectively, suited to your custom needs. There are many features available out of the box, such as:

User registration, verification and authentication using email or Facebook.

Most of us work after hours, during our spare time; however, I personally decided to drop any freelance work over a month ago, which means that I am fully dedicated to developing the platform – this is my daily work, and it consumes most of my time.

4developers – Gdańsk 2017
https://piotrgankiewicz.com/2017/08/18/4developers-gdansk-2017/
Fri, 18 Aug 2017

The second edition of 4Developers – one of the biggest IT conferences here in Poland – is about to take place in less than a month. And I will be a speaker there once again.

In case you’ve never heard of 4Developers before (really?), just take a look at my summary of the previous edition that was held at the beginning of April this year. It was my first time speaking at such a big conference – I already had some experience, but it was gained mostly during meetups of local programming communities, while 4Developers was much more demanding due to the number of participants and its huge scale.

I’m happy to announce that you will be able to meet me again on the 18th of September in the beautiful city of Gdańsk (I really wanted to visit this place again this year, so this is really great), where I’ll once again be a speaker. The topic of my presentation is “.NET Core – from zero to deployment”, and I’ll share with you the experience and knowledge gained over 1.5 years of working with .NET Core and deploying applications using a variety of tools and services. If you’re wondering how to work with Docker, build servers, Linux, Rancher and other tools in order to automate building, testing and deploying .NET Core applications, as well as managing and orchestrating them, e.g. while dealing with distributed services, make sure you don’t miss my lecture – it will be all about practical examples anyway.

What more can I say? The previous edition, which took place in Warsaw, was a really nice event, and I’m pretty confident that in Gdańsk, although the overall number of “paths” will be lower, you will be happy with the outcome as well.

And if you don’t have a ticket yet, you can purchase one here and use the promo code 2017_ZnamSpeakera, which reduces the ticket price by 20%. Enjoy!

.NET Core Tour summary
https://piotrgankiewicz.com/2017/08/16/net-core-tour-summary/
Wed, 16 Aug 2017

It’s been over 2 weeks since the .NET Core Tour ended. In this summary, I’d like to share with you the journey that lasted for over 4 months.

It was March, and I was struggling with a particular job offer that was about preparing and then teaching a programming course at one of the best programming schools in Poland. After an interview, they wanted to hire me – I’d be mostly responsible for creating the content for the new .NET programming course and, once done with that part, I’d teach the students. It was Sunday, and I was supposed to give the final answer by Monday. On the one hand, I really wanted to do it (and I was fine with spending some time in Warsaw), but on the other, I wasn’t 100% sure it would be a good choice, due to my other tasks, responsibilities and lack of time.

[Image: .NET Core Tour]

And this is when a friend of mine, Łukasz Pyrzyk, asked me about doing a “.NET Core Tour”. I didn’t hesitate even for a second and pretty much immediately responded with a positive answer. Just so you know: I decided to drop a very well-paid job and instead chose to travel through almost the whole country and spend my free time sharing knowledge for free, because this is what I truly enjoy doing. In return, I got a lot of experience, both in giving lectures and, even more importantly, in running workshops; I got more recognition and met a lot of great people who are part of the programming communities in different cities in Poland.

We had almost 20 different events during our tour, including 5 workshops that lasted almost the whole day; we met with hundreds of people, traveled thousands of kilometers (mostly by train), received a lot of positive feedback and gained priceless experience. During June, I spent about half of the month away from home, and it was really crazy in terms of traveling, but there was a lot of fun as well.

Wrocław – 21.03.2017

Kraków – 29.03.2017

Warszawa – 03.04.2017

Wrocław – 06.04.2017

Białystok – 19.04.2017

Warszawa – 20.04.2017 (Channel9 Microsoft Poland)

Katowice – 17.05.2017

Wrocław – 24.05.2017

Łódź – 31.05.2017

Wrocław – 01.06.2017

Kraków – 03.06.2017 (Workshops)

Toruń – 06.06.2017

Lublin – 07.06.2017

Opole – 08.06.2017

Poznań – 21.06.2017

Łódź – 24.06.2017 (Workshops)

Wrocław – 01.07.2017 (Workshops)

Katowice – 08.07.2017 (Workshops)

Warszawa – 29.07.2017 (Workshops)

I’d like to thank everyone who supported us – our partners and sponsors, the leaders of the local programming communities and all of the participants. I hope that you enjoyed our lectures and workshops. And if you’re thinking about our .NET Core expertise and would maybe like to have such workshops in your company, take a look at the offering.
And finally, let me present a few photos from different events during the tour. You can also find some tweets under the #netcoretour tag.

[Photos from different events during the .NET Core Tour]

Becoming a software developer – episode XXIII
https://piotrgankiewicz.com/2017/07/31/becoming-a-software-developer-episode-xxiii/
Mon, 31 Jul 2017

Welcome to the twenty-third episode of my course “Becoming a software developer”, in which we will focus on the vast topic of DevOps, which is all about building, testing and deploying the application. And we will use Docker to help with the overall process.

All of the materials, including videos and sample projects, can be downloaded from here.
The source code repository is hosted on GitHub.

Scope

Publishing

Docker

Deployment

Automation

Abstract

Publishing

When we want to run our application in a proper manner, we should not really use dotnet run, which runs the app based on the provided C# source code. Instead, we should use only the assemblies compiled into .dll files and then execute dotnet application_name.dll, which will run the actual application that was already compiled, not the project with its source code. This can be done by running dotnet publish -c Release, where -c defines the configuration that should be used. We want to use Release so that the compiler can include optimizations for this environment, in order to ensure that our application will be truly prepared for production mode.

Docker

Docker is the most popular engine for running applications within so-called containers, although containers themselves have been used since the 70s. You can read about the differences between virtual machines and containers; the important part is that a container is very lightweight, runs on top of the current operating system (e.g. Windows or Linux), has its own separate environment and, most importantly, will behave in the same way as on our development machine – which means that we can be certain that, regardless of the infrastructure, our application will work and behave as expected. Below is a sample Dockerfile for a .NET Core application.
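
A minimal version could look roughly like this (the base image tag and the assembly name are assumptions for the example):

```dockerfile
FROM microsoft/dotnet:2.0-runtime
WORKDIR /app
# Copy the output of "dotnet publish -c Release" into the image.
COPY bin/Release/netcoreapp2.0/publish .
ENV ASPNETCORE_URLS http://*:5000
EXPOSE 5000
ENTRYPOINT ["dotnet", "Passenger.Api.dll"]
```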

And run it simply by typing the docker-compose up command. Finally, we can publish our own Docker image to a registry, e.g. Docker Hub, by creating an account here, adding a new repository (e.g. passenger-api) and then running the following commands:
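The exact commands were shown in the video; assuming the local image and the Docker Hub repository are both named passenger-api, and your_username is your Docker Hub account, they boil down to:

docker login
docker tag passenger-api your_username/passenger-api
docker push your_username/passenger-api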

Deployment

In this episode, I deployed the application to a virtual machine based on Ubuntu Server, running in the Digital Ocean cloud. I logged in to the VM by typing ssh root@46.101.203.190 and then executed the following commands:
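The full list is in the video; roughly, it boiled down to something like this (package names may differ depending on the Ubuntu version):

apt-get update
# install Docker, docker-compose and the Nginx HTTP server
apt-get install -y docker.io docker-compose nginx
# edit the reverse proxy configuration, as described in my Docker + Nginx post
nano /etc/nginx/sites-enabled/default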

Eventually, I restarted Nginx with the service nginx restart command, executed docker-compose up, and our application was available under the http://46.101.203.190 URL.

Automation

Is it possible to somehow automate this process of building, testing, publishing and deploying the application? Of course, and this is where tools like build servers come in handy. There are many of them and you can browse my previous posts where I described how to make use of such services. Here, I chose Travis CI, which is a really cool build server and it’s free for open source projects. Just sign in using your GitHub account, enable the integration for the selected repository, and place the .travis.yml file in the root folder of the solution:
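My actual configuration lives in the repository; a sketch of such a .travis.yml (the SDK version and script names are illustrative) could look like this:

language: csharp
dist: trusty
mono: none
dotnet: 1.0.4
services:
  - docker
script:
  - ./scripts/build-and-test.sh
  - ./scripts/publish-docker-image.sh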

The actual content of the build scripts can be found in the project repository; just keep in mind that you could also use pure bash commands instead.

Next

In the next episode, which will be the final one, we will do something special :).

]]>https://piotrgankiewicz.com/2017/07/31/becoming-a-software-developer-episode-xxiii/feed/6JWT RSA & HMAC + ASP.NET Corehttps://piotrgankiewicz.com/2017/07/24/jwt-rsa-hmac-asp-net-core/
https://piotrgankiewicz.com/2017/07/24/jwt-rsa-hmac-asp-net-core/#commentsMon, 24 Jul 2017 06:22:51 +0000http://piotrgankiewicz.com/?p=3934Continue reading →]]>Recently, I was struggling with SSO authentication. At first, I picked JSON Web Tokens, which of course are a legitimate option; however, since I decided to use HMAC, I was forced to share the secret key between different parties. Not so long ago, I switched to RSA instead, and I’d like to present both solutions using ASP.NET Core.

I will not dive into the details of how HMAC or RSA work, as I’m not an expert in that matter, yet there’s at least one main difference that you should be aware of. If you stick to HMAC, you’ll be forced to share the so-called secret key among different applications (e.g. services in a microservices architecture), given that you’d like to have the same valid token for different apps. And that’s not really the best idea, which is where RSA comes in handy – the service responsible for token generation uses the private key for signing the JWT, and all of the interested parties may use the public key to verify the token’s validity. I implemented this solution for our open source platform Collectively, which we’re building in a distributed way, and we need a sort of SSO mechanism in place.

Before we implement anything using C#, let’s prepare all of the stuff. In order to generate the secret key for HMAC, you could use e.g. this website. Just generate some random key, so it may look like this: GRQKzLUn9w59LpXEbsESa8gtJnN3hyspq7EV4J6Fz3FjBk994r.

Next, let’s create the RSA keys. In order to do that, we’ll use the openssl tool. Open your terminal and type the following commands:
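The file names are up to you; a 2048-bit key pair can be generated e.g. like this:

# generate the private key
openssl genrsa -out private-rsa.pem 2048
# extract the corresponding public key
openssl rsa -in private-rsa.pem -pubout -out public-rsa.pem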

There’s one more thing before we can actually use our RSA keys within a .NET Core application. We need to convert them into XML. It can be done here, and before we copy our shiny XML into the private-rsa-key.xml and public-rsa-key.xml files, let’s format it a little bit using this tool. Eventually, we should have the following 2 files that we will deploy with our application:
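The actual key material is of course omitted here; structurally, both files follow the standard .NET RSAKeyValue XML format – the public one contains only the modulus and the exponent:

<RSAKeyValue><Modulus>…</Modulus><Exponent>…</Exponent></RSAKeyValue>

while the private one additionally contains the P, Q, DP, DQ, InverseQ and D parameters.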

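Below is a rough sketch of a handler built around these keys (the class and method names are my assumptions, not the exact code from Collectively); note that FromXmlString() used here is the custom extension method described right after:

using System;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Security.Cryptography;
using Microsoft.IdentityModel.Tokens;

public class JwtRsaHandler
{
    private readonly SigningCredentials _signingCredentials;

    // Public key - used by every service to validate the incoming tokens.
    public SecurityKey IssuerSigningKey { get; }

    // The private key is optional - services that only validate tokens
    // can be configured with the public key alone.
    public JwtRsaHandler(string publicRsaKeyXml, string privateRsaKeyXml = null)
    {
        var publicRsa = RSA.Create();
        publicRsa.FromXmlString(publicRsaKeyXml);
        IssuerSigningKey = new RsaSecurityKey(publicRsa);
        if (string.IsNullOrWhiteSpace(privateRsaKeyXml))
        {
            return;
        }
        var privateRsa = RSA.Create();
        privateRsa.FromXmlString(privateRsaKeyXml);
        _signingCredentials = new SigningCredentials(new RsaSecurityKey(privateRsa),
            SecurityAlgorithms.RsaSha256);
    }

    public string CreateToken(string username)
    {
        var claims = new[]
        {
            // unique_name is required for User.Identity.Name to work.
            new Claim(JwtRegisteredClaimNames.UniqueName, username),
            new Claim(JwtRegisteredClaimNames.Sub, username)
        };
        var token = new JwtSecurityToken(claims: claims,
            notBefore: DateTime.UtcNow,
            expires: DateTime.UtcNow.AddMinutes(15),
            signingCredentials: _signingCredentials);

        return new JwtSecurityTokenHandler().WriteToken(token);
    }
}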
As you can see, such a handler could be used by all of the services, as the private RSA key part is optional. Inside the payload you might notice a custom claim, unique_name – this one is actually required if you want to get the current username using User.Identity.Name within an ASP.NET Core application.

The FromXmlString() used above is an extension method defined in the following way:
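Roughly, it parses the RSAKeyValue XML into RSAParameters – this is the well-known community snippet, needed because .NET Core 1.x did not ship RSA.FromXmlString():

using System;
using System.Security.Cryptography;
using System.Xml;

public static class RsaExtensions
{
    public static void FromXmlString(this RSA rsa, string xmlString)
    {
        var parameters = new RSAParameters();
        var xmlDoc = new XmlDocument();
        xmlDoc.LoadXml(xmlString);
        if (!xmlDoc.DocumentElement.Name.Equals("RSAKeyValue"))
        {
            throw new Exception("Invalid XML RSA key.");
        }
        foreach (XmlNode node in xmlDoc.DocumentElement.ChildNodes)
        {
            // Each element holds a Base64-encoded RSA parameter.
            var value = string.IsNullOrEmpty(node.InnerText)
                ? null : Convert.FromBase64String(node.InnerText);
            switch (node.Name)
            {
                case "Modulus": parameters.Modulus = value; break;
                case "Exponent": parameters.Exponent = value; break;
                case "P": parameters.P = value; break;
                case "Q": parameters.Q = value; break;
                case "DP": parameters.DP = value; break;
                case "DQ": parameters.DQ = value; break;
                case "InverseQ": parameters.InverseQ = value; break;
                case "D": parameters.D = value; break;
            }
        }
        rsa.ImportParameters(parameters);
    }
}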

The tokens can be validated here and the expiration date can be checked here, simply by copying the expires property, which is represented as a Unix epoch timestamp.

The full application and all of the source code can be downloaded from the GitHub repository. I hope that this one will get you started with JWT and SSO, which can be used to achieve a robust token-based authentication mechanism in your application.

]]>https://piotrgankiewicz.com/2017/07/24/jwt-rsa-hmac-asp-net-core/feed/27Becoming a software developer – episode XXIIhttps://piotrgankiewicz.com/2017/07/17/becoming-a-software-developer-episode-xxii/
https://piotrgankiewicz.com/2017/07/17/becoming-a-software-developer-episode-xxii/#commentsMon, 17 Jul 2017 05:36:22 +0000http://piotrgankiewicz.com/?p=3867Continue reading →]]>Welcome to the twenty-second episode of my course “Becoming a software developer” in which we will use SQL Server database along with Entity Framework Core library.

All of the materials including videos and sample projects can be downloaded from here.
The source code repository is being hosted on GitHub.

Scope

SQL Server

Entity Framework Core

Abstract

SQL Server

At first, we need to have access to the SQL Server where we could store our data. You can download SQL Server here and install it on your own or just connect to the instance running somewhere in the cloud or locally by using Docker like I did.

Once the SQL Server is installed, there are many tools that provide a graphical interface for managing the database.
One of the most popular (and also free) ones is SSMS, yet there are also others such as DataGrip, TeamSQL or SQL Toolbelt. You can also use the mssql plugin for Visual Studio Code.

Finally, execute the following script to create a new database and a table for the User type.
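The exact script matches the User class from the course repository; a sketch (column names and sizes here are illustrative) could look like this:

CREATE DATABASE Passenger;
GO

USE Passenger;
GO

CREATE TABLE Users (
    Id UNIQUEIDENTIFIER PRIMARY KEY,
    Email NVARCHAR(100) NOT NULL,
    Username NVARCHAR(50) NOT NULL,
    Role NVARCHAR(20) NOT NULL,
    Password NVARCHAR(200) NOT NULL,
    Salt NVARCHAR(200) NOT NULL
);
GO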

Entity Framework Core

Entity Framework Core is a new version of the EF ORM (Object-Relational Mapping) library, designed to provide access to the SQL Server database. It’s one of the most popular libraries for storing and retrieving data from SQL Server, as well as for configuring the classes and the mappings between our models and database tables. Once we install the required dependencies, we can create a new DbContext which will be responsible for handling the connection:
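A minimal sketch of such a context (assuming a User entity as in the course) could look like this:

using Microsoft.EntityFrameworkCore;

public class PassengerContext : DbContext
{
    public PassengerContext(DbContextOptions<PassengerContext> options)
        : base(options)
    {
    }

    // Maps to the Users table created by the script above.
    public DbSet<User> Users { get; set; }
}

It can then be registered in the Startup class, e.g. services.AddDbContext<PassengerContext>(x => x.UseSqlServer(connectionString));.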

The proper connection string using some default credentials may look like this: Server=localhost;User Id=SA;Password=abcd1234;Database=Passenger.

Next

In the next episode, we will focus on publishing and deploying our application using tools such as Docker and Nginx, and on uploading it to the cloud, where we will run a virtual machine containing Ubuntu Server.

]]>https://piotrgankiewicz.com/2017/07/17/becoming-a-software-developer-episode-xxii/feed/23Hosting Docker images on Azure Container Registryhttps://piotrgankiewicz.com/2017/07/09/hosting-docker-images-on-azure-container-registry/
https://piotrgankiewicz.com/2017/07/09/hosting-docker-images-on-azure-container-registry/#commentsSun, 09 Jul 2017 18:45:20 +0000http://piotrgankiewicz.com/?p=3899Continue reading →]]>In one of the previous videos, as well as posts I described how to use Docker and Docker Hub in order to build and deploy applications written with ASP.NET Core. In this post, I’d like to introduce the Azure Container Registry which is an alternative to the well-known Docker Hub.

In order to start with Azure Container Registry, you need to have an account on the Azure portal (and some credits, as it’s not free, although it’s a very cheap service). Once you log in to the portal, you can create a new resource of type Azure Container Registry – just provide a unique name, select the resource group and region, and basically that’s it – after a minute or two, you’ll have your container registry ready to use. Ensure that you enable the admin access mode.

Create a new Azure Container Registry.

Make sure you enable admin access.

When you open your newly created resource, navigate to the Access keys section.
Here, you will find the so-called login server (which is the actual URL, such as myapp.azurecr.io), the username and the password.

Details of the login server, username and password.

Copy these 3 and start your terminal:

docker login [login_server] -u [username] -p [password]

You can also just type docker login [login_server] and you shall be prompted to provide the username and password. Once completed, you should see the Login succeeded! message. From that point on, you can push, pull and do other things with your Docker images. Let’s say that you’d like to push a local image named my-api. At first, you have to tag it correctly; only then can the Docker image be transferred to the remote repository:

docker tag my-api myapp.azurecr.io/my-api
docker push myapp.azurecr.io/my-api

And that’s it. You can make use of this container registry just like you used (or not) Docker Hub. Keep in mind though that these repositories are private by default, which means that you have to perform authentication first.

]]>https://piotrgankiewicz.com/2017/07/09/hosting-docker-images-on-azure-container-registry/feed/8Building .NET Core apps with BitBucket Pipelines and Dockerhttps://piotrgankiewicz.com/2017/06/19/building-net-core-apps-with-bitbucket-pipelines-and-docker/
https://piotrgankiewicz.com/2017/06/19/building-net-core-apps-with-bitbucket-pipelines-and-docker/#commentsMon, 19 Jun 2017 04:49:28 +0000http://piotrgankiewicz.com/?p=3881Continue reading →]]>Recently, I started researching tools and services for the build automation. Being a long user of TeamCity and currently Travis CI (also had some experience with Jenkins, AppVeyor and VSTS) I wanted to find out what else is there. Then I realized that there’s a build server built into BitBucket, thus I decided to give it a go.

BitBucket Pipelines is a part of the BitBucket cloud-hosted repository, for either Git or Mercurial projects. There’s no need to install or download anything – just click enable Pipelines and that’s it. What I wanted to achieve is the following, rather typical scenario – after each commit to the repository, assuming that a specific Git branch was updated, the service should build the app, test it and then build a Docker image and publish it to Docker Hub or any other registry, e.g. Azure Container Registry.

And I wanted to do it in the simplest way possible – as much as I like TeamCity, it requires a lot of setup to get things done, as opposed to e.g. Travis CI. Therefore, I really wanted to define my overall build flow in a single file if possible and just not care about anything else. So this is what my bitbucket-pipelines.yml eventually looked like (it has to be placed in the root of the repository):
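A sketch of the flow (the image, paths and names are illustrative – check the current Pipelines docs for the exact Docker-enablement syntax):

image: microsoft/dotnet:1.1-sdk
options:
  docker: true
pipelines:
  branches:
    master:
      - step:
          script:
            - dotnet restore
            - dotnet test tests/MyApp.Tests
            - docker build -t myapp .
            - docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
            - docker tag myapp $DOCKER_USERNAME/myapp
            - docker push $DOCKER_USERNAME/myapp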

As you can see, there’s an easy way to access the environment variables. Both DOCKER_USERNAME and DOCKER_PASSWORD were defined as secured variables within the repository settings, as I do not want to push the username or password to my repository at all. And if you’d like to define some environment variables in the configuration file itself, you can do it easily using the export keyword, for example like this:
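For instance (the variable name is made up; BITBUCKET_BUILD_NUMBER is a built-in Pipelines variable):

script:
  - export APP_VERSION=1.0.$BITBUCKET_BUILD_NUMBER
  - echo $APP_VERSION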

And of course, you can access the system variables, like the name of the branch using BITBUCKET_BRANCH or the hash of the commit using BITBUCKET_COMMIT, which might be especially useful for tagging project versions.

Pipelines builds are really fast and for me, that really matters. So where’s the catch? Using the free version you get only 50 build minutes monthly; however, for $10 monthly ($2 per user starting from 5 users) you can already make use of 500 minutes, so you should be fine.

]]>https://piotrgankiewicz.com/2017/06/19/building-net-core-apps-with-bitbucket-pipelines-and-docker/feed/29Becoming a software developer – episode XXIhttps://piotrgankiewicz.com/2017/06/15/becoming-a-software-developer-episode-xxi/
https://piotrgankiewicz.com/2017/06/15/becoming-a-software-developer-episode-xxi/#commentsThu, 15 Jun 2017 06:13:57 +0000http://piotrgankiewicz.com/?p=3840Continue reading →]]>Welcome to the twenty-first episode of my course “Becoming a software developer” in which we will use MongoDB which is a NoSQL database for storing the data of our application.

All of the materials including videos and sample projects can be downloaded from here.
The source code repository is being hosted on GitHub.

Scope

MongoDB

Abstract

MongoDB

This is one of the most popular NoSQL databases in the world. Beware that NoSQL is just a catchy buzzword – there are many different NoSQL databases with totally different behaviors and purposes. Basically, whenever you hear NoSQL, you should think about data storage where you can put the data into a so-called collection that does not require any particular schema and is very flexible. Quite often, JSON is used as the primary data type for storing the documents. Unlike SQL databases, NoSQL databases usually do not handle transactions or reference keys. On the other hand, it’s quite simple to map a class even 1:1 to a document stored within such a database, which makes them a very interesting choice as the primary data storage for a variety of applications. To connect to the database, you can use Robomongo, which is a free GUI client.

In order to start with MongoDB and .NET Core, it is required to install MongoDB.Driver. Once completed, we can create custom MongoSettings, configure the IoC module and implement the actual repository for the users:
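A rough sketch of these pieces (the interface and class names follow the course conventions, but are my assumptions here):

using System;
using System.Threading.Tasks;
using MongoDB.Driver;

public class MongoSettings
{
    public string ConnectionString { get; set; }
    public string Database { get; set; }
}

public class UserRepository : IUserRepository
{
    private readonly IMongoDatabase _database;

    public UserRepository(IMongoDatabase database)
    {
        _database = database;
    }

    // The "Users" collection is created on first use - no schema required.
    private IMongoCollection<User> Users => _database.GetCollection<User>("Users");

    public async Task<User> GetAsync(Guid id)
        => await Users.Find(x => x.Id == id).FirstOrDefaultAsync();

    public async Task AddAsync(User user)
        => await Users.InsertOneAsync(user);

    public async Task RemoveAsync(Guid id)
        => await Users.DeleteOneAsync(x => x.Id == id);
}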

Next

In the next episode, we will use SQL Server and Entity Framework Core to store the data in a typical SQL Server database.

]]>https://piotrgankiewicz.com/2017/06/15/becoming-a-software-developer-episode-xxi/feed/7ASP.NET Core deployment using Docker, Nginx and Ubuntu Serverhttps://piotrgankiewicz.com/2017/06/12/asp-net-core-deployment-using-docker-nginx-and-ubuntu-server/
https://piotrgankiewicz.com/2017/06/12/asp-net-core-deployment-using-docker-nginx-and-ubuntu-server/#commentsMon, 12 Jun 2017 05:38:34 +0000http://piotrgankiewicz.com/?p=3847Continue reading →]]>Since ASP.NET Core became a truly cross-platform framework, we’re free to use other environments such as Linux in order to host our applications. This is a great opportunity not only to reduce the possible licensing costs but also to try out a new environment. In the video tutorial below, I’ll show you how to build a Docker image using ASP.NET Core, publish it to the Virtual Machine running in the Digital Ocean and use Nginx to expose the app to the world.

1. Creating a sample web application

At the very beginning let’s create a sample ASP.NET Core application using the available template. It does not matter whether you choose the MVC or Web API.

Just type dotnet new mvc, then dotnet restore and eventually dotnet run to make sure that the application works under the default localhost:5000 URL.

2. “Dockerizing” ASP.NET Core application

In order to build the Docker image, create a new Dockerfile within the root directory of the project:
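The file from the video is in the repository; a simple, development-oriented sketch (see the remark below about production) might look like this:

# SDK image, since we restore and run from the source code here
FROM microsoft/dotnet:1.1-sdk
WORKDIR /app
COPY . .
RUN dotnet restore
ENV ASPNETCORE_URLS http://*:5000
EXPOSE 5000
ENTRYPOINT ["dotnet", "run"]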

And type the following command: docker build -t webapp-demo .
Having this in place, you can run the Docker container simply by executing: docker run -p 5000:5000 webapp-demo.

Please note that for production purposes, you should probably use another Dockerfile, built from the files already published with dotnet publish and using ENTRYPOINT dotnet APP_NAME.dll.

3. Docker Hub registry

Docker Hub is a registry for Docker images (like a GitHub for code). You can publish public images for free or choose the paid version for private repositories. There’s an available integration with both GitHub and BitBucket, which means that we can have an automated build in place once a commit is pushed to the source code repository. For the simplicity of this tutorial, I chose an already existing image from my own repository.

4. Deploying app to the Ubuntu Server

I created a Virtual Machine (called a Droplet, since I’m using the Digital Ocean cloud) containing Ubuntu Server 16.04. Once you access the VM using ssh, you should install Docker, for example by following this guide. Then, you can pull and run the actual Docker image – in my case I simply typed docker pull spetz/net-core-tour-2017-demo and then docker run -d -p 5000:5000 spetz/net-core-tour-2017-demo.

Beware of the -d argument – if you choose not to run the Docker container in the background, the only way to stop it will be by logging in to the VM once again and executing docker stop CONTAINER_ID, where CONTAINER_ID is the identifier that can be found by executing the docker ps command.
Now, you should be able to access the application under the http://VM_IP_ADDRESS:5000 URL (add the /entries endpoint if you’re using my sample Docker image).

5. Configuring Nginx

Nginx is a great, very popular (and also powerful) HTTP server. Well, Kestrel itself is also great and blazingly fast; however, it’s not suited to face the production environment on its own. Things like SSL or setting up a load balancer are not really possible, which means that you should put it behind a server like IIS, Apache or Nginx. In our case we will stick to Nginx – in order to install it, type the following command: apt-get install nginx. Try to access the http://VM_IP_ADDRESS URL – you should see the default Nginx site.

The next step is to actually configure Nginx using a technique called reverse proxy, which will redirect all of the incoming traffic from a given port, e.g. 80, to the internal port, e.g. 5000, where our application runs. Let’s navigate to the /etc/nginx/sites-enabled directory, type rm default and then nano default in order to create a new configuration file containing the following settings:
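A minimal reverse proxy configuration (this is the commonly used setup for Kestrel; adjust the port if your app listens elsewhere) looks roughly like this:

server {
    listen 80;
    location / {
        # forward everything from port 80 to the app running on port 5000
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}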

Eventually, execute the service nginx restart command and that’s it – your web application running within the Docker container should be accessible via the public port 80. If you’d like to restrict the ports, you can use UFW and type commands like ufw allow 80.

]]>https://piotrgankiewicz.com/2017/06/12/asp-net-core-deployment-using-docker-nginx-and-ubuntu-server/feed/18Becoming a software developer – episode XXhttps://piotrgankiewicz.com/2017/06/08/becoming-a-software-developer-episode-xx/
https://piotrgankiewicz.com/2017/06/08/becoming-a-software-developer-episode-xx/#commentsThu, 08 Jun 2017 05:35:09 +0000http://piotrgankiewicz.com/?p=3771Continue reading →]]>Welcome to the twentieth episode of my course “Becoming a software developer” in which we will implement our custom “handler” that will be responsible for executing the given methods, dealing with exceptions etc.

All of the materials including videos and sample projects can be downloaded from here.
The source code repository is being hosted on GitHub.

Scope

Handler

Abstract

Handler

In the last 2 episodes, we dealt with exceptions. It’s high time to complete this story and also extend the rather simple exception handling middleware. In order to achieve that, we will implement our custom handler using the so-called Fluent API technique, which allows composing a flow of methods (e.g. some more or less complicated business logic) into a nicely written method chain. Method chaining is another programming technique, and the goal of this episode is to learn 2 things – how a Fluent API can be implemented, but also how you can switch from a typical, procedural style of writing code to a more expressive one.
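To give you a taste of the idea, here’s a heavily simplified sketch (the course implementation is richer – e.g. it also supports success and always-executed callbacks):

using System;
using System.Threading.Tasks;

public class Handler
{
    private Func<Task> _run;
    private Action<Exception> _onError;

    public Handler Run(Func<Task> run)
    {
        _run = run;
        // Returning "this" is what enables chaining the next method call.
        return this;
    }

    public Handler OnError(Action<Exception> onError)
    {
        _onError = onError;
        return this;
    }

    public async Task ExecuteAsync()
    {
        try
        {
            await _run();
        }
        catch (Exception exception)
        {
            _onError?.Invoke(exception);
        }
    }
}

Usage (with made-up service and logger instances) then reads almost like a sentence: await new Handler().Run(() => service.DoSomethingAsync()).OnError(ex => logger.Error(ex)).ExecuteAsync();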

Next

In the next episode, we will store the application data using MongoDB, which is a NoSQL database.

]]>https://piotrgankiewicz.com/2017/06/08/becoming-a-software-developer-episode-xx/feed/3Blogging, why?https://piotrgankiewicz.com/2017/06/05/blogging-why/
https://piotrgankiewicz.com/2017/06/05/blogging-why/#commentsMon, 05 Jun 2017 06:23:21 +0000http://piotrgankiewicz.com/?p=3824Continue reading →]]>People sometimes ask what’s the point of blogging. What’s the secret of being consistent and what’s the motivation behind such activity? And what are the profits? A few weeks ago, Andrzej Krzywda (owner of arkency.com) asked his followers on Snapchat to send him a reason why they’re blogging (or not). I put some thoughts into it, and here is my answer.

I had to start

I started blogging because I was “forced” to – well, at least that was the main requirement of the “Get Noticed” competition. In the past, I had a very short episode related to blogging, so it was a great opportunity to start, as I already felt back then that sooner or later I’d have to give it a try once again.

Sharing knowledge

Definitely the most valuable part of blogging, and actually the same applies to open workshops or public speeches. Whether you’re a beginner, intermediate or advanced software engineer does not matter that much, as long as you have the willingness to publicly share your experiences. You should not care about your skills that much – there will always be other folks searching for answers on a variety of topics at different levels. It’s good to browse the content published by other people, but it’s even better to share your own.

Recognition

Don’t want to remain anonymous until the end of your career? Start blogging, it’s that simple. Once you start publishing some interesting content, more and more people will be visiting your website and they will at least remember your last name. And who knows, maybe some of them will even remember you and at some point in the future offer you a well-paid job?

Projects

You never know who’s actually reading your articles or playing with your open source code. And sometimes, these people might be your future clients, business partners or coworkers.
I did receive quite a few great offers thanks to some articles in which I shared code for a solution that seemed obvious to me. Thus, if you’re looking for a new, interesting job, getting some recognition will definitely improve your chances.

Promotion

After some time, once people realize that you constantly provide legit and informative content, you shall get more and more followers. And as in every other network, some of these followers might actually be well-known software developers, so you can imagine that promoting your ideas, posts or custom projects will be much easier as you gain more publicity and page views.

Influence

Influencing others, whether they are your friends or some totally random people from all around the world whom you have never met, is a very powerful phenomenon. Trust me, it does feel great indeed once you receive a message from someone who would like to thank you for persuading him/her to start his/her own blog or open source project thanks to your publishing activities. It feels like giving someone the courage to fully show himself/herself to the outside world.

Persistence

This one varies from person to person, but for me, blogging is a sort of constant factor.
Since I started writing posts, only once (which happened last week due to a very late return from a moto trip) did I fail to publish a post on Monday morning. Other than that – I tend to write articles every week and stick to it. Such small things are important and do help organize our time in a better way.

Hone language skills

Maybe not the most important aspect, yet for sure quite a useful one. Although I’m not a native English speaker and I still make some mistakes while talking or writing, I’m pretty certain that as time goes by, the content I publish is getting better in terms of grammar and more advanced phrases.

]]>https://piotrgankiewicz.com/2017/06/05/blogging-why/feed/4Becoming a software developer – episode XIXhttps://piotrgankiewicz.com/2017/06/01/becoming-a-software-developer-episode-xix/
https://piotrgankiewicz.com/2017/06/01/becoming-a-software-developer-episode-xix/#commentsThu, 01 Jun 2017 05:35:33 +0000http://piotrgankiewicz.com/?p=3769Continue reading →]]>Welcome to the nineteenth episode of my course “Becoming a software developer” in which we will gracefully handle the exceptions and extend logging services with NLog.

All of the materials including videos and sample projects can be downloaded from here.
The source code repository is being hosted on GitHub.

Scope

Exceptions

NLog

Abstract

Exceptions

Dealing with exceptions from the API’s point of view is not an easy task. In the end, we want to return meaningful error codes to the end customers of our service once something goes wrong. It doesn’t matter whether it’s an internal server error or some client validation issue – simply returning an HTTP status code saying that something went wrong doesn’t really help.

Let’s start from the very beginning; at first, we can define a custom exception type like this:
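For instance (the name and shape are close to, but not necessarily identical with, the course code):

using System;

public class ServiceException : Exception
{
    // A machine-readable code that the API can map to a meaningful response.
    public string ErrorCode { get; }

    public ServiceException(string errorCode, string message, params object[] args)
        : base(string.Format(message, args))
    {
        ErrorCode = errorCode;
    }
}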

NLog

The default logging available within the ASP.NET Core framework is fine; however, quite often we would actually like to use something more sophisticated in terms of logging services – having the ability to specify different outputs like files, console, databases or external services, to define templates of the messages being logged and so on. This is where libraries like NLog come in handy.

In order to start working with NLog, you need to install the base package in the infrastructure project as well as 2 other extensions designed for ASP.NET Core that will be used by the API. Once the packages are installed, you need to do 3 things: configure NLog in the Startup class, create an nlog.config file where you can specify all of the behaviors, outputs (targets) and much more if needed, and mark this file as part of the content that should be copied to the output directory.
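A very basic nlog.config sketch, writing to both a file and the console, could look like this:

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <target xsi:type="File" name="file" fileName="logs/all-${shortdate}.log"
            layout="${longdate} ${uppercase:${level}} ${message} ${exception}" />
    <target xsi:type="Console" name="console"
            layout="${longdate} ${uppercase:${level}} ${message}" />
  </targets>
  <rules>
    <logger name="*" minlevel="Debug" writeTo="file,console" />
  </rules>
</nlog>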

Please note that this is a very basic configuration and you can extend it in order to achieve really sophisticated logging. And finally, you need to copy the configuration file and mark it as content in the End-to-End test project in order to run the integration tests properly.

In order to use NLog, simply add the following line of code to the particular class and that’s it.
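It’s the canonical NLog one-liner (the field can be named however you like):

private static readonly Logger Logger = LogManager.GetCurrentClassLogger();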

Next

In the next episode, we will handle not only the exceptions but also the actual business logic operations within specialized handlers, in a more fluent way.

]]>https://piotrgankiewicz.com/2017/06/01/becoming-a-software-developer-episode-xix/feed/4Becoming a software developer – episode XVIIIhttps://piotrgankiewicz.com/2017/05/25/becoming-a-software-developer-episode-xviii/
https://piotrgankiewicz.com/2017/05/25/becoming-a-software-developer-episode-xviii/#commentsThu, 25 May 2017 04:35:50 +0000http://piotrgankiewicz.com/?p=3767Continue reading →]]>Welcome to the eighteenth episode of my course “Becoming a software developer” in which we will finalize the basic CRUD for the Driver type, implement extension methods for the repository and build custom middleware in order to deal with exceptions.

All of the materials including videos and sample projects can be downloaded from here.
The source code repository is being hosted on GitHub.

Scope

Extensions

Middleware

Abstract

Extensions

I did write about extension methods already, but this time I’d like to present how they can be used along with a repository. Let’s consider the following scenario:

Quite often, we want to fetch e.g. a driver within some application service and use it for some specific use case. Do we have to write the same code over and over again? Of course not – we can create a very simple extension method:
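Something along these lines (assuming an IDriverRepository with a GetAsync method, as in the course; the exact code may differ):

using System;
using System.Threading.Tasks;

public static class DriverRepositoryExtensions
{
    // Fetch the driver or fail fast with a meaningful exception,
    // instead of repeating the null check in every service.
    public static async Task<Driver> GetOrFailAsync(this IDriverRepository repository, Guid userId)
    {
        var driver = await repository.GetAsync(userId);
        if (driver == null)
        {
            throw new Exception($"Driver with user id: '{userId}' was not found.");
        }

        return driver;
    }
}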

Middleware

The ASP.NET Core framework is built around middleware, which means that you can easily plug into the so-called pipeline and do pretty much whatever you want with the incoming HTTP request. This great feature can be used, for example, for creating a global handler that will deal with exceptions:
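A sketch of such a middleware (simplified; the course version maps exception types to specific status codes):

using System;
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Newtonsoft.Json;

public class ExceptionHandlerMiddleware
{
    private readonly RequestDelegate _next;

    public ExceptionHandlerMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        try
        {
            // Pass the request further down the pipeline.
            await _next(context);
        }
        catch (Exception exception)
        {
            context.Response.StatusCode = (int)HttpStatusCode.InternalServerError;
            context.Response.ContentType = "application/json";
            var payload = JsonConvert.SerializeObject(new { error = exception.Message });
            await context.Response.WriteAsync(payload);
        }
    }
}

It’s registered in the Startup class with app.UseMiddleware<ExceptionHandlerMiddleware>();.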

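Google’s mobile popup algorithm – impact on web development (https://piotrgankiewicz.com/2017/05/22/googles-mobile-popup-algorithm-impact-on-web-development/)
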
Since its inception, Google has been a major driving force in how the Internet has grown and developed. Google Search is one of the world’s most widely used search engines, with its simple goal of helping users find the best answers to their questions quickly and without hassle.

In August of 2016, Google announced that it would be updating its search algorithms to target sites that utilize intrusive mobile popups or intrusive interstitials such as email signups, sale notifications, event ads, and more. These techniques are used by online companies to gain new customers or garner more sales from existing customers, but they can be disruptive to the user browsing experience. With Google’s updated algorithm, websites using intrusive interstitials won’t rank highly within Google’s search results.

Affected Websites

Google aims to direct people to the best website to answer their question through its search function, and part of this is leading online browsers to user-friendly destinations. Interstitials can get in the way of smooth surfing, frustrating website visitors, and so Google has rewritten its search algorithm to penalize sites that use obtrusive popups by lowering their search ranking. Pages targeted include websites that:

Block important content with popups, whether it appears upon entering the site or after scrolling down a page.

Interrupt user experience with a standalone, full-screen popup that must be dismissed before the main site will appear.

Not all sites that use popups are affected by Google’s new algorithm. Sites that have necessary or unobtrusive interstitials are generally unaffected by the change. Websites are unaffected if popups pertain to:

Legal agreements and obligations, such as age verification.

Login popups for private content and services.

Banners that are reasonably sized and easy to dismiss.

Google’s search algorithm takes hundreds of factors into consideration when assigning websites a ranking, and intrusive interstitials are just one signal that comes into play. Popular online destinations with quality content still tend to rank highly, regardless of whether or not they use popups.

Mixed Reactions

A lot of users claim to have seen little impact from the changes to search algorithms since the update was first introduced in January 2017, but web developers have taken notice. Ranking is important for online revenue, and now, avoiding interstitials and popups can give new and growing websites a leg up in the marketplace. More and more businesses are expected to drop the use of interstitials in favor of other advertising strategies. Online users have the engineers at Google to thank as they start seeing fewer intrusive mobile popups disrupting their browsing experience.

]]>https://piotrgankiewicz.com/2017/05/22/googles-mobile-popup-algorithm-impact-on-web-development/feed/0Becoming a software developer – episode XVIIhttps://piotrgankiewicz.com/2017/05/18/becoming-a-software-developer-episode-xvii/
https://piotrgankiewicz.com/2017/05/18/becoming-a-software-developer-episode-xvii/#commentsThu, 18 May 2017 05:07:03 +0000http://piotrgankiewicz.com/?p=3765Continue reading →]]>Welcome to the seventeenth episode of my course “Becoming a software developer” in which we will mostly talk about the boundaries and responsibilities of the application services. Eventually, we will implement some helper code to automatically assign the authenticated user id to the given command.

All of the materials including videos and sample projects can be downloaded from here.
The source code repository is being hosted on GitHub.

Scope

Boundaries

AuthenticatedCommand

Abstract

Boundaries

Defining the boundaries of application services is not an easy task. We have to keep in mind that, according to the SRP (Single Responsibility Principle) and ISP (Interface Segregation Principle), we should define our interfaces to revolve around a particular topic (e.g. managing users, defining routes, finding a location etc.). If our service takes on additional responsibilities which are not a part of its scope, most likely we should implement a separate class.

Let’s take a look at the IDriverRouteService which is all about managing available routes for the drivers and passengers.

Even though a route requires the actual address (e.g. fetched from some specific API like Google Maps) and the distance between nodes in order to create a valid Route object, it does not do this on its own. Instead, it makes use of the available service defined as IRouteManager, which exposes the required methods:
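A sketch of what such an interface could expose (the method signatures and the Address type are my assumptions):

using System.Threading.Tasks;

public interface IRouteManager
{
    Task<Address> GetAddressAsync(double latitude, double longitude);
    double CalculateDistance(double startLatitude, double startLongitude,
        double endLatitude, double endLongitude);
}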

If our IDriverRouteService would require additional workflow that is not particularly tied to the route itself (such as fetching the address), we could simply define additional interfaces and inject them via constructor in order to extend the business logic.

AuthenticatedCommand

Let’s say we would like to have the user id assigned automatically whenever there’s a command which requires the user to be authenticated. It can be done easily in 3 steps:

At first, we need to create an interface which marks the given command as one that requires the user to be authenticated; then we can implement a brand new DispatchAsync method that will assign the user id to the incoming request. Finally, we can use it like this:
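A condensed sketch of these steps (ICommand and the dispatcher come from the course; the code below is illustrative):

using System;
using System.Threading.Tasks;

public interface ICommand
{
}

public interface IAuthenticatedCommand : ICommand
{
    Guid UserId { get; set; }
}

// Step 2 - inside the base controller (or the dispatcher), assign the id
// resolved from the JWT claims before the command gets handled:
protected async Task DispatchAsync<TCommand>(TCommand command) where TCommand : ICommand
{
    if (command is IAuthenticatedCommand authenticatedCommand)
    {
        authenticatedCommand.UserId = UserId; // taken from HttpContext.User
    }
    await CommandDispatcher.DispatchAsync(command);
}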

Next

In the next episode, we will write more and more business logic and also create a custom middleware for handling exceptions.

]]>https://piotrgankiewicz.com/2017/05/18/becoming-a-software-developer-episode-xvii/feed/6Depot – building ASP.NET Core distributed applicationhttps://piotrgankiewicz.com/2017/05/15/depot-building-asp-net-core-distributed-application/
https://piotrgankiewicz.com/2017/05/15/depot-building-asp-net-core-distributed-application/#commentsMon, 15 May 2017 05:50:47 +0000http://piotrgankiewicz.com/?p=3777Continue reading →]]>In this article, I’d like to guide you through the development process of the simple application named Depot. It was created for my presentation about using .NET Core in practice, which is a part of .NET Core Tour. The overall journey will last 10 steps, so get ready.

The purpose

The application per se will be quite straightforward, yet its sole purpose is to familiarize you with some of the ASP.NET Core framework features (middleware, IoC, options etc.) as well as a variety of tools such as RabbitMQ, MongoDB, Redis or Docker that can be easily used for building the software.

Depot allows sending an HTTP POST request containing a {key, value} object to the API, which pushes it further to the RabbitMQ service bus (the CreateEntry command). Then, the message is consumed by the Entries Service and, once validated, it produces either the CreateEntryRejected event or EntryCreated (consumed by the API). In the latter scenario, the data is stored in a MongoDB database and also in the Redis distributed cache with a 10-second sliding expiration. Eventually, we can browse the logs available in the API to check whether the operation succeeded.

On top of that, the whole application can be run using Docker and docker-compose (just browse to the scripts directory), and the repository itself is using Travis CI as the open source build server.

You can easily browse the history of the repository by using git checkout and navigating to a particular revision by switching between the available tags from 1 to 10.

1. Init

At the very beginning, we want to create 2 Web API projects using the dotnet new webapi command: Depot.Api and Depot.Services.Entries. Make sure we have the required references,
set the port of the second project to 5050, then execute dotnet restore and dotnet run to see whether everything works fine.

2. Common messages

Since we want to create a distributed application, there’s a need for some common contract in order to exchange messages via the service bus. Thus, we create new commands and events and make sure that our web services reference this particular project.

3. RabbitMQ

You can choose from multiple service buses available out there, but I really like RabbitMQ. I’m also using the RawRabbit library to handle the connection and subscribe to the messages. You can find the settings of RabbitMQ inside the appsettings.json file.

4. Event and command handlers

Once we have our 2 services, common messages and service bus in place, we’re finally able to implement the first version of our distributed system. All that is needed to achieve that goal is to implement the appropriate event and command handlers. Just make sure that you do not miss setting up the IoC container in the Startup class.
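A sketch of a command handler for CreateEntry (the repository and the event constructors are illustrative; IBusClient comes from RawRabbit):

using System.Threading.Tasks;
using RawRabbit;

public class CreateEntryHandler : ICommandHandler<CreateEntry>
{
    private readonly IBusClient _bus;
    private readonly IEntryRepository _repository;

    public CreateEntryHandler(IBusClient bus, IEntryRepository repository)
    {
        _bus = bus;
        _repository = repository;
    }

    public async Task HandleAsync(CreateEntry command)
    {
        if (string.IsNullOrWhiteSpace(command.Key))
        {
            // Invalid command - publish the "rejected" event and stop.
            await _bus.PublishAsync(new CreateEntryRejected(command.Key, "Key cannot be empty."));
            return;
        }
        await _repository.AddAsync(new Entry(command.Key, command.Value));
        await _bus.PublishAsync(new EntryCreated(command.Key, command.Value));
    }
}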

5. Storing entries and logs

At this point, it would be nice to actually store the data somewhere – let’s start with memory. We can create very naive repositories and hold everything internally in memory. Once that’s done, we can extend our command and event handlers to make use of the newly created storage.

6. Autofac, Middleware, Exceptions

It’s very easy to use a different IoC container, for example Autofac. Also, thanks to the nature of the ASP.NET Core framework and the way it’s built using so-called middleware, we can easily plug our own code into it and, for example, implement a global exception handler.

7. MongoDB

The time has come to actually make use of some real database. I chose MongoDB for this example – make sure you have it up and running and there’s a database named Depot available. You can change the settings by editing the appsettings.json file.

8. Redis

What about caching? For sure, we could use IMemoryCache, but we’re building a distributed system, right? Therefore, we need distributed caching as well; otherwise, the system will fail once the first HTTP request which involves the caching mechanism goes to server A and the second one to server B. Since I’m a big fan of Redis, there’s great news – it can be easily used in ASP.NET Core applications.

9. Tests

The very final step before we can actually deploy the application – let’s finally write some tests. First, let’s create a new directory tests and 2 new projects, Depot.Tests and Depot.Tests.EndToEnd, using the dotnet new xunit command. The former project will be all about typical unit tests (using xUnit, Moq and FluentAssertions), while the latter will allow executing integration tests by running the application in memory.

10. Docker, Travis, scripts

You can skip that part, as the application is already finished; however, if you want to greatly improve your deployment, including automated builds and testing using Travis CI and packing your source code into Docker containers, take a look at the available Dockerfiles in the web service projects, as well as the .travis files in the root directory.

Summary

As you can see, it’s neither that difficult nor time-consuming to build a pretty cool app on top of ASP.NET Core and other fancy tools. Beware that the code is rather simplistic and not really production-ready, yet I do hope that it will help you start playing with distributed and (micro)services applications in general.

If you would like to see more sophisticated apps built with ASP.NET Core (and NancyFX) using similar techniques, you can take a look at Collectively, which is an open source platform built by me and a friend of mine (about which I’d like to write more in the near future), or at my other open source project Warden, for which I’m also building distributed backend services.

]]>https://piotrgankiewicz.com/2017/05/15/depot-building-asp-net-core-distributed-application/feed/9Becoming a software developer – episode XVIhttps://piotrgankiewicz.com/2017/05/11/becoming-a-software-developer-episode-xvi/
https://piotrgankiewicz.com/2017/05/11/becoming-a-software-developer-episode-xvi/#commentsThu, 11 May 2017 05:39:24 +0000http://piotrgankiewicz.com/?p=3679Continue reading →]]>Welcome to the sixteenth episode of my course “Becoming a software developer” in which we will implement the login endpoint in our API, discuss the caching mechanism and how to initialize the application with basic data.

All of the materials including videos and sample projects can be downloaded from here.
The source code repository is being hosted on GitHub.

Scope

Login

Seed

Abstract

Login

Once we have JWT authentication in place, the final step is to allow the users to actually get a valid token via the exposed login endpoint. In order to do so, we can define a simple Login command along with its handler, like this:
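A sketch of the command and its handler (the service interfaces follow the course conventions; the details may differ from the repository):

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class Login : ICommand
{
    public Guid TokenId { get; set; }
    public string Email { get; set; }
    public string Password { get; set; }
}

public class LoginHandler : ICommandHandler<Login>
{
    private readonly IUserService _userService;
    private readonly IJwtHandler _jwtHandler;
    private readonly IMemoryCache _cache;

    public LoginHandler(IUserService userService, IJwtHandler jwtHandler, IMemoryCache cache)
    {
        _userService = userService;
        _jwtHandler = jwtHandler;
        _cache = cache;
    }

    public async Task HandleAsync(Login command)
    {
        await _userService.LoginAsync(command.Email, command.Password);
        var user = await _userService.GetAsync(command.Email);
        var token = _jwtHandler.CreateToken(user.Id, user.Role);
        // Command handlers do not return values (CQS), so the token is cached
        // under a known id and fetched back by the controller.
        _cache.Set(command.TokenId, token, TimeSpan.FromMinutes(5));
    }
}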

As you might have already noticed, I’m using IMemoryCache, which enables a caching mechanism in our application. Whenever you want to cache something, always think in terms of how big the data will be (the server has a limited amount of memory), whether it changes too often, whether it is costly to e.g. fetch it from a remote resource (database, service), and whether it is going to be accessed by the end users quite often.

Back to the point – eventually, we can create a controller that will consume and handle the Login command. You might be wondering what we need caching here for. Since we’re following the CQS (Command & Query Separation) pattern, our command handlers do not return any values. Thus, we need to store our token in some place (it could be e.g. a real database, but in this case it’s just memory) in order to fetch it and return it to the user.

Next

In the next episode, we will keep on implementing our business logic and finding out both the boundaries and responsibilities of the different services and how they should interact with each other.

]]>https://piotrgankiewicz.com/2017/05/11/becoming-a-software-developer-episode-xvi/feed/4Open source implicationshttps://piotrgankiewicz.com/2017/05/08/open-source-implications/
https://piotrgankiewicz.com/2017/05/08/open-source-implications/#commentsMon, 08 May 2017 05:12:50 +0000http://piotrgankiewicz.com/?p=3750Continue reading →]]>Recently I had an interesting discussion about open sourcing most of the code that you write on a daily basis, especially in terms of commercial usage, for example when creating your own product or service. Here are some of my thoughts and assumptions – I’m really looking forward to hearing your remarks, so please also share your experience.

Quality

Whenever you decide to do something in public, most likely you’re going to put even more effort into it, in order to simply nail it. Whether it’s a speech, a marketing campaign or a software project that is no longer being kept privately in your local repository, you don’t want to feel ashamed by submitting code that could look like spaghetti in the worst case scenario. You start caring more about what you’re really doing, as other folks can easily browse your code and, in a sense, judge you. On the other hand, you should really want your project to be written nicely, as other programmers may actually like it and apply some of your patterns to their own solutions or even send pull requests or remarks to your repository.

Community

Regardless of what type of technology you’re dealing with and whether your project is niche or quite the opposite, you can be sure that at least a few people will be interested in what you’re doing if you put minimal effort into advertising it on some forums or user groups. Once the word is spread and you see the very first forks of your repository or the first reported issues, you can be proud of yourself, because someone found your solution useful or helpful. This whole process is actually quite similar to blogging. All of us like to read interesting articles describing e.g. well-suited solutions to our issues or things like that. We also like to use free and open software, whether it’s an actual application or a library being used in our own software. Since other developers spend their (free) time to share their code, why not give something from yourself? Don’t just be a consumer, be also a producer :).

Transparency

I believe this part is especially important whenever we’re dealing with user data in our applications, which is quite often. Some users will hesitate to use our service, as they may suspect that we will do something inappropriate with their private and personal data. It can be critical when it comes to public projects for a government or municipality. If it’s closed source code, then suspicions will surely arise. Having an open source project, where everyone can have an insight and ensure that the application itself is not violating their personal rights, not messing with their private data, that there are no backdoors etc., might be the only way to actually make the users trust you and your code.

Asset

Whether you’re working solo on your private projects, manage a group of people as your project grows bigger, or are part of an organization, having the code publicly available is a huge advantage. For you, because it’s much easier to get a good job, as the company in which you’d like to work can just browse your code and see your patterns straight away. The same applies if your company works in an open source way – a potential client can easily find out how you write code and what quality they might expect from your services.

Management

That’s the part in which I don’t have big experience yet; however, managing a group of a few people, also when working remotely, is not that complicated. I guess that the difficulty lies in working on a very big project with multiple teams involved and a lot of contributors submitting their issues and pull requests. You have to somehow work twofold – develop the core by sticking to the original plan, yet also take care of all the reported bugs or enhancements and prioritize all that stuff. It’s not difficult to make the community angry with you if you don’t consider their suggestions at all (or merely). I think open source project management at this phase is quite a challenge.

Contraindications

Certainly, we can’t apply the open source strategy everywhere. Most companies want to keep their code private and it’s totally understandable. Some of them, for example, work on special algorithms and might be afraid of having them stolen and so on. However, if you’re developing your own software, don’t be afraid to share your experience and knowledge with the rest of the world. Really, no one is going to steal from you, and you’re giving a part of yourself and your quality software to the overall community of developers.

]]>https://piotrgankiewicz.com/2017/05/08/open-source-implications/feed/5Becoming a software developer – episode XVhttps://piotrgankiewicz.com/2017/05/04/becoming-a-software-developer-episode-xv/
https://piotrgankiewicz.com/2017/05/04/becoming-a-software-developer-episode-xv/#commentsThu, 04 May 2017 04:57:16 +0000http://piotrgankiewicz.com/?p=3677Continue reading →]]>Welcome to the fifteenth episode of my course “Becoming a software developer” in which we will implement password encryption, authorization and authentication using JWT.

All of the materials including videos and sample projects can be downloaded from here.
The source code repository is being hosted on GitHub.

Scope

Encryption

Authentication

Authorization

Abstract

Encryption

Whenever we want to store user accounts along with their passwords in our database, the best option is to apply a hashing function. It means that given e.g. a “secret” password, such a function will create a so-called hash, which cannot be reversed (or decrypted), based on a random and secure sequence of characters named a salt. Basically, whenever we want to ensure that a password is valid, we need to compute its hash based on the salt generated when the password was hashed for the first time, e.g. during account registration, and then compare the hashes – simple as that. This way, even if the data storage were compromised, the password could not be decrypted easily, as a hash is not a reversible function (at least theoretically).
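To illustrate the idea (the course wraps this in a dedicated encrypter service; the parameters below are arbitrary):

using System;
using System.Security.Cryptography;
using System.Text;

public static class Encrypter
{
    private const int Iterations = 10000;

    public static string GetSalt()
    {
        // A random, cryptographically secure salt, unique per user.
        var saltBytes = new byte[40];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(saltBytes);
        }

        return Convert.ToBase64String(saltBytes);
    }

    public static string GetHash(string password, string salt)
    {
        // PBKDF2 - a deliberately slow, one-way key derivation function.
        using (var pbkdf2 = new Rfc2898DeriveBytes(password, Encoding.UTF8.GetBytes(salt), Iterations))
        {
            return Convert.ToBase64String(pbkdf2.GetBytes(40));
        }
    }
}

Validating a password then boils down to comparing GetHash(providedPassword, storedSalt) with the stored hash.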

Authentication

In order to find out the identity of the user in our system, he needs to be able to authenticate in some way. For a typical web application, which is stateless, we can choose between different methods of authentication and pass this information along either with cookies, headers or within the URL itself. In our case, we want to use JWT (JSON Web Tokens), which is one of the most popular industry standards and basically boils down to generating a secure token that can be passed within the HTTP header “Authorization: Bearer {token}”. Once the token is validated by the server, we can assign an identity to the user and allow him to perform operations that he wouldn’t be able to do otherwise.

Authorization

Once the user is authenticated, we can grant him access to different operations or resources, for example based on his role (user, moderator, admin etc.) or claims (a list of permissions). While authentication is all about finding out whether the user is who he claims to be, authorization’s task is to validate whether the user has the required permissions to successfully perform a request.

Next

In the next episode, we will talk a little bit more about caching, implement the “login” endpoint in our API, move further with the business logic and also resolve the user identity based on JWT claims and map it automatically to the commands that require a user id.

]]>https://piotrgankiewicz.com/2017/05/04/becoming-a-software-developer-episode-xv/feed/9“Becoming a software developer” course will have 24 episodeshttps://piotrgankiewicz.com/2017/05/01/becoming-a-software-developer-course-will-have-24-episodes/
https://piotrgankiewicz.com/2017/05/01/becoming-a-software-developer-course-will-have-24-episodes/#commentsMon, 01 May 2017 06:32:29 +0000http://piotrgankiewicz.com/?p=3729Continue reading →]]>I have quite good news for everyone who’s been following my programming course.
“Becoming a software developer” will receive 8 additional episodes, which is going to result in a total of 24. Wondering why? Here are the main reasons why 16 episodes would not be sufficient.

24 is a nice number

Seriously 16 is ok, but look at 24 – ain’t it better? It’s much more adult than a sixteen.

Too many things to cover

Ok, seriously now. When we first thought about the idea of the Passenger application, it seemed both cool and easy to implement. However, it turned out that there are so many different aspects, patterns and practices that require an explanation that it’s simply not doable in 8 episodes. Maybe if this project were a very simple CRUD it would be possible, yet apparently it’s not – otherwise, this course wouldn’t make much sense, as it would be pretty much the same as the other ones.

Recording without cuts

Yes, that’s also true. I record each video as a whole, without any editing afterward whatsoever. I want to be as natural as possible while explaining stuff and writing the code. It’s pretty much a live stream session available for offline viewing. And that’s the reason why sometimes you can see me making errors, spending a few minutes trying to resolve bugs etc. That’s what real-world programming looks like. There’s no need to try to appear as an ideal software engineer who writes perfect code without any bugs – it just doesn’t work this way.

After hours activity

Since I’ve been doing this in my spare time, totally for free, I can’t spend as much additional time on it as I would e.g. while preparing a paid course. It doesn’t mean that I’m not doing my best – quite the opposite, I am – but I have to think about the time constraints and do some actual work and projects during regular hours. This is the reason why sometimes things may seem a little bit chaotic or buggy, because I didn’t put as much thought into them as I wished.

“Do or don’t, there’s no try.”

Although I’m not a big fan of Star Wars, I really love this quote (amongst a few others). Whatever you do in your life, if it’s valuable to you or to others, you either do your best in order to achieve the goal/meet the requirements or you don’t do it at all (the same applies to work – either for good money or totally for free). I would feel very bad if at some point I decided to just stop recording the videos and leave you on your own without further explanation, or simply started writing code that I’d be ashamed of.

Perfect timing

I did a quick calculation and it turns out that the last, 24th episode shall be published on the 6th of July. And it’s just great, as on the 8th of July we’ll most likely have our latest .NET Core Tour workshops in Warsaw. Thus, basically there are 2 months of traveling, speaking and teaching ahead of us, and this course is just a perfect addendum to all of it.

Experience

The more you practice something, the better you get at it (which would be recording screencasts in this case). I can already tell you that within a few weeks you can expect another online course of mine (a paid one this time, however), so stay tuned.

]]>https://piotrgankiewicz.com/2017/05/01/becoming-a-software-developer-course-will-have-24-episodes/feed/7Becoming a software developer – episode XIVhttps://piotrgankiewicz.com/2017/04/27/becoming-a-software-developer-episode-xiv/
https://piotrgankiewicz.com/2017/04/27/becoming-a-software-developer-episode-xiv/#commentsThu, 27 Apr 2017 05:37:43 +0000http://piotrgankiewicz.com/?p=3675Continue reading →]]>Welcome to the fourteenth episode of my course “Becoming a software developer” in which we will configure our application by using the appsettings.json file.

All of the materials including videos and sample projects can be downloaded from here.
The source code repository is being hosted on GitHub.

Scope

Settings

Abstract

Settings

Quite often, we need to configure our software in a way that can be controlled from the outside, rather than provided as arguments hardcoded directly in the source code.

Such a goal can be achieved easily by mapping the appsettings.json file into appropriate so-called options classes. Please note that you can use the IOptions interface available in ASP.NET Core directly; however, I’d like to show you an alternative way that you might also find useful.

At first, let’s define the following convention – we will have classes named XyzSettings, where Xyz can be anything and the Settings suffix will be removed for the sake of clarity.
For starters, let’s add a very simple GeneralSettings class; beware that you can have as many properties and unique settings classes as you wish.

public class GeneralSettings
{
public string Name { get; set; }
}

Within the appsettings.json file, add a new section named “general” that will be mapped into the GeneralSettings class.

"general": {
"name": "Passenger"
}

It’s time to do a little bit of magic – create a new SettingsExtensions class containing the following code:
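A sketch of the extension plus the Autofac module that is registered below (the real code lives in the repository):

using Autofac;
using Microsoft.Extensions.Configuration;

public static class SettingsExtensions
{
    public static T GetSettings<T>(this IConfiguration configuration) where T : new()
    {
        // "GeneralSettings" -> "general", following the naming convention above.
        var section = typeof(T).Name.Replace("Settings", string.Empty).ToLowerInvariant();
        var settings = new T();
        configuration.GetSection(section).Bind(settings);

        return settings;
    }
}

public class SettingsModule : Autofac.Module
{
    private readonly IConfiguration _configuration;

    public SettingsModule(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    protected override void Load(ContainerBuilder builder)
    {
        builder.RegisterInstance(_configuration.GetSettings<GeneralSettings>())
               .SingleInstance();
    }
}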

And register it inside the ConfigureServices() method that can be found in the Startup class:

builder.RegisterModule(new SettingsModule(Configuration));

And that’d be all. From that point on, you can directly inject a GeneralSettings instance that will have its properties properly mapped from the values taken from the appsettings.json configuration file.

Next

In the next episode, we will dive into the authentication using JWT, as well as storing encrypted user passwords in our application.

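Web design and big data (https://piotrgankiewicz.com/2017/04/26/web-design-and-big-data/)
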
For the benefit of anybody who has spent the last five years in a cave, big data is exactly what it says on the tin. But the name doesn’t really do it justice – imagine if someone had suggested calling the Atlantic Ocean, ‘Big Puddle’ and you start to get the idea. Big data should really be called mind-bogglingly uncountably vast data. And it is changing the face of the online world in more ways than you can imagine.

The field of big data analytics has evolved with the phenomenon of big data itself, in order to try and make sense of the strategic and marketing insights that big data can bring. As organizations are able to learn more from every transaction, mouse click, social media share and all the other data streams that contribute to big data, their insights into our online behavior will feed back into their websites and the growth and development of the Internet as a whole. Let’s take a look into the crystal ball and see just how big data might affect web design over the coming years.

Data driven design

A major aspect of web design comes down to aesthetics and user experience. These are, by definition, subjective and are influenced by the personal tastes and opinions of the design team.

Today’s business intelligence tools are already capable of measuring click-through rates, multivariate testing and other similar metrics to add some science to the design process. Over the coming years, these tools will increase in both effectiveness and affordability, making data-driven design the industry standard.

Programmatic transactions

Big data will also change the advertising landscape beyond recognition. Programmatic transactions refer to the use of software to make purchasing decisions instead of fallible humans. It is already being seen as the way forward by an increasing number of companies, despite some adverse media attention resulting from problems that really stem from some decision makers failing to properly understand how to apply this developing technology.

More personalized user experience

The more insights that websites gain about their visitors through big data, the more they will be able to tailor those sites accordingly. In the long term, that will mean a more user-driven experience for every individual who surfs the net. Those tailored ads that appear on your social media feed showing the product or service you have just been Googling are only the beginning. The Internet of the future will be able to anticipate what you are looking for before you even know it yourself.

]]>https://piotrgankiewicz.com/2017/04/26/web-design-and-big-data/feed/0When it all started 100 posts ago…https://piotrgankiewicz.com/2017/04/24/when-it-all-started-100-posts-ago/
https://piotrgankiewicz.com/2017/04/24/when-it-all-started-100-posts-ago/#respondMon, 24 Apr 2017 05:02:24 +0000http://piotrgankiewicz.com/?p=3688Continue reading →]]>It’s the 21st of April 2017, almost 10 PM on the clock and 14 hours left till I get to my hometown of Kraków from Warsaw, where I had participated in a Channel9 “Thursday with .NET” episode recording a day before. We had an almost 2-hour talk about .NET Core along with lots of examples – it was a very nice experience indeed. And today I met with a friend of mine to discuss the premium course that we will be recording within the next few weeks. I also had a few other interesting meetings, exchanged quite a few messages, maybe I’ll even get to be an MVP one day… but hold on a second, how did it even come to this? Let me tell you about my journey that began over a year ago and has been getting more and more fascinating as the days go by.

Initially, I had a plan to make this article quite a long one. Another wall of text, filled with achievements, emotions, descriptions of projects, activities and so on. Instead, I decided to write merely a few sentences and include pictures. So, here we go:

It all started over a year ago with this post. I lost my remote job and I had no idea what I was doing back then in terms of writing the content (honestly I still don’t in some cases).
My English ain’t perfect, yet I do believe that during the last 13 months you could find some improvements in my texts.

In May, I landed my dream job – became a co-founder of so-called teal organization Noordwind and found out how great the work of a freelancer, without having any bosses, could be.

Noordwind

After 3 months of extensive work, I managed to win “Daj Się Poznać” and leave my “basement” once and for all.

Daj Się Poznać 2016 – winning

Along with this, my love for open source projects was growing, and my first and most advanced project Warden gained some popularity due to Scott Hanselman’s tweet and a mention during the .NET Rocks podcast.

After that, I had a chance to start giving speeches at local .NET developer groups. Such a great opportunity to hone your skills in terms of public speaking.

Giving a talk about Warden in Warsaw WG.NET group.

I started going to IT conferences again – meeting new people and networking in general was exactly what I needed at this point (and still do).

Programistok – one of the best IT conferences I participated in this year.

During summer I had a chance to give some tips to my colleagues thinking about changing their jobs and becoming programmers. That was the first time I thought about becoming a sort of mentor or teacher and helping others willing to start their journey with programming.

In September, we (Noordwind) started developing our own platform for the citizens named Collectively, fully open sourced. It started with an idea about collecting and reporting the litter and the app has already grown quite big. The public testing phase will be available soon.

Collectively – platform for the citizens, fully open sourced.

In the meantime, with friends of mine, we also created a StrengthCraft event (and hopefully, we will find some time for the 2nd edition).

StrengthCraft – one of our side projects related to physical activities.

With the beginning of 2017, I decided to create a “Becoming a software developer” course as a series of videos available on YouTube and blog posts about starting with programming in C# and .NET Core and building an application by following the good patterns and practices. I was thinking about 16 episodes being published every week, but for sure it shall last much longer than that :).

This is when I really got into teaching. I received quite positive feedback and decided to take another step – free and open programming workshops. By now, I have already done 4 of them, yet there are many more to come.

C# and .NET Core basics workshops.

I was also a guest on the DevTalk podcast, and you can find my article (in Polish) summarizing “Daj Się Poznać” and what changed in my life afterwards on DevStyle.

A few weeks ago, Łukasz Pyrzyk asked me about running a .NET Core Tour – a series of presentations and advanced workshops related to this technology. It was perfect timing, as I had to refuse a job offer as a full-time programming teacher. Instead, I went with doing things pro publico bono which, as opposed to being paid good money, is simply more fun and delivers much more satisfaction.

.NET Core Tour.

On the 1st of April, I had a great pleasure to be a speaker during 4developers – one of the biggest IT conferences. Undoubtedly, that was a remarkable experience.

4developers GraphQL summary.

And just a few days ago, I was a guest at Channel9 “Thursday with .NET” live stream series. It will be available for the offline viewing soon.

Channel9 “Thursday with .NET” episode 3.

And this is pretty much when things started getting really crazy (and awesome). I received a few interesting offers, mostly related to workshops and premium courses (I have already accepted some of them). Moreover, I got an offer to run postgraduate studies – needless to say, you don’t think that I would turn down such an offer, do you? Eventually, the Warden open source project gained some attention again and I “recruited” a few people to help me deliver it.

On the personal level, I managed to hit almost all of the goals related to the personal bests in my strength training (you can watch them on my private YouTube channel).

Typical deadlift set.

To sum up, do you know what’s really great? That I was able to somehow influence both my friends and people I have never met to start doing great things on their own – like creating blogs, running their own open source projects or meetups like Crypto Cracow.

These are the most satisfying achievements – no amount of money could even come close to compensating for that. I wish for myself and for all of you that the upcoming months will be as productive as the previous ones.

]]>https://piotrgankiewicz.com/2017/04/24/when-it-all-started-100-posts-ago/feed/0Becoming a software developer – episode XIIIhttps://piotrgankiewicz.com/2017/04/20/becoming-a-software-developer-episode-xiii/
https://piotrgankiewicz.com/2017/04/20/becoming-a-software-developer-episode-xiii/#commentsThu, 20 Apr 2017 04:27:31 +0000http://piotrgankiewicz.com/?p=3673Continue reading →]]>Welcome to the thirteenth episode of my course “Becoming a software developer” in which we will make use of the Command Handler pattern in order to extend our business logic and clean up the controllers.

All of the materials including videos and sample projects can be downloaded from here.
The source code repository is being hosted on GitHub.

Scope

Command Handler

Command Dispatcher

Abstract

Command Handler

We can encapsulate our business logic even more by using the ICommand interface, which is a common way to start using the CQS (Command Query Separation) pattern in our application.
The interface per se is merely a “marker” and can be defined as follows:

//Marker interface.
public interface ICommand
{
}

Having ICommand in place, we can define as many commands as we want to:
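A command is then just a plain class implementing the marker interface – the one below is merely an illustrative assumption (any shape carrying the required data will do):

public class CreateUser : ICommand
{
    public string Email { get; set; }
    public string Username { get; set; }
    public string Password { get; set; }
}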

Here comes the question – why would you even bother to do that? Usually, the business logic consists of more than a single operation that has to be invoked in order to complete its flow.
Command handlers are a great way to achieve such a goal, as we’re able to inject as many services as we need. Otherwise, we would have to create separate interfaces dedicated to a single business logic unit, which would do exactly the same (or, in the worst-case scenario, write such code within our controllers, which should stay as transparent as possible).

Command Dispatcher

Now, how can we enforce our software to automatically wire up commands to the particular command handlers? And even more importantly, how can we instruct our controllers to resolve the proper command handler? At first, let’s define the ICommandDispatcher interface along with its implementation:
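A minimal sketch could look as follows (assuming a generic ICommandHandler contract and Autofac’s IComponentContext for resolving the handlers – the actual implementation lives in the repository):

public interface ICommandHandler<in TCommand> where TCommand : ICommand
{
    Task HandleAsync(TCommand command);
}

public interface ICommandDispatcher
{
    Task DispatchAsync<TCommand>(TCommand command) where TCommand : ICommand;
}

public class CommandDispatcher : ICommandDispatcher
{
    private readonly IComponentContext _context;

    public CommandDispatcher(IComponentContext context)
    {
        _context = context;
    }

    public async Task DispatchAsync<TCommand>(TCommand command) where TCommand : ICommand
    {
        if (command == null)
        {
            throw new ArgumentNullException(nameof(command));
        }
        //Resolve the handler registered for the given command type and invoke it.
        var handler = _context.Resolve<ICommandHandler<TCommand>>();
        await handler.HandleAsync(command);
    }
}

A controller then depends solely on ICommandDispatcher and simply dispatches the incoming command, staying as thin as possible.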

In the next episode, you’ll find out how to configure the application by passing the configuration classes that can be defined and mapped from the appsettings.json.
Moreover, you shall see my efforts in trying to find out why something didn’t work as expected in the first place ;).

]]>https://piotrgankiewicz.com/2017/04/20/becoming-a-software-developer-episode-xiii/feed/11ASP.NET Core 12 sampleshttps://piotrgankiewicz.com/2017/04/17/asp-net-core-12-samples/
https://piotrgankiewicz.com/2017/04/17/asp-net-core-12-samples/#commentsMon, 17 Apr 2017 06:13:18 +0000http://piotrgankiewicz.com/?p=3664Continue reading →]]>In today’s post, I’d like to present a dozen minimalistic samples that you can make use of within an ASP.NET Core application – starting from simple things like options, through middleware, databases and even Nginx or Docker. These samples are part of the upcoming “Thursday with .NET” event that I’ll be part of on Thursday, 20.04.2017.

Make sure you execute dotnet restore first and then dotnet run to start the particular application. If the example is using an external resource like a database, ensure that you have it installed and running before starting the app.

1. Options

Let’s warm up with the options provider. You can easily create a so-called XyzOptions class, bind it to the configuration defined in the appsettings.json file and use it in your application simply by injecting an IOptions<XyzOptions> instance.
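As a sketch (the class, section and service names are illustrative assumptions; IOptions lives in Microsoft.Extensions.Options):

public class SmtpOptions
{
    public string Host { get; set; }
    public int Port { get; set; }
}

//Inside Startup.ConfigureServices():
//services.Configure<SmtpOptions>(Configuration.GetSection("smtp"));

public class MailService
{
    private readonly SmtpOptions _options;

    public MailService(IOptions<SmtpOptions> options)
    {
        //The bound values are exposed via the Value property.
        _options = options.Value;
    }
}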

2. Middleware

You can extend the HTTP request pipeline by adding your own middleware into the overall flow. If you ever used a framework like NodeJS and wanted to have the same possibility of providing your own code in order to e.g. validate or process the incoming request, you can do it in ASP.NET Core as well.
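A minimal custom middleware could be sketched like this (the header name is an assumption):

public class RequestIdMiddleware
{
    private readonly RequestDelegate _next;

    public RequestIdMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        //Attach a unique identifier to every response, then continue the pipeline.
        context.Response.Headers["X-Request-Id"] = Guid.NewGuid().ToString("N");
        await _next(context);
    }
}

//Registered inside Configure(): app.UseMiddleware<RequestIdMiddleware>();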

3. Filters

In need of a custom exception handler? Need to log the incoming requests or validate them? These and much more can be achieved by using filters – simply create a new attribute and apply it on top of your MVC controllers.

4. Autofac

Dependency Injection and an IoC container are built into the framework; however, you can still use your favorite libraries like Autofac if you feel you need something more powerful in terms of the dependency inversion principle.

5. Tests

We are all aware of how to write good unit tests, correct? But what about the integration (end-to-end) tests? For sure, you can expose a running instance of your API and perform the HTTP requests e.g. via an HttpClient instance. Yet, there’s a better way – you can run such tests in memory thanks to the TestHost library.

6. SQL Server

Did you know that you can run SQL Server on Linux? Anyway, you can connect to a SQL Server instance via .NET Core e.g. by using the Entity Framework Core library; however, I prefer a more lightweight solution, thus the provided example makes use of Dapper.

7. MongoDB

Do you like NoSQL databases like I do? You can use the MongoDB Driver and connect to the MongoDB databases from the .NET Core applications.

8. Redis

Powerful caching can surely be done with Redis. And you can connect to the Redis server thanks to the developers from StackExchange, who created such a great library.

9. RabbitMQ

Using a service bus is one of the most common approaches when it comes to creating a distributed system. RabbitMQ is one of those, and you can make use of it either by adding the official library or RawRabbit, which I like due to its abstractions and ease of use.

10. Nancy

Nancy is an amazing framework for building HTTP APIs. I like it very much and I was very happy when I found out that it’s compatible with the ASP.NET Core framework.

11. Docker

Containerized applications and containers, in general, are an industry standard nowadays. You can easily make use of Docker and build your ASP.NET Core applications on top of it.

12. Nginx

Thanks to the Kestrel HTTP Server, you’re no longer forced to use IIS. For example, you can choose Apache or Nginx and host your .NET Core applications on Linux servers as well.

As mentioned above, all of the examples can be found and downloaded from the GitHub.

]]>https://piotrgankiewicz.com/2017/04/17/asp-net-core-12-samples/feed/10Becoming a software developer – episode XIIhttps://piotrgankiewicz.com/2017/04/13/becoming-a-software-developer-episode-xii/
https://piotrgankiewicz.com/2017/04/13/becoming-a-software-developer-episode-xii/#commentsThu, 13 Apr 2017 05:27:28 +0000http://piotrgankiewicz.com/?p=3658Continue reading →]]>Welcome to the twelfth episode of my course “Becoming a software developer” in which we will write tests, both unit and integration (end-to-end) for our application.

All of the materials including videos and sample projects can be downloaded from here.
The source code repository is being hosted on GitHub.

Scope

Unit tests

Integration tests

Abstract

Unit tests

You can read more about testing here (in episode VII of this course), so I will not go into the details of this practice. Instead, I want to tell you what is needed in order to start writing tests for the Passenger app. At first, I decided to use xUnit instead of NUnit, mostly due to the fact that I had some issues related to running NUnit tests after the latest update of the .NET Core framework to version 1.1.
For starters, include the following dependencies within the Passenger.Tests.csproj file:
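A typical set of packages would be close to this sketch (the versions below are assumptions, matching the 1.x era of the framework):

<!-- Sketch of the xUnit test dependencies; versions are assumptions. -->
<ItemGroup>
  <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.0.0" />
  <PackageReference Include="xunit" Version="2.2.0" />
  <PackageReference Include="xunit.runner.visualstudio" Version="2.2.0" />
</ItemGroup>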

Do not forget about adding the required “using” statements for the missing namespaces. Eventually, run the dotnet test command and that’s it – your first unit test shall pass successfully!

Integration tests

Unit tests are easy, so what about creating sophisticated integration tests that will execute real HTTP calls against our API? It can be done in two ways – the first is to run the API using dotnet run and write tests using e.g. HttpClient in order to send requests and validate them with particular assertions.

However, there’s also another way, much cooler than that. Thanks to the ASP.NET Core framework, you can run the whole API in memory and perform the integration tests this way. You can find more details here, but this is how it could look.
At first, include the following dependencies within the Passenger.Tests.EndToEnd.csproj file:
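Again, as a sketch (versions are assumptions) – the end-to-end project needs the in-memory TestHost on top of the xUnit packages:

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.TestHost" Version="1.1.1" />
  <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.0.0" />
  <PackageReference Include="xunit" Version="2.2.0" />
  <PackageReference Include="xunit.runner.visualstudio" Version="2.2.0" />
</ItemGroup>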

As before, make sure you do not forget about adding the required “using” statements for the missing namespaces. Finally, run the dotnet test command and that’s all!
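For instance, such an in-memory test might look more or less like this (the route and types are illustrative assumptions):

public class UsersTests
{
    private readonly TestServer _server;
    private readonly HttpClient _client;

    public UsersTests()
    {
        //Host the whole API in memory using the actual Startup class.
        _server = new TestServer(new WebHostBuilder().UseStartup<Startup>());
        _client = _server.CreateClient();
    }

    [Fact]
    public async Task fetching_users_should_return_http_status_code_ok()
    {
        var response = await _client.GetAsync("users");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}

The whole API runs inside the test process, so there’s no need to deploy it anywhere first.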

Next

In the next episode, we’ll implement the Command Handler pattern and make use of the external Autofac library (which is a powerful IoC container) in order to achieve such a goal.

]]>https://piotrgankiewicz.com/2017/04/13/becoming-a-software-developer-episode-xii/feed/84developers 2017 summaryhttps://piotrgankiewicz.com/2017/04/10/4developers-2017-summary/
https://piotrgankiewicz.com/2017/04/10/4developers-2017-summary/#commentsMon, 10 Apr 2017 05:11:28 +0000http://piotrgankiewicz.com/?p=3613Continue reading →]]>One week ago (03.04.2017) I participated in a really cool IT conference – 4developers. What was so special about it, besides the fact that there were over 1500 people and a lot of different paths related not only to programming languages and technologies? For the first time in my life, I had a chance to give a talk at such a big event, thus here’s my quick summary.

Actually, I arrived in Warsaw 2 days earlier, as I ran workshops about C# and .NET Core (over 7 hours, from the very basics to more advanced stuff like LINQ, reflection or TPL). Thanks to devWarsztaty for hosting me and giving me a chance to teach other programmers!

.NET Core workshops @devWarsztaty hosted by mBank.

Secret WiFi password ;).

But let’s get back to the main topic. On Sunday there was a special “before party” in which I happily participated. The weather was just perfect throughout my whole stay (3 days, or even 4 including Tuesday when I got back to my hometown Kraków).

Sunny old town.

Night riders – I already felt like home with so many bikes!

There I met some of the people that I already knew and also some new faces who e.g. turned out to be among the best speakers within our IT community. Such parties are a really great way of networking, which is a very important aspect of the life of a software developer who’s also interested in public speaking and other related activities.

Booze, booze everywhere.

Getting card during before party in order to avoid morning queue ;).

I had a plan to get back to my apartment before midnight, but then Michał Śliwoń showed up (a co-organizer of one of the best conferences, DevDay, and starting this year also DevConf) and our small group (of a few people) went to some other bar, in which I spent another 2 hours.

4developers before party evolved.

I had an amazing sleep that lasted a little over 4 hours – happily, my talk was at 15:00, so I was able to get my strength back. In the morning I went to the railway station to pick up Patryk Huzarski (with whom we’re running an online programming course) and we went to the conference at the Sangate Hotel.

Cookies, cookies everywhere.

Chill area.

I won’t be getting into the details of the talks, as there were so many of them and I even skipped some presentations this time, just in order to talk more with some friends of mine whom I’m able to meet only during such events. I’ll give you a short description of my presentation, though.

Whenever I give a public talk (I’ve already had a few of them, but at smaller meetings like .NET groups with about 50+ people) I feel a little bit stressed before I start presenting. Yet, once I start talking it’s all gone and a sort of natural flow takes control. I did some serious preparation in order to give my best in this talk. The time cap was 45 minutes (and I was able to finish in about 35 minutes, which was quite surprising), I prepared a useful example (which you can download here) for the live demo part and worked on the pace of my talk itself. I used to have a habit of speaking too quickly, but this time I managed to do it just right, which made me feel good. The presentation itself (slides) can be downloaded here.

Problems with the RESTful API.

Dat face like – do you have a problem, bro? ;)

Quite a lot of people – I was both surprised and very happy about this!

I’m very happy that the room was full (maybe about 100 people) and nobody seemed to be bored. My talk was about GraphQL and .NET Core, but quite the opposite of some other talks that are all about the hype around some particular technology or solution, I didn’t say a word about it being a silver bullet. I did point out when it makes sense and what is difficult about implementing it (and there are quite a few challenging topics), therefore I think that the subject itself was quite well balanced.

To sum up – I was able to meet so many people that I don’t even want to include all of the names here, as I’ve probably already forgotten some of them (apologies).
Anyway, it was a great event, which assured me even more that IT conferences and meetups are the best way to meet new folks and get more recognizable, if that’s also what you’re looking for. See you all next time!

All of the materials including videos and sample projects can be downloaded from here.
The source code repository is being hosted on GitHub.

Scope

AutoMapper

HTTP POST

Async

Abstract

AutoMapper

As programmers, we want to simplify and automate as many things as possible. One of such activities might be transforming an object of type A into type B, for example a User into a UserDto. For sure, we can do it in a naive way, like this:
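(The snippet below is a sketch – the User and UserDto shapes are illustrative assumptions.)

var userDto = new UserDto
{
    Id = user.Id,
    Email = user.Email,
    Username = user.Username
};

//With AutoMapper, the mapping is configured once and reused everywhere:
var config = new MapperConfiguration(cfg => cfg.CreateMap<User, UserDto>());
var mapper = config.CreateMapper();
var dto = mapper.Map<UserDto>(user);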

HTTP POST

In order to create a new resource (for example a user), we should create a new endpoint that supports the HTTP POST operation. As the documentation states, POST is responsible for doing exactly this (unlike GET for fetching the data or PUT for updating it). It’s very trivial to add a new HTTP POST endpoint within an ASP.NET Core controller – just mark it with the HttpPost attribute, provide a route path if needed and make sure to include the FromBody attribute within a method parameter in order to bind the incoming request to the defined type.
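A sketch of such an endpoint (the route, command and service names are illustrative assumptions):

[HttpPost("users")]
public async Task<IActionResult> Post([FromBody] CreateUser request)
{
    //Delegate the actual work to the application service.
    await _userService.RegisterAsync(request.Email, request.Username, request.Password);

    return Created($"users/{request.Username}", null);
}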

Async

When it comes to I/O (Input/Output) operations, we should strive to make them asynchronous whenever possible. There’s no point in keeping the valuable server resources busy only to wait until some external database or web service call finishes. This is why asynchronicity was introduced in the first place. Perform a request, get a Task object in return and await it when you need the result of such an operation. We will apply this pattern to our repositories, services and controllers.
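As a sketch, an asynchronous repository contract could be as simple as this (the interface shape is an assumption):

public interface IUserRepository
{
    Task<User> GetAsync(Guid id);
    Task AddAsync(User user);
}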

Next

In the next episode, we’ll talk about HTTP Status Codes and Headers, and write the first unit and integration (end-to-end) tests.

]]>https://piotrgankiewicz.com/2017/04/06/becoming-a-software-developer-episode-xi/feed/29Open source contributors wantedhttps://piotrgankiewicz.com/2017/04/03/open-source-contributors-wanted/
https://piotrgankiewicz.com/2017/04/03/open-source-contributors-wanted/#commentsMon, 03 Apr 2017 06:00:05 +0000http://piotrgankiewicz.com/?p=3599Continue reading →]]>Hey everyone, I haven’t been asking anyone for help with developing software for quite some time now, but well, time is the crucial part here. I wish the day lasted much longer than it currently does, yet, since I can’t do much about it, I want to ask you for help with contributing to the open source projects that I’ve been working on. It could be anything, like feedback or an actual contribution (e.g. via a Pull Request), and maybe you will find some of the projects interesting, as there are a few of them waiting to be developed further.

Warden

Warden is by far my biggest and most advanced open source project, one that I’ve been working on for over a year now, and there are quite a few people around the world using it (at least based on the messages I’ve received from them). At its core, it was built as a lightweight library containing a set of extensions (that can be easily added to the overall configuration and pipeline simply by implementing one trivial interface) in order to help with monitoring resources like websites, APIs, databases, files etc.
You can find the repository and the documentation for all of its features, as well as examples, on GitHub.

That’s the base part of the Warden. On top of that, I created a web application (a year ago) to find out whether the Warden library – which could be used and run within a console app or a service – would be able to somehow push its results and visualize the monitored data and resources via a web dashboard. It turned out to be possible and actually quite easily achievable, therefore I realized it’s worth trying to create a whole new stack, built as a set of microservices with a RESTful API on top, in order to seamlessly integrate with the core Warden monitoring application. I have already created and deployed to the cloud the very first version of the services, and with a friend of mine we’re working on the new web interface, so it will look really cool and neat. You can find more UI designs here (some of them are already implemented).

There are a few things that I’d like to do with the Warden, so feel free to provide your feedback or join our small team and help us develop this product (we have already talked to some people and there’s a chance to turn it into a commercial product using the SaaS model).

Core – migrate the Warden library to the latest version of .NET Core framework (.csproj thingy) and separate all of the extensions into their own repositories in order to make the overall solution more modular.

API – develop further the current set of services, there’s still a lot of stuff to figure out in terms of storing the data, caching, sending notifications etc.

Extensions – any new extension (external package) that would be valuable to you or the others, as well as adding new features to the core library.

Lockbox

Lockbox was created to provide secure storage for any type of credentials (in my case it was the configuration of the application, the so-called appsettings.json) and to easily integrate it with any software (or actual device) that is able to perform an HTTP request. It’s a very simple idea – there’s a NoSQL database (MongoDB, but it could be anything) that contains basic user accounts, boxes with access privileges and, within the boxes, the actual entries that contain the encrypted data. The encryption key is passed via an HTTP header, thus even if the database was compromised, there would be no way to decrypt the values. I did it because there was no solution back then like e.g. Azure Key Vault, and I really needed something simple enough (yet still secure) to quickly and securely load the configurations into my applications on the production environment. Here you will find the repository, a basic wiki with some examples and the actual HTTP API documentation.

For further development, there’s really a need to create a web interface through which you could manage all of your encrypted data and so on. It would also be nice to create a CLI (like redis-cli or so) and, of course, add more tests, since I wanted to get it up and running quickly and had no time to write them back then.

Medium

Medium is another project that I created as I couldn’t find a service that would help me solve my problem. Think about the following scenario: you have a build server, and once the build is completed, you might want to execute some special webhook that should be e.g. validated and, based on the data being passed within this webhook, you may want to perform another HTTP request etc., so the flow can go on and get really complicated. And this is why I created Medium – there was no service that would let me expose an API, define the validation rules for the incoming HTTP request (e.g. from the build server) and then perform another HTTP request to some other service (e.g. Docker Hub or MyGet) based on the received input.

Same as with Lockbox, it would be really great to have a web interface in order to define the flow – for starters, it could contain a textarea that would be used to save or load the configuration for the webhooks. There’s also a need to implement a repository using a database (for now everything is kept in memory), as well as to add tests, of course.

Final words

To sum up what was written here – if you would like to start with open source in general and don’t have your own project (yet, or don’t want to have one at least for now), but still would like to contribute in some way, feel free to leave a comment or message me directly via email: piotr.gankiewicz[at]gmail.com. Any help will be greatly appreciated!

All of the materials including videos and sample projects can be downloaded from here.
The source code repository is being hosted on GitHub.

Scope

Domain

Services

Controllers

Abstract

Domain

As I already mentioned, we will incorporate the Domain Driven Design (DDD) approach (at least in its light version) into our solution. Our domain models are the root of the Core project. They will have to be rich classes (which we will be continuously refactoring throughout the whole course), containing both properties and methods that ensure that the internal state of such an object is valid. Otherwise, our domain model might throw an exception in order to let the user know that something went wrong. We can split our domain models into the following types:

Value Object – has no unique identifier, is immutable, and can represent an address, geolocation etc. Two Value Objects are usually equal if all of their properties have the same values.

Entity – has a unique identifier, which means that two entities are equal only if they possess the same id. This is a rich model that may contain additional methods in order to manipulate its state.

Aggregate – it’s an entity that may contain other entities and is a root model that we will have access to via the repository. It can be constructed by using one or more entities. Think about the Trip, which can be an entity and have its unique identifier, but without a Driver (aggregate) the Trip on its own doesn’t make much sense – this is what the aggregates are for: to set the boundaries and ensure the valid state of the other entities that are a part of it (see the sketch below).
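A minimal sketch of these three building blocks (the class shapes below are illustrative assumptions, not the actual Passenger domain model):

public class Address //Value Object – no identity, compared by its values.
{
    public string City { get; }
    public string Street { get; }

    public Address(string city, string street)
    {
        City = city;
        Street = street;
    }
}

public class Trip //Entity – identity defined by its unique id.
{
    public Guid Id { get; }
    public Address Destination { get; protected set; }

    public Trip(Guid id, Address destination)
    {
        if (destination == null)
        {
            throw new ArgumentException("Trip destination can not be empty.");
        }
        Id = id;
        Destination = destination;
    }
}

public class Driver //Aggregate root – owns its entities and guards their state.
{
    private readonly List<Trip> _trips = new List<Trip>();

    public Guid Id { get; }
    public IEnumerable<Trip> Trips => _trips;

    public Driver(Guid id)
    {
        Id = id;
    }

    public void AddTrip(Trip trip)
    {
        if (trip == null)
        {
            throw new ArgumentException("Trip can not be empty.");
        }
        _trips.Add(trip);
    }
}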

Our Core project will also contain the repositories, but only as interfaces. The actual implementation of such a repository interface is the concern of the Infrastructure layer, as it could be either the memory, a database or even some external storage being used to save our data.

Later on, we may also include e.g. events in order to build a more sophisticated application.

Services

Our infrastructure will have to deal with a lot of different tasks, related either to the implementation of the domain interfaces, handling the database connections and so on.
One of such requirements will be to provide the so-called application services. They will be defined as interfaces and implemented using the domain repositories and models in order to manipulate them and perform the actual business logic (e.g. registering a user, creating a new trip etc.). The important part here is that we will never expose the domain itself to the higher layer, which would be the API in our solution (or the UI). We want to assure the consumer of the application services that the models he will receive are safe to use, meaning that they will be just a set of public properties with no robust methods whatsoever. By doing so, we can be certain that we will never modify our domain model (even unwillingly) via our API without ensuring its proper validation and the overall flow. In order to achieve such a goal, we will be returning DTO (Data Transfer Objects) from our infrastructural services to the other layers (being the API in that scenario).

Controllers

This is where we can actually deal with our application from the end user’s point of view. For now, we can only do very trivial things such as fetching the User account, but later on we will be able to do much, much more. Our controllers define the set of distinct operations (GET, PUT, POST, DELETE) and each one of them will have its unique endpoint called the URI (Uniform Resource Identifier), e.g. /users or /drivers/{id}/vehicle, that will allow to either get or modify the underlying data.

In the next episode, we’ll make use of some external libraries to help us with handling the DTO mapping, take a look at the HTTP POST method and refactor our services to be fully asynchronous.

]]>https://piotrgankiewicz.com/2017/03/30/becoming-a-software-developer-episode-x/feed/12Why you should care about DevOpshttps://piotrgankiewicz.com/2017/03/27/why-you-should-care-about-devops/
https://piotrgankiewicz.com/2017/03/27/why-you-should-care-about-devops/#commentsMon, 27 Mar 2017 05:38:47 +0000http://piotrgankiewicz.com/?p=3576Continue reading →]]>Many programmers tend to believe that sticking to the particular technology of their choice, while staying reluctant towards the other pieces of the rather complex process of delivering a completed application, is not their concern. DevOps, infrastructural concerns, cloud computing and so on – we’ve got other teams able to do that, correct? Well, even if you do, you’re missing a huge piece of the knowledge that could save your day at some point in the future. Let me briefly present my point of view on the given subject.

To begin with, I have a question for you. Imagine that you’re working on your own project (it could be after hours, as some of us do) and sooner or later, you would like to deploy it somewhere. And I’m talking about things like an API with a front-end part, as providing SaaS solutions is probably the most common approach these days. At this point, you can choose only between 2 activities related to the deployment – the manual or the automated one.

Manual deployment is usually quick at the beginning. As long as you know the basics of some HTTP server like Nginx, Apache or IIS to host your application, all you need to do is basically open a connection to the virtual machine, upload the application via FTP/SSH, maybe set up some additional firewall rules and databases, and you’re good to go.
The actual process of deploying your so-called artifacts can vary greatly depending on the size of your application – whether you’re using multiple virtual machines to e.g. distribute it via a load balancer, and maybe you’re building a (micro)services solution? Doing it manually for the first time is probably ok, but doing the same things over and over (updating the source code spread out across multiple servers etc.) tends to become really cumbersome.

Thus, just like in the actual software – when you see repetitive code in 2 or more places, you don’t want to copy it over and over again and then update all of the N places when there are new changes on their way. What you should do instead is refactor the code, so there’s only a single place containing the common classes or methods.
A quite similar approach applies to the process of delivering your software, which can be understood as the continuous integration and deployment (CI & CD) process.

Before we dive into the details, let’s get back to our question. All of us are aware that time is the most precious value. In that case, you can also think of time as being equivalent to money. So, would you enjoy spending your time doing the same repetitive things over and over again? Uploading the new version of your application, testing it manually and so on? Trust me, it would drive you mad after some time, and you can only imagine how much time would be wasted due to this.

For sure, there’s also another way – let’s hire some people to do this for us. And yes, it could be a viable option, yet in that scenario, instead of wasting time, you would spend a lot of money to pay the engineers to do that for you (remember, I’m still talking from the perspective of a guy who runs his own project after hours). Now, what if you order other people to do that task for you, but in the future some things change and there’s a need to adjust the whole build and deployment process? You see where I’m going with this – a never-ending story, where you’re merely an observer who has no clue what this is all about.

As you may have already guessed, the solution would be to learn these things on our own. Although you may not like this idea, I think it’s the only way if you truly want to have everything under your control. You may feel overwhelmed by the number of different topics required to possess the general knowledge about delivering the software from A to Z. For sure I did, but trust me, it’s not as difficult as it may seem at first glance. I’d start with the basic strategy and later on extend it, depending on your needs:

Set up a hosting environment.

Set up a build server.

This already gives you a lot of flexibility and stands as the core of CI & CD. At first, you need to have a place (e.g. a virtual machine) that will be able to host your application. Then, you need to set up a build server that will build your code, test it and then push it to the given server. And everything will happen automatically, e.g. after a new commit to the source code repository. By spending maybe half a day to find out how the build server works, you will already save a lot of time that would otherwise be unnecessarily spent on the manual deployment. And that’s just the beginning. Later on, you can think of creating a separate test environment first, which will run the integration tests, and only after they succeed will the build server receive a message that it’s safe to push the new release to the production environment.

Going further, you can pack your application within containers (e.g. using Docker), set up some notifications to Slack, build your own packages using MyGet and do other extraordinary, fully automated activities. At some point, it’s almost impossible to keep track of the manual deployment process, which is why I believe that the sooner you establish the automated one, the better.

I hope that this short article was able, maybe not to convince you right away to go and study the automation of the deployment process, but at least to make you think about it for more than a minute. There are so many services and tools able to do all these things that it’s really easy to get started, at least with the build server (for example, take a look at my post about Travis CI).
I do believe that we, being the programmers and, even more importantly, software engineers, should know as much as possible not only about creating but also about delivering the software.

All of the materials including videos and sample projects can be downloaded from here.
The source code repository is being hosted on GitHub.

Scope

Overview

Architecture

Abstract

Overview

We’ll be creating a web application which acts as an HTTP RESTful API that can be understood as a gateway to our whole system. We won’t focus on the front-end part at all; instead, we’ll build a web service API that can be consumed literally by any end user able to perform HTTP requests (a web application, mobile or desktop application and so on).
The application will be based on the latest version of the ASP.NET Core framework, which is a great choice either for creating an HTTP API (like we will) or for web applications that return HTML views (by using the Razor engine). During the course, we will explore the features and possibilities that are baked into this powerful framework.

The application is named Passenger, and it will allow e.g. registering a new driver who goes from point A to B (for example to his workplace). There will also be passengers, who can ask a particular driver to pick them up at a specific location, and if he accepts their requests, all of them will share the same vehicle in order to reduce the costs related to travelling.

The source code will be available in our GitHub repository, thus you need to have a very basic knowledge of using Git in order to get its latest version. Alternatively, you can click on the “Download ZIP” button in order to get the latest changes without using Git at all.

We will use the built-in issues and an extension named ZenHub in order to have a board that will help organize the list of tasks, and we will also use the so-called smart commits containing messages that reference particular tasks by their identifiers (e.g. “Fixed user service #15”).

Architecture

We will make use of the Onion Architecture, which may look difficult at first sight, yet it’s actually simple and still sophisticated enough in terms of layering, maintainability, extensibility, loose coupling and high cohesion. We will apply some of the core concepts of Domain Driven Design (DDD) and other good patterns and practices that can be successfully used whether you’re building small or large-scale applications.

In the next episode, we’ll create our first domain models, repository, application service and a simple controller returning the user account.

]]>https://piotrgankiewicz.com/2017/03/23/becoming-a-software-developer-episode-ix/feed/15.NET Core Tourhttps://piotrgankiewicz.com/2017/03/20/net-core-tour/
https://piotrgankiewicz.com/2017/03/20/net-core-tour/#commentsMon, 20 Mar 2017 06:06:58 +0000http://piotrgankiewicz.com/?p=3524Continue reading →]]>The great technology event is about to get started. Are you ready for the series of presentations and later on advanced workshops related to the latest Microsoft technology being .NET Core?

Yes, you heard me right – in the upcoming weeks (or actually days, to be more precise about the start date), Łukasz Pyrzyk (who is the author of this idea) and I will visit some of the major cities and give open talks and even workshops (in May and June) about .NET Core. What you can expect from us is that we will dive into .NET Core itself – Łukasz will focus on some of the newest functionalities and experimental types, while I’ll have a demo of how to get started with building .NET Core applications. Speaking of the workshops, we want to do something more advanced, probably related to building ASP.NET Core apps using microservices, and maybe even include the topic of DevOps (Docker, containers, deployment and so on). Eventually, if everything goes as expected, there might be a so-called grande finale event, but it’s yet to be discussed.

.NET Core Tour.

The dates are as follows (some of them still to be discussed):

Wrocław – 21.03.2017

Kraków – 29.03.2017

Warszawa – 03.04.2017

Wrocław – 06.04.2017

Białystok – 19.04.2017

Warszawa – 20.04.2017 (Channel9 Microsoft Poland)

Katowice – 17.05.2017

Wrocław – 24.05.2017

Łódź – 31.05.2017

Wrocław – 01.06.2017

Kraków – 03.06.2017 (Workshops)

Toruń – 06.06.2017

Lublin – 07.06.2017

Opole – 08.06.2017

Poznań – 21.06.2017

Łódź – 24.06.2017

Wrocław – 01.07.2017 (Workshops)

Warszawa – 29.07.2017 (Workshops)

Gdańsk – TBD (Workshops)

Katowice – TBD (Workshops)

Grande finale – TBD

Of course, everything will be totally free and open and the knowledge is not the only cool thing that you can expect. We did manage to get some really cool partners, therefore you can expect really nice gifts such as:

I’ll join Łukasz in Łódź and probably in Poznań and Trójmiasto, plus of course during the workshops. The rest of the events he will run solo. However, I’ll be giving my own talk the same day in Warsaw during the 4Developers conference.

Make sure you won’t miss these events and hopefully see you all soon! Follow us on Twitter in order to stay updated with the latest news @spetzu, @lukaszpyrzyk.

And once again, we want to say a big thank you to all of our supporters and partners.

]]>https://piotrgankiewicz.com/2017/03/20/net-core-tour/feed/12Becoming a software developer – episode VIIIhttps://piotrgankiewicz.com/2017/03/16/becoming-a-software-developer-episode-viii/
https://piotrgankiewicz.com/2017/03/16/becoming-a-software-developer-episode-viii/#commentsThu, 16 Mar 2017 06:53:45 +0000http://piotrgankiewicz.com/?p=3493Continue reading →]]>Welcome to the eighth episode of my course “Becoming a software developer”, which does focus on the good patterns and practices used on a daily basis in the world of the software development.

All of the materials including videos and sample projects can be downloaded from here.

Scope

Good patterns and practices

Design patterns

Abstract

Good patterns and practices

There are a lot of good patterns in the world of software development, and I’d like you to be especially aware of the following ones:

KISS (Keep It Simple Stupid) – whatever you do, especially when you write code, tend to create small and granular methods, do not name them or variables using meaningless names or single characters, focus only on what’s important and do not introduce complexity if it’s completely unnecessary.

DRY (Don’t Repeat Yourself) – if there’s duplicated code (or whenever you see the same code in N different places), it means that it should be refactored and put into a single class or method. By doing so, you will have just a single place that needs to be maintained (in terms of further development or potential issues).

YAGNI (You Aren’t Gonna Need It) – we, being programmers, quite often tend to think about the features that might be needed in the future. Not only this, but we tend to write code that is currently not needed (and probably won’t be needed for a long time, or maybe even at all). Remember to focus on delivering the features that are part of the current scope, because the scope of the application tends to change rather often and you don’t want to spend time on something that will have to be completely removed later on.

SOLID – this one is big, as it stands for 5 very important principles, which have been greatly covered by many articles and tutorials, so just take a look at them on your own.

Design patterns

There are tons of patterns and honestly, I remember just a small part of them that I’m using on a daily basis. Anyway, whenever I need to look for some sophisticated solution and can’t figure it out on my own, I look for a specialized pattern that will help me solve my problem. I do encourage you to get familiar with the design patterns in general, yet do not be afraid if you can’t memorize them all at once – it doesn’t really matter. Once you get enough experience as a software developer, you will quickly realize that writing proper and well-designed code is something that comes naturally, and some of these patterns will be applied to your source code even without thinking about them.

However, there are a few patterns that are my favorites, so take a look at the following (with a small sketch below the list):

Dependency Injection – part of Inversion of Control using an IoC container, where you can declare specific interfaces and the classes that implement them, which shall then be injected by the library into the other parts of your code.

Strategy Pattern – a way to distinguish between different implementations of a particular interface (e.g. SQLDatabase, InMemoryDatabase etc.); works great with DI.
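A minimal sketch of the Strategy Pattern combined with DI (the type names are illustrative assumptions):

public interface IDatabase
{
    void Save(string data);
}

public class SqlDatabase : IDatabase
{
    public void Save(string data)
    {
        //Persist the data in the SQL database.
    }
}

public class InMemoryDatabase : IDatabase
{
    private readonly List<string> _storage = new List<string>();

    public void Save(string data) => _storage.Add(data);
}

//The consumer depends solely on the interface, while the IoC container decides
//which strategy gets injected, e.g. builder.RegisterType<InMemoryDatabase>().As<IDatabase>();
public class DataService
{
    private readonly IDatabase _database;

    public DataService(IDatabase database)
    {
        _database = database;
    }

    public void Process(string data) => _database.Save(data);
}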

In the next episode, we’ll finally start working on the application. We’ll also discuss the way that the work will be done in terms of managing tasks and so on.

]]>https://piotrgankiewicz.com/2017/03/16/becoming-a-software-developer-episode-viii/feed/9.NET Core continuous deployment part I – Travis CI integrationhttps://piotrgankiewicz.com/2017/03/13/net-core-continuous-deployment-part-i-travis-ci-integration/
https://piotrgankiewicz.com/2017/03/13/net-core-continuous-deployment-part-i-travis-ci-integration/#commentsMon, 13 Mar 2017 06:00:13 +0000http://piotrgankiewicz.com/?p=3479Continue reading →]]>Recently I’ve been doing a lot of DevOps in order to automate the continuous integration and deployment (CI & CD) of the microservices as much as possible. In this article, I’d like to share with you some of my experiences on how to get started with creating your own deployment process – this is going to be the first part of a series of articles related to it.

I’ll be using one of my open source projects Warden as an example, however, I follow pretty much the same practices in the other projects I’m working on such as e.g. Collectively which is built using similar patterns and practices related to the microservices architecture. By the way, I’d be grateful if you could take a look at the first concepts of the new Warden Web UI shown here and share your thoughts whether you like it or not (but that’s off the topic).

The first thing that you need to do is sign in to the Travis CI build server, which can be done using your GitHub account. Once you set the required permissions, you shall see the list of your repositories and organizations (if you’re a member of any). Here, you can easily toggle which repositories should take part in the build process.

Sign in to the Travis CI and select repositories.

Alright then, the next step is to setup the build configuration file, thus let’s take a look at the basic structure of the project directory containing a .NET Core application.

.NET Core service project structure.

As you can see, I have multiple services and all of them follow the same directory structure.
I’d have the actual service, e.g. Warden.Api, and another 2 test projects. However, you can have many more projects containing additional layers or so. Just keep them in the same root directory if possible. Let’s go one step further and take a look at the actual .travis.yml file which holds the configuration being used by the Travis CI build server.
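Reconstructed as a sketch based on the description below (the script names and paths are assumptions, not the original file):

language: csharp
sudo: required
dotnet: 1.0.0-preview2-1-003177
branches:
  only:
    - master
    - develop
before_script:
  - chmod +x ./scripts/dotnet-build.sh
  - chmod +x ./scripts/dotnet-test.sh
  - chmod +x ./scripts/deploy.sh
script:
  - ./scripts/dotnet-build.sh
  - ./scripts/dotnet-test.sh
after_success:
  - ./scripts/deploy.sh
notifications:
  email:
    on_success: never
    on_failure: always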

There might seem to be a lot going on, but it’s actually quite simple, so let’s discuss this particular configuration. We want to build a csharp project, and we need to have the sudo privileges e.g. in order to correctly restore the packages via the dotnet restore command. Next, we want to use one of the latest versions of the dotnet framework, being 1.0.0-preview2-1-003177 in that case (which does support project.json, RIP [*]); however, I’m pretty sure that by now there’s already a new version available that works with these ugly .csproj files.

Next, we care only about the master and develop branches – if you leave this option empty, Travis CI will run a build for any branch in your repository (including the feature ones that might be temporarily broken or so). And now we’re getting into some tricky parts – before_script means that we want to set the proper file access in order to execute the bash scripts included in the repository. In the script part we’re saying which scripts should be executed during the build phase (the order does matter), and later on, we want to run some additional script(s) once the build was successful.

Eventually, we’re stating that we do not want to receive spam after each successful build, so let’s send an email only on_failure. And this is what you will get to see once everything is set up correctly (like I do here and here).

Successful Travis CI build.

Moreover, you can add a graphical build status badge to e.g. your repository, simply by fetching the image file from the following URL: https://api.travis-ci.org/{repository}/{project}.svg?branch={branch}.
For the Warden.Api repository the markdown code looks like this:
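Something along these lines (a sketch – the organization name below is an assumption):

[![Build status](https://api.travis-ci.org/warden-stack/Warden.Api.svg?branch=master)](https://travis-ci.org/warden-stack/Warden.Api)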

And that’s it for now. I really enjoy using Travis CI, as it works fast, provides a lot of configuration options and is very easy to get up and running quickly. In future posts, I’ll talk about integrating the build system with the Docker Hub registry, the MyGet packaging service and other useful tools.

]]>https://piotrgankiewicz.com/2017/03/13/net-core-continuous-deployment-part-i-travis-ci-integration/feed/12Internet of Thingshttps://piotrgankiewicz.com/2017/03/10/internet-of-things/
https://piotrgankiewicz.com/2017/03/10/internet-of-things/#commentsFri, 10 Mar 2017 05:33:55 +0000http://piotrgankiewicz.com/?p=3473Continue reading →]]>I received another article from Jenny Holt related to the hot topic being Internet of Things. Hopefully you will find it helpful in understanding what IoT is all about.

Software Development and the Internet of Things

The Internet of Things is bringing a new set of challenges and opportunities to software design and development. The interconnectedness that IoT provides is spawning new tools and techniques for developers as this burgeoning opportunity becomes a reality.

Previously, software only needed to be developed to work on specific hardware. Now, smart ‘things’ have to collect and transmit data to an intelligent system of software that manages devices and networks, and stores, organizes, and processes data. This data is subsequently presented to the end user in a way that makes it useful.

Multi Step Approach to IoT

The programming of a smart device has three parts. The first is the device itself, the ‘thing.’ This could be a refrigerator, a self-driving vehicle, a building, a LED light system, or any of hundreds of other items that combine using IoT. These things usually have a low-power processor and a wireless connection, but no screen. The second part is the software that sends and receives data from the device and runs in the cloud. Finally, the analytics takes the data and processes it to make it useful. This web app, mobile app, or enterprise application is what the end user sees.

Each of these three parts requires specialist programming. The coding of the device itself is usually handled by the device manufacturer. This type of programming presents its own challenges since the hardware needs to be designed into the item that the user will use.

Introducing the Apple Watch

The Apple Watch, for example, was a challenge for Apple developers because it needed to be small, as consumers would expect a watch to be. Yet, it also had to have the capability to communicate with other devices and to collect and analyze data.

The other two parts, the communication and the analysis of the data, are usually the purview of software developers. Because the hardware may be made by any number of manufacturers, the sheer number of discrete applications required has the potential to quickly get out of hand.

Fortunately, there are platforms that developers can use which already include the basics for these types of applications. Developers need only write the code that is specific to the device they are working on. Industry leaders like Microsoft and Oracle have IoT platforms, but a lot of the groundbreaking work is being done by startups like Bug Labs and ThingWorx.

These platforms will speed development and improve consistency in IoT development while contributing to the increase in connected devices and applications.