Text of ListView Subitems not updated when using AddRange (WinForms)
https://www.onwerk.de/2018/12/05/text-of-listview-subitems-not-updated-when-using-addrange-winforms/
Wed, 05 Dec 2018 09:37:24 +0000

When ListView.AddRange was used to add the sub items to an item in a WinForms ListView, a bug in the .NET Framework prevents text updates:
after updating the texts of the sub items of a ListView item, the new texts are not displayed in the ListView. Invalidating, updating, redrawing and refreshing do not help.
As KevinBui pointed out in „C# setting ListView’s Item’s Subitem’s text does not display“, the new sub item texts are not displayed because of a bug in the .NET Framework that is triggered when the sub items were added via AddRange.
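A minimal sketch, assuming the workaround is to add the sub items individually instead of via AddRange (control and item names are examples):

```csharp
// Assumption: adding sub items one by one avoids the AddRange repaint bug.
var item = new ListViewItem("Row 1");

// Instead of: item.SubItems.AddRange(new[] { "a", "b" });
item.SubItems.Add("a");
item.SubItems.Add("b");
listView1.Items.Add(item);

// Later text updates are now displayed:
item.SubItems[1].Text = "updated";
```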

Talk: „Relational database migrations (with Flyway)“
https://www.onwerk.de/2017/02/16/talk-relational-database-migrations-with-flyway/
Thu, 16 Feb 2017 08:04:25 +0000

The slides for the talk „Relational database migrations (with Flyway)“ (RheinNeckarJS Meetup, 15.02.2017) are online at Speakerdeck.

I will show how you can use Flyway to have a consistent way to migrate databases (MySql, Maria, Oracle, SQL Server and a lot more) automatically during deployment/as a part of your continuous deployment process. This can also help you with your integration tests.

Content:

What are migrations?

What are no migrations?

Why should I automate it?

How does Flyway work?

Flyway in an automated deployment process

Live Demo
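To give a taste of how Flyway works: migrations are plain SQL files whose version is encoded in the file name, and Flyway applies all not-yet-applied versions in order, recording them in a metadata table. A small sketch (table and column names are examples):

```sql
-- sql/V1__create_users.sql
CREATE TABLE users (
    id    INT PRIMARY KEY,
    email VARCHAR(255) NOT NULL
);

-- sql/V2__add_users_name.sql
ALTER TABLE users ADD COLUMN name VARCHAR(100);
```

Running flyway migrate applies both files to an empty database; a second run is a no-op because both versions are already recorded.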

Minimizing your Docker images
https://www.onwerk.de/2017/02/05/minimizing-your-docker-images/
Sun, 05 Feb 2017 19:30:26 +0000

As your project grows it can be challenging to make sure that only the necessary files get sent to the Docker daemon.

There are several tips to help you with this.

Use a „build“ directory

Using a .dockerignore file to ignore all files that should not be sent to the Docker daemon sounds easy, but in large projects this can be a quite challenging task, especially as your build tasks grow in number and size. I often found it easier to create a single „build“ directory during the build process (with grunt, gulp or similar), build and prepare everything into this directory and use just this „build“ directory as the root directory (build context) for building the Docker image.

Optimize your .dockerignore file by negation

Sometimes it is not possible to use such a dedicated „build“ directory. In that case you can use a .dockerignore file to specify all files that should not be sent to the Docker daemon and thus be ignored during the docker build step. In this .dockerignore file you can list files and directories that should be ignored. But it is very easy to forget to add new files as your project evolves.

I think it is better to explicitly specify which files should be sent to the Docker daemon. You can do so by adding a negated entry to the .dockerignore file for each file or directory that should be included.
It might sound strange to add a negated entry to an ignore file, but it works pretty well:

# ignore everything
*
# do NOT ignore docker-entrypoint.sh
!docker-entrypoint.sh
# do NOT ignore everything in the build
!build/
# but DO ignore the nodemon.json file in the build directory
build/nodemon.json

Check the amount of data you send to the Docker daemon

You can use a special Dockerfile to check which files and directories are sent to the Docker daemon. Create a file Dockerfile-checksize in the directory of your Dockerfile and use the following content:
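One possible shape for such a file (a sketch; the base image and target path are assumptions):

```dockerfile
# Dockerfile-checksize: build an image that contains the complete build
# context, then inspect what was actually sent to the daemon.
FROM alpine
COPY . /context
RUN du -ah /context && du -sh /context
```

Building it with docker build -f Dockerfile-checksize . prints every file that ended up in the build context, plus the total size.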

(Incomplete) List of Android app permissions requiring a privacy policy
https://www.onwerk.de/2016/12/21/incomplete-list-of-android-app-permissions-requiring-a-privacy-policy/
Wed, 21 Dec 2016 10:22:56 +0000

„Your app has an apk with version code xyz that requests the following permission(s): …. Apps using these permissions in an APK are required to have a privacy policy set.“

This notification is displayed in the Google Play Developer Console when an Android app requests one or more of certain permissions. Google states that every app that „handles personal or sensitive user data (including personally identifiable information, financial and payment information, authentication information, phonebook or contact data, microphone and camera sensor data, and sensitive device data)“ is required to have a „privacy policy in both the designated field in the Play Developer Console and … [in] the Play distributed app itself“ (Developer Policy Center).

Unfortunately I wasn’t able to find a list of Android permissions that trigger this statement, so here is an incomplete list of Android app permissions requiring a privacy policy:

android.permission.GET_ACCOUNTS

android.permission.READ_CONTACTS

android.permission.CAMERA

android.permission.RECORD_AUDIO

android.permission.READ_PHONE_STATE

Please drop us a line at blog@onwerk.de if you have more permissions to add.

Configure nginx for SSL
https://www.onwerk.de/2016/12/02/configure-nginx-for-ssl/
Fri, 02 Dec 2016 14:24:59 +0000

When setting up nginx to use HTTPS we checked the site with the SSL Server Test of Qualys.
The result page showed several hints to improve security.

SSL Configuration

Key Exchange / DHE (Ephemeral Diffie-Hellman) parameters

The default nginx configuration will use a key that is too weak. To generate and use a stronger key, first generate a stronger DHE parameter:

sudo openssl dhparam -out /etc/ssl/private/dhparams.pem 2048

This will create a new file dhparams.pem in /etc/ssl/private/, containing the new key.
The key file can be referenced in the nginx configuration with the ssl_dhparam configuration parameter.
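For example (a sketch; the path matches the command above):

```nginx
# in the http block or a server block
ssl_dhparam /etc/ssl/private/dhparams.pem;
```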

Common SSL Configuration for all virtual hosts

Usually the SSL configuration is done within every server block/virtual host configuration block.
If you have more than one server/virtual host, you would have to do it for every server. As software developers we do not like repetition (don't repeat yourself…).
Include files save us from configuring SSL multiple times.
We are using (at least) two include files for SSL configuration:

an include file for the generic SSL settings like SSL protocols, used ciphers, etc.

one include file for every used SSL certificate

The following steps assume that the SSL certificate files are prepared and located in the directory /usr/local/nginx/conf/current/.
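A sketch of how the two include files and a server block could fit together (file names, paths and settings are assumptions):

```nginx
# includes/ssl-common.conf - generic SSL settings shared by all hosts:
#   ssl_protocols             TLSv1.2;
#   ssl_prefer_server_ciphers on;
#   ssl_dhparam               /etc/ssl/private/dhparams.pem;

# includes/ssl-cert-example.conf - one file per certificate:
#   ssl_certificate     /usr/local/nginx/conf/current/example.crt;
#   ssl_certificate_key /usr/local/nginx/conf/current/example.key;

server {
    listen 443 ssl;
    server_name example.onwerk.de;
    include includes/ssl-common.conf;
    include includes/ssl-cert-example.conf;
}
```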

.NET Framework: Directory- and File-methods trim paths
https://www.onwerk.de/2016/11/23/net-framework-directory-and-file-methods-trim-paths/
Wed, 23 Nov 2016 15:44:23 +0000

„A generic error occurred in GDI+“: that was the very generic error message we got when using System.Drawing.Image.Save to save an image to a previously created directory.

Further investigation showed that it actually was not an error in Image.Save, which in fact was behaving correctly (leaving aside the cryptic error message).
The „error“ was not an error per se, but rather an unexpected behaviour of the method used to create the directory: Directory.CreateDirectory.

This method had been called erroneously with a path having a space character at the end of one of its parts:
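A minimal sketch of such a call (the path and the surrounding method are illustrative, not the original listing):

```csharp
using System.IO;

class TrimDemo
{
    static void Main()
    {
        // Requests "C:\Space \" (note the trailing space in the last part).
        Directory.CreateDirectory(@"C:\Space \");

        // The directory that actually gets created is "C:\Space",
        // without the trailing space.
    }
}
```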

The previous code instructs the framework to create a directory „Space “ (with a trailing space) under „C:\“. Instead, a directory „Space“ (without the trailing space) is created; this can be verified with Windows Explorer or on the command line.

Although it makes sense to trim the individual parts of a path, it is still somewhat unexpected and may lead to undesired effects.

We checked some more methods of the Directory and File classes; at least all of those we checked show the same behaviour.

Docker Security: Check file checksum before you add an apt-key
https://www.onwerk.de/2016/11/23/docker-security-check-file-checksum-before-you-add-an-apt-key/
Wed, 23 Nov 2016 15:08:32 +0000

If you're adding additional sources for apt-get in your Dockerfile, you should make sure that the correct key is added, otherwise the integrity of your Docker image may be violated.
You can do so by using sha256sum to generate the checksum of the downloaded file and compare it to a given checksum. That checksum could be listed on the web page you download the file from, or you can create it yourself with sha256sum:

$:~/Docker-apt-key-security$ sha256sum archive.key
191f801a17273f25b781c580c2900d2fd58064554220ad6e18698aeb3c3afe70  archive.key

In that case "191f801a17273f25b781c580c2900d2fd58064554220ad6e18698aeb3c3afe70" is the checksum of the file archive.key. Use that checksum in your Dockerfile.
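The verification itself is plain shell and can be sketched as follows (the key data, and therefore the checksum, are stand-ins; in a Dockerfile the same check would be one RUN step chaining the download, the sha256sum -c verification and apt-key add, so a mismatch aborts the build):

```shell
# Create a stand-in for the downloaded key file.
printf 'example key data\n' > archive.key

# In a real build the expected checksum is known in advance; here we
# compute it so the example is self-contained.
expected=$(sha256sum archive.key | cut -d' ' -f1)

# sha256sum -c exits non-zero on a mismatch, which makes a RUN step fail.
echo "$expected  archive.key" | sha256sum -c -
```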

With Docker in 5 minutes from developer to test server
https://www.onwerk.de/2016/11/05/with-docker-in-5-minutes-from-developer-to-test-server/
Sat, 05 Nov 2016 18:33:02 +0000

An on-premise „Docker Cloud“-like workflow from repository to Jenkins to test server

As a software company for individual software solutions we are developing software in highly diverse settings in terms of programming languages, databases and environments: Node.JS, PHP, C#, MySQL, MongoDB, MS SQL, Windows, Ubuntu, Debian, you name it. That makes it a challenging task to provide test servers or acceptance test servers for fellow developers, project managers and customers.
We used to solve this by spinning up multiple virtual machines or cloud servers. This became more and more complicated, laborious to maintain and resource consuming. Furthermore, it had several unwanted side effects:

A deployment was usually done on top of an existing installation, sometimes leading to side effects because of leftover installed items

The preparation of a virtual machine was usually done manually, making it difficult to repeat the exact same steps with the exact same versions on the final target machine

To address these shortcomings we completely redesigned our workflow for developing and distributing web applications, using a combination of our continuous integration server Jenkins, the containerization software Docker and several other open-source tools. The result is a workflow that enables us to start a web application, including all its dependencies, as part of the build job without any prerequisites on the server. The configuration takes just 15 minutes per build job; the deployment to the target machine is done in about 5 minutes.

Our new workflow in a nutshell

Pushing code changes to the version control server will trigger a build on our continuous integration system Jenkins

After the usual build steps a new Docker image will be built containing the latest version of the application, which will then be pushed to an internal Docker registry running on a CoreOS server

A new Docker container will be started, based on the newly created image

The web application is fully up and running. It is accessible with a meaningful name and web address

The server provides an overview of the currently available test systems via a web page

With this procedure we always have fully working test environments for every web application, without the need to administrate the server or even log in to the host system.

The little extra effort required is just the definition of the Dockerfile, the preparation of the docker-compose file and adding two shell script calls to a Jenkins job. Pretty neat, huh?
And the best part: the shell scripts are available as open-source from our GitHub account.

Let’s dive into the details…

Creating the Docker image

As version control server, i.e. repository host, we use Phabricator. Phabricator notifies Jenkins about every push to the „default“ or „develop“ branch, which triggers the execution of a Jenkins job. During this job a Docker image is built by calling a shell script. This script is part of a collection of little helper tools we call the Dorie-Tools, because, you know, we speak whale… The Dorie-Tools can be freely downloaded from GitHub.
The script automatically tags the image with certain tags:

"default" or "develop", based on the branch the current build is based on

a combination of the current date, build number and the branch the build was started upon; the result is something like "20161020_Build30_develop".

"latest" if the build is based on the „develop“ branch

After building the Docker image the script will also generate a docker-compose.override.yml file which only contains a reference to the just created image and tag:
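A sketch of such a generated override file (the service name and registry host are examples; the tag format matches the one described above):

```yaml
version: "2"
services:
  web:
    image: registry.example.internal:5000/myapp:20161020_Build30_develop
```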

This file will later be used to precisely identify which version of the Docker image should be used to start the container, thus avoiding the "latest" tag.
The newly created image will be pushed to a private Docker registry running on an internal server.

Starting the web application in a Docker container

After pushing the image, the Jenkins job calls a shell script that copies several files to the server: the docker-compose file, optional additional deployment items (both checked in to version control) and the docker-compose.override.yml file created by the build job a few steps before. After the copy operation the script starts the multi-container Docker application described by the docker-compose.yml file. The additional override file is automatically loaded and respected by docker-compose, thus specifying the exact version of the Docker image to use.
The web application is up and running.

Accessing the web application

After these steps, the Docker container is up and running. Pretty good already.
But: We are working not only on one project but on multiple projects for different customers, and we would like to have test systems for all of them. If you specify an exposed port in the docker-compose file, you have to make sure that the external port is still available. Docker can start multiple containers using the same internal port, but it cannot start multiple containers using the same exposed (external) port. Tracking all used external ports would result in a massive administrative overhead; of course we would like to avoid that.

This is solved by using a reverse proxy that will automatically be reconfigured as soon as a container is started or stopped.
We are using the great nginx-proxy by jwilder, which also runs in a Docker container (of course…) on the same server and is connected to the Docker daemon. The start or stop event of a container triggers a reconfiguration of nginx. Every Docker container that sets the environment variable "VIRTUAL_HOST" to the virtual host name of the container becomes available under that name for regular HTTP/HTTPS access; nginx automatically forwards any web request, based on the server name used, to the exposed port of the container. By using a reverse proxy we don't need any external ports, which means we avoid any potential port collisions. A developer does not need to know which ports are already in use when creating a new web application and writing a new docker-compose file. That's great!
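In a docker-compose file this only requires setting the environment variable (a sketch; service, image and host names are examples):

```yaml
version: "2"
services:
  web:
    image: myapp:latest
    environment:
      - VIRTUAL_HOST=myapp.testserver.ourdomain.local.de
```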

DNS name resolution setup

nginx responds to any server name that is specified via the environment variable in a docker-compose file. But a client would still not be able to open a website by its server name, since the web browser would not be able to resolve the DNS name to an IP address. This is solved by a wildcard entry on our DNS server: any request to resolve *.testserver.ourdomain.local.de returns the same IP address, directing all requests to the nginx reverse proxy.
To avoid the need to remember the web addresses for the web applications, we use another handy tool: texthtml/docker-vhosts, which will generate a small web site, listing all available Docker virtual hosts. docker-vhosts is running in a Docker container on the same server, too.

Summary

By this combination of multiple tools we have a nice test system delivery pipeline:

Developers writing code and defining the environment to run the application via Dockerfile and docker-compose.yml

Pushing the code to version control server will build the application, statically analyse the code and run unit tests

After the build step a Docker image will be generated using the well-defined environment; the image is stored in an internal Docker repository

A Docker container is started using the newly created image, bringing up a testable system

The test system is instantly available by a web address, which is easy to remember

A web based overview of the test systems is available

All the necessary steps and the complexity are wrapped in just two shell scripts. It only takes minutes to add these calls to a Jenkins job.

…and external deployments?

But it does not stop there… For security reasons we don't want to expose the Docker registry to the whole world. But we also use externally accessible cloud servers for staging and for acceptance tests for our customers. The previously described workflow only covers our internal test server with access to the internal Docker registry. For external deployments we use a Docker feature to save and load images as flat files. This is also done by several shell scripts: the Docker image is exported and zipped, transferred to the cloud server, extracted and imported; the container is then started. On the external server we have no automatic configuration of nginx, for security reasons, but it takes just 10 minutes to create a new virtual host.
This deployment to staging or acceptance server is usually done automatically on any commit to the „default“ branch.
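The export/import round trip can be sketched as follows (image name and server address are examples):

```shell
# on the internal build server: export and compress the image
docker save myapp:20161020_Build30_develop | gzip > myapp.tar.gz
scp myapp.tar.gz user@cloud-server:/tmp/

# on the cloud server: decompress, import and start the container
ssh user@cloud-server 'gunzip -c /tmp/myapp.tar.gz | docker load'
```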

Continuous deployment. With no cost and no effort. A workflow to love.

There is also a presentation available for free download at Speakerdeck. The shell scripts are available as open-source from our GitHub account.

Corrupt PATH after installation of MySql
https://www.onwerk.de/2016/10/07/corrupt-path-after-installation-of-mysql/
Fri, 07 Oct 2016 11:40:25 +0000

The installation of MySql on Windows may lead to an invalid PATH environment variable. The installation also installs some tools in a sub directory „MySQL Fabric 1.5 & MySQL Utilities 1.5“. This path is added to the PATH environment variable, which leads to something like

PATH=C:\Python27\;C:\Python27\Scripts;C:\Program Files\nodejs\;C:\Program Files (x86)\MySQL Fabric 1.5 & MySQL Utilities 1.5\;C:\Program Files (x86)\MySQL\MySQL Fabric 1.5 & MySQL Utilities 1.5\Doctrine extensions for PHP\;C:\Program Files\Microsoft\Web Platform Installer\;C:\Program Files\Git\cmd;

The ampersand characters in the path entries are not escaped and thus will be evaluated when the PATH variable is used in shell commands, for instance when the PATH variable is extended like set PATH=%WindowsSdkDir%bin\x86;%PATH%. This is used in the Visual Studio 2015 Developer Command line (VsDevCmd.bat) and is the reason for errors when running the „Developer Command Prompt for VS2015“:
'MySQL' is not recognized as an internal or external command,
operable program or batch file.

The solution is to modify the path in the system settings and use 8.3 short paths for the problematic entries. To get the 8.3 path, open a command shell, change to the containing directory „C:\Program Files (x86)\MySQL\“ and execute dir /x. In my case the short name for „C:\Program Files (x86)\MySQL Fabric 1.5 & MySQL Utilities 1.5\“ was „C:\Program Files (x86)\MySQLMYSQLF~1.5\“; on your system it is probably different.

Running Windows Docker images
https://www.onwerk.de/2016/10/05/running-windows-docker-images/
Wed, 05 Oct 2016 10:34:41 +0000

In recent days I was rather confused about native Docker for Windows. What are the conditions for running a Docker image on Windows? Linux image or Windows image? Docker on Windows needs Hyper-V, but on my development box I need VMWare Workstation to run testing virtual machines, and VMWare does not play well with Hyper-V, so how can I use Docker on Windows for the cool new things?

I found out that a lot of my confusion had to do with bad product naming…

There is the Docker Toolbox for Windows (Docker up to 1.11), which is basically a Windows Docker client talking to a Docker server (daemon) running in a Linux VirtualBox environment, executing Linux images. I found that rather confusing and unnecessary; I thought it was easier to set up the Linux virtual machine myself and just use Docker for Linux within the virtual machine.

With Docker 1.12 native support for Windows was announced, requiring an enabled Hyper-V role on the Windows machine. This is a native Windows Docker Client talking to a native Windows Docker Daemon executing Linux images. Wait! Linux images on Windows? Yes, Docker uses Hyper-V to run a minimal Alpine Linux distribution, also known as the „MobyLinuxVM“. In Docker for Linux any container shares the Linux kernel with the host. In Docker for Windows 1.12 the container does not share the kernel with the host (obviously, since this is Windows) but shares the kernel with the Alpine Distribution running under Hyper-V.

Running native Windows Docker images is currently possible with beta version 26 and higher of Docker, as a small footnote on the download page announces. With this version it is possible to switch between Linux containers and Windows containers by right-clicking the Docker whale systray icon and selecting „switch to Windows containers“ or „switch to Linux containers“.
Choose „Windows containers“ to run native Windows images.

Make sure the Windows update 3194496 is installed. If this update is not installed, any Docker command in a shell will just freeze and never return (at least in the beta version I used).

Make sure the Windows features „Containers“ and „Hyper-V“ are activated. This can be done by opening an elevated PowerShell session and using the following commands:

Enable-WindowsOptionalFeature -Online -FeatureName containers -All
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

Reboot your machine now.

Right-click the Docker whale systray icon and select „switch to Windows containers“. If you fail to do so, any „docker pull“ or „docker run“ command will result in several retries to download the image and finally fail with the message „unknown blob“.

Check the output of docker version and pay attention to the very last line „OS/Arch“ in the „Server“ section. This should read „windows/amd64“. If it reads „linux/amd64“, you need to „switch to Windows containers“ before you can use Windows images.
Switching the container type seems somewhat unstable in the beta; sometimes I experienced errors and crash notices and had to reboot for the new setting to take effect.

Two types of Windows containers

There are two different types of containers that can be used to run a native Windows image:

Windows Server containers: They share their kernel with the host system (like Docker for Linux), using process and namespace isolation. This type of container is very lightweight. Despite their name they are not limited to Windows Server but run on Windows 10 Professional or Enterprise just as well as on Windows Server. Unlucky naming…
This is the default isolation type on Windows Server.

Hyper-V containers: These containers do not share their kernel with the host; instead each container runs in its own lightweight Hyper-V virtual machine. They provide a higher isolation level.
This is the default isolation type on Windows workstations.

The isolation level can be selected when starting a container based on a Windows image by using the command line parameter „--isolation“; possible values are „default“, „process“ (Windows Server containers) and „hyperv“ (Hyper-V containers).

Running native Docker for Windows in a virtual machine

The native Docker for Windows requires Hyper-V. I cannot enable Hyper-V on my developer machine since I am using VMWare Workstation to run several virtual machines for testing and development. On one machine you can use either VMWare Workstation or Hyper-V, but not both; if you enable Hyper-V you will not be able to run any VMWare Workstation virtual machine on that computer. A solution is to create a VMWare Workstation virtual machine and install one of the supported operating systems. Before powering on the virtual machine you may need to change a virtual hardware setting for this machine: open the settings of the virtual machine, go to „Processors“ and check „Virtualize Intel VT-x/EPT or AMD-V/RVI“ in the „virtualization engine“ settings box. Now you can boot the machine and install the Windows operating system. Remember to follow the steps provided earlier.

Creating Windows Images

Currently there are two main base images you can build your own Docker image upon: microsoft/windowsservercore and microsoft/nanoserver.

The product names are rather confusing, since Windows Server Core does not come with the .NET Core Framework but instead with the full-blown .NET Framework. Unlucky naming, again…

I found it helpful to install Chocolatey in an image based on microsoft/windowsservercore and use „choco install“ to install additional packages. Unfortunately Chocolatey is not available in microsoft/nanoserver, since it requires the regular .NET Framework.
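A sketch of such an image (the Chocolatey install command follows Chocolatey's documented one-liner; the installed package is an example):

```dockerfile
FROM microsoft/windowsservercore
SHELL ["powershell", "-Command"]

# Install Chocolatey using its documented install script ...
RUN iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

# ... then use choco to install additional packages.
RUN choco install -y git
```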