Let thinking be a continuous function.

Mar 14, 2015

The context: in my current role, we use Azure Cloud Services for hosting our solutions. This is due to a decision made a long time ago for outdated reasons. A move to Azure Websites for the next phases is under investigation. Because of this, I sometimes take time to write samples using Azure Websites to explore new features & the differences between Azure Websites & Azure Cloud Services.

This case happened by accident & I haven't tried to reproduce it. A few days ago, I created a sample Azure Website to experiment with the Deployment Slots feature. I added a Deployment Slot called UAT. From my experiment, I can see that a deployment slot is actually another Azure Website, but linked, "in a way", to the parent Azure Website, in order to enable the Azure Portal to list them together as a site & its deployment slots, and also to enable swapping deployments between them. When I finished playing with the site, I deleted all the resources. I'm not sure about the order, but I did it using the portal's "Delete" buttons, so what could go wrong?!

After this, the Azure Portal started to fail loading the Websites section! I tried the Preview Portal; it was working normally, but I had a Web Hosting Plan that I could not delete! I tried to delete it with the Azure xplat-cli and Azure PowerShell, but that wasn't possible either. At first, I suspected that I had deleted a required Resource Group during the clean-up & that the Azure Portal needed this Resource Group to list websites, since the Azure Portal doesn't have the concept of maintaining Resource Groups. I tried to recreate the website, but that didn't help either. Finally, using Azure PowerShell, I was able to list all websites & deployment slots, and I could see that the UAT deployment slot hadn't been deleted while the parent website had! Using Remove-AzureWebsite <uat-deployment-slot> I managed to delete this orphan slot. After that, the portals returned to working as expected and I was able to delete the web hosting plan as well.
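For reference, the clean-up looked roughly like this (the site name is illustrative; the `-Slot` parameter is how the old Azure PowerShell module addresses a deployment slot):

```powershell
# List all websites & deployment slots to spot the orphan one
Get-AzureWebsite

# Delete the orphaned UAT slot of the (already deleted) parent site
Remove-AzureWebsite -Name "my-sample-site" -Slot "UAT"
```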

First, I had build scripts running on a TeamCity server hosted in a VM on Microsoft Azure, which I talked about in a previous post. Then I found AppVeyor, a very interesting Continuous Integration (CI) service that simply provisions a fresh VM when GitHub changes are detected & runs your build configuration in that VM. So I decided to move to AppVeyor. AppVeyor was actually able to meet all my needs just via UI configuration; I didn't even need my build scripts. It can restore NuGet packages, build solutions, detect & run unit tests, pack NuGet packages, and publish them to nuget.org.

I tried it for one release and the result was good. But I thought more about this and found that I was mixing different concerns with this practice. I should be able to build the solution in my local development environment exactly as I can on the build server. So, I returned to using build scripts. My build script was based on PSake, which is my favorite at the moment. It is a very readable, small & easy-to-use tool. Here's my current script as an example.
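A trimmed-down sketch of such a PSake script (the solution & test-runner paths are illustrative, not the real ones from my project):

```powershell
# psakefile.ps1 - minimal build pipeline sketch
properties {
    $solution      = "MySolution.sln"     # illustrative path
    $configuration = "Release"
}

task default -depends Test

task Clean {
    exec { msbuild $solution /t:Clean /p:Configuration=$configuration }
}

task Compile -depends Clean {
    exec { msbuild $solution /t:Build /p:Configuration=$configuration }
}

task Test -depends Compile {
    # xunit runner path is illustrative
    exec { & ".\packages\xunit.runners\tools\xunit.console.exe" ".\tests\bin\$configuration\Tests.dll" }
}
```

Run it with `Invoke-psake .\psakefile.ps1` locally or on the build server, which is the whole point: one script, same behavior everywhere.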

The next experiment was changing my Git workflow. I hadn't been following any specific Git workflow, just the common practice of branching & committing. Recently, I read about GitFlow and GitHubFlow; very interesting ideas. If this is new to you, you may also want to read the Atlassian documentation about workflows.

I decided to use GitHubFlow because my project is a very simple one at the moment. I combined this with the GitVersion project to calculate version numbers based on the Git workflow. I also used the GitVersionTask NuGet package to dynamically calculate assembly version numbers. All of it worked together very smoothly.

I used Chocolatey to install the PSake & GitVersion & xUnit binaries on the VM. The good news is that GitVersion can automatically detect both TeamCity and AppVeyor, and then alter the version & build numbers. You can check the last build log for more details.
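The provisioning step amounts to a few Chocolatey commands (the package IDs below are my best recollection; check chocolatey.org for the current names):

```powershell
choco install psake -y
choco install gitversion.portable -y
choco install xunit -y
```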

Mar 18, 2014

My history with Build Servers & Continuous Integration consists of two phases. First, when I was working for employers who weren't aware of the benefits of a CI process, I used to have my own servers & tools. I used to treat myself as a Team; at first, people laughed at this attitude, until they realized how helpful it can be. Later, when CI & CD processes became more popular and I joined more skilled teams, it was normal to find such services already available, so I didn't bother to have my own.

Recently, I decided to have my own build server again, to use for both experiments & OSS. So I needed it to be accessible from `anywhere`. Obviously the word `anywhere` immediately triggers the Cloud muscle in our heads & that's what happened :)

First, I thought about a very economical option: the Amazon Web Services Free Tier. I found that I could have a micro VM instance for free that hosts Windows Server 2012, so I gave it a try. I believe that Amazon has a wide range of cloud services, but it's not very popular among us .NET developers because of our focus on Windows Azure & Microsoft technologies, which integrate very well with the tools we use every day. I created the instance, installed TeamCity & configured a project, but that took a huge amount of time! The micro instance is very slow (by design). It sat at 100% CPU most of the time while I was just waiting. Then TeamCity started to behave in a weird way. I couldn't install a Build Agent, & when I did install it I couldn't find it to assign it a job! I think that was a result of the TeamCity Java processes going crazy because of the lack of resources. At this point I decided this was not a good experience & would cause issues in the future; I'd better try a paid service & see how much it costs. I believe this is a case of "What You Pay Is What You Get"; I don't blame Amazon or AWS at all.

The next day, I decided to try the same steps on a Windows Azure Virtual Machine (small, 1.75 GB RAM), back in familiar land, which was obviously much faster than the micro instance. I configured a TeamCity project for my recent pet project ReliableUnitOfWork.SqlAzure, but the build failed! Though it builds successfully "on my machine" (pardon me for this!). The build was complaining about these errors:

```
Debug\DebuggingCommandInterceptor.cs(16, 17): error CS0012: The type 'System.Object' is defined in an assembly that is not referenced. You must add a reference to assembly 'System.Runtime, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'.

Debug\DebuggingCommandInterceptor.cs(16, 17): error CS0012: The type 'System.Exception' is defined in an assembly that is not referenced. You must add a reference to assembly 'System.Runtime, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'.
```

I tried to solve this at first, but then I guessed that this was not the real cause of the failure. By digging further into the build log, I found the error hiding in a warning message!

```
C:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(983, 5): warning MSB3644: The reference assemblies for framework ".NETFramework,Version=v4.5" were not found. To resolve this, install the SDK or Targeting Pack for this framework version or retarget your application to a version of the framework for which you have the SDK or Targeting Pack installed. Note that assemblies will be resolved from the Global Assembly Cache (GAC) and will be used in place of reference assemblies. Therefore your assembly may not be correctly targeted for the framework you intend.
```

I couldn't find a way to reinstall .NET 4.5; the installer says a more recent version is already installed. I tried the .NET Framework Repair Tool, but it couldn't solve the issue & just generated a mass of logs! Finally, after jumping between different links & questions on StackOverflow, I found the solution here. Briefly, I had to copy the .NET Framework reference assemblies folder "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5" from my machine to the new build server, where MSBuild expects to find it. During this search, I saw many developers mention that they gave up & installed VS2012/13 Express, even though they didn't like doing that. The folder was about 100 MB and takes 17 MB after compression. I shared the zipped file via Google Drive, downloaded it on the build server, & finally the shiny Green moment of truth happened! :)
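The copy itself can be scripted; a sketch, assuming the zipped folder has been downloaded & extracted to C:\Temp\v4.5 on the build server:

```powershell
# Mirror the reference assemblies into the location MSBuild checks
# (the target path comes straight from the MSB3644 warning)
robocopy "C:\Temp\v4.5" `
    "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5" `
    /E
```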

I think Microsoft is already aware of such issues in Windows Server 2012 & has a list of known issues. Also, releasing the repair tool 5 days ago is a good step, though it seems it doesn't catch all issues yet; hopefully it will in the future.

In the end, I hope this may save someone's time until a proper fix is ready.

Mar 09, 2014

It’s a light-weight implementation of the UnitOfWork pattern. I found that all the published solutions aren’t really transactional & mix the role of the Repository with the role of the UnitOfWork itself. Recently, Rob Conery explained this in a very good way here. He offered other solutions to avoid this broken, non-transactional kind of UoW implementation.

Why did I do this?

I didn’t build this just because I wanted a better implementation. I was refactoring a legacy web application to be ready to be published on Windows Azure. The problem was that the application had an architectural anti-pattern: Transactional Integration. It was built on heavy usage of TransactionScope almost everywhere. Scopes were its unit of work for integrating any two components or types, not only services! Since this project had limited time, building from scratch with proper architecture & design wasn’t an option. So I needed to find a new way to define a unit of work that could accommodate both the business & database transactions/layers. At the same time, it was built with Linq2SQL! The decision was made to move to Entity Framework 6, since we were going to use Windows Azure SQL Databases (SQL Azure) and needed to benefit from its new Connection Resilience feature. However, EF6 + SqlAzureExecutionStrategy doesn’t accept the use of TransactionScope or any form of user-provided transaction (list of limitations here). That meant clearing all TransactionScope statements from the solution, otherwise it wouldn’t work in the cloud. Because of the design of the application, there was a need to share a UoW between different components so they could run in the context of a single transaction without wrapping them in one TransactionScope.
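For context, this is a minimal sketch of how the SqlAzureExecutionStrategy is enabled in EF6 (the configuration class name is illustrative):

```csharp
using System.Data.Entity;
using System.Data.Entity.SqlServer;

// EF6 discovers this code-based configuration automatically when it lives
// in the same assembly as the DbContext.
public class AppDbConfiguration : DbConfiguration
{
    public AppDbConfiguration()
    {
        // Retry transient SQL Azure failures. Note: this strategy doesn't
        // support user-initiated transactions such as TransactionScope.
        SetExecutionStrategy("System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy());
    }
}
```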

What is in ReliableUnitOfWork.SqlAzure?

This is the solution for my problem & I think it can help in other scenarios where you need different components to share one transaction. The solution is based on wrapping a DbContext instance in a UnitOfWork instance, and it provides a generic way of passing it to the different players who are going to share in this unit until it is disposed. There are 3 ways to make use of this.

Before talking about the first way: what changes do you need to make in your current EF-based solution to benefit from this implementation? Almost nothing serious; you need to inherit from the type UnitDbContext. I’ll use snippets from Contoso University, the Microsoft sample for MVC5 & EF6. It would take a long time to convert the whole solution to use this implementation, so I’ll just try to explain how anyone can achieve this.

The first way is the simplest & adds almost nothing new. It enables you to create a new UoW where you're going to directly access the DbContext inside it to perform anything you need. So it's just a mechanism to get a fresh DbContext when you need it & dispose of it immediately after using it.
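A hedged sketch of what this can look like (the member names on the unit, such as DbContext, are my assumptions & may not match the package exactly; SchoolContext is the Contoso University context, assumed to inherit from UnitDbContext):

```csharp
using (var unit = UnitOfWorkFactory.StartNew())
{
    // Access the wrapped context directly & query as usual
    var context = (SchoolContext)unit.DbContext;   // hypothetical property name
    var students = context.Students
                          .OrderBy(s => s.LastName)
                          .ToList();
}   // disposing the unit disposes the underlying DbContext
```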

The above example makes use of UnitOfWorkFactory directly with the shortest overload of StartNew().

You may think that for a controller that has many read-only actions, it could be suitable to create a single UoW in the controller’s constructor & dispose of it while disposing the controller. Just make sure you name it differently, like _readOnlyUnitOfWork, so you don’t use it to persist something unintentionally. Later, I’m thinking of adding a ReadOnlyUnitOfWork that overrides SaveChanges() & SaveChangesAsync() to throw an exception when someone unintentionally calls them.

To see how you can save changes, see the next snippet. It's straightforward.
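Again as a sketch with assumed member names:

```csharp
using (var unit = UnitOfWorkFactory.StartNew())
{
    var context = (SchoolContext)unit.DbContext;   // hypothetical property name
    context.Students.Add(new Student { LastName = "Alexander", FirstMidName = "Carson" });

    // One call commits everything tracked by the unit in a single transaction
    unit.SaveChanges();
}
```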

The second way uses UnitOfWorkFactory with the Repository & DomainService abstract classes.

In other scenarios, you may need to build your own repositories & aggregates & domain services, if you have a rich domain. Let’s say we need to build StudentRepository & CourseRepository & DepartmentRepository & CatalogueService. The CatalogueService will consume the mentioned repositories to perform different tasks, so you need a way to share the DbContext between them & make sure SaveChanges() runs in a single transaction.

The implementation has the concept of a UnitOfWorkPlayer, and also a Repository which inherits from UnitOfWorkPlayer. When you create a new UoW using the StartNew() method of UnitOfWorkFactory, you can pass all of CatalogueService's dependencies, so you get a UoW that's aware of all of them & takes their changes via a single instance of UnitDbContext. Types that inherit from UnitOfWorkPlayer are required to implement one method, HandlePlayerJoinedUnit(). What for? If the type also has dependencies on types that inherit from UnitOfWorkPlayer, they'll also join the same UoW. So a player doesn't have to be a repository; it could be another type, like a RiskCalculator, that needs access to the UoW and/or other repositories to perform some task as part of a wider context.

I realize this is getting complex to talk about; a short sample should be clearer than words.

At this point, I'll sketch an example to show how to write the code, until I get more time to write a complete scenario. Initially, this is how we can define the skeleton of our repositories.
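Something along these lines; the base-class members used here (Context, Unit, the way a dependency joins the unit) are my assumptions for illustration, not the exact API of the package:

```csharp
// A repository is a UnitOfWorkPlayer that reads & writes via the shared context
public class StudentRepository : Repository
{
    public Student FindById(int id)
    {
        return Context.Set<Student>().Find(id);   // Context assumed on the base class
    }
}

// A domain service that consumes repositories inside the same unit of work
public class CatalogueService : UnitOfWorkPlayer
{
    private readonly StudentRepository _students;
    private readonly CourseRepository _courses;

    public CatalogueService(StudentRepository students, CourseRepository courses)
    {
        _students = students;
        _courses = courses;
    }

    protected override void HandlePlayerJoinedUnit()
    {
        // When this service joins a UoW, its dependencies join the same one,
        // so all of them share a single UnitDbContext instance.
        _students.JoinUnit(Unit);   // hypothetical member names
        _courses.JoinUnit(Unit);
    }
}
```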

I ripped this, without tests & examples, from a real-world solution after I read Rob's article, which I liked very much. I hope this implementation avoids some of the mentioned pitfalls & doesn't introduce any new ones :) Later on, as soon as I have a chance, I'll add a complete example of how it works for me. What's available online now is the GitHub repository here and a NuGet package here.

Nov 30, 2013

Recently, someone hacked the Twitter & Facebook accounts of an Egyptian activist, so I decided to share these tips on Facebook; maybe they'll help someone, or at least make it more difficult to be hacked. Remember, there's no such thing as 100% secure, but let's learn and do our best.

1. Use a password manager if you don't already, e.g. KeePass, and store an additional copy of the database & key files in the cloud, e.g. a secure Dropbox or SkyDrive account.

2. Activate two-step authentication on any site that supports this feature, e.g. Google, Dropbox, and Facebook. It's also called two-factor, second-factor, 2FA, or multi-factor authentication, abbreviated as MFA.

3. Use App Passwords, where available, after activating MFA, e.g. on Google or Facebook.

4. Use Google Authenticator on Android, or Authenticator on WP8, to generate security codes; it's better than receiving text messages.

5. It'll be easier to manage if you use OAuth as much as you can, rather than generating a new password for every new site or service, but make sure to secure that OAuth account tightly or it'll be the weak point that endangers all your accounts.

This is not a simple thing to achieve if you're not familiar with these concepts, and it requires time to learn about them first, then apply them gradually, account by account, until you secure all your accounts without accidentally locking yourself out of, or losing, an account.