ZXing QR Code Display in Xamarin Forms (July 12, 2018)

So Xamarin and I have this love-hate relationship. I love it because it makes a C# developer super productive building iOS and Android apps from one codebase; on the other hand, third-party libraries are an absolute confusing mess. It's really non-trivial to figure out which version of a NuGet package you should use in which project, and this got even worse with the introduction of .NET Standard as an additional (and preferred) option for the shared code library. So now we have to navigate several variants of Xamarin projects:

Xamarin.Forms PCL

Xamarin.Forms .NET Standard

Xamarin.Android

Xamarin.iOS

Oftentimes, this is not very well documented.

I’ve been testing the ZXing (“zebra crossing”) library, which has a .NET port called ZXing.Net. I started off with this NuGet package but couldn’t get it to work.

This package gets added to the shared .NET Standard project (where the lion’s share of your code is created and maintained).

In the Xamarin.Forms project it was relatively easy to create a simple barcode. There is a control in the “ZXing.Net.Mobile.Forms” namespace which you can add to any XAML view by declaring an XML namespace like so:
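The original declaration was a screenshot; reconstructed from the ZXing.Net.Mobile.Forms package, it would look something like this (the `forms` prefix is my choice):

```xml
xmlns:forms="clr-namespace:ZXing.Net.Mobile.Forms;assembly=ZXing.Net.Mobile.Forms"
```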

Then you can define the control within your content like so:
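The control in question is ZXingBarcodeImageView; a minimal sketch (the format, value, and sizes here are illustrative) might be:

```xml
<forms:ZXingBarcodeImageView x:Name="barcodeView"
                             BarcodeFormat="QR_CODE"
                             BarcodeValue="https://example.com"
                             HeightRequest="200"
                             WidthRequest="200" />
```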

One challenge is the BarcodeOptions property, which is not easily set from pure XAML alone; I had to use code-behind to initialize it.
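A minimal code-behind sketch, assuming the barcode control in the XAML was given the (hypothetical) x:Name "barcodeView" and that BarcodeOptions takes a ZXing.Common.EncodingOptions:

```csharp
// In the page's constructor, after InitializeComponent().
// EncodingOptions controls how the barcode image is rendered.
barcodeView.BarcodeOptions = new ZXing.Common.EncodingOptions
{
    Width = 200,
    Height = 200,
    Margin = 1   // quiet zone around the QR code
};
```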

Of course, much of this could be setup using data bindings to a view model but in my simple demo I just wanted to get it working. The results are pretty sweet!

Switching from “Azure” to “AzureRM” Terraform Backend (June 27, 2018)

Terraform no longer supports “azure” as a backend. So if you have a backend configuration that references the “azure” backend provider, you will get the following warning:

All you need to do is change the following property in your backend configuration file:
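For reference (the original snippet was an image), the old block would look something like this; the storage details are placeholders:

```hcl
terraform {
  backend "azure" {
    storage_account_name = "mytfstateaccount"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}
```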

To the following:
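That is, the same block with the backend name changed to “azurerm” (again, placeholder storage details):

```hcl
terraform {
  backend "azurerm" {
    storage_account_name = "mytfstateaccount"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}
```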

Nothing else has to change. Just re-initialize your Terraform project and it will automatically migrate your state to the “new” provider. If you left your storage account the same, it won’t change anything and will continue to use the storage account you originally set up.

Done!

Setting up Packer and Azure (May 22, 2018)

You should follow the documentation here. However, one thing is missing: when you use the Azure CLI to generate a service principal, you are not provided with the accurate ObjectId for the Active Directory Service Principal.

As a result, if you put the wrong value in the “object_id” field in the packer variables file you will invariably run into this error:

```
ERROR: -> Forbidden : Access denied
...failed to get certificate URL, retry(0)
...failed to get certificate URL, retry(1)
...failed to get certificate URL, retry(2)
...failed to get certificate URL, retry(3)
```

Until it finally bombs out.

I haven’t figured out how to get this value from the Azure CLI, but I have managed to get it with PowerShell.

First login.

Then display a list of the Azure AD Service Principals:
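The original commands were screenshots; with the AzureRM PowerShell module (my assumption about which module was used), the two steps are roughly:

```powershell
# Sign in to Azure interactively
Login-AzureRmAccount

# List all Azure AD service principals; the Id column is the ObjectId
Get-AzureRmADServicePrincipal
```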

Find the Service Principal with an ApplicationId that matches the field found in the Azure Portal highlighted below:

Your packer variables file should have the following:

client_id: Service Principal’s Application ID

client_secret: Password you setup for this Service Principal

tenant_id: Azure Active Directory tenant ID

subscription_id: Azure Subscription ID

object_id: Service Principal’s Object ID

resource_group_name: Name of an existing resource group where Packer can deploy the machine images
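Putting that together, a variables-prod.json along these lines matches the list above (every value is a placeholder):

```json
{
  "client_id": "<service principal application id>",
  "client_secret": "<service principal password>",
  "tenant_id": "<azure ad tenant id>",
  "subscription_id": "<azure subscription id>",
  "object_id": "<service principal object id>",
  "resource_group_name": "<existing resource group>"
}
```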

After grabbing the ObjectId, everything worked by simply running this command to build my image in Azure:

packer build -var-file=variables-prod.json active-directory-dc.json

Using an Azure Marketplace Image with Packer (May 18, 2018)

Packer is a fantastic tool that lets you automatically build AMIs on AWS or machine images on Azure. It’s common to start from an operating system image published by Microsoft, but sometimes you may want to try out (and eventually use) a marketplace image. Unless you’re familiar with Azure Resource Group Templates to the point where you are intimate with every JSON property for every resource type (nobody is), you might miss the subtle nuance required for Marketplace images.

I grabbed the publisher information by going through the motions in the Azure Portal.

When I ran the build I got this error:

Turns out you need the following information:

I found the plan information that I needed also in the RGT that Azure generated for me:
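The original screenshots are gone, but as I read Packer’s azure-arm builder docs, the marketplace bits land in the builder as an image reference plus a plan_info block, shaped roughly like this (all values are placeholders pulled from the generated RGT):

```json
{
  "image_publisher": "<publisher>",
  "image_offer": "<offer>",
  "image_sku": "<sku>",
  "plan_info": {
    "plan_name": "<sku>",
    "plan_product": "<offer>",
    "plan_publisher": "<publisher>"
  }
}
```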

You will need to ensure that nuget.exe is in your PATH. It’s as simple as creating a folder, dropping the exe in there, and adding that folder to your PATH environment variable.

Then simply open the Visual Studio Developer Command Prompt, navigate to the folder of the project you want to create a NuGet package for, and execute the command ‘nuget spec’.

Pack the NuGet Package

In VSTS, there are two ways you can pack a NuGet package: the ‘NuGet’ action, which works for .NET Framework assemblies, or the ‘.NET Core’ action, which works for .NET Core and .NET Standard assemblies.

It’s important that you select 2.* (preview); otherwise, the ‘pack’ command is not available.

Make special note that the .NET Core icon changes when you do this.

You can use the following syntax to select:

A specific project “**/ProjectName.csproj”

A group of similarly named projects “**/*DataContracts.csproj”

You can target either the csproj or the nuspec. I usually target the csproj file, but if you do, ensure that you have a nuspec file of the same name.

Push the NuGet Package to your repository

In a previous life, I used MyGet. It was relatively cheap and far superior to hosting your own NuGet server to set up a private repository. However, VSTS now has this functionality built in.

You might not see ‘Packages’ here at first. It’s actually a Visual Studio Team Services marketplace item, but fear not: you can get it for free if you have an MSDN subscription.

Once you do that, your project will have a feed, but you can add more if you want.

Once you do that you can use the ‘NuGet’ action’s push command to push all *.nupkg files to your target feed.

The target feed drop down will not be populated unless you have setup the marketplace item in VSTS. Alternatively you can use an external NuGet server but why bother?

Easy Hack to Get a .NET Standard REST API (January 3, 2018)

I literally generated a client using ‘Add REST API Client’ in both the .NET Framework project and the PCL project, then linked the files into a .NET Standard project, and neither codebase threw compiler errors.

However, this is no reason for the good folks on the Visual Studio tooling for Azure team not to update the ‘REST API Client’ wizard to support .NET Standard assemblies. Clearly it would be very, very close to a recompile.

You just need to add a few NuGet packages and you’re all set!

Microsoft.Rest.ClientRuntime v2.3.10

Newtonsoft.Json v10.0.3
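In PackageReference form, those two packages (versions from the list above) would land in the csproj as:

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.Rest.ClientRuntime" Version="2.3.10" />
  <PackageReference Include="Newtonsoft.Json" Version="10.0.3" />
</ItemGroup>
```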

Generating a REST API in Visual Studio with the many flavors of .NET (January 3, 2018)

I remember a time when there was just .NET: .NET Framework 1.0, 1.1, 2.0, 3.5, 4.0, etc., but all .NET.

I just want to build a WebAPI client that I can use in a Xamarin app. Apparently Xamarin supports .NET Standard now instead of PCLs ([the infamously terrible] Portable Class Libraries). However, .NET Standard is not supported by the Visual Studio ‘Add REST API Client’.

.NET Core

Supported? No

.NET Framework

Supported? But of course!

.NET Standard

Supported? No

Portable Class Library (Legacy)

Supported? Yes

Well, there you have it: I can use .NET Framework (not usable in Xamarin) or PCL (supported in Xamarin but now frowned upon in favor of .NET Standard).

Swagger UI on Service Fabric with Stateless ASP.NET Core WebAPI (January 2, 2018)

I just found that I needed to add the following NuGet package in order to get this to work:

Microsoft.AspNetCore.StaticFiles

I was using a vanilla configuration in my code where I setup swagger endpoint:
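The original snippet was an image; a vanilla Swashbuckle.AspNetCore setup of that era might look roughly like this (the document path and API title are assumptions):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Swashbuckle.AspNetCore.Swagger;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        services.AddSwaggerGen(c =>
            c.SwaggerDoc("v1", new Info { Title = "My API", Version = "v1" }));
    }

    public void Configure(IApplicationBuilder app)
    {
        // The Swagger UI relies on static-file serving, hence the
        // Microsoft.AspNetCore.StaticFiles package mentioned above
        app.UseStaticFiles();

        app.UseSwagger();
        app.UseSwaggerUI(c => c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API v1"));

        app.UseMvc();
    }
}
```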

Standalone Cluster on a Single Machine: Same Rules Apply (January 2, 2018)

I am running a standalone Service Fabric cluster on a single, albeit beefy, physical server. This cluster has three virtual nodes because I am using a modified version of the Unsecure.DevCluster configuration template. There are other templates that will let you deploy to multiple physical (or virtual) machines, but I don’t see the need for what I’m trying to accomplish.

I guess this is common sense, but when hosting a web project like a stateless ASP.NET Core Web API, because you are deploying to a single machine you can only deploy to one node: when the runtime tries to open the same port on the other nodes, it errors out saying the port is already open.

Normally, when you deploy to a cluster hosted on Microsoft Azure, you want the number of instances for a stateless service set to “-1”, which deploys the stateless service to ALL nodes. In Azure, however, each of your nodes is a separate virtual machine, so you can open the same port on every node.

So I’ll create a copy of the Cloud deployment profile and modify it to only deploy to one node.

The easiest way I’ve found is to add the profile from the Publish Service Fabric Application screen; it will modify all the configuration files on your behalf. You can access it from the “Target profile” drop-down and the “Application Parameters File” drop-down.

Select Cloud.xml and hit “Create Copy”. Do this for both the “Target profile” and the “Application Parameters” just to keep everything clean.

Change the instance count from “-1” to “1”. Then you will only see it deployed to one node and won’t get any errors about not being able to open up ports.
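For illustration, the copied application parameters file ends up with something like this (the application and parameter names here are hypothetical):

```xml
<?xml version="1.0" encoding="utf-8"?>
<Application xmlns="http://schemas.microsoft.com/2011/01/fabric" Name="fabric:/MyApp">
  <Parameters>
    <!-- Was "-1" (all nodes); "1" avoids the port conflict on a one-machine cluster -->
    <Parameter Name="WebApi_InstanceCount" Value="1" />
  </Parameters>
</Application>
```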

There you go. Now you can still deploy to your Azure cluster when you are good and ready but you can take advantage of your stand alone cluster for the time being.

My Home Lab (January 2, 2018)

Here is my home lab setup. I was able to re-wire everything over the holidays, so things are a bit more tidy thanks to extensive use of Velcro cable management strips and the cable management arms that came with my StarTech rack.

Dumbledor, aptly named, is my PowerEdge R820, which I am planning to use to experiment with Azure Stack. It’s rocking quad 8-core Xeons (32 cores in total), 128GB of RAM, and 7 SATA disks, which should put me on the playing field! Right now I’m using it for a standalone Service Fabric cluster.

Harry and Ron are PowerEdge 850s I use mainly for Docker because they are x64 and can actually run it!

Hermione, Fred, and George, my old PowerEdge 750s, are x86; therefore, they can’t run a modern Windows Server OS, so I have them all running Ubuntu 17.04. They are utility players and mostly do the menial labor of SMB, which supports my Plex server running on Harry. Don’t let them fool you, however: they are rocking pretty significant storage.

Not a whole lot going on back here. I mounted the networking gear on the rear of the rack to hide all the spaghetti. I connected my two 24-port gigabit routers using a 10Gb/s SFP+ cable, and it was super plug-and-play. I was a bit intimidated, as SFP cables appear to be somewhat bifurcated between networking gear brands, which implied incompatibility; that’s also a reason I stuck with two D-Link routers. It gave me slightly more confidence that an SFP cable marked “for D-Link” would work.

I try to keep the power cables routed through the center to keep this area focused on the network, as the power strips have internally facing outlets as well. All power routes down to the UPSs and then to several dedicated 20A outlets on the wall. Now all I need is a generator to keep my Plex server running during the Zombie Apocalypse.